By Kannan Agarwal

In early 2022, the European Banking Authority (EBA) called for increased vigilance over the possibility of fake news triggering a bank run. The warning was based on data from its Q4 2021 EBA Risk Dashboard, a quarterly report on the major risks and vulnerabilities in the EU banking sector, which had detected the rumblings of a significant liquidity and funding threat arising from the ongoing conflict in Ukraine.

The report stated: “As market sentiment remains highly volatile and driven by news flow, banks’ liquidity levels can become vulnerable due to spread of inaccurate information. Such campaigns that spread inaccurate information may result in deposit outflows from targeted banks.”

An ominous warning that has since escalated in intensity and urgency.

On 19 July 2023, Joachim Nagel, President of the Bundesbank, echoed this warning when speaking to local journalists, saying that the spread of fake news on social media could plunge banks into another crisis, citing the Twitter-fuelled run on Silicon Valley Bank as an example. Xinhua News Agency reported that Germany’s central bank president suggested that “banking supervision could be extended to social media, so that supervisors could identify the risk of a bank run at an early stage”.

The news agency commented: “Social media platforms have emerged as an increasingly important tool for investors hunting for information. Industry insiders have warned that information on social media can spread so fast that a bank run can happen overnight. This means supervisors have to act faster, Nagel said. A banking supervisory task force to monitor social media and rapidly detect emerging risks could be set up in Europe, as in South Korea, he suggested.”

How well versed are banks in the mechanics of these structured campaigns to spread inaccurate information, also known as ‘information disorder’, and are they equipped to counter these threats?

The Lay of the Land

The first research to examine the elements of information disorder, Claire Wardle and Hossein Derakhshan’s report for the Council of Europe, Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making, maps the terrain (Figure 1) and distinguishes between three types of ‘fake news’:

Figure 1
Source: Wardle and Derakhshan, Council of Europe, Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.
  • Misinformation: When false information is shared, but no harm is meant. It covers:
      • False connections: When headlines, visuals, or captions do not support the content.
      • Misleading content: When an opinion is presented as a fact.
  • Disinformation: When false information is knowingly shared to cause harm. This includes:
      • False context: Factually accurate content combined with the wrong context, for instance, clickbait.
      • Imposter content: Impersonation of genuine sources, e.g. using deepfakes or AI voice mimicry to pass oneself off as someone else.
      • Manipulated content: Genuine information or imagery that has been altered, such as photoshopped images with factually incorrect captions.
      • Fabricated content: Completely false information.
  • Malinformation: When genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere. This covers leaks, harassment, and hate speech. Here, the intent of the sender is important, and such material can at times cross into the realm of seditious content.

A Reuters Institute survey estimates that 88% of misinformation comes from social media; messaging services such as WhatsApp and Facebook Messenger are also significant channels for mis/disinformation campaigns.

Although the recent Silicon Valley Bank collapse garnered headlines as the world’s first Twitter-fuelled bank run, this is factually incorrect. The world’s first social-media-fuelled bank run predates it by more than a decade.

In 2011, Stockholm-based Swedbank fell prey to disinformation first detected on Twitter (now known as X). In under 140 typed characters, rumours claimed that its ATMs in Sweden were closed; that the bank would exit its key market, Latvia; and that Swedbank’s Latvian CEO had been arrested. All of this was untrue. Within hours, however, Swedbank’s ATMs were emptied of money and public statements had to be issued assuring depositors that the machines would be constantly replenished with notes in order to stem the panic. It was later discovered that the bank had been targeted by disinformation-for-hire actors as part of a campaign to destabilise the operations of financial institutions in Eastern Europe amid the Ukraine-Russia conflict.

Since then, such mis/disinformation campaigns on social media and messaging platforms have intensified in frequency, speed, and scale, and their tactics have grown more sophisticated. Banks should therefore not just exercise vigilance but stay a step ahead of threat actors by knowing how the technology, tactics, and targets are evolving every day.

Finger on the Pulse

According to the United Nations High Commissioner for Refugees (UNHCR), the methods and techniques by which social media is used to manipulate narratives and online conversations are multiplying rapidly and becoming increasingly sophisticated. Its Factsheet 4: Types of Misinformation and Disinformation, published in 2022, lists some of the newer methods:

  • A sockpuppet is an online identity used to deceive. The term now extends to misleading uses of online identities to praise, defend or support a person or organisation; to manipulate public opinion; or to circumvent restrictions, suspension, or an outright ban from a website. The difference between a pseudonym and a sockpuppet is that the sockpuppet poses as an independent third party, unaffiliated with the main account holder. Sockpuppets are unwelcome in many online communities and forums.
  • Sealioning is a type of trolling or harassment where people are pursued with persistent requests for evidence or repeated questions. A pretence of civility and sincerity is maintained with these incessant, bad-faith invitations to debate.
  • Astroturfing masks the sponsors of a message (e.g. political, religious, advertising, or PR organisations) to make it appear as though it comes from grassroots participants. The practice aims to give organisations credibility by withholding information about their motives or financial connections.
  • Catfishing is a form of fraud where a person creates a sockpuppet or fake identity to target a particular victim on social media. It is common in romance scams on dating websites. It may be done for financial gain, to compromise a victim, or as a form of trolling or wish fulfilment.

Currently, there is no regulatory toolkit to counter mis/disinformation. However, innovations in suptech – or supervisory technology – are increasingly relied upon by financial institutions and supervisory authorities to monitor markets and detect emerging risks. Whether through real-time or lagged monitoring of social media, websites, and messaging platforms, these innovations help companies process the deluge of digital information in the following ways:

> Web scraping: Bots (coded autonomous programmes) that extract or ‘scrape’ content and data from websites have been around for decades. However, vast improvements in hardware have made it possible to deploy such technology en masse, at lower cost, and at more granular levels of data. Scraper bots can now automatically harvest, analyse, and rank data from all digital platforms and raise a red flag at signs of potential risks that could impact financial stability.
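
A minimal sketch of what such a scraper bot might do is shown below; the pages, keywords, and logic are illustrative placeholders rather than a production design.

```python
# Minimal web-scraping sketch: fetch pages and flag risk keywords.
# The URLs and keyword list are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

PAGES = ["https://example.com/finance-news"]                   # illustrative sources
RISK_KEYWORDS = ["bank run", "insolvent", "frozen accounts"]   # illustrative terms

def scan_page(url: str) -> list[str]:
    """Download a page and return any risk keywords found in its visible text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    text = BeautifulSoup(response.text, "html.parser").get_text(" ").lower()
    return [kw for kw in RISK_KEYWORDS if kw in text]

for page in PAGES:
    hits = scan_page(page)
    if hits:
        print(f"Potential risk signals on {page}: {hits}")
```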

> Social media monitoring: Also known as brand monitoring, it alerts organisations whenever a specific brand or keyword is mentioned in the digital space. It is easy to quantify and allows for a quick response to complaints or negative sentiment.
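
At its core, this is keyword matching over a stream of posts. A toy illustration, assuming posts arrive as plain text from some feed (the watchlist and example posts are made up):

```python
# Brand-monitoring sketch: raise an alert when watched terms appear in a post.
# The watchlist and the incoming posts are hypothetical placeholders.
WATCHLIST = {"examplebank", "example bank", "#examplebankrun"}

def check_post(post: str) -> set[str]:
    """Return the watched terms mentioned in a single post."""
    text = post.lower()
    return {term for term in WATCHLIST if term in text}

incoming_posts = [
    "Heard Example Bank is limiting withdrawals?!",
    "Lovely weather today.",
]

for post in incoming_posts:
    mentions = check_post(post)
    if mentions:
        print(f"ALERT: {sorted(mentions)} mentioned in: {post!r}")
```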

> Social listening: Often confused with social media monitoring, social listening is a more proactive tool. It lends insight into positive or negative sentiment and emerging risks, not just for the brand but for entire industries, based on online conversations unfolding in real time. Financial institutions are increasingly turning to intelligent social listening to enhance engagement with customers, detect potential risks, and inform strategic decision-making.

> Consumer sentiment analysis: Gauging customer sentiment and how it shapes their experience with the brand requires independent, no-holds-barred input. Algorithms that discover, measure, and infer how customers feel about a brand, product, or service over the long term automate the extraction of patterns and trends from surveys, social media posts, and other data sources in real time.
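
As a toy illustration of the underlying idea, a lexicon-based scorer simply tallies positive and negative words; production systems rely on trained language models, but the principle is the same (the word lists below are illustrative only):

```python
# Toy lexicon-based sentiment scorer; the word lists are illustrative only.
POSITIVE = {"great", "helpful", "fast", "trust"}
NEGATIVE = {"scam", "frozen", "failure", "worried"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; below zero means net-negative sentiment."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "Great service, very helpful staff!",
    "Worried my account is frozen, total failure.",
]
for p in posts:
    print(round(sentiment_score(p), 2), "-", p)
```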

> Reputational analysis: A key tool for assessing intangibles such as brand value, reputational analysis has evolved to incorporate analysis and risk matrices for social media value. Many publicly listed corporations now regularly track and disclose metrics such as the net promoter score or earned growth in their annual reports or on their websites.
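
The net promoter score itself is a simple calculation: the percentage of promoters (those scoring 9–10 on a 0–10 survey) minus the percentage of detractors (0–6). A minimal sketch with made-up survey data:

```python
# Net promoter score (NPS): % promoters (9-10) minus % detractors (0-6).
# The survey responses below are made up for illustration.
def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

survey = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
print(f"NPS: {net_promoter_score(survey):.0f}")  # 5 promoters, 2 detractors -> NPS 30
```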

> Deepfake detection: The threat of artificial-intelligence (AI) powered deepfakes – video or audio digitally manipulated so that a person appears to say or do something they never did, often in order to maliciously spread false information – is the latest wave in fraud, and it has only just begun. In early 2020, the voice of a company director was allegedly deepfaked to demand that a Hong Kong-based bank manager sign off on a USD35 million transfer. Such voice-spoofing is just the tip of the iceberg as deep learning technology continues to advance across a variety of fields, including natural language processing and machine vision. Some early-stage solutions, including MIT’s open-source Detect Deepfakes research project, provide free tools and share techniques to identify and counteract AI-generated misinformation.

> Dark web monitoring: The dark web is content on the internet that exists only on darknets, i.e. networks whose encrypted content does not appear in conventional search engines (Google, Bing, etc.) and can only be accessed using an anonymising browser such as Tor. In a 2019 study, Into the Web of Profit: Understanding the Growth of the Cybercrime Economy, Dr Michael McGuire, a researcher at the University of Surrey, reported that the number of dark web listings which could potentially harm companies had risen by 20% since 2016. These included (a simple monitoring sketch follows the list):

  • Data trading platforms, where private information such as stolen credit/debit card details, social security information, residential addresses, and credit reports are traded or bought for as little as a few dollars per record; and
  • Cybercrime-as-a-Service platforms, where specific tools such as banking trojans, targeted distributed denial of service attacks, or even services such as hackers on call, can be procured or rented to target specific customers.
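
A monitoring sketch under stated assumptions: suppose listings scraped from such marketplaces arrive as lines of text in a file; the file name, brand name, and card BIN prefixes below are hypothetical placeholders.

```python
# Dark-web monitoring sketch: scan exported marketplace listings for
# mentions of the bank's brand or card numbers with monitored BIN prefixes.
# "listings.txt", the brand, and the BIN prefixes are hypothetical placeholders.
import re

BRAND = "examplebank"
CARD_BIN_PREFIXES = ("412345", "543210")   # illustrative issuer BINs

def flag_listing(listing: str) -> list[str]:
    """Return the reasons (if any) a single listing should be escalated."""
    reasons = []
    if BRAND in listing.lower():
        reasons.append("brand mention")
    for number in re.findall(r"\b\d{16}\b", listing):
        if number.startswith(CARD_BIN_PREFIXES):
            reasons.append("card number with monitored BIN")
    return reasons

with open("listings.txt") as feed:
    for line in feed:
        hits = flag_listing(line)
        if hits:
            print(f"Escalate: {hits} -> {line.strip()}")
```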

Do You Care Enough to Act?

News-rating organisation NewsGuard reported in June this year that thousands of the world’s leading brands unintentionally run their advertising on misinformation websites, to the tune of USD2.6 billion annually. This ad revenue doesn’t just keep the lights on for these malicious websites; it incentivises them to keep churning out more and more false narratives, clickbait content, and hoaxes. However, many executives remain apathetic, brushing off the issue as too trivial for any meaningful action.

Gordon Crovitz, co-CEO of NewsGuard and a former columnist for The Wall Street Journal, draws on personal experience in his 2023 op-ed on the future of journalism: “To my surprise, however, most brand managers shrug when they learn where their ads are running. They may figure that, with so many brands advertising on so many misinformation sites, no one brand will get blamed for the group problem.”

This flies in the face of logic, as studies show that “when brands stop advertising on misinformation sites and instead advertise on quality news sites,” writes Crovitz, “their CPM (cost per 1,000 impressions) price for ads goes down and engagement with their ads goes up.”

As the push for greater environmental, social, and governance (ESG) accountability gains momentum in financial circles, institutions must recognise that funding misinformation campaigns – whether knowingly or unknowingly – is a critical governance issue because it directly impacts trust and public perception.


Shields Up

The mis/disinformation economy is a shadow industry that is set only to grow in size, and banks can and should do their part in taking it down. Writing in a recent issue of the Harvard Business Review, Claire Atkin, CEO and co-founder of advertising technology watchdog Check My Ads, outlines in her article, Are Your Ads Funding Disinformation?, three simple yet impactful ways in which companies can place guardrails to stop their brands from inadvertently funding dis-, mis-, and malinformation websites:

Check your ad campaigns. Forget high-level performance reports from ad tech companies. Instead, ask them for log-level data, the real source of truth for your ad placements, because it includes specific data about which websites your ads appeared on. Supply chain research firms can help you audit your campaigns.
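
As an illustration of what such an audit can involve, the sketch below checks the domains in a (hypothetical) log-level export against a blocklist of known misinformation sites; the file name, column name, and domains are placeholders, not a real export format.

```python
# Log-level ad audit sketch: flag impressions served on blocklisted domains.
# "ad_log.csv", its "page_url" column, and the blocklist entries are hypothetical.
import csv
from urllib.parse import urlparse

BLOCKLIST = {"fakenews-example.com", "hoax-site-example.net"}   # illustrative

def audit_log(path: str) -> dict[str, int]:
    """Count impressions per blocklisted domain in a log-level CSV export."""
    counts: dict[str, int] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = urlparse(row["page_url"]).netloc.lower()
            if domain in BLOCKLIST:
                counts[domain] = counts.get(domain, 0) + 1
    return counts

print(audit_log("ad_log.csv"))
```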

Avoid brand safety technology. The leading ad verification companies only provide high-level reports, keeping you unaware of which websites your ads are being placed on and blocked from. As mentioned above, this isn’t sufficient for ensuring that your company’s ads aren’t supporting bad actors. If you are using brand safety technology, ensure that that data, too, gets audited regularly; brand safety technology is often ineffective, and sometimes even harmful.

Demand cash refunds. There will often be a discrepancy between your log-level data and the campaign standards you were promised. When this happens, demand a cash refund, not a make-good. You are entitled to your money back and an explanation of how the discrepancies will be avoided in the future. If the vendor will not provide either, ditch them.


Kannan Agarwal is a content analyst and writer at Akasaa, a boutique content development and consulting firm.