Don’t Feed the Disinformation Machine
Your ad spend could be fuelling this shadow economy.
By Kannan Agarwal
In early 2022, the European Banking Authority (EBA) called for increased vigilance over the possibility of fake news triggering a bank run. The warning was issued based on data from its Q4 2021 EBA Risk Dashboard, a quarterly report of the major risks and vulnerabilities in the EU banking sector, which had detected the rumblings of a significant liquidity and funding threat arising from the ongoing conflict in Ukraine.
The EBA report warned: “As market sentiment remains highly volatile and driven by news flow, banks’ liquidity levels can become vulnerable due to spread of inaccurate information. Such campaigns that spread inaccurate information may result in deposit outflows from targeted banks.”
It was an ominous warning, and one that has since grown in intensity and urgency.
On 19 July 2023, Joachim Nagel, President of the Bundesbank, echoed this warning when speaking to local journalists, saying that the virality of fake news on social media could plunge banks into another crisis, citing the Twitter-fuelled run on Silicon Valley Bank as an example. Xinhua News Agency reported that Germany’s central bank president suggested that “banking supervision could be extended to social media, so that supervisors could identify the risk of a bank run at an early stage”.
The news agency commented: “Social media platforms have emerged as an increasingly important tool for investors hunting for information. Industry insiders have warned that information on social media can spread so fast that a bank run can happen overnight. This means supervisors have to act faster, Nagel said. A banking supervisory task force to monitor social media and rapidly detect emerging risks could be set up in Europe, as in South Korea, he suggested.”
How well versed are banks in the mechanics of these structured campaigns to spread inaccurate information, also known as ‘information disorder’, and are they equipped to counter these threats?
In the first-ever research examining the elements of information disorder, Claire Wardle and Hossein Derakhshan’s report for the Council of Europe, Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making, maps the terrain (Figure 1) and distinguishes between three types of ‘fake news’:
• Mis-information: false information that is shared without any intent to cause harm.
• Dis-information: false information that is knowingly created and shared in order to cause harm.
• Mal-information: genuine information that is shared in order to cause harm, such as private information moved into the public sphere.
A Reuters Institute survey estimates that 88% of misinformation comes from social media; messaging services such as WhatsApp and Facebook Messenger are also significant sources of mis/disinformation campaigns.
Although the recent Silicon Valley Bank collapse garnered headlines as the world’s first Twitter-fuelled bank run, this is factually incorrect: the world’s first social-media-fuelled bank run predates it by more than a decade.
In 2011, Stockholm-based Swedbank fell prey to disinformation first detected on Twitter (now known as X). In under 140 typed characters, rumours claimed that its ATMs in Sweden were closed; that the bank would exit its key market, Latvia; and that Swedbank’s Latvian CEO had been arrested. All three were untrue. Within hours, however, Swedbank’s ATMs were emptied of money and public statements had to be issued assuring depositors that the machines would be constantly replenished with notes in order to stem the panic. It was later discovered that the bank had been targeted by disinformation-for-hire actors as part of a campaign, linked to the Ukraine-Russia conflict, to destabilise the operations of financial institutions in Eastern Europe.
Since then, the frequency, speed, tactics, and scale of such mis/disinformation campaigns on social media and messaging platforms have intensified. Banks should therefore not just exercise vigilance but stay a step ahead of threat actors by knowing how tech, tactics, and targets are evolving every day.
According to the United Nations High Commissioner for Refugees, the methods and techniques by which social media is used to manipulate narratives and online conversations are fast-growing and increasingly sophisticated. Its Factsheet 4: Types of Misinformation and Disinformation, published in 2022, catalogues several of these newer methods.
Currently, there is no regulatory toolkit to counter mis/disinformation. However, innovations in suptech – or supervisory technology – are increasingly relied upon by financial institutions and supervisory authorities to monitor markets and detect emerging risks. Whether monitoring social media, websites, and messaging platforms in real time or on a lag, these innovations help companies process the deluge of digital information in the following ways:
> Web scraping: Using bots (coded autonomous programmes) to extract or ‘scrape’ content and data from websites has been around for decades. However, vast improvements in technological hardware have made it possible to deploy such technology en masse, at lower cost, and at more granular levels of data. Scraper bots can now automatically harvest, analyse, and rank data from digital platforms and raise a red flag at signs of potential risks to financial stability (a minimal sketch of such a bot appears after this list).
> Social media monitoring: Also known as brand monitoring, this alerts organisations whenever a specific brand or keyword is mentioned in the digital space. It is easy to quantify and allows for a quick response to complaints or negative sentiment (a combined monitoring-and-sentiment sketch appears after this list).
> Social listening: Often confused with social media monitoring, social listening is a more proactive tool that lends insight into positive and negative sentiment and emerging risks, not just for the brand but for entire industries, based on online conversations unfolding in real time. Financial institutions are increasingly turning to intelligent social listening to enhance customer engagement, detect potential risks, and inform strategic decision-making.
> Consumer sentiment analysis: Gauging customer sentiment and how it shapes the customer’s experience with the brand requires independent, no-holds-barred input. Algorithms that discover, measure, and infer how customers feel about a brand, product, or service over the long term automate the extraction of patterns and trends from surveys, social media posts, and other data sources in real time (the monitoring sketch after this list includes a simple lexicon-based scorer).
> Reputational analysis: A key tool for assessing intangibles such as brand value, reputational analysis has evolved to incorporate analysis and risk matrices for social media value. Many publicly listed corporations now regularly track and disclose metrics such as the net promoter score or earned growth in their annual reports or on their websites (the score’s simple formula is worked through after this list).
> Deepfake detection: The threat of artificial-intelligence (AI) powered deepfakes – video or audio of a person digitally manipulated to pass them off as someone else in order to maliciously spread false information – is the latest wave of fraud, and it has only just begun. In early 2020, the voice of a company director was allegedly deepfaked to demand that a Hong Kong-based bank manager sign off on a USD35 million transfer. Such voice spoofing is just the tip of the iceberg as deep-learning technology continues to refine itself across fields including natural language processing and machine vision. Some early-stage solutions, including MIT’s open-source Detect Deepfakes research project, provide free tools and share techniques to identify and counteract AI-generated misinformation (a sketch of the aggregation step common to video detectors follows this list).
> Dark web monitoring: The dark web is content that exists only on darknets, i.e. networks with encrypted content that does not appear in conventional search engines (Google, Bing, etc.) and can only be accessed using an anonymising browser such as Tor. In a 2019 study, Into the Web of Profit: Understanding the Growth of the Cybercrime Economy, Dr Michael McGuire, a researcher at the University of Surrey, reported that the number of dark web listings that could potentially harm companies had risen by 20% since 2016 (the final sketch after this list shows monitoring through a Tor proxy).
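To make the web-scraping idea concrete, here is a minimal sketch of a scraper bot in Python. The URL, watchlist phrases, and alerting logic are illustrative assumptions rather than any vendor’s actual implementation; production suptech deployments add proper HTML parsing, scheduling, and risk ranking.

```python
# Minimal scraper-bot sketch: fetch public pages and flag risk phrases.
# The URL and WATCHLIST below are illustrative placeholders, not real feeds.
import re
import requests

WATCHLIST = ["bank run", "atms closed", "insolvent", "withdraw your money"]

def scan_page(url: str) -> list[str]:
    """Download a page and return any watchlist phrases found in its text."""
    html = requests.get(url, timeout=10).text
    text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
    return [phrase for phrase in WATCHLIST if phrase in text]

if __name__ == "__main__":
    for url in ["https://example.com/finance-news"]:  # placeholder URL
        hits = scan_page(url)
        if hits:
            print(f"ALERT {url}: {hits}")
```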
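The next sketch combines brand monitoring with a crude lexicon-based sentiment score, illustrating the mechanics shared by the monitoring, listening, and sentiment tools above. The brand name, word lists, and sample posts are invented for illustration; real deployments rely on platform APIs and trained models rather than static lexicons.

```python
# Brand-monitoring sketch with lexicon-based sentiment scoring.
# POSITIVE/NEGATIVE word lists and the sample posts are illustrative only.
POSITIVE = {"reliable", "helpful", "fast", "secure"}
NEGATIVE = {"scam", "frozen", "collapse", "arrested", "closed"}

def sentiment(post: str) -> int:
    """Score a post: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def monitor(posts: list[str], brand: str = "ExampleBank") -> None:
    """Print an alert for every negative post that mentions the brand."""
    for post in posts:
        if brand.lower() in post.lower():  # brand mention detected
            score = sentiment(post)
            if score < 0:
                print(f"NEGATIVE mention ({score}): {post}")

monitor([
    "ExampleBank ATMs are closed, total collapse incoming",  # triggers alert
    "ExampleBank support was fast and helpful today",        # positive, ignored
])
```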
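The net promoter score mentioned under reputational analysis follows a simple, widely published formula: the percentage of promoters (scores of 9 or 10 on a 0–10 “would you recommend us?” scale) minus the percentage of detractors (scores of 0 to 6). A short worked example:

```python
# Net promoter score (NPS) = % promoters (9-10) minus % detractors (0-6).
def nps(scores: list[int]) -> float:
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

# 4 promoters and 2 detractors out of 8 responses -> 100 * (4 - 2) / 8 = 25.0
print(nps([10, 9, 8, 6, 3, 10, 7, 9]))
```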
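Deepfake detectors vary widely, but many video-based approaches share a final aggregation step: a per-frame classifier emits manipulation probabilities, and the clip is flagged when enough frames look suspect. The sketch below shows only that aggregation step, with the classifier’s output stubbed as plain numbers and both thresholds chosen arbitrarily for illustration.

```python
# Aggregation step of a typical video deepfake detector. The per-frame
# probabilities would come from a trained classifier; here they are stubbed.
def flag_deepfake(frame_scores: list[float], threshold: float = 0.5,
                  min_fraction: float = 0.3) -> bool:
    """Flag the clip if at least min_fraction of frames score above threshold."""
    suspicious = sum(score > threshold for score in frame_scores)
    return suspicious / len(frame_scores) >= min_fraction

# Three of five frames look manipulated -> the clip is flagged.
print(flag_deepfake([0.1, 0.8, 0.9, 0.7, 0.2]))  # True
```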
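Finally, dark-web monitoring typically routes requests through a local Tor client rather than the open internet. The sketch below assumes Tor’s SOCKS proxy is listening on its default port 9050 and that the requests library is installed with SOCKS support (requests[socks]); the .onion address and search terms are placeholders, not real listings.

```python
# Dark-web monitoring sketch: poll a hidden service through Tor and scan
# for brand or data-leak keywords. Requires a running local Tor client.
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h resolves .onion names inside Tor
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_onion(url: str) -> str:
    """Fetch a page over the Tor network via the local SOCKS proxy."""
    return requests.get(url, proxies=TOR_PROXIES, timeout=60).text

page = fetch_onion("http://exampleonionaddress.onion/")  # placeholder address
for term in ("examplebank", "customer database", "login credentials"):
    if term in page.lower():
        print(f"Possible exposure mentioned on listing: {term!r}")
```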
News-rating organisation NewsGuard reported in June this year that thousands of the world’s leading brands unintentionally run their advertising on misinformation websites, to the tune of USD2.6 billion annually. This ad revenue doesn’t just keep the lights on for these malicious websites; it incentivises them to keep churning out more and more false narratives, clickbait content, and hoaxes. However, many executives remain apathetic, brushing off the issue as too trivial for any meaningful action.
Gordon Crovitz, co-CEO of NewsGuard and a former columnist for The Wall Street Journal, draws on personal experience in his 2023 op-ed on the future of journalism: “To my surprise, however, most brand managers shrug when they learn where their ads are running. They may figure that, with so many brands advertising on so many misinformation sites, no one brand will get blamed for the group problem.”
This flies in the face of logic, as studies show that “when brands stop advertising on misinformation sites and instead advertise on quality news sites,” writes Crovitz, “their CPM (cost per 1,000 impressions) price for ads goes down and engagement with their ads goes up.”
As the push for greater environmental, social, and governance (ESG) accountability gains momentum in financial circles, institutions must come to the realisation that funding misinformation campaigns, whether knowingly or unknowingly, is a critical governance issue because it directly impacts trust and public perception.
The mis/disinformation economy is a shadow industry set only to grow in size, and banks can and should do their part in taking it down. In her article Are Your Ads Funding Disinformation?, which ran in a recent issue of the Harvard Business Review, Claire Atkin, CEO and co-founder of advertising technology watchdog Check My Ads, outlines three simple yet impactful guardrails that companies can put in place to stop their brands from inadvertently funding dis-, mis-, and mal-information websites:
• Check your ad campaigns. Forget high-level performance reports from ad tech companies. Instead, ask them for log-level data, the real source of truth for your ad placements, because it includes specific data about the websites where your ads appeared. Supply chain research firms can help you audit your campaigns (a minimal audit sketch follows this list).
• Avoid brand safety technology. The leading ad verification companies only provide high-level reports, keeping you unaware of which websites your ads are being placed on and blocked from. As mentioned above, this isn’t sufficient to ensure that your company’s ads aren’t supporting bad actors. If you are using brand safety technology, ensure that that data, too, is audited regularly; brand safety technology is often ineffective, and sometimes even harmful.
• Demand cash refunds. There will often be a discrepancy between your log-level data and the campaign standards you were promised. When this happens, demand a cash refund, not a make-good. You are entitled to your money back and to an explanation of how the discrepancies will be avoided in the future. If the vendor won’t provide either, ditch them.
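To illustrate the kind of reconciliation that log-level data makes possible, here is a minimal audit sketch that totals ad spend by domain and flags placements on a misinformation blocklist. The file name, column names, and blocklist entries are assumptions for illustration only; real log-level schemas vary by ad tech vendor, and the flagged total is the figure a cash-refund demand would be built on.

```python
# Log-level ad-placement audit sketch: aggregate spend per domain and
# flag any spend that landed on blocklisted misinformation sites.
import csv
from collections import defaultdict

BLOCKLIST = {"fake-news-daily.example", "hoax-herald.example"}  # illustrative

spend_by_domain: dict[str, float] = defaultdict(float)
with open("log_level_placements.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):  # assumes 'domain' and 'spend_usd' columns
        spend_by_domain[row["domain"]] += float(row["spend_usd"])

flagged = {d: s for d, s in spend_by_domain.items() if d in BLOCKLIST}
for domain, spend in sorted(flagged.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: USD {spend:,.2f} spent on a blocklisted site")
print(f"Total cash refund to request: USD {sum(flagged.values()):,.2f}")
```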
Kannan Agarwal is a content analyst and writer at Akasaa, a boutique content development and consulting firm.