By Angela SP Yap

Collusion.

It’s almost as old as business itself. To cop a line from that old Cole Porter tune, “Birds do it, bees do it. Even educated fleas do it.”

In the digital age, that ‘educated flea’ looks very much like a computer. Quick, efficient, and devoid of values, these self-learning machines are now so widespread that they have introduced a deluge of new-age antitrust dilemmas into business.

Code to Collude

Think of David Topkins, founder of an online poster store, whose self-coded algorithm was designed to fix the prices for certain posters in collusion with other suppliers on Amazon Marketplace. The US government slapped Topkins with a USD20,000 fine – which he promptly settled – but the case garnered headlines because it involved a disproportionate amount of force against a nondescript online retailer for an inconsequential sum of money.

In a public statement, the US prosecutor, Assistant Attorney General Bill Baer, said, “We will not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the internet using complex pricing algorithms.” Word on the street, though, was that the real motivation behind the case was to gain access to Topkins’ code.

Such algorithms are deeply embedded in today’s commerce. There’s rush-hour surge pricing when hailing a Grabcar and automated price-bidding on platforms like eBay and Amazon.

In 2011, a price war ensued when two competing online booksellers used Amazon’s algorithmic pricing tool to automatically change their retail price based on the other store’s price. The retail price for an out-of-print biology textbook, The Making of a Fly, was pushed up to USD23.7 million before the algorithm was disabled.

The code was simple. Keen to undercut its competitor, the first online seller, profnath, set its price for the book at 0.9983 times the price of its closest competitor, bordeebook. At a pre-set time each day, Amazon’s algorithm would automatically reprice the book on profnath’s listing, slightly undercutting bordeebook’s price. However, bordeebook had set its algorithm at 1.27 times profnath’s retail price, resulting in an out-of-control machine-driven price war with no human at the wheel.
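The runaway loop is easy to reproduce. The sketch below simulates the two repricing rules; the starting price and the number of daily cycles are illustrative assumptions, not figures from the actual case:

```python
# Simulation of the 2011 profnath/bordeebook feedback loop.
# Starting price and cycle count are assumptions for illustration.
def simulate_price_war(start_price=35.54, days=25):
    """Each day profnath prices at 0.9983x its rival's price,
    then bordeebook reprices at 1.27x profnath's new price."""
    bordeebook = start_price
    history = []
    for day in range(1, days + 1):
        profnath = 0.9983 * bordeebook   # slight undercut
        bordeebook = 1.27 * profnath     # aggressive markup
        history.append((day, profnath, bordeebook))
    return history

for day, p, b in simulate_price_war():
    print(f"Day {day:2d}: profnath ${p:,.2f}  bordeebook ${b:,.2f}")
```

Each full cycle multiplies bordeebook’s price by roughly 0.9983 × 1.27 ≈ 1.268, so both prices grow exponentially until a human finally pulls the plug.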

Bots Call the Shots

Such bot-driven price fixing is just the tip of the iceberg.

Devoid of ethical and legal parameters, the collusive potential of algorithms can be disastrous. Not many of us can fork out USD23.7 million for a textbook.

More sophisticated algorithms can automate pricing predictions, incorporate behavioural decisions into their computations, and run real-time analysis in response to geopolitical events. These learning machines can be, and are, deployed to perform complex game-theory-like computations in order to arrive at the optimal solution for the firm. In financial markets, where even a nanosecond advantage can yield huge profits, algorithms are a big leg-up.

In a recent interview, Patrick Chang, PhD candidate at the Oxford-Man Institute of Quantitative Finance, summarises his research findings from Algorithmic Collusion in Electronic Markets: The Impact of Tick Size, co-authored with Álvaro Cartea and José Penalva: “In our setting we consider when these algorithms are market-makers, they basically provide liquidity to the market, and there are two main outcomes that can appear.

“First is that they learn to be competitive, which is expected according to economic theory; but one of the unintended consequences is they learn to raise prices, they act together and they achieve algorithmic collusion, which is of course detrimental to the market in terms of liquidity takers because they have to pay a higher price to acquire a share or equity.

“This is still an emerging field; there is a lot we still need to understand. One of the main things we’re looking at is to really understand the mechanisms as to what drives their behaviour…and from there, once we understand that, build some possible regulation or design the market in a better way so that these sorts of outcomes are less likely to occur or can be avoided entirely.

“A research agenda going forward is to really help detect when there is collusion in the market and one of the ideas we have is to use inverse reinforcement learning to understand the strategy these algorithms are learning and go back to test to see if these are collusive strategies or not.”

At this point, it is necessary to point out that although bot-pricing collusion generally results in a loss of welfare for society, there are instances when pricing algorithms are a benefit. More fluid and responsive price adjustments can make goods markets more efficient, and as long as the underlying code isn’t a black box, algorithms can, in fact, increase price transparency.

A Reliable Cartel?

For Drs Hans-Theo Normann and Martin Sternberg, Research Affiliates at the Max Planck Institute for Research on Collective Goods in Bonn, the concern is not whether algorithms collude but whether they’re more efficient at it than we are.

Do Machines Collude Better than Humans?, published in the December 2021 issue of the Journal of European Competition Law & Practice, examined under laboratory conditions whether algorithms are capable of tacit collusion by comparing the outcome of prices under these three Scenarios:

1: A benchmark outcome involving only human players.

Each player’s behaviour represents the behaviour of a firm and participants take home, in cash, the profits of their respective firm at the end of the experiment.

2: Involves algorithm-only players with each algorithm representing the behaviour of a firm.

Algorithms were powered by Q-Learning, a feedback-based reinforcement learning technique in which the machine learns over time to find the best course of action through continuous interaction with other agents. Normann and Sternberg stress that the algorithms were not instructed to collude, merely to maximise the profit of their own firm, taking away any trace of collusive intent.

3: The outcome when both human and machine players interact.
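A minimal sketch of the kind of Q-learning pricing agent used in Scenario 2 is given below. The price grid, demand rule, and learning parameters are illustrative assumptions, not the authors’ actual experimental design; what it shares with the study is that each agent’s reward is its own firm’s profit, and nothing in the code instructs it to collude:

```python
import random

PRICES = [1, 2, 3, 4]            # discrete price grid (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration
COST = 0

def profit(p_own, p_rival):
    """Stylised demand: the cheaper firm takes the market; ties split it."""
    if p_own < p_rival:
        return p_own - COST
    if p_own == p_rival:
        return (p_own - COST) * 0.5
    return 0.0

class QAgent:
    def __init__(self):
        # State = rival's last observed price; Q[state][action]
        self.Q = {s: {a: 0.0 for a in PRICES} for s in PRICES}

    def act(self, state):
        if random.random() < EPS:            # occasional exploration
            return random.choice(PRICES)
        return max(self.Q[state], key=self.Q[state].get)

    def learn(self, state, action, reward, next_state):
        best_next = max(self.Q[next_state].values())
        self.Q[state][action] += ALPHA * (
            reward + GAMMA * best_next - self.Q[state][action])

def run(episodes=50_000, seed=0):
    random.seed(seed)
    a, b = QAgent(), QAgent()
    pa, pb = random.choice(PRICES), random.choice(PRICES)
    for _ in range(episodes):
        na, nb = a.act(pb), b.act(pa)        # each reacts to rival's last price
        a.learn(pb, na, profit(na, nb), nb)
        b.learn(pa, nb, profit(nb, na), na)
        pa, pb = na, nb
    return pa, pb

final_a, final_b = run()
print("Final prices:", final_a, final_b)
```

Whether such agents settle at competitive or supracompetitive prices depends on the parameters and the demand environment, which is precisely the phenomenon the experiment was designed to measure.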

Under the first scenario, there was some level of collusion found in duopolies (two-firm interactions) and occasionally when three firms interact. No collusive outcome was detected in experiments involving four firms or more, indicating that tacit collusion by humans becomes less likely as the number of competitors increases.

Where tacit collusion was detected, some groups managed to sustain the joint profit maximum whilst others competed fully. No player took home more than half of the full collusive gains, indicating that purely human interactions achieve only a moderate degree of supracompetitive pricing (prices above what can be sustained in a competitive market).

Under Scenario 2, the algorithms achieved supracompetitive pricing in a ‘super human’ way, far exceeding the levels of collusion seen under Scenario 1.

In the third scenario – a hybrid involving a number of human players and one machine – supracompetitive prices at a ‘super-human’ level occurred only in simulations with three firms or fewer. When four firms (three humans + one algorithm) or more interacted, there was no significant difference in the outcome; the competitive play of three humans was enough to nullify the algorithm’s strategy. This indicates that in hybrid markets, many algorithms are needed to facilitate collusion, and firms that employ the algorithm earn significantly less than their rivals.

The authors sum it up thus: “If firms are willing to accept set-up costs (for algorithmic pricing machines), our data do indicate the anticompetitive potential algorithms have, even when interacting with humans.”

So, are algorithms the ideal cartel members?

It depends. Normann and Sternberg elaborate that in order for any stable collusion to emerge, firms must have the ability to communicate and coordinate moves in order to reach price equilibria. They posit that whilst it is theoretically possible, further research is needed on whether pricing algorithms like Q-learning models would ultimately evolve to develop a language-like communication with each other in order to achieve significant anticompetitive behaviour.

This is crucial because “express communication is typically the smoking gun in cartel cases”. If algorithms can effectively collude without developing a language of their own, it would be near impossible for authorities to detect or prove anticompetitive behaviour.

The authors state: “[In all Scenarios,] there are no exchanges of communication of information, posing a challenge for competition authorities. It would probably be beyond the reach of the competition laws of most jurisdictions to penalise such behaviour, even though it does result in welfare losses.”

A Forest Divided

Such concerns about the impact of algorithms on market dynamics are far from new. In 2017, the Organisation for Economic Co-operation and Development’s (OECD) Competition Committee held a roundtable on ‘Algorithms and Collusion’ as part of its wider work stream on competition in the digital economy.

It is one of the earliest global forums to address the questions of whether traditional antitrust concepts of agreement and tacit collusion used by enforcement agencies are sufficient to include algorithmic collusion, and if it is possible for antitrust liability to be imposed on the algorithms’ creators and users.

The proceedings from the OECD roundtable identified two sources of market failure from algorithmic collusion:

  • A lack of transparency due to complex software and trade secrecy. This can lead to information asymmetry which hinders consumers and regulators from making fully informed decisions.
  • A barrier to entry for potential competitors. By their very nature, algorithms drive rapid economies of scale for firms that have them. For the ‘have-nots’, it is another barrier to entry that could lead to an overall loss of welfare for society.

Several solutions have been thrown into the ring, which run the gamut from ludicrous to plausible.

One is to outlaw explicit algorithmic collusion and tackle tacit collusion using existing competition laws and procedures. This works only if existing laws define collusion broadly enough to cover its algorithmic dimensions, which they do not. Current antitrust laws require communication as the “smoking gun” for prosecution; this needs to be redefined, because algorithmic collusion does not appear to require any language-like communication and so, technically, cannot be prosecuted.

Another is to subject new algorithms to regulatory tests and audits in order to prohibit those that support collusive strategies. Aside from being restrictive, the biggest hurdle here is whether enforcement agencies are equipped with the right talent and sufficient manpower for the job. Case in point is the US Congress’ painful and off-the-mark interrogation of TikTok CEO Shou Zi Chew, an indication of the chasm that separates market-makers from lawmakers.

A third suggestion is to regulate the data input to pricing algorithms in a manner that would enhance competition. Although more plausible, this cannot work in a vacuum. It needs to be coupled with other market enforcement measures, such as demand-steering and separating decision-makers from algorithms.

Solving The Right Problem

Management guru Tom Peters comes to mind. “Trust, not technology,” he says, “is the issue of the decade.”

Technology is, and has always been, agnostic.

Don’t look to the machine. Look to the one who wields it.


Angela SP Yap is a multi-award-winning social entrepreneur, author, and financial columnist. She is Director and Founder of Akasaa, a boutique content development and consulting firm. An ex-strategist with Deloitte and former corporate banker, she has also worked in international development with the UNDP and as an elected governor for Amnesty International Malaysia. Angela holds a BSc (Hons) Economics.