
Liability for Machine to Machine Cartels

14 April 2020

The use of pricing algorithms and other Artificial Intelligence (AI) software has become commonplace in a great number of markets, bringing a set of new competition law issues to the attention of competition authorities. In the past, enforcement authorities would seek evidence of collusion in a smoke-filled room, but it has now become possible for companies to use AI as a means of colluding, with limited or no human involvement. This article considers the different scenarios regarding the liability for machine to machine cartels within the existing framework of EU competition law and highlights recent developments in the area.



Implementing and facilitating cartels through AI

In this scenario, humans agree to collude, and pricing algorithms and other AI tools are used to implement and operate the cartel. It is the human agents who are guilty of agreeing to fix prices, quotas or other commercial terms, while the machines merely facilitate the process by identifying deviations from the agreed terms and applying retaliation measures. From an enforcement perspective, a cartel executed with the help of AI is treated the same as a cartel executed by humans.


‘Hub and spoke’ cartels 

In the ‘hub and spoke’ scenario, competing firms (the spokes) use the same algorithm developer (the hub) to determine market prices and react to changes in the market. Both the competing firms and the developers of the pricing algorithms can face cartel allegations, as the use of the same pricing algorithm by competitors in a market may result in price fixing.
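The mechanics can be illustrated with a minimal sketch (an illustration only, not drawn from any real case): two hypothetical ‘spoke’ firms delegate pricing to the same ‘hub’ rule, and, because the rule is identical, their prices move in lockstep without any direct contact between them.

```python
# Hypothetical "hub" pricing rule shared by competing "spoke" firms.
# All names and numbers here are invented for illustration.

def hub_price(market_demand: float, cost: float) -> float:
    """The hub applies the same markup rule for every client firm."""
    markup = 0.4 if market_demand > 1.0 else 0.2
    return round(cost * (1 + markup), 2)

class SpokeFirm:
    def __init__(self, name: str, cost: float):
        self.name, self.cost = name, cost

    def set_price(self, market_demand: float) -> float:
        # Each competitor independently calls the common hub...
        return hub_price(market_demand, self.cost)

firm_a, firm_b = SpokeFirm("A", 10.0), SpokeFirm("B", 10.0)
# ...and, with equal costs, both end up charging the identical price.
print(firm_a.set_price(1.2), firm_b.set_price(1.2))
```

The point of the sketch is that neither spoke ever communicates with the other; the alignment of prices comes entirely from the shared algorithm, which is what makes the hub-and-spoke structure legally significant.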

The Court of Justice of the European Union (CJEU) recently confirmed in the Eturas case that a collusive hub-and-spoke structure can exist in the digital world, facilitated by the use of online platforms. In Eturas, the CJEU considered concerted practices between travel agents through the use of an online platform. The alleged coordination between 30 travel agents in Lithuania had taken place via the commonly used E-Turas online travel booking system, which sent a message to the travel agents asking them to cap their discount rates for travel bookings.

The CJEU clearly stated that the travel agents who knew of the message could be presumed to have participated in a concerted practice. Nevertheless, the presumption could be rebutted if the travel agents had publicly distanced themselves from the message or had informed the competition authorities. The Eturas case demonstrates how online platforms can facilitate collusion amongst competitors, even without any direct contact. Competitors using third-party online algorithms will need to carefully monitor any correspondence and bear in mind that turning a blind eye to correspondence regarding price setting may not be sufficient to avoid cartel allegations. In addition, businesses must be aware of the steps that can be taken to rebut the presumption of unlawful cooperation, namely furnishing evidence of public distancing or informing the competition authorities of the conduct.


Machines colluding autonomously

The idea of algorithms colluding secretly, without the knowledge or help of their developers or of the businesses that deploy them, used to sound like science fiction. That possibility has now become real and is a growing concern for competition authorities. A recent European Commission sector inquiry into e-commerce found that two thirds of retailers who monitor competitors’ prices use automatic systems, and some of them also use the same software to adjust prices automatically.

In this scenario, competing firms unilaterally design algorithms to achieve a predetermined goal – usually profit maximisation – and the machines collude autonomously. Self-learning algorithms work by trial and error: they can experiment with different pricing strategies and adjust their prices in real time to those of rivals. This could easily lead to increased market transparency and tacit collusion (conscious parallelism). Admittedly, it would be difficult for algorithms to sustain collusion in markets that are highly competitive, with differentiated products, or with low barriers to entry. However, in markets that are already susceptible to collusion, it would be relatively easy for algorithms to move competitors towards a coordinated equilibrium.
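The trial-and-error mechanism described above can be sketched in a few lines. The following is a deliberately simplified toy model (all prices, the demand function, and the ‘rival mirrors the last price’ assumption are invented for illustration): an epsilon-greedy learner experiments with a handful of price levels, keeps the one that has earned the most profit on average, and reacts to the rival’s observed price.

```python
import random

# Toy trial-and-error pricer (illustrative assumptions throughout):
# the firm tries candidate prices, tracks average profit per price,
# and mostly repeats whichever price has paid off best so far.

PRICES = [8.0, 9.0, 10.0]   # hypothetical candidate price levels
COST = 5.0                  # hypothetical unit cost

def demand(own_price: float, rival_price: float) -> float:
    # Toy demand: buyers favour the cheaper seller.
    return max(0.0, 10.0 - own_price + 0.5 * rival_price)

def choose_price(avg_profit: dict, epsilon: float, rng: random.Random) -> float:
    # Explore a random price with probability epsilon; otherwise exploit.
    if rng.random() < epsilon:
        return rng.choice(PRICES)
    return max(PRICES, key=lambda p: avg_profit[p])

def learn(rounds: int = 2000, seed: int = 0) -> float:
    rng = random.Random(seed)
    avg_profit = {p: 0.0 for p in PRICES}
    counts = {p: 0 for p in PRICES}
    rival_price = 9.0
    for _ in range(rounds):
        p = choose_price(avg_profit, epsilon=0.1, rng=rng)
        reward = (p - COST) * demand(p, rival_price)
        counts[p] += 1
        avg_profit[p] += (reward - avg_profit[p]) / counts[p]  # running mean
        rival_price = p   # assume the rival mirrors the observed price
    return max(PRICES, key=lambda p: avg_profit[p])
```

Under these toy assumptions, because the rival mirrors the learner’s price, matching at the highest level yields the largest profit, so the learner tends to settle on the supra-competitive price without any human ever agreeing to it – which is precisely the ‘coordinated equilibrium’ concern raised above.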

In 2016, the UK’s Competition and Markets Authority (CMA) brought a case against two competing online sellers for using pricing algorithms to automatically adjust the prices of posters and frames they sold on Amazon in response to the live prices of the competitor. The CMA found that the parties operated a cartel, but there was clear evidence of an anti-competitive agreement made between the directors of the companies, so the ruling was not strictly based on collusion between the algorithms. Nevertheless, the case opened up the question of whether companies would be held liable for the anti-competitive conduct of their machines where the ‘meeting of minds’ took place at the machine level.



Competition Authorities are on track 

Margrethe Vestager, the current Executive Vice President of the European Commission for a Europe Fit for the Digital Age, commented at a conference of the German Cartel Office back in 2017, when she was Commissioner for Competition, that if ‘businesses decide to use an automated software, they will be held responsible for what it does. So they had better know how that system works.’ Vestager then stressed that competition enforcers should not let companies escape responsibility for collusion by hiding behind a computer programme. Vestager retains her previous role of overseeing competition policy, but her competencies have now been enlarged to include the ‘A Europe fit for the digital age’ portfolio. Essentially, Vestager is responsible for two of the most important EU portfolios – competition and digital – and has considerable resources with which to enforce competition legislation. This sends a very strong message to the market about how resolutely the EU is approaching the regulation of the digital economy.

Traditional enforcement practice may encounter challenges when presented with machine to machine collusion, wholly unaided by humans. In the face of such uncertainty, competition authorities have already begun to set out their stall:

  • The CMA commissioned an economic research paper on pricing algorithms, which was published in 2018. The paper described how such algorithms are used by firms and considered whether, and under what conditions, their use could lead to market collusion. In the case of autonomous machine to machine collusion, the CMA hinted that it would ‘appear difficult to categorise this as falling within’ the relevant prohibition and set out areas for further research.
  • In its Biennial Report 2018, the German Monopolies Commission (Monopolkommission), an advisory body to the German Federal Government and the legislature in the area of competition law and policy, suggested the implementation of EU-wide supplementary rules to prevent pricing algorithms from colluding. It also advocated a reversal of the burden of proof in competition proceedings with regard to the damage caused, meaning that companies would have to show that the algorithm they used did not contribute to the collusive outcome. Another suggestion is the extension of liability for infringements to software developers.
  • In July 2019, the Portuguese Autoridade da Concorrência (AdC) published its own study into the competitive impact of algorithms. The study found that 37% of the sample of companies active online used software to monitor the prices of their competitors, and 79% of those adjusted their prices in response to the output of the algorithm. The AdC warned that ‘companies are responsible for the algorithms they use and employing them with the aim of coordinating prices, or otherwise weaken competition in the market, is incompatible with Portuguese competition law’.
  • On 6 November 2019, the French Autorité de la concurrence and the German Bundeskartellamt published a joint study on algorithms and their implications for competition. The study focused on pricing algorithms and collusion, but also looked at potential linkage between the use of algorithms and market power, as well as the practical challenges that could be faced when investigating the use of algorithms. The study ultimately concludes that the current legal framework is sufficiently flexible to address competition law infringements caused by the use of algorithms. Nonetheless, the competition authorities stress that competition rules may need to be reconsidered in the future, since it is difficult to predict what types of cases competition enforcers will be faced with in the years to come. During the conference held in Paris to present the joint study, the President of the French authority asserted that companies should regard themselves as responsible for the algorithms they use. Thus, it was hinted that the French competition authority may lean towards a strict liability approach to enforcement if necessary, which clearly echoes the approach advocated by Vestager back in 2017.


Competition compliance by design

At the end of November 2019, European Commission President Ursula von der Leyen stated that Europe should lead the world in setting the rules and standards for the responsible use of data, including AI. Europe set the framework for the world in respect of data protection by introducing the GDPR; the same can now be done with regard to AI. The activity of the European Commission and of competition authorities across Europe makes it clear that the current framework for enforcement in the digital economy may need to be reconsidered to take account of the risk of machine-led collusion. What companies can do at present to protect themselves from potential antitrust liability is to ensure competition compliance by design: AI software should be programmed in a way that does not allow for collusion with rival software. Businesses using automated software will also need to monitor its actions closely once it has been implemented, since turning a blind eye seems unlikely to shield them from liability under EU competition law.
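One possible shape of such compliance by design is a guardrail layer around the pricing algorithm. The sketch below is purely illustrative (the thresholds, names, and rules are invented, and no such safeguard guarantees legal compliance): the firm applies an independent cost-based price floor and logs any update that closely tracks a rival’s price, so that human review, rather than a blind eye, is the default.

```python
import logging

# Hypothetical compliance guardrails around an automated pricer.
# Thresholds below are illustrative, not regulatory standards.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pricing-audit")

MIN_MARGIN = 1.05        # independent floor: never below cost + 5%
TRACKING_BAND = 0.05     # flag prices within 5% of the rival's price

def guarded_price(proposed: float, cost: float, rival_price: float) -> float:
    """Apply compliance guardrails to a price proposed by an algorithm."""
    floor = cost * MIN_MARGIN
    price = max(proposed, floor)  # enforce the cost-based floor
    if abs(price - rival_price) <= TRACKING_BAND * rival_price:
        # Close tracking of a rival's price is flagged for human review.
        log.warning("price %.2f within 5%% of rival %.2f - review", price, rival_price)
    # Every decision is logged so the firm can audit its own algorithm.
    log.info("proposed=%.2f applied=%.2f rival=%.2f", proposed, price, rival_price)
    return price
```

The design choice here is that the guardrail is separate from, and cannot be overridden by, the learning component – the pricer proposes, the compliance layer disposes and records, which also gives the firm the audit trail that the monitoring obligations discussed above would seem to require.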




Nevena Radlova