The emergence of sophisticated commercial applications of AI raises the question of whether a business will be liable when it deploys AI capable of making independent decisions on its behalf and those decisions breach competition law. This article considers whether the UK and EU competition regimes, as they stand, are ready to tackle an AI decision maker.
Why AI matters
The development of ‘machine learning’, complex algorithms and systems capable of processing vast quantities of data has led to innovative commercial applications for AI. One such application that has received attention from competition authorities is ‘algorithmic pricing’. That is, the automated re-calibration of prices based on internal or external factors, such as supply and demand variables, competitors’ prices or other market data.
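By way of illustration, the logic of such a re-pricing tool can be sketched in a few lines. The function, the margin threshold and the undercutting rule below are purely hypothetical, assuming a feed of competitors’ listed prices:

```python
def reprice(own_cost: float, competitor_prices: list[float],
            margin_floor: float = 0.05, undercut: float = 0.01) -> float:
    """Recalibrate our price from observed competitor prices.

    Hypothetical rule: undercut the cheapest rival by a small step,
    but never price below cost plus a minimum margin.
    """
    floor = own_cost * (1 + margin_floor)
    if not competitor_prices:
        return round(floor, 2)
    target = min(competitor_prices) - undercut
    return round(max(target, floor), 2)

# Example: cost 10.00, rivals listed at 12.49 and 11.99
print(reprice(10.00, [12.49, 11.99]))  # -> 11.98
```

Run on a schedule against scraped or API-supplied market data, even a rule this simple produces the near-instant price reactions that have drawn the attention of competition authorities.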
The European Commission’s E-commerce Sector Inquiry (2017) found that, of the online retailers surveyed who actively monitored competitor prices, 67% used automatic software programs to track and report on competitors’ prices. Most of those businesses subsequently adjusted their prices manually. However, a significant number also relied on software to implement automatic adjustments.
A number of other international authorities, including the UK Competition and Markets Authority, have recognised the potential of pricing algorithms to facilitate coordinated behaviour.
‘Algorithms can provide a very effective way of almost instantly coordinating behaviour, possibly in an anticompetitive way.’ - David Currie, Chairman, UK Competition and Markets Authority
It is not difficult to envisage such software evolving into fully-fledged AI capable of interacting with a competitor’s AI when setting prices.
Readiness to investigate
The first issue is whether competition authorities are ready to investigate businesses using AI technologies. Adapting to the latest technology or medium for business communications has, to date, not presented an enduring difficulty for competition authorities. Generally, it has required a broadening of investigative tools and expertise, for example, through the recruitment of technology specialists. Indeed, the UK Competition and Markets Authority recently set up a new data unit, which will explore, inter alia, how firms use algorithms in their business models and the implications for consumers.
The concern that the use of AI will itself make cartel detection harder is probably overstated. The task of detecting secret cartels has long plagued enforcers, principally because they are secret. Various regimes have countered this successfully through the development of ‘first-to-report’ leniency programmes, which reward businesses that expose their cartel collaborators. Such programmes ought to be similarly effective at breaking down cartels using AI tools. One can even imagine a scenario in which it is the AI that, based on a leniency algorithm, suggests that a business blow the whistle on a cartel hitherto unknown to any human.
If the long-term result is that tacit collusion becomes more likely (and harder to police) in a world of super-intelligent AI decision makers, competition authorities may, rather than take enforcement action, instead look to strengthen their merger control regimes as a preventative measure, i.e. to avoid allowing market conditions in which AI coordination could thrive.
Readiness to sanction infringements using AI
A longstanding issue in competition law enforcement relates to who should be ultimately liable for anti-competitive conduct. Should a company be responsible for the conduct of its employees or directors? Should a parent company bear responsibility for the anti-competitive conduct of its subsidiary? Should the partners to an autonomous joint venture be liable for infringements by the joint venture business?
When attributing liability in the case of AI decision makers, there are two scenarios to consider. The first is where AI is merely used to implement the parties’ real-world agreement. The second is where the infringement is committed by the AI itself, without the consent or knowledge of a business’ human employees.
When AI is used to implement a secret agreement
Where businesses simply implement or conceal their ‘offline’ anti-competitive agreement using AI, the UK and EU competition authorities have clearly noted that this will be treated no differently to any other agreement.
In a 2016 UK case, the CMA found that online sellers of posters and frames had used automated re-pricing software to implement an agreement not to undercut each other’s online marketplace prices. This included jointly calibrating their re-pricing software to monitor each other’s prices and respond accordingly.
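The calibration described can be sketched as follows; the function and parameter names are hypothetical and the logic is a simplified illustration, not the parties’ actual software:

```python
def agreed_reprice(desired_price: float, rival_listed_price: float) -> float:
    """Hypothetical sketch of a jointly calibrated rule: each seller
    monitors the other's listed price and posts its own desired price,
    but never undercuts the rival (the essence of a 'no undercutting'
    agreement)."""
    return max(desired_price, rival_listed_price)

# A seller wishing to charge 9.50 while the rival lists 10.00
# instead posts 10.00, eliminating price competition between them.
print(agreed_reprice(9.50, 10.00))  # -> 10.0
```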
There was considerable fanfare over this case given the novelty of the software, but it is a largely unremarkable price-fixing case that originated (like many other investigations) from a leniency application. The medium in which a cartel is implemented has never really mattered.
‘Whether it is phone calls, text messages, algorithms or Morse code, the underlying legal rule is the same – agreements to set prices among competitors are always unlawful.’ - Maureen K. Ohlhausen, Acting Chairman, U.S. FTC (2017)
When rogue AI infringes competition law
The second scenario is a more serious test of the regime’s readiness. Specifically, can the competition regime enforce competition law against an AI decision maker that has acted entirely independently of the business using it? While the question is somewhat speculative given the present state of the technology, it is an issue firmly in the spotlight of competition authorities. Several competition authorities accept that algorithms, in the absence of human intervention, may themselves learn that collusion leads to improved business outcomes. Remarks made by leading figures already suggest a possible divergence between the approaches such authorities may take.
The European Commission
On the issue of whether EU competition law is fit for purpose in an AI world, views expressed by senior European Commission figures suggest the regime is perfectly capable of addressing anti-competitive conduct carried out by AI.
When it comes to attributing liability, the European Commission adopts the hardest line, treating the AI decision maker no differently from an errant human employee. In short, the buck stops with the business concerned. The expectation set by the Commission is that businesses must pre-empt the possibility of a rogue AI decision maker and take steps to curb its freedom by design, in much the same way that businesses are expected to roll out competition law compliance training to prevent anti-competitive conduct by their staff.
‘…businesses also need to know that when they decide to use an automated system, they will be held responsible for what it does. So they had better know how that system works.’ - Margrethe Vestager, European Commissioner for Competition (2017)
In contrast to the European Commission, when considering the possibility of machine-led coordination without any human involvement, the Chairman of the CMA, David Currie, contemplates a scenario in which the AI is intelligent enough to subvert the compliance protocols intended to limit its behaviour, i.e. an AI truly on a frolic of its own. In such circumstances, a valid question arises over whether the business should still be vicariously liable for its conduct. And if not the business, then who?
In any event, this conveys a more reserved position on the attribution of liability under the UK regime. One inference is that, should technology advance to such a state, the regime may require reform in order to resolve the issue of liability – akin to the reforms being implemented to deal with the attribution of liability for car accidents involving the next generation of ‘driverless’ cars.
As recognised by Currie, the ace up the CMA’s sleeve is its Markets regime, under which the CMA can investigate entire markets (or even an issue affecting multiple markets) in order to assess whether the use of AI in one or more markets is harming competition and requires intervention. The Markets regime allows the CMA to understand the competition implications of AI in a holistic way and, importantly, to skirt the problem of enforcing competition law against a computer.