Even so, most survey respondents still view AI as a future concern. When asked how important AI is for their organisation today, fewer than a third describe it as ‘highly strategically significant’. Content providers are the most likely to identify it as such (43%), and respondents working in energy and infrastructure the least likely (17%).
Respondents expect the strategic significance of AI to increase rapidly in the coming years: 58% expect AI to be highly significant in three years’ time, including 68% of platforms and online intermediaries (see Figure 4).
But the time to grasp the AI opportunity is now, says Charles Kerrigan, a partner in CMS’ Finance team. The more familiar consumers and business buyers become with AI, the more they will expect the companies they buy from to adopt the technology, he adds. “We’re in a time when all customers use AI, so their expectations in dealing with businesses will be raised.”
AI and the future of financial regulation
The banking and finance sector expects AI to be a gamechanger within just a few years: 55% of in-house counsel from the industry predict that it will be ‘highly strategically significant’ in three years’ time.
“This is transformational technology for most enterprise business models,” says Kerrigan. “Smart adopters will be the big winners.”
In fact, regulated firms already have thousands of AI deployments, he adds, with applications in HR, risk management, fraud prevention, trading and more.
Not only will this change the business, Kerrigan expects, it will change the way in which it is regulated. “Just as firms can’t use analogue methods to check compliance with digital systems, nor can regulators,” he explains.
As a result, ‘regtech’ – technology that supports or measures regulatory compliance – and ‘suptech’ – technology that enables supervisors to check compliance – will become increasingly vital to the financial services sector, Kerrigan predicts. “Digital economies need digital regulators.”
Leading the way in AI regulation
As excitement about the potential of AI has gathered pace in the last decade, so too have calls for its regulation.
The EU has made the most progress towards regulating AI. Its AI Act is among the first concerted efforts to build a comprehensive framework for managing the risks and maximising the opportunities that AI presents. In November 2023, both the UK and the US also announced new institutes to lead their efforts on making AI innovation safe for all.
The EU AI Act categorises AI applications according to the risk they pose. Applications that present an “unacceptable risk” will be banned outright. Those deemed high risk will be subject to stricter controls than low-risk AI tools. Businesses will need to understand what risks their AI applications pose, requiring internal expertise that many organisations may currently lack.
Of the four areas of digital regulation included in our survey, respondents see the most widespread benefits in AI regulation, with 41% anticipating ‘significant’ and 53% ‘moderate’ opportunities. However, respondents also show some fear of overregulation: while only 14% see AI regulation as presenting ‘significant’ commercial threats, 66% envision moderate threats (see Figure 5).
The most widely anticipated opportunity from AI regulation is ‘improved resilience and security of technology systems’, which 45% of respondents rank in their top three expected opportunities. This suggests that, by imposing risk-based controls on AI applications, the EU’s AI Act may reassure businesses that their systems are stable and secure.
Another widely anticipated opportunity from AI regulation is the increased ability to compete in digital markets, which is ranked in the top three by 41% of respondents. This implies that by providing legal certainty, AI regulation will give businesses the confidence to innovate.
AI Act: the costs of compliance
Still, AI regulation is not without its drawbacks, respondents say. The chief commercial threat posed by AI regulation, the survey suggests, is a reduced ability to compete compared with Big Tech (this is ranked in the top three threats by 42% of respondents). Smaller organisations may fear that larger rivals are better equipped to meet the regulatory requirements of the AI Act, although it is the risks posed by an AI application that determine how it will be regulated, not the size of the organisation behind it.
Meanwhile, the majority of respondents (81%) expect the AI Act to have legal or compliance implications for their companies, including 28% who expect these to be significant. The most widely anticipated legal implications are an increased complexity of contracts (69% rank this in their top three) and increased legal costs to ensure compliance (62%) (see Figure 6).
But while compliance may incur some costs, it is usually cheaper than litigation in the event of a dispute. To date, there have been relatively few legal claims arising from AI technologies, but claims of this nature are widely expected to increase considerably over the next few years, warns Lee Gluyas, a partner at CMS who specialises in IT-related disputes.
Like all legal claims, these will be heavily dependent on the facts. Many will be complex, especially in circumstances where several factors contribute to the dispute. For example, there may be contention around whether the failure of an AI system is the fault of the producer, the user or a party that supplied or input the data, or a combination of those parties.
This added complexity will no doubt increase the legal costs of managing disputes, so businesses should take steps to understand the technologies they are deploying and the associated commercial, legal and financial implications.
“Our experience suggests that whilst businesses recognise the importance of AI in a competitive market, many are yet to understand the risks involved, and in particular the increased risk of disputes,” Gluyas says. “We recommend that businesses adopting AI technologies consider carefully the legal and commercial risks.”
By being proactive in their engagement with AI regulation, businesses can greatly reduce the commercial and legal risks of AI adoption. If they haven’t already done so, in-house legal should be drawing up plans for compliant AI adoption, so their business has the confidence to innovate ahead of its competitors.