AI and Funds
AI is generally seen as bringing many benefits, including streamlining processes and services, reducing costs and creating new jobs and opportunities. There are many areas, including trading, investment advice, portfolio and fund management, reporting and supporting functions, where AI tools can potentially be used to the advantage of the funds sector.
What is AI?
A key challenge for legislators and policy-makers is to determine what constitutes AI for legal and policy purposes and how to define it.
In general terms, AI involves systems that are both adaptive (i.e. they change over time and can “learn”) and autonomous (i.e. they make decisions without human intervention).
The Organisation for Economic Co-operation and Development (OECD) defines AI as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The EU AI Act similarly defines an AI system as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
In contrast to this approach, the UK Government has (to date) been reluctant to introduce a legal definition of AI, on the basis that such a move could stifle innovation by being overly prescriptive and by capturing some systems but not others. Any such definition also risks quickly going out of date as the technology continues to develop rapidly.
It is also important to distinguish AI technologies from traditional 'automation', the latter term referring only to techniques that make a process run automatically, without any aspect of machine learning (see definition below). In this sense, tokenisation is closer to automation than to AI.
Types of AI
These include:
- Machine Learning (“ML”), where example data is used to identify patterns and relationships in order to make decisions or predictions about new data (see the illustrative sketch after this list).
- Natural Language Processing (“NLP”), where computers aim to process, analyse and respond to text and speech data in a manner similar to humans.
- Generative artificial intelligence (“Generative AI”), where large language models (“LLMs”), such as ChatGPT, use ML and NLP and are pre-trained on large sets of data to respond to requests and generate human-like text and images.
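To make the machine-learning concept above more concrete, the following is a minimal, illustrative sketch only. It assumes the open-source Python library scikit-learn and uses invented example data to show a model being fitted to historical examples (“learning”) and then making a prediction about new, unseen data.

```python
# Illustrative sketch only: a minimal machine-learning example showing how a
# model "learns" patterns from example data and applies them to new data.
# The features, labels and values below are invented for demonstration.
from sklearn.ensemble import RandomForestClassifier

# Example (training) data: each row is [volatility, average monthly return];
# each label records whether the instrument was later classified as higher risk.
X_train = [
    [0.05, 0.02],
    [0.30, -0.01],
    [0.10, 0.03],
    [0.45, -0.05],
]
y_train = [0, 1, 0, 1]  # 0 = lower risk, 1 = higher risk

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)  # the model infers patterns from the examples

# A new, unseen data point: the model generalises from what it has learned.
print(model.predict([[0.35, -0.02]]))  # e.g. [1] - predicted higher risk
```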
Two selected use cases of AI
The following are two simple examples of potential AI use cases in financial services:
A. Robo-advisers:
This use case involves deploying an “intelligent” robot to provide investment advice to clients directly, or to inform the investment decision-making process of human managers based on the analysis of data. Some forms of automated advice already exist today, including robo-advisers that provide simplified, generic forms of advice to retail clients. Some firms, including hedge funds and market makers, use in-house robo-advisers to guide or lead their trading strategies. Robots may not be 100% reliable in making investment decisions, and so robo-advisers must be subject to proper risk controls, both in the development and training phase, and in monitoring the outputs.
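Purely by way of illustration, the simplified sketch below shows the kind of rules-based allocation logic a basic robo-adviser might apply when mapping a client risk score to a model portfolio. The risk bands and portfolio weights are invented for this example; real robo-advisers rely on far richer data and models and remain subject to the regulatory requirements and risk controls mentioned above.

```python
# Illustrative sketch only: a highly simplified robo-adviser allocation rule.
# The risk bands and portfolio weights are invented for demonstration purposes;
# a real robo-adviser would use far richer data, models and risk controls.

def recommend_allocation(risk_score: int) -> dict:
    """Map a client risk score (1 = cautious, 10 = adventurous) to a model portfolio."""
    if not 1 <= risk_score <= 10:
        raise ValueError("risk_score must be between 1 and 10")
    if risk_score <= 3:
        return {"bonds": 0.70, "equities": 0.20, "cash": 0.10}
    if risk_score <= 7:
        return {"bonds": 0.40, "equities": 0.50, "cash": 0.10}
    return {"bonds": 0.15, "equities": 0.80, "cash": 0.05}

print(recommend_allocation(risk_score=6))
# {'bonds': 0.4, 'equities': 0.5, 'cash': 0.1}
```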
B. AML/KYC processes:
This use case involves deploying AI systems to enhance AML/KYC processes for financial institutions. AI models can be trained on an extensive data history, thereby enabling the AI system to better anticipate cases of money laundering by analysing transaction patterns and behaviour and identifying those that may indicate fraudulent activities. The self-learning capability of AI technology also allows these systems to evolve and adapt over time to new techniques of financial crime. Such AI systems can thus provide AML-regulated entities with a much more robust tool to meet their obligations. Today, some AML/KYC processes are already automated - for example, companies can use tools to scan sanctions lists or the internet for references to individuals or entities. However, AI tools can go further, by providing a human, or greater than human, level of nuance when interpreting hits across vast volumes of financial data, resulting in a reduction in false positives and better identification of issues that require further, human investigation. In other words, AI tools can make the work of humans easier, allowing them to spend more time on the most complex and critical cases.
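As an illustration only, the sketch below uses an unsupervised anomaly-detection model (IsolationForest from the open-source scikit-learn library) on invented transaction features to show how unusual patterns might be flagged for human review rather than acted on automatically. In practice, flagged transactions would feed into a human-led investigation workflow, consistent with the point above that AI should make the work of humans easier rather than replace it.

```python
# Illustrative sketch only: flagging unusual transaction patterns for human
# review with an unsupervised anomaly-detection model. The transaction
# features (amount, hour of day, transactions in the last 24 hours) are
# invented examples, not real AML/KYC data.
from sklearn.ensemble import IsolationForest

# Historical transactions used to learn what "typical" behaviour looks like.
history = [
    [120.0, 10, 2],
    [80.0, 14, 1],
    [200.0, 11, 3],
    [95.0, 16, 2],
    [150.0, 9, 1],
]

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(history)

# New transactions: a prediction of -1 marks an anomaly to escalate for
# further, human investigation; 1 marks behaviour consistent with history.
new_transactions = [
    [110.0, 13, 2],    # looks routine
    [9500.0, 3, 15],   # unusually large, at 3am, with high frequency
]
print(model.predict(new_transactions))  # e.g. [ 1 -1]
```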
For more examples tailored to the financial services sector, see the recent ESMA guidance to MiFID firms using AI in the provision of investment services and the Luxembourg CSSF white paper.
What are the main considerations?
AI relies heavily on data. It can be quick and versatile and produce effective outputs, but it can also lack understanding, and its outputs can be incomplete or biased as a result of insufficient, incomplete or missing training data. There are a number of actions and issues to be addressed both to deploy AI and to manage its unique risks sufficiently.
AI is not itself a legal person capable of suing and being sued. There are many legal implications to consider, such as who owns, provides or deploys an AI system (and, with regard to the EU AI Act, how to classify such AI systems), who is liable for its actions, how contracts such as smart contracts are made, tort, copyright, and rights to use data for training models and generating outputs. Beyond the EU AI Act, the legal position, regulatory principles, guidance and standards are still evolving, although regulators are increasingly publishing papers setting out their thinking and emerging expectations.
Fund managers and others in the sector may be tempted to use AI to improve their investment decisions and/or streamline their administrative processes. To do this, some may be inclined to develop their own in-house AI models, in which case they will need to ensure that these models comply with the applicable laws and regulations (e.g. the risk management rules and guidance under the applicable regulatory framework) and are correctly configured to ensure an adequate level of reliability.
Those who do not have the internal resources to develop their own AI models, or where it is not efficient or advantageous to do so, will need to turn to AI solution providers. It will still be essential for those turning to such providers to include a risk analysis of these new technologies in their processes.
Evolution of the ethical and legal framework
The framework is still evolving and combines multilateral and voluntary initiatives, legal regulation, consultations and sandboxes.
The EU AI Act
Introduction and scope
The EU AI Act is a pioneering piece of legislation with a risk-based approach. It seeks to ensure the responsible and ethical development and deployment of AI that respects human values and includes safeguards against harmful effects such as bias, discrimination and breaches of privacy.
It sets out legal requirements for AI in the EU, and its provisions are extensive, ranging from fundamental requirements to the establishment of an EU legal infrastructure, codes of practice and standards. There are various exemptions and sanctions, including fines. Please see further below in this briefing and our briefing Looking ahead to the EU AI Act (cms.law) for more detail.
The EU AI Act is expected to be published in the Official Journal of the EU on 12 July 2024 and is expected to come into force on 1 August 2024. Generally, the EU AI Act applies 24 months after it comes into force but the requirements relating to prohibited AI practices and AI literacy will apply after 6 months and for certain general-purpose AI models after 12 months.
It uses a specific definition of “AI systems”, as mentioned earlier in this briefing, and classifies AI in different ways depending on the use case. The EU AI Act also regulates “general-purpose AI models”, which are AI models, including those trained with a large amount of data using self-supervision at scale, that display significant generality and are capable of competently performing a wide range of distinct tasks, regardless of the way they are placed on the market, and that can be integrated into a variety of downstream systems or applications.
The EU AI Act will apply to different stakeholders in the supply chain. Like other EU law, it can apply whether or not those stakeholders are established or located in the EU. Two of the key stakeholder roles that will be regulated by the EU AI Act are “providers” and “deployers”:
- a “provider” is a natural or legal person that develops an AI system or general-purpose AI model (or has one developed on its behalf) and places it on the market, or puts an AI system into service, under its name or trademark in the EU;
- a “deployer” is a natural or legal person established or located in the EU that uses an AI system under its authority, other than in a non-professional capacity (this would include, for example, a financial institution deploying a system to evaluate the eligibility of individuals to receive credit, but it would not include the institution’s employees acting as end users of such a system).
The EU AI Act will also regulate providers and deployers that are established or located in a third country (i.e. not established or located in the EU) if the output produced by the applicable AI system is used in the EU.
Where providers are located in a third country, before making any high-risk AI systems (see further below) or general-purpose AI models available in the EU, they must appoint an “authorised representative” that is established or located in the EU to carry out delegated tasks. Authorised representatives will be regulated by the EU AI Act.
There are also provisions relating to an “importer”, being a person located or established in the EU that places on the market an AI system bearing the name or trademark of a person established outside the EU, and to a “distributor” (not being a provider or importer) that makes an AI system available in the EU.
Deployers, importers or distributors will be considered to be providers of high-risk AI systems (and become subject to the provider obligations under the EU AI Act) if they do any of the following:
- put their own name or trademark on a high-risk AI system that is already available in the EU,
- make a substantial modification to such a high-risk AI system such that it remains a high-risk AI system, or
- modify the intended purpose of an AI system that is not classified as high-risk in such a way that it becomes a high-risk AI system.
Determining roles under the EU AI Act will be an area for careful analysis by the different stakeholders in the supply chain because providers have substantially more obligations under the EU AI Act.
Prohibited AI Practices
The EU AI Act prohibits certain AI practices. These include the use of AI systems that deploy subliminal techniques to manipulate or materially distort people’s behaviour, causing them to take decisions they would not otherwise have taken, or the use of systems that exploit individuals’ vulnerabilities to distort their behaviour, in each case in a way that is likely to cause them or others significant harm. There are also prohibitions on profiling to predict the risk of committing criminal offences, on creating or expanding facial recognition databases through untargeted scraping of facial images, on inferring emotions in workplace or education settings, and on certain uses of biometric systems to categorise people.
High-risk AI systems
An AI system may be classified as “high risk” where it is:
- a safety component of a product, or is itself a product, covered by EU harmonisation legislation; or
- one of the AI systems listed in the EU AI Act relating to:
- biometrics;
- critical infrastructure;
- education and vocational training;
- employment, work management and access to self-employment;
- access to and enjoyment of essential public services and benefits (e.g. healthcare and life and health insurance);
- law enforcement;
- migration, asylum and border control management; and
- administration of justice and democratic processes.
High-risk AI systems will have to meet certain requirements under the EU AI Act. Providers will be responsible for ensuring those are met. These requirements include: risk management systems (including in relation to testing); data quality and governance; technical documentation to be provided to national competent authorities; record keeping of events (logs); information required to be given to deployers; human oversight; and accuracy, robustness and cybersecurity. Providers that are financial institutions will also be required to meet internal governance, arrangement and process requirements under EU financial services law.
There are also obligations for deployers, importers and distributors of high-risk AI systems. Whilst these are significant, they are not as extensive as for providers. Deployers, for example, will have obligations requiring them to implement governance measures for their use of high-risk AI systems and to monitor and report on risks associated with their use.
Other AI systems and general-purpose AI models
There will be transparency requirements for providers and deployers of certain types of other AI systems, including systems that are intended to interact directly with natural persons or that generate synthetic audio, image, video or text content (e.g. ‘deep fakes’).
There are additional requirements for general-purpose AI models, including for those models which are classified as being general-purpose AI models with systemic risk.
Innovation and Framework
Measures in support of innovation include national regulatory sandboxes, the establishment of an EU AI Office, a European Artificial Intelligence Board, national AI bodies and an EU database for high-risk AI systems.
United Kingdom
The UK Government has published the outcome of its consultation, “A pro-innovation approach to AI regulation”. The UK has been developing policy rather than rushing to introduce centralised, general AI legislation, and there will be a pause pending the General Election. To date, the UK Government has pursued a principles-based framework for existing regulators to interpret and apply within their sector-specific domains. This involves the Government issuing voluntary, non-binding cross-sector principles to the UK regulators: (1) safety, security and robustness of AI systems; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress (the “Principles”).
The area is under continuing review, including whether “highly capable general-purpose AI” will require its own, targeted approach; as discussed above, the EU has already decided to regulate general-purpose AI models. Some legislative action is likely to be required eventually.
Multilateral Initiatives
There is a long list of multilateral initiatives looking at the development of AI and associated safeguards and human rights including:
| Name / Organisation | Subject and date |
| --- | --- |
| White House, Biden Administration | Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, 30 October 2023 |
| First AI Safety Summit | Bletchley Park Declaration, November 2023 |
| G7 | Hiroshima AI Process Comprehensive Policy Framework, December 2023 |
| G20 | New Delhi Leaders’ Declaration, December 2023 |
| United Nations (UN) and associated agencies | Various, including: AI Advisory Body Interim Report, “Governing AI for Humanity”, December 2023; first UN General Assembly (UNGA) global resolution on artificial intelligence, urging states to protect human rights and personal data and to monitor AI for potential harms, 21 March 2024 |
| Organisation for Economic Co-operation and Development (OECD) | OECD AI Principles (2019), due to be revised in 2024 |
| Council of Europe | Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, adopted 17 May 2024 |
Conclusion
It is undeniable that AI offers an unparalleled range of opportunities for the funds industry. However, these opportunities are still far from being fully understood, and while it is well worth exploring them, it is equally important to identify and mitigate the risks before implementing AI.
If you want advice on how AI may apply to you, please contact us in the usual way, any of the authors below or your funds contact. There are also a number of specialists across CMS countries dealing with both related regulatory and technology-specific topics.