
Publications

Discover thought leadership and legal insights from our experts across CMS. Our Expert Guides, written by CMS lawyers from across the jurisdictions where we operate, provide in-depth legal research and insights that can be read both online and offline. You can also find Law-Now articles offering focused legal analysis, commentary and insights to help you anticipate future challenges, and much more.



15/03/2024
Next steps
Following the release of the pre-final text of the AI Act and its adoption by the European Parliament's Internal Market and Civil Liberties committees in February 2024, the torch was passed to the European Parliament plenary. The vote took place in the European Parliament on 13 March 2024, and approval was given by a large majority. The text is now being revised by the European Parliament's legal linguists, after which the final text will be formally approved once again by the European Parliament; this is expected to take place on 10/11 April. The final text will then have to be approved by the Council of the European Union. No firm date has been set, but it can be assumed that this will happen soon after the European Parliament's approval of the final text, most likely at the end of April or the beginning of May 2024.

The AI Act will enter into force on the 20th day after its publication in the EU Official Journal and will be applicable 24 months later. However, some specific provisions have different application dates: the prohibitions on AI will apply 6 months after entry into force, while general purpose AI models already on the market are given a compliance deadline of 12 months.

The AI Office was established on 21 February 2024, and the European Commission will oversee the issuance of at least 20 delegated acts. The AI Act's implementation will be supported by an expert group formed to advise and assist the European Commission in avoiding overlaps with other EU regulations. Meanwhile, Member States must appoint at least one notifying authority and one market surveillance authority and communicate to the European Commission the identity of the competent authorities and the single point of contact.

The next regulatory step appears to be focused on AI liability. On 14 December 2023, EU policymakers reached a political agreement on the amendment of the Product Liability Directive. This proposal aims to accommodate technological developments, notably covering digital products like software, including AI. The next proposal in line in the AI package is the Directive on the adaptation/harmonisation of the rules on non-contractual civil liability to artificial intelligence (AI Liability Directive). Addressing issues of causality and fault related to AI systems, this directive proposal is intended to ensure that claimants can enforce appropriate remedies when they suffer damage in fault-based scenarios. The draft was published on 28 September 2022 and is still awaiting consideration by the European Parliament and the Council of the European Union. Once adopted, EU Member States will be obliged to transpose its provisions into national law, likely within a two-year timeframe.

The enactment of the AI Act represents a pivotal step towards fostering a regulatory landscape, not only in the EU but worldwide, that balances innovation, trust and accountability, ensuring that AI serves as a driver of progress while safeguarding fundamental rights and societal values.
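As a back-of-the-envelope aid, the minimal sketch below computes the staggered application dates described above from a publication date in the EU Official Journal. The publication date used here is purely hypothetical, as the actual date was not yet known at the time of writing.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day if needed."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    # Clamp to the last valid day of the target month (e.g. 31 Jan + 1 month -> 28/29 Feb).
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

# Hypothetical Official Journal publication date -- illustrative only.
publication = date(2024, 6, 1)

# Entry into force: the 20th day after publication.
entry_into_force = publication + timedelta(days=20)

print("Entry into force:         ", entry_into_force)
print("Prohibitions apply:       ", add_months(entry_into_force, 6))   # 6 months later
print("GPAI models on the market:", add_months(entry_into_force, 12))  # 12-month deadline
print("General application:      ", add_months(entry_into_force, 24))  # most other provisions
```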
15/03/2024
Codes of conduct, confidentiality and penalties, delegation of power and...
Codes of conduct (Currently Title IX, Art. 69)
In order to foster ethical and reliable AI systems and to increase AI literacy among those involved in the development, operation and use of AI, the new AI Act mandates the AI Office and the Member States to promote the development of codes of conduct for non-high-risk AI systems. These codes of conduct, which should take into account available technical solutions and industry best practices, would promote voluntary compliance with some or all of the mandatory requirements that apply to high-risk AI systems. Such voluntary guidelines should be consistent with EU values and fundamental rights and address issues such as transparency, accountability, fairness, privacy and data governance, and human oversight. Furthermore, to be effective, such codes of conduct should be based on clear objectives and on key performance indicators to measure the achievement of those objectives.

Codes of conduct may be developed by individual AI system providers or deployers, or by organisations representing them, and should be developed in an inclusive manner, involving relevant stakeholders such as business and civil society organisations and academia. The European Commission will assess the impact and effectiveness of the codes of conduct within two years of the AI Act entering into application, and every three years thereafter. The aim is to encourage the application of the requirements for high-risk AI systems to non-high-risk AI systems, and possibly of other additional requirements for such AI systems (including in relation to environmental sustainability).
14/03/2024
Governance and post-market monitoring, information sharing, market surveillance
Governance (Currently Title VI, Art. 55b-59)
The AI Act establishes a governance framework under Title VI, with the aim of coordinating and supporting its application at national level, as well as building capabilities at Union level and integrating stakeholders in the field of artificial intelligence. The measures related to governance will apply from 12 months after the entry into force of the AI Act.

To develop Union expertise and capabilities, an AI Office is established within the Commission, with a strong link to the scientific community to support its work, which includes the issuance of guidance. Its establishment should not affect the powers and competences of national competent authorities, or of the bodies, offices and agencies of the Union, in the supervision of AI systems.

The newly proposed AI governance structure also includes the establishment of the European AI Board (AI Board), composed of one representative per Member State, designated for a period of three years. Its list of tasks has been extended and includes collecting and sharing technical and regulatory expertise and best practices among the Member States, contributing to their harmonisation, and assisting the AI Office in the establishment and development of regulatory sandboxes with national authorities. Upon request of the Commission, the AI Board will issue recommendations and written opinions on any matter related to the implementation of the AI Act. The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related to market surveillance and notified bodies.

The final text of the AI Act also introduces two new advisory bodies. An advisory forum (Art. 58a) will be established to provide stakeholder input to the European Commission and the AI Board, preparing opinions, recommendations and written contributions. A scientific panel of independent experts (Art. 58b), selected by the European Commission, will provide technical advice and input to the AI Office and the market surveillance authorities; it will also be able to alert the AI Office of possible systemic risks at Union level. Member States may call upon experts of the scientific panel to support their enforcement activities under the AI Act and may be required to pay fees for the advice and support provided by the experts.

Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of the AI Act. Member States shall ensure that the national competent authorities are provided with adequate technical, financial and human resources, and with the infrastructure, to fulfil their tasks effectively under the regulation, and that they maintain an adequate level of cybersecurity. One market surveillance authority shall also be appointed by each Member State to act as a single point of contact.
14/03/2024
EU Parliament positions itself in favor of a strong Green Claims Directive
March 2024
13/03/2024
General purpose AI models and measures in support of innovation
General purpose AI models (Currently Title VIIIA, Art. 52a-52e)
The AI Act is founded on a risk-based approach. This regulation, intended to be durable, was initially tied not to the characteristics of any particular model or system but to the risk associated with its intended use. This was the approach when the proposal for the AI Act was drafted and adopted by the European Commission on 22 April 2021, and when the proposal was discussed at the Council of the European Union on 6 December 2022. However, after the extraordinary global success of generative AI tools in the months following the Commission's proposal, the idea of regulating AI by focusing only on its intended use came to seem insufficient. In the 14 June 2023 draft, the concept of "foundation models" (much broader than generative AI) was therefore introduced, with associated regulation. During the negotiations in December 2023, additional proposals were introduced regarding "very capable foundation models" and "general purpose AI systems built on foundation models and used at scale". In the final version of the AI Act there is no reference to "foundation models"; instead, the concept of "general purpose AI models and systems" was adopted.

General purpose AI models (Arts. 52a to 52e) are distinguished from general purpose AI systems (Arts. 28 and 63a). General purpose AI systems are based on general purpose AI models: "when a general purpose AI model is integrated into or forms part of an AI system, this system should be considered a general purpose AI system" if it has the capability to serve a variety of purposes (Recital 60d). And, of course, general purpose AI models are themselves the result of the operation of the AI systems that created them.

"General purpose AI model" is defined in Article 3.44b as "an AI model (…) that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications". The definition lacks quality (a model is "general purpose" if it "displays generality"; Recital 60b helps clarify the concept, stating that "generality" means the use of at least a billion parameters, where the model is trained on "a large amount of data using self-supervision at scale") and has a remarkable capacity for expansion. Large generative AI models are an example of general purpose AI models (Recital 60c).

The obligations imposed on providers of general purpose AI models are limited, provided the models do not carry systemic risk. These obligations include (Art. 52c): (i) drawing up and keeping up to date technical documentation (as described in Annex IXa), to be made available to the national competent authorities as well as to providers of AI systems who intend to integrate the general purpose AI model into their own AI systems; and (ii) taking certain measures to respect EU copyright legislation, namely putting in place a policy to identify reservations of rights and making publicly available a sufficiently detailed summary of the content used. Furthermore, providers should have an authorised representative in the EU (Art. 52ca).

The most important obligations are imposed by Article 52d on providers of general purpose AI models with systemic risk. The definition of AI models with systemic risk in Article 52a is framed in overly broad and unsatisfactory terms: "high impact capabilities".
Fortunately, there is a presumption in Article 52a.2 that helps: a model is presumed to have high impact capabilities "when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25". The main additional obligations imposed on providers of general purpose AI models with systemic risk are: (i) performing model evaluation (including adversarial testing); (ii) assessing and mitigating systemic risks at EU level; (iii) documenting and reporting serious incidents and corrective measures; and (iv) ensuring an adequate level of cybersecurity.

Finally, a "general purpose AI system" is "an AI system which is based on a general purpose AI model, that has the capacity to serve a variety of purposes" (Art. 3.44e). If a general purpose AI system can be used directly by deployers for at least one purpose that is classified as high-risk (Art. 57a and Art. 63a), a compliance evaluation will need to be carried out.
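For orientation, the sketch below applies the Article 52a.2 presumption mentioned above. The 6 × parameters × training-tokens approximation of training compute is a common industry heuristic, not something the AI Act prescribes, and the model figures used are purely illustrative.

```python
# Rough check against the AI Act's 10^25 FLOPs systemic-risk presumption (Art. 52a.2).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute as 6 * parameters * tokens (heuristic, not from the Act)."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated training compute exceeds the Art. 52a.2 threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative figures only: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")        # ~8.4e23, below the threshold
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 2e12))
```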
12/03/2024
Prohibited AI practices and high-risk AI systems
Prohibited Artificial Intelligence practices (Currently Title II, Art. 5)

1. Introduction to the unacceptable risk category
Article 5 categorises certain AI technologies as posing an "unacceptable risk" ("Unacceptable Risk"). Unlike the other risk categories outlined in the AI Act, the use of AI technologies that fall within this category is strictly prohibited ("Prohibited AI Systems"). It is therefore necessary to distinguish between:
- those technologies that are clearly prohibited; and
- those AI applications that are not clearly prohibited but may involve similar risks.
The most challenging problem in practice is to ensure that activities which are not prohibited do not become Unacceptable Risk activities and therefore prohibited.

2. Unacceptable Risk: Prohibited AI practices
Article 5 explicitly bans harmful AI practices.
The first prohibition under Article 5 addresses systems that manipulate individuals or exploit their vulnerabilities, leading to physical or psychological harm. Accordingly, it would be prohibited to place on the market, put into service or use in the EU:
- AI systems designed to deceive, coerce or influence human behaviour in harmful ways; and
- AI tools that prey on an individual's weaknesses, exacerbating their vulnerabilities.
The second prohibition covers AI systems that exploit these vulnerabilities even where harm is not immediate. Examples include:
- AI tools that compromise user privacy by collecting sensitive data without consent; and
- AI algorithms that perpetuate bias or discrimination against certain groups.
The third prohibition focuses on the use of AI for social scoring. Social scoring systems assign scores to individuals based on their behaviour, affecting access to services, employment or other opportunities. Prohibited practices include:
- AI-driven scoring mechanisms that lack transparency, fairness or accountability; and
- systems that discriminate based on protected characteristics (e.g. race, gender, religion).
The fourth prohibition covers real-time biometric identification in publicly accessible spaces for law enforcement purposes. This includes:
- AI systems that identify individuals without their knowledge or consent; and
- continuous monitoring of people's movements using biometric data.

3. Clearly listed: Best practices and compliance
Transparency and accountability are essential in complying with the prohibitions under Article 5. Firms using AI must design and continuously test systems, be transparent about their intentions and avoid manipulative practices. They should also disclose AI systems' functionality, data usage and decision-making processes. Companies should conduct thorough impact assessments to identify unintended vulnerabilities and implement specific safeguards to prevent exploitation. This should form part of assessments of AI systems to understand their impact on individuals and society. Companies should develop clear guidelines for scoring systems to prevent the development of social scoring characteristics, and should prioritise ethical design, fairness and non-discrimination. Privacy impact assessments should be pursued to ensure compliance with the various prohibitions. In particular, firms should be very careful when using any real-time identification systems. In all cases, companies should maintain comprehensive records of AI system design, training and deployment. Any critical decision made by AI systems should be overseen by a human.

4. Not clearly listed: Categorisation
Unacceptable Risk AI systems are those deemed inherently harmful and considered a threat to human safety, livelihoods and rights. In contrast, high-risk AI systems are designed to be applied to specific use cases, such as hiring and recruitment, that may cause harm but are not inherently harmful. High-risk AI systems are legal, but subject to important requirements under the AI Act. It is therefore crucial to determine the difference between high-risk and Unacceptable Risk AI systems. In essence, any high-risk activity can escalate to Unacceptable Risk under the following circumstances:
- Bias and discrimination: if AI perpetuates bias or discriminates against protected groups.
- Privacy violations: when AI systems compromise user privacy or misuse sensitive data.
- Psychological harm: if AI manipulates individuals, causing psychological distress.
AI systems that are able to perform generally applicable functions and can have multiple intended and unintended purposes (General Purpose AI models) are not inherently prohibited under the AI Act, but must be used with care since in certain scenarios they can lead to Unacceptable Risk activities. To assess whether a General Purpose AI model poses an Unacceptable Risk, it is necessary to consider the context in which the model operates. If it influences critical decisions (e.g. hiring, credit scoring), perpetuates bias or discriminates, or compromises user privacy (e.g. by collecting sensitive data without consent), the risk increases and the model may need to be adapted.

5. Best practice and compliance
While the AI Act provides examples of explicit prohibitions, it cannot cover all possible situations, as the technology is, through updated versions and by definition, constantly evolving. As a guide, legal and compliance teams should ask the following questions when considering high-risk AI systems:
Risk assessment:
- What is the evidence that the categorisation of the AI application is minimal, limited, high or Unacceptable Risk?
- Does the application in any circumstances use or act on sensitive data or influence critical decisions?
Contextual analysis:
- Does the application operate in a sector with a presumption of increased risk, for example (a) financial services or (b) healthcare?
- In what ways does the deployment of the application impact (a) individuals and (b) society?
Specific criteria:
- Can any decisions of the application be considered to give rise to manipulation, exploitation, discriminatory scoring or biometric identification?
- Does the application operate on or have access to data that could give rise to the exploitation of subliminal techniques or of vulnerabilities related to protected characteristics, such as age or disability?
Transparency and documentation:
- In what ways is the AI system transparent about its inherent functioning and decision-making?
- In what ways does the user's documentation of the design, training and deployment of the application demonstrate compliance with the various rules?

6. Conclusion
Unacceptable Risk AI activities are those practices that pose inherent harm to people and are strictly forbidden under the AI Act. The potential for reputational damage and regulatory sanctions serves as a strong deterrent for firms to avoid breaching these provisions of the AI Act. It is essential for companies to take proactive measures to ensure compliance and prevent harm to individuals and society.
11/03/2024
Looking ahead to the EU AI Act
Introduction
The European Union is preparing for the imminent adoption of the world's most significant legislation on artificial intelligence, solidifying its position as a pioneer among global legislators. This initiative aims to establish and reinforce the EU's role as a premier hub for AI while ensuring that AI development remains focused on human-centred and trustworthy principles.

To expedite the achievement of these goals, on 8 December 2023, after three days of debate, the European Parliament and the Council of the European Union finally reached a provisional agreement on the "Proposal for a Regulation laying down harmonised rules on artificial intelligence" (the so-called AI Act), which aims to ensure that AI systems placed on the European market are safe and respect the fundamental rights and values of the EU. Following this provisional agreement, technical refinement of the AI Act continued to finalise the regulation's details and text. The final vote of the European Parliament on the AI Act will take place on 13 March 2024. Since the European Parliament's Committees on the Internal Market and Consumer Protection (IMCO) and on Civil Liberties, Justice and Home Affairs (LIBE) have overwhelmingly endorsed the proposed text, the approval of the European Parliament can be expected. After a long and complex journey that began in 2021 with the European Commission's proposal of a draft AI Act, this new regulation is expected to be passed into law in spring 2024, once it has been approved by the European Parliament and the Council of the European Union.

The AI Act aims to ensure that the marketing and use of AI systems and their outputs in the EU are consistent with fundamental rights under EU law, such as privacy, democracy, the rule of law and environmental sustainability. Adopting a dual approach, it outright prohibits AI systems deemed to pose unacceptable risks while imposing regulatory obligations on other AI systems and their outputs. The new regulation, which also aims to strike a fair balance between innovation and the protection of individuals, not only makes Europe a world leader in the regulation of this new technology, but also endeavours to create a legal framework with which users of AI technologies will be able to comply in order to make the most of this significant development opportunity.

In this article we provide a first overview of the key points contained in the text of the AI Act that companies should be aware of in order to prepare for the incoming regulation. (This article, including the relevant citations, is based on the latest draft available on the Council's website. The AI Act remains subject to possible further refinement, though not as regards content, and the text referred to for this article should be considered the closest to the one that will be voted on by the European Parliament.)
08/03/2024
Harmonisation of insolvency laws in the EU and in non-EU countries: "insolvency...
This series deals with the draft directive on "insolvency avoidance actions" and shows how the topic is being discussed in the Member States and what the situation is like in individual non-Member States.
07/03/2024
EU: First directive to combat greenwashing adopted and published in Official...
March 2024
06/03/2024
Limited Qualified Investor Fund
Switzerland is not well known as a fund jurisdiction, particularly when it comes to alternative investment funds. One reason has been the lack of a fund vehicle that does not require approval by the local supervisory authority, such as, for instance, the Luxembourg Reserved Alternative Investment Fund (RAIF). Switzerland has therefore introduced the Limited Qualified Investor Fund (L-QIF), which is available from 1 March 2024. The L-QIF is a fund for qualified investors that does not require approval by the Swiss Financial Market Supervisory Authority (FINMA).

This briefing gives an overview of the new Swiss fund vehicle, providing an executive summary and addressing the following topics in more detail:
- Legal form
- Administration and investment management
- Investment restrictions
- Illiquid investments or investments that are difficult to value
- Tax
The L-QIF may be of interest to Swiss managers and others looking to be active in the Swiss market.

Executive summary
The L-QIF is a new Swiss fund for qualified investors which does not require approval by FINMA. It is available from 1 March 2024 (the date of entry into effect of the respective legal provisions). The L-QIF can take the form of a Swiss contractual fund, a Swiss investment company with variable capital, or a Swiss limited partnership for collective investments. L-QIFs must be administered and managed by a licensed fund management company or, under certain circumstances, a manager of collective assets. These institutions must, as a rule, be Swiss; however, to a certain extent licensed foreign managers may also be involved, which presents an opportunity for them to enter the Swiss market. The standard risk diversification rules and investment restrictions applicable to regulated vehicles do not apply to L-QIFs, which may therefore be attractive for alternative or illiquid investments. Despite the new vehicle, the main disadvantages of Swiss funds in an international context, namely the lack of EU market access and Swiss withholding tax, remain. An L-QIF will thus likely be most suitable where mainly Swiss investors are targeted, or for asset pooling by sophisticated investors where active distribution is not required to raise funds.

Which legal forms are available to structure L-QIFs?
L-QIFs can be structured as Swiss contractual funds (SCFs), Swiss investment companies with variable capital (SICAVs), or Swiss limited partnerships for collective investments (Swiss LPs). They do not need FINMA approval, which does not, however, imply the absence of any regulation. Rather, the L-QIF remains subject to the provisions of the Swiss Collective Investment Schemes Act and Ordinance unless they are expressly disapplied. In particular, L-QIFs must be administered or managed by a supervised entity.

What are the requirements concerning the administration and management of L-QIFs?
- L-QIFs structured as SCFs can only be managed by a Swiss fund management company, which may in turn delegate the investment decisions to a manager of collective assets.
- L-QIFs structured as SICAVs must delegate both the administrative and the investment decisions to a fund management company, which may sub-delegate the portfolio management to a manager of collective assets.
- L-QIFs structured as Swiss LPs must delegate their executive management, including investment decisions, to a manager of collective assets.
There will be no such requirement, however, if the general partners of the Swiss LP are banks, insurance companies, securities firms, fund management companies or managers of collective assets.
Managers of collective assets, as defined in the Financial Institutions Act (FinIA), must be fully regulated investment managers under FinIA; de minimis collective asset managers licensed as portfolio managers are thus not eligible. Where investment decisions are delegated, foreign managers with an equivalent licence may also be involved. This may be of particular interest to foreign managers with experience in managing similar vehicles, such as RAIFs.

Which investment restrictions apply?
The standard risk diversification rules and investment restrictions applicable to regulated vehicles do not apply to L-QIFs. This increased flexibility does not, however, exempt L-QIFs from defining the applicable investment restrictions in their documentation, in line with the applicable limitations on leverage, collateral and total exposure of an L-QIF's net assets. In addition, specific provisions, restrictions and limitations apply to open-ended L-QIFs investing in real estate, such as rules on co-ownership, risk diversification, transactions with related parties and requirements for the experts in charge of valuation. Specific rules on transactions with related parties also apply to L-QIFs structured as Swiss LPs (i.e. closed-ended vehicles), and the requirements for valuation experts will apply to such L-QIFs as well. Furthermore, the partnership agreements of Swiss LPs must expressly mention investment restrictions and authorised investment techniques. L-QIFs will be able to use the model-based approach as a risk measurement method (i.e. value at risk, VaR); the method will not be examined by FINMA, but ex post by the auditor. Finally, the requirements applicable to securities lending, repo transactions, derivatives and security management for regulated vehicles also apply to L-QIFs.

Are L-QIFs suitable for illiquid investments or investments that are difficult to value?
L-QIFs may in particular be considered for alternative or infrastructure portfolios with illiquid investments. The most suitable legal form for such purposes is the Swiss LP. The open-ended structures (SCFs and SICAVs) are generally less suitable for this type of investment, even though open-ended L-QIFs with investments that are difficult to value or not negotiable may provide for termination possibilities only at specified intervals, but at least every five years. L-QIFs in the form of Swiss LPs may, for instance, be used for investments in private (early-stage) companies. Open-ended L-QIFs, on the other hand, may be relevant for real estate investments as well as investments in digital or other alternative assets. For real estate investments, the Swiss LP may, however, again be the better alternative depending on the type of real estate in question. Finally, it should be noted that side pockets require specific approval by FINMA in the absence of existing international standards, other points of comparison or empirical values. As a result, it will currently, unfortunately, not be possible for L-QIFs to create side pockets.

What is to be considered from a tax perspective?
L-QIFs are not treated differently for tax purposes from other regulated funds/collective investment schemes. On the one hand, the L-QIF thus enjoys favourable treatment in terms of stamp duty and VAT.
To the extent the L-QIF does not hold direct real estate, it is also treated transparently for income tax purposes. On the other hand, distributions and reinvested net income are subject to 35% withholding tax. This is an evident obstacle to the use of the L-QIF where foreign investors are also to be addressed. Coupled with the limited EU market access, the L-QIF will thus, like other Swiss funds, probably remain most suitable where mainly Swiss investors are targeted, or for asset pooling by sophisticated investors where active distribution is not required to raise funds. Despite these limitations, the L-QIF is an attractive new fund vehicle, in particular when it comes to alternative investments. It also offers foreign managers opportunities to enter the Swiss market, given their experience with similar foreign vehicles.

Looking ahead
It will be interesting to see how L-QIFs evolve and whether they become a popular investment vehicle for fund managers looking to enter the Swiss market. If you are interested in popular investment vehicles across Europe, please also see our popular investment vehicles guide.

The information in this publication is for general purposes and guidance only and does not purport to constitute legal or professional advice.
06/03/2024
The Mobile Century 2024
CMS is delighted to support The Mobile Century, a publication written by women in the digital space and published by the Global Telecom Women's Network (GTWN). The Mobile Century provides a global perspective on the most important issues facing the digital technology sector, while championing the role and contribution of women leaders in bringing about meaningful change. These characteristics align closely with the professional and cultural values of CMS' Technology, Media and Communications Practice.

The promise and anticipation around artificial intelligence has captivated worldwide attention over the past year like no other recent technological revolution. Governments around the world have rushed to understand how they can respond to generative AI, ensuring that their industries are well placed to capture maximum value from this innovation, whilst also not exposing their populations to undue risks. This edition of The Mobile Century includes an insightful essay by CMS Partner and Co-Head of the TMC Sector Group, Dóra Petrányi, on finding the appropriate balance between AI ethics and AI regulation. It also includes an inspiring fireside chat between Dóra and Francesca Rossi, a computer scientist, IBM Fellow and the IBM Global AI Ethics Leader.

At the same time, society is facing other new challenges, as the digital natives – those who only know a digital world – see all aspects of their lives transformed. As certain jobs and even professions are being transformed by digital technology, what does the future look like for those who are inheriting our digital world? What do governments, regulators and industry itself need to do to ensure the benefits of these technologies outweigh the risks that have emerged?

At CMS, we continue to be honoured to support the GTWN and its flagship magazine The Mobile Century, which, once again, is dense with thought-provoking articles from inspiring leaders. We hope the articles motivate you, as they do us, to think about our responsibilities and the wider impact of our companies on the world around us.
01/03/2024
UK REITs - refocus for funds and investors
This Back to Basics note follows our key concepts briefings, which are intended to provide high-level insights into funds fundamentals, fund vehicles and operational considerations, available here. In this Back to Basics, we look at UK Real Estate Investment Trusts ("REITs"), including their requirements, benefits and growing use for fund structures and investment by institutional-type investors in UK real estate.

Background and relevance
Recent changes in UK tax legislation – most recently in the Finance Act 2024 – as well as changing investor attitudes, have bolstered investor and manager appetite for UK REITs as part of a fund or as holding vehicles ("Private REITs"). REITs were originally introduced in 2007 for listed companies but can now be more "private" in nature: it is no longer necessary for the REIT to be traded or listed on a recognised stock exchange if the relevant ownership requirements are met.

What is a UK REIT?
A UK REIT is a UK tax resident company limited by shares (or a group of companies of which the principal company is a UK tax resident) that has an HMRC-approved tax status for its property rental business and associated investment in real estate assets.

What are the key benefits?
Prominent benefits of using UK Private REITs to hold and invest in UK property include:
- tax-efficient structure – see Tax benefits below;
- use for real estate – this can be single- or multi-sector asset classes and includes student accommodation, the private rented sector and life sciences;
- Private REITs can be used by funds and institutional investors – see Private REIT structures below;
- possible use for a single asset – a REIT can hold a single commercial property of £20 million or more; and
- flexibility – a UK REIT can be an overseas entity, provided it is a UK tax resident, and can be a group structure. It can be managed internally or externally. REITs are also permitted to hold non-UK assets, which will be subject to local taxes, and to carry out a limited amount of non-real estate investment activity.

REIT requirements
Set out below is a diagram illustrating some of the key "qualifying conditions" of REITs.
[Diagram]
There are other conditions, such as those relating to financing and maximum holdings of shares by single corporates, as well as continuing requirements. If these are not met, a tax charge can arise, or even loss of REIT status.

Tax benefits
A key incentive for using Private REITs is the tax efficiencies on offer. These include:
- no UK corporation tax is payable on profits derived from the property rental business: this has become particularly favourable for investors following the uplift in the main rate of corporation tax from 19% to 25% in 2023. The REIT is required to distribute at least 90% of its rental income profits;
- no capital gains tax is payable on profits arising from its property investments: this includes gains on the sale of qualifying UK property-rich companies;
- ability to eliminate tax on latent gains: REITs can eliminate latent gains in property holding companies they acquire that hold UK property and can sell such property holding companies free of latent gains. This is highly desirable for purchasers when bidding for property holding companies at a gain;
- tax is levied at the shareholder level rather than at the REIT level itself: this enables certain institutional investors to claim exemptions on profits received from the property;
- a company acquiring REIT status is not subject to any additional tax charges for becoming a REIT; and
- ability to reclaim withheld tax: while distributions out of a REIT's ring-fenced profits (otherwise referred to as property income distributions ("PIDs")) are subject to a 20% withholding tax, these payments can be made gross where the shareholder is a UK corporate, pension fund, local authority or charity. UK companies will be liable to corporation tax on the PIDs at the current rate of UK corporation tax. Non-resident investors may be eligible for a reduced (or nil) withholding tax rate.
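To make the withholding mechanics above concrete, here is a minimal sketch with purely hypothetical figures; the 20% rate and the categories of investor paid gross are as described in the list above.

```python
# Hypothetical PID withholding arithmetic -- illustrative figures only.
pid_declared = 100_000.00      # gross property income distribution declared by the REIT
WITHHOLDING_RATE = 0.20        # standard 20% withholding on PIDs

# Shareholder that cannot be paid gross (e.g. a non-exempt individual investor):
withheld = pid_declared * WITHHOLDING_RATE     # 20,000.00 withheld at source
net_received = pid_declared - withheld         # 80,000.00 received

# UK corporate, pension fund, local authority or charity: paid gross.
gross_received = pid_declared                  # 100,000.00, no withholding at source

print(f"Withheld at source: {withheld:,.2f}")
print(f"Net PID received:   {net_received:,.2f}")
print(f"Gross PID received: {gross_received:,.2f}")
```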
Recent legislative amendments of REIT law
Further changes to the REIT regime have been made by successive Finance Acts, most recently the Finance Act 2024. Many of the recent amendments seek to reduce barriers to entry, which we anticipate will heighten investor interest and participation in the REIT regime and create greater flexibility in fund structuring, as it can accommodate REIT subsidiaries. These include:
- amending the non-close condition, which applies where a company is only close because it has an institutional investor as a participator, so that it is possible to trace through intermediate holding companies to an institutional investor which is an ultimate beneficial owner; and
- allowing more fund structures to meet the "genuine diversity of ownership" condition, by allowing the fund structure to be looked at as a whole, rather than just the investing vehicle.
The amendments made in the Finance Act 2024 include a change to the definition of "institutional investors" such that authorised unit trusts, open-ended investment companies and collective investment scheme limited partnerships must meet either:
- a "genuine diversity of ownership" condition (i.e. the fund is widely marketed), which can be fulfilled by looking at the fund structure as a whole rather than just the investing vehicle; or
- a "not a close company" condition (i.e. not controlled by five or fewer participators).

Private REIT structures
It is possible for institutional investors to hold the Private REIT directly or through a fund structure if the relevant investor and other requirements are met (see the 70% institutional ownership requirement in the diagram and the legislative changes above). Institutional investors include relevant authorised unit trust schemes, pension schemes, sovereign wealth funds, open-ended investment companies, collective investment scheme limited partnerships, other UK REITs (or overseas REIT equivalents), UK charities and certain insurance companies. Private REITs often do not need a listing or to be traded on a recognised stock exchange. An example of a UK Private REIT structure set up as a fund is provided below.
[Diagram]

Luxembourg vehicles
The new Luxembourg-UK double tax treaty (as explored in our separate briefing, The new Luxembourg-UK double tax treaty: key points for investors in UK real estate (cms-lawnow.com)), with its new taxing right taking effect from 1 January 2024 in respect of withholding tax and from 6 April 2024 for other taxes on income and gains in Luxembourg, is anticipated to have a knock-on effect on how existing UK real estate holdings should be most effectively structured where shares or interests are held by Luxembourg holding structures. This treaty, in conjunction with the increase in UK corporation tax to 25% on 1 April 2023, is expected to propel UK real estate investors further in considering the use of Private REITs in their own structuring.

Conclusion
If you would like to discuss public and private REITs and their usage in funds, joint ventures or other investment structures, please contact a member of the CMS UK Funds Group. For further information on our REITs expertise, please see our separate brochure (CMS REITs | Corporate | United Kingdom | International law firm CMS).