US FCC issues USD 6 m fine for illegal robocalls – the takeaways and parallels in the EU AI Act
On 23 May 2024 the Federal Communications Commission (FCC) issued a penalty of USD 6 million against a political consultant for illegal robocalls that used deepfake generative artificial intelligence (AI) voice messages in a political campaign. This case shows that serious action against such conduct is possible even in the absence of explicit deepfake and AI regulations. The following article summarises the takeaways from this US case and outlines the parallels and relevant provisions in the European Union's AI Act.
Background
During the 2024 Democratic Presidential Primary Election, a political consultant perpetrated an illegal robocall campaign targeting potential voters two days before the primary. The consultant's illegal robocalls carried a deepfake generative AI voice message that imitated the US President's voice and encouraged potential voters not to vote in the upcoming primary. Following a thorough investigation, the FCC found that the consultant intentionally caused thousands of illegal prerecorded voice robocalls to be transmitted using misleading and inaccurate caller identification information, in apparent violation of the Truth in Caller ID Act and the FCC's implementing regulation.
Key takeaways concerning the challenge of tackling deepfake solutions
Although the EU's AI Act is the first comprehensive AI regulation, the need to prevent and act against unlawful AI practices also exists outside the EU. The present case shows that enforcement action against such activities is already available even without explicit deepfake and AI regulation. At the same time, strong enforcement action by public authorities often requires specific legislation. Even though the problem of using fake caller IDs (i.e. caller ID spoofing) had plagued the US for decades and the FCC had held federal authority to take action since 2007, the applicable act and regulations had to be amended several times before enforcement action became effective (see, for example, the extensive work behind the TRACED Act of 2019). In this case, the FCC used its existing powers to discourage similar future deepfake use by doubling the basic forfeiture so that it reflects the gravity of the deepfake-related violations.
Parallels and relevant provisions of the EU AI Act
The EU's AI Act contains explicit rules on deepfakes, which do not prohibit their use outright but instead impose transparency obligations. The AI Act addresses deepfakes in two ways:
- it defines the term "deepfake" in Article 3(60) as AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful;
- it imposes transparency obligations on deployers of such AI systems in recital 134 and Article 50, obliging them to disclose that the content has been artificially generated or manipulated.
The capability of an AI system or model to create deepfakes does not, in itself, subject the provider or deployer to wide-ranging obligations, as is the case with "high-risk" AI systems (i.e. the systems listed in Annex III, such as AI used in employment, credit scoring or biometric identification) or general-purpose AI models with systemic risk. AI systems with such capabilities are currently categorised as "limited risk" AI systems under the AI Act, and therefore the only obligation imposed is the transparency obligation on deployers. Imposing a transparency obligation, however, may not mitigate all the potentially significant harmful impacts of deepfakes. Only time will tell whether the AI Act can, in practice, live up to expectations in this field and answer the following questions. Will national authorities across the EU have the proper tools to take action against deployers that do not comply with Article 50? Will such transparency obligations have any dissuasive effect on making such tools available to the general public or on the use of deepfakes by end users?
Next steps
The publication of the EU's AI Act in the EU's Official Journal has been delayed until 12 July or later. The AI Act will enter into force 20 days after its publication in the Official Journal. Because of the publication delay, this is now expected to occur in early August.
For more information on the EU AI Act and its potential impact on your business, contact your CMS client partner or your local CMS experts.
The article was co-authored by Daniella Huszár.