AI laws and regulation in Sweden
Risk Rating
Medium.
AI regulation in your jurisdiction
As Sweden is an EU Member State, Regulation (EU) 2024/1689 (the EU AI Act) applies. The EU AI Act establishes harmonised rules for AI: it adopts a risk-based approach, bans harmful practices, and imposes strict obligations on high-risk and general-purpose AI systems.
Sweden does not have a dedicated national AI law in addition to the EU AI Act.
Existing Regulatory Frameworks Applicable to AI
Other relevant laws that govern AI-related activities are:
Data protection:
- The General Data Protection Regulation (EU) 2016/679 (GDPR) governs personal data processing and interacts closely with automated decision-making and AI deployments.
- The Data Protection Act (2018:218), which supplements the GDPR.
Product safety and liability:
- Product Liability Act (1992:18)
Governs liability for defective products, establishing the conditions under which suppliers can be held liable for damages caused by defective products. The Product Liability Act is relevant where AI is a product or safety component.
- Union harmonisation legislation listed in Annex I of the EU AI Act, which is relevant to the definition of high-risk AI systems.
- Tort Liability Act (1972:207)
Governs general liability for damages, including personal injury, property damage, and financial losses.
Cybersecurity and operational resilience:
- The NIS2 Directive, as implemented by the Swedish Cybersecurity Act (in force from 15 January 2025), intersects with AI risk management, especially for critical services and infrastructures using AI.
- The Digital Operational Resilience Act (DORA) overlaps with AI obligations where financial institutions deploy AI in critical functions.
Copyright and IPR as applied to general-purpose AI models (GPAI):
The EU AI Act imposes copyright-related transparency and compliance obligations on GPAI providers (e.g., training data summaries), complementing EU copyright rules that include a text and data mining (TDM) exception.
- Act on Copyright in Literary and Artistic Works (1960:729)
Governs copyright in Sweden and sets out the principles for ownership and use of creative works such as books, music, films, and artworks.
Securities:
- Securities Markets Act (2007:528)
Imposes restrictions on algorithmic trading.
Among the sectors in which AI-related activities are regulated, the following should be considered:
- Critical infrastructure and transport: AI used as safety components in regulated products (e.g., aviation, rail, cars) and for managing critical infrastructure (water, gas, electricity) is high‑risk and subject to strict requirements.
- Medical and health technologies: AI in medical devices and robot‑assisted surgery is treated as high‑risk; data protection, medical device rules, and product safety/liability also apply. The Swedish Medical Products Agency also adopted guidelines in 2023 on the use of AI in the healthcare sector, found (only in Swedish) here.
- Education and vocational training: Systems determining access, scoring, or outcomes in education are high‑risk.
- Employment and worker management: AI for hiring, management, and access to self‑employment is high‑risk; EU member states can adopt worker‑protective rules alongside the AI Act.
- Essential private and public services (including finance): AI for credit scoring, access to benefits, or other essential services is high‑risk; financial services also face DORA and sectoral compliance.
- Biometrics and identity systems: Remote biometric identification (RBI), biometric categorization, and emotion recognition face prohibitions or stringent controls; certain law enforcement uses are narrowly excepted.
- Law enforcement, migration, asylum, and border control: Multiple high‑risk uses are listed (e.g., evidence reliability evaluation, automated visa examination), with safeguards and judicial authorization requirements where applicable.
- Justice and democratic processes: AI used to assist legal interpretation or influence democratic processes (e.g., elections) is high‑risk.
- General-purpose AI (GPAI) models across sectors: GPAI providers face transparency, documentation, and copyright-compliance duties and, where “systemic risk” exists, additional obligations (standardized evaluations, incident reporting, cybersecurity). These rules cut across all sectors that integrate GPAI.
- Online services, platforms, and content: Transparency duties for chatbots and deepfakes, plus broader DSA or DMA obligations for online intermediaries and gatekeepers.
Regulatory Oversight of AI
Currently, there is no Swedish authority tasked with overseeing AI use. However, on October 6, 2025, the Swedish government published a government inquiry (SOU 2025:101) setting out the national legislative initiatives required to comply with the EU AI Act. The inquiry designates multiple market surveillance authorities, but suggests that the Swedish Post and Telecom Authority (Sw: Post- och telestyrelsen) should be Sweden's primary national coordinating and market surveillance authority for the EU AI Act.
AI Guidance, Policies, and Strategic Frameworks
There are several documents approved by the European Commission that are relevant to AI development and use:
- Guidelines on prohibited AI practices under the AI Act provide non‑binding legal explanations and practical examples to support uniform application of the AI Act's banned practices (e.g., manipulation, social scoring, untargeted facial image scraping, certain biometric uses).
- Guidelines on the AI system definition clarify the legal concept under the AI Act and assist stakeholders in scoping coverage; these are non‑binding and will evolve with practice.
- For general‑purpose AI (GPAI) models, the Commission published a package of support instruments: Guidelines on the scope of GPAI obligations and a Template for public summaries of training content. These are designed to help providers meet transparency, copyright, and safety and security duties under the AI Act.
- The GPAI Code of Practice was developed through an iterative process and is intended to become a recognized way to demonstrate compliance with GPAI obligations (including for models with systemic risk); it was finalized in July 2025 following drafts and consultations.
Furthermore, the European Data Protection Supervisor has issued its Guidance for Risk Management of Artificial Intelligence Systems.
Local Swedish documents:
The Swedish Agency for Digital Government (Sw: Myndigheten för digital förvaltning) has published Guidelines for the use of Generative AI in Public Administration (only available in Swedish). The guidelines aim to provide a comprehensive framework for the use of AI in public administration and to ensure that the use of generative AI complies with applicable law at the national and EU levels. They cover the following areas: leadership, data protection, labour law, procurement, information security, IPR, and ethical use.
The Swedish Authority for Privacy Protection (Sw: Integritetsskyddsmyndigheten) has issued guidelines (only available in Swedish) on the use of AI in the context of data protection and the GDPR. The guidelines aim to align AI development and use with robust data protection standards.
Strategic frameworks have also been issued by the AI partnership network AI Sweden (found here), and the Swedish government research institute RISE will set the framework for AI and satellite imagery.
In addition, the Swedish government is expected to publish a national AI strategy in 2026, which will, among other things, establish an AI secretariat within the Ministry of Finance and an AI coordinator within the Prime Minister’s Office.
International AI Standards and Guidelines
The EU AI Act strengthens the EU's role in shaping global norms and standards and promoting trustworthy AI, and provides the EU with a powerful basis for further engagement with third countries and international fora on AI issues. As an example, the EU was closely involved in developing the OECD's ethical principles for AI.
The EU has been influenced by or has influenced other international texts, including:
- Council of Europe: the EU has signed the Council of Europe Framework Convention on AI (opened for signature on 5 September 2024), a binding international instrument on human rights, democracy, and the rule of law that is aligned with the AI Act's risk‑based and transparency principles.
- United Nations Educational, Scientific and Cultural Organization (UNESCO): Recommendation on the Ethics of Artificial Intelligence (2021).
- United Nations: the EU followed up on the report of the UN High-Level Panel on Digital Cooperation (24 April 2020), including its recommendation on AI.
- Organisation for Economic Co-operation and Development (OECD): the OECD AI Principles, adopted in 2019 and updated in 2024, provide practical and flexible guidance for a range of stakeholders, including policymakers and AI actors.
- World Trade Organization (WTO): Trading with Intelligence: How AI Shapes and Is Shaped by International Trade (2024).
- International Telecommunication Union (ITU): Recommendation ITU-T Y.3142 (04/2024), Requirements and framework for AI/ML-based network design optimization in future networks including IMT-2020.
The Swedish Agency for Digital Government's (Sw: Myndigheten för digital förvaltning) Guidelines for the use of Generative AI in Public Administration reference UNESCO's Recommendation on the Ethics of Artificial Intelligence (found here) and its Ethical Impact Assessment (found here).
Forthcoming AI Legislation
The EU AI Act is operative in Sweden, but some of its provisions are not yet applicable (see Article 113).
As part of the government inquiry setting out the national legislative initiatives required to comply with the EU AI Act, the Swedish government has proposed a law supplementing the EU AI Act, as well as several changes to other laws and lower-level regulations.
The Swedish government’s digitalisation strategy for 2025-2030 suggests that excessively complex rules may impact innovation and entrepreneurship, and that there is a “lack of sufficient knowledge about and understanding of” current regulations. As such, the government may wish to analyse the market impact of current regulations before proceeding with national legislative initiatives concerning AI.
Useful links
- The Swedish government’s digitalisation strategy for 2025-2030
- EU AI Act - Questions and Answers