Purpose and Scope
The Regulations aim to promote the responsible, ethical, and transparent use of Artificial Intelligence (AI) within the framework of the country’s digital transformation.
Applicable to: public entities, state-owned enterprises (including those under FONAFE), the private sector, academia, and civil society organisations that form part of the National Digital Transformation System.
Excluded: personal uses and uses related to national defence and security (the latter remain subject to reinforced principles of proportionality, oversight, and human rights).
Guiding Principles
The Regulations reinforce principles aligned with international standards (OECD, UNESCO, EU AI Act):
- Non-discrimination and prevention of algorithmic bias.
- Privacy and protection of personal data.
- Mandatory human oversight in critical decision-making.
- Transparency and explainability of processes and outcomes.
- Social, economic, and environmental sustainability.
- Accountability and responsibility.
- Respect for copyright and intellectual property rights.
Institutional Governance
- PCM – SGTD: the Secretariat of Government and Digital Transformation of the Presidency of the Council of Ministers, consolidated as the national authority on AI, with powers to issue guidelines, technical standards, and binding opinions, and to monitor improper and high-risk uses.
- CNIDIA: the National Centre for Digital and AI Innovation, serving as a hub for projects, partnerships, and training.
- Other stakeholders: Digital Governance and Transformation Committees within each entity, as well as Digital Security Officers, Personal Data Officers, and Data Governance Officers.
- Multisectoral coordination: involving academia, businesses, civil society, and international organisations.
Risk Classification
Inspired by the European Union’s AI Act, the framework is structured into three levels:
- Misuse (prohibited): deceptive manipulation of decisions, real-time biometric surveillance in public spaces (subject to narrowly defined exceptions), predictive policing, autonomous lethal capabilities, and mass surveillance without a legal basis.
- High risk: AI applied in sensitive sectors such as health, education, recruitment and workforce management, credit scoring, social programmes, and critical infrastructure including energy, telecommunications, banking, water, and transport.
- Acceptable risk: all other uses, subject to general principles and best practices.
Transparency, Privacy, and Ethics
- Labelling and explainability: high-risk systems must inform users of their purpose, functionalities, and decision-making processes; additionally, they must explain outcomes where human rights may be affected (algorithmic transparency).
- Personal data protection: strict compliance with Law No. 29733 and its Regulations (Supreme Decree No. 016-2024-JUS), under the supervision of the National Authority for Personal Data Protection (ANPDP).
- Ethical guidelines: the SGTD must issue guidelines on the ethical development and use of AI within 180 days.
Innovation and Promotion
- National AI Sandbox: controlled regulatory testing environments for startups, micro and small enterprises (MSEs), and strategic projects.
- Public cloud computing for AI projects of public interest.
- Promotion of open-source development and collaborative AI communities.
- AI laboratories in universities and research centres.
- Attraction and retention of national and international talent.
Oversight and Citizen Participation
- The SGTD will coordinate with Indecopi, the National Authority for Personal Data Protection (ANPDP), the Police, and other sectoral bodies.
- A digital channel on AI is available at gob.pe/iaperu for citizen reports and alerts regarding improper uses.
- The SGTD is empowered to conduct preventive monitoring and to request technical information from developers and implementers.
Implementation Timeline
Entry into force: 90 working days after publication, i.e., January 2026.
Public sector: gradual implementation over 1 to 3 years, depending on the entity (Executive, Legislative, and Judicial Branches; Regional and Local Governments; Universities, etc.).
Private sector:
- Health, education, justice, security, finance: 1 year.
- Transport, commerce, labour: 2 years.
- Production, agriculture, energy, mining: 3 years.
- Other sectors: 4 years.
- Micro and small enterprises (MSEs) and startups: differentiated timelines of 2 to 3 years.
Specific Obligations for the Private Sector
- Internal policies and protocols for the ethical and safe use of AI.
- Human oversight in high-risk systems, with trained personnel authorised to halt or correct decisions.
- Record-keeping and traceability: documentation of system functionality, data sources, and social and ethical impacts (impact assessments must be retained for at least 3 years).
- Impact assessment for high-risk systems: not mandatory, but strongly recommended; carrying one out may lead to recognition by the SGTD.
- Adoption of international standards (ISO/IEC 42001, 27002, 27005, 23053, 38507, among others).
- Internal training for staff on AI risks and best practices.
- Compliance with personal data protection regulations (ANPDP).
At CMS Grau, we have played an active role in the drafting process of the Regulations, participating in working groups and multisectoral dialogue forums organised by the Secretariat of Government and Digital Transformation of the Presidency of the Council of Ministers (PCM) and other entities.
This experience places us close to the key discussions that shaped the regulatory framework, from the adoption of international standards to the definition of differentiated obligations for the public and private sectors. Drawing on it, we have assembled an interdisciplinary team to support our clients across these areas.
Our commitment is to provide legal advice that combines regulatory certainty, comparative insight, and practical guidance, enabling organisations in Peru to innovate with confidence and anticipate the regulatory challenges posed by Artificial Intelligence.