Risk Rating 

Medium.

AI Regulation in Your Jurisdiction

The Kingdom of Saudi Arabia does not yet have a standalone AI‑specific law. However, several binding horizontal laws and sectoral frameworks apply to AI systems, including generative AI, and the Saudi Data and AI Authority (SDAIA) continues to issue non‑binding guidance relevant to AI governance. A draft law (the “Global AI Hub Law”) has been published for public consultation, but it is not a general AI regulation and has not yet been enacted.

SDAIA Generative AI Guideline (non‑binding guidance): Provides principles for responsible use of generative AI, including transparency, accountability, privacy protection, human oversight, and risk management. It aligns with KSA’s Personal Data Protection Law (PDPL) and international best practices.

Global AI Hub Law (Draft for Public Consultation): Commonly referenced as the AI Hub Law, but substantively it establishes a regime for sovereign data hosting and data embassies (Private/Extended/Virtual Hubs hosted in KSA with cross‑border legal allocation). It does not regulate AI systems or model governance. It remains a draft and is not yet in force.

Existing Regulatory Frameworks Applicable to AI

AI in KSA is currently governed through a combination of cross‑sectoral laws and sector‑specific regulatory regimes:

  • Personal Data Protection Law (PDPL), Royal Decree M/19 of 2021 (as amended in 2023), and its Implementing Regulations and Data Transfer Regulations issued by SDAIA:
    KSA’s national data protection law governing personal data collection, processing, and transfer. It also sets out data subject rights, cross‑border transfer conditions, and controller and processor obligations, all of which are directly relevant to AI training, inference, and deployment.
  • National Data Management Office (NDMO) Policies: A comprehensive suite governing data classification, sharing, quality, metadata, and lifecycle management across public entities; these policies are important for AI datasets, model training, and government‑sector AI use.
  • National Cybersecurity Authority (NCA) frameworks:
    Include the Essential Cybersecurity Controls (ECC) and sectoral cybersecurity profiles. These apply to AI systems and MLOps environments operating within regulated entities, covering secure development, monitoring, and risk management.
  • Communications, Space & Technology Commission (CST): Oversees cloud computing, data center operations, and ICT licensing, all of which indirectly shape how AI systems and services are hosted and deployed.

Regulatory Oversight of AI 

Oversight primarily sits with SDAIA, but several other bodies regulate activities relevant to AI, as follows:

  • Saudi Data & AI Authority (SDAIA): leads national AI strategy, issues AI‑related guidelines (including the Generative AI Guideline), and promotes ethical and secure AI adoption. SDAIA is highly active in policy development and guidance. Enforcement occurs mainly through the PDPL, supported by the National Data Management Office for government data governance.
  • National Data Management Office (NDMO): sets and enforces data governance policies across public‑sector entities, impacting AI training data, data quality, sharing, and lifecycle management.
  • National Cybersecurity Authority (NCA): enforces cybersecurity frameworks (e.g., ECC and sectoral profiles) that apply to AI systems and infrastructure.
  • Communications, Space & Technology Commission (CST): regulates cloud computing, data centers, and ICT services, including environments where AI systems are hosted or deployed.

  • Sectoral Regulators: SAMA (finance), SFDA (healthcare), GACA (aviation), and others enforce requirements within their respective sectors that increasingly apply to AI uses.

Both the NCA and the CST are active in cybersecurity and cloud regulation, including compliance checks and incident response.

AI Guidance, Policies, and Strategic Frameworks

SDAIA has issued a wide set of guidelines, frameworks, and strategic policy documents governing the responsible development, deployment, and oversight of AI systems across the public and private sectors. These instruments are non‑binding unless linked to other enforceable laws (e.g., PDPL), but collectively they serve as the core governance architecture for AI in the Kingdom. Below are the key national guidelines and frameworks:

  • Generative AI Guideline for the Public: a national guideline outlining principles for the responsible use of generative AI across society. It establishes expectations around transparency, accountability, fairness, privacy, and human oversight, and provides detailed risk‑mitigation practices for issues such as bias, hallucinations, deepfakes, and content reliability. It applies to all users, developers, and entities interacting with generative AI tools.
  • Generative AI Guideline for Government: a regulatory guide directed specifically at government entities, detailing conditions for using generative AI when processing government data. It includes strict requirements for data classification compliance, privacy and security controls, prohibitions on processing restricted data, responsible deployment, human review of all outputs, and sector‑aligned ethical use.
  • Deepfakes Guidelines: a comprehensive framework governing the ethical development, creation, distribution, and detection of deepfake content. It covers duties of technology developers, obligations of content creators, protections for consumers, risk categories (e.g., fraud, impersonation, disinformation), and best practices for transparency, watermarking, consent, and security. It also includes recommendations for regulators and sector‑specific dos and don’ts.
  • National AI Ethics Principles: a national governance framework setting out core principles for trustworthy AI. It applies to all AI stakeholders in KSA (public, private, and nonprofit) and is designed to be embedded across the full AI system lifecycle. Includes controls, checklists, and compliance expectations.
  • AI Adoption Framework: a national strategic framework guiding government and private entities on how to responsibly adopt AI technologies. It provides detailed guidance on AI strategy and governance, enterprise readiness, technical enablers, human capacity building, regulatory compliance, operational frameworks, KPIs, monitoring, and risk management.
  • National Occupational Standard Framework for Data & AI: a national competency and workforce standard defining 16 data and AI occupations, including job roles such as AI Engineer, AI Ethicist, AI Consultant, Data Scientist, and Chief AI Officer. It is used to standardise recruitment, training, workforce planning, licensing, and capability development across the Kingdom.
  • Saudi Academic Framework for AI Qualifications: a higher‑education framework defining AI‑related academic qualifications, learning outcomes, curriculum standards, and knowledge units for diplomas, bachelor’s, master’s, and PhD programmes in AI. It ensures academic alignment with global benchmarks while reflecting national priorities and Vision 2030.

International AI Standards and Guidelines

Saudi Arabia’s AI governance ecosystem does not formally adopt or incorporate by reference any specific international AI standard (e.g., OECD, ISO/IEC, NIST). However, several national frameworks explicitly align with, draw from, or reflect concepts widely used in these international instruments.

  • OECD AI Principles: not formally incorporated, but SDAIA’s AI Ethics Principles and its broader governance materials closely mirror key OECD themes such as fairness, transparency, human‑centricity, accountability, and robustness. The structure and terminology used by SDAIA strongly align with global ethical AI frameworks.
  • ISO/IEC standards: there is no binding adoption of ISO/IEC AI standards in Saudi law. However, SDAIA guidelines encourage the use of recognised international standards, and ISO/IEC standards are explicitly listed in the annexure to the AI Ethics Principles (e.g., ISO/IEC 23894 on AI risk management) as reference frameworks that entities may rely on.
  • NIST AI Risk Management Framework (RMF): there is no binding adoption of the NIST AI RMF in Saudi law. However, the RMF is explicitly referenced in the AI Ethics Principles as an example of an international standard that entities may rely on. Additionally, SDAIA’s AI Adoption Framework and AI risk‑assessment methodology reflect NIST‑style concepts (lifecycle governance, continuous monitoring, risk tiering, safety testing, etc.).

Forthcoming AI Legislation

Saudi Arabia does not currently have an AI‑specific law and has not announced any formal legislative process to enact one. Internal and public materials referencing the “AI Hub Law” confirm that the only publicly consulted draft containing “AI” in its title is not an artificial intelligence regulation, but a proposed framework relating to data embassies and cloud/data sovereignty matters.

Instead of pursuing a dedicated AI law, Saudi Arabia has adopted a policy‑based governance model led by SDAIA, built on the guidelines and frameworks described above. These instruments set national expectations for responsible AI but are non‑binding and do not constitute legislation.

The Kingdom is currently enforcing the PDPL, strengthening sectoral digital regulations, building AI governance capacity through SDAIA, and prioritising soft‑law mechanisms over statutory intervention. No consultation, draft text, or indicative timeline for an AI‑specific law has been published.