SDAIA issues deepfakes guidelines to regulate responsible AI use

  • SDAIA sets rules for deepfakes, balancing innovation with protection
  • Saudi authority issues landmark guidelines on AI-generated synthetic media

RIYADH: The Saudi Data and Artificial Intelligence Authority has issued “Deepfakes Guidelines: Mitigating Risks While Fostering Innovation” to address the rapid evolution of synthetic media.

The report is based on the original guidelines document, SDAIA-P119, published in May last year.

The guidelines define deepfakes as hyper-realistic synthetic media created using deep learning techniques — including generative adversarial networks, auto-encoders and face-swap algorithms — that manipulate audio, video or other digital content in ways that are increasingly difficult to distinguish from reality.
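
To make the underlying technique concrete, the sketch below shows the shared-encoder, dual-decoder autoencoder design behind classic face-swap deepfakes, written in PyTorch. The layer sizes, names and training details are illustrative assumptions, not specifications from the SDAIA document.

```python
# A minimal sketch of the shared-encoder/dual-decoder architecture used in
# classic face-swap deepfakes. Layer sizes and dimensions are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent pose/expression code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# The swap: both identities share one encoder, but each has its own decoder.
# Feeding person A's face through person B's decoder yields the swapped face:
# A's pose and expression rendered with B's appearance.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real aligned face crop
swapped = decoder_b(encoder(face_a))   # A's pose, B's identity
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```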

A double-edged technology

The document draws a clear distinction between malicious and non-malicious deepfakes, emphasizing that the technology is not inherently harmful; its intent and application determine its impact. 

While acknowledging positive applications across six key sectors — marketing, entertainment, retail, education, healthcare and culture — the SDAIA warns of significant risks on the malicious side, categorized under three primary threats.

The first is impostor scams, where deepfakes are used to convincingly mimic the voices, facial expressions and mannerisms of trusted individuals to authorize fraudulent financial transactions or extract sensitive information. The guidelines cite a real-world case in which an employee at a multinational firm was deceived into transferring a large sum of money to fraudsters who impersonated a senior executive during a video conference call.

The second category is non-consensual manipulation, involving the creation of explicit or compromising content without an individual’s consent, leading to severe emotional distress, reputational damage, and potential blackmail. 

The third threat is disinformation and propaganda, where deepfake videos or audio clips are used to falsely depict political figures making statements they never made, with the potential to sway public opinion and destabilize democratic processes.

Looking ahead, the document warns of an emerging threat landscape involving near-perfect AI-generated voice scams and entirely fabricated virtual environments designed to deceive users through simulated news reports, meetings, or interviews.

Obligations for developers and content creators

For deepfake technology developers, the guidelines mandate adherence to local and international data privacy frameworks, specifically referencing Saudi Arabia’s Personal Data Protection Law and Anti-Cyber Crime Law alongside international standards such as GDPR and CCPA. 

Developers are required to implement robust data protection measures, including privacy-by-design principles, anonymization techniques and consent management systems that allow individuals to request the removal of their likeness from training datasets.
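
The mechanics of such a removal request can be pictured as an operation against a training-data manifest, as in the minimal sketch below. The manifest layout and function names are assumptions for illustration; the guidelines mandate the capability, not this particular design.

```python
# A minimal sketch of handling a likeness-removal request against a
# training-data manifest keyed by file path. All names are hypothetical.
TRAINING_MANIFEST = {
    "img_0001.png": {"subject_id": "subject-0042", "consent": True},
    "img_0002.png": {"subject_id": "subject-0099", "consent": True},
}

def handle_removal_request(subject_id: str) -> list[str]:
    """Drop every sample tied to the subject and return what was removed,
    so the deletion can be logged and the model scheduled for retraining."""
    removed = [path for path, meta in TRAINING_MANIFEST.items()
               if meta["subject_id"] == subject_id]
    for path in removed:
        del TRAINING_MANIFEST[path]
    return removed

print(handle_removal_request("subject-0042"))  # ['img_0001.png']
```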

On transparency, developers must embed non-intrusive digital watermarks into synthetic content, maintain comprehensive documentation of AI models, and incorporate explainability features so that outputs can be understood by users and stakeholders. 
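
As an illustration of what a minimal, non-intrusive watermark could look like, the sketch below hides a short provenance tag in the least significant bits of an image's pixels. Production systems would use robust, standardized schemes; the tag string and bit layout here are assumptions, not anything prescribed by SDAIA.

```python
# A minimal least-significant-bit (LSB) watermark: the tag's bits are written
# into the lowest bit of each pixel byte, leaving the image visually unchanged.
import numpy as np

TAG = "AI-GENERATED"  # hypothetical provenance marker

def embed_watermark(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Write the tag's bits into the least significant bit of each byte."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, length: int = len(TAG)) -> str:
    """Recover the tag by collecting the LSBs back into bytes."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_watermark(image)
print(read_watermark(marked))  # -> "AI-GENERATED"
```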

The guidelines also call for human-in-the-loop oversight mechanisms at critical stages of model training and deployment, alongside automated systems to detect and flag unauthorized or unethical use of deepfake tools.
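
One way to picture that human-in-the-loop gate is sketched below: automated checks score each generation request, and anything above a risk threshold is held for human review instead of being released automatically. The threshold, keyword list and queue are illustrative assumptions.

```python
# A sketch of automated flagging feeding a human review queue.
# The policy terms and threshold are hypothetical.
from queue import Queue

REVIEW_QUEUE: "Queue[dict]" = Queue()
HIGH_RISK_TERMS = {"executive", "politician", "bank"}  # assumption: policy list

def risk_score(prompt: str) -> float:
    """Toy scorer: fraction of high-risk terms present in the request."""
    words = set(prompt.lower().split())
    return len(words & HIGH_RISK_TERMS) / len(HIGH_RISK_TERMS)

def submit_generation(prompt: str) -> str:
    """Auto-approve low-risk requests; queue high-risk ones for a human."""
    score = risk_score(prompt)
    if score > 0.3:  # assumption: tool-specific threshold
        REVIEW_QUEUE.put({"prompt": prompt, "score": score})
        return "held for human review"
    return "auto-approved"

print(submit_generation("swap faces in a birthday video"))        # auto-approved
print(submit_generation("clone the voice of the bank executive")) # held for human review
```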

Content creators face equally stringent requirements. They are prohibited from using deepfake services for fraud, impersonation or defamation, and must apply visible, tamper-resistant watermarks to all synthetic content. Creators must secure explicit consent before using any individual’s likeness, maintain auditable consent records, and distribute content exclusively through secure, controlled channels. 
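
What an auditable consent record might contain is sketched below. The field set is an assumption about the minimum a verifiable record would need; the guidelines mandate the record itself, not a schema.

```python
# A sketch of a tamper-evident consent record. Field names are assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str    # identifier of the person whose likeness is used
    purpose: str       # the specific use the subject agreed to
    granted_at: str    # ISO-8601 timestamp of consent
    expires_at: str    # consent should be time-bounded
    evidence_uri: str  # pointer to the signed consent form or recording

    def fingerprint(self) -> str:
        """Hash of the canonical record, for tamper-evident audit logs."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

record = ConsentRecord(
    subject_id="subject-0042",
    purpose="voice clone for an approved marketing campaign",
    granted_at=datetime.now(timezone.utc).isoformat(),
    expires_at="2026-12-31T23:59:59+00:00",
    evidence_uri="https://example.com/consent/0042.pdf",
)
print(record.fingerprint())  # stored alongside the record for later audit
```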

The guidelines also recommend the integration of blockchain and cryptographic hashing to create immutable records of original content, ensuring that any alterations can be traced back to their source.
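
The hash-chaining idea behind that recommendation can be illustrated in a few lines: each registered item stores the hash of the previous entry, so any later alteration breaks the chain. The in-memory sketch below stands in for a real distributed ledger; the entry fields are assumptions.

```python
# A minimal hash chain for content provenance: tampering with any stored
# entry invalidates every hash that follows it.
import hashlib
import json
import time

chain: list[dict] = []

def register_content(content: bytes, creator: str) -> dict:
    """Append a tamper-evident provenance entry for a piece of content."""
    entry = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,  # hypothetical metadata
        "timestamp": time.time(),
        "prev_hash": chain[-1]["entry_hash"] if chain else "0" * 64,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify_chain() -> bool:
    """Recompute every link; False means a record was altered after the fact."""
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["entry_hash"] if i else "0" * 64
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["prev_hash"] != expected_prev:
            return False
        if entry["entry_hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

register_content(b"original synthetic clip", creator="studio-a")
register_content(b"second clip", creator="studio-a")
print(verify_chain())  # True; flipping any stored field makes this False
```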

Guidance for regulators

Regulators are directed to establish platform monitoring mechanisms that prioritize high-risk deepfake content — particularly in the domains of finance, politics and identity impersonation — while allowing more flexibility for low-risk or educational material. The document calls for a formal approval process for deepfake technologies before commercial deployment and recommends that regulators adopt content provenance standards, such as those outlined by the Coalition for Content Provenance and Authenticity.

On enforcement, penalties for misuse are to be proportional to the severity, intent, and recurrence of violations, while provisions exist to limit sanctions for minimal or incidental uses of the technology. Annual use-case inventories, independent audits and mandatory training programs for government employees are also stipulated, along with public awareness campaigns to foster informed societal discourse.

Empowering consumers to detect deepfakes

A substantial section of the guidelines is devoted to equipping the general public with practical detection skills. The SDAIA recommends a three-step approach: assessing the message source and context; analyzing audio-visual elements for telltale signs such as irregular facial movements, lip-sync delays, unnatural blinking patterns or lighting inconsistencies; and authenticating content using AI-based detection tools such as Deepware Scanner and Sensity AI, as well as content provenance tools like Adobe’s Content Authenticity Initiative and blockchain-based verification systems.
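
Reduced to code, that three-step triage might look like the sketch below. The trusted-source list, detector threshold and provenance registry are hypothetical placeholders; in practice, the detector score would come from tools like those named above.

```python
# A sketch of the recommended three-step triage: source, signals, provenance.
import hashlib

TRUSTED_SOURCES = {"spa.gov.sa", "sdaia.gov.sa"}  # assumption: a vetted allow-list
KNOWN_ORIGINALS: dict[str, dict] = {}             # sha256 -> provenance record

def triage(source_domain: str, media: bytes, detector_score: float) -> str:
    """detector_score stands in for the output of an AI detection tool (0..1)."""
    # Step 1: assess the message source and context.
    context = "trusted source" if source_domain in TRUSTED_SOURCES else "unverified source"
    # Step 2: analyze audio-visual elements (here reduced to a model score).
    suspicious = detector_score > 0.7  # assumption: tool-specific threshold
    # Step 3: authenticate against known provenance records.
    registered = hashlib.sha256(media).hexdigest() in KNOWN_ORIGINALS
    if registered:
        return f"authenticated original ({context})"
    if suspicious:
        return f"likely synthetic, do not share ({context})"
    return f"inconclusive, verify before sharing ({context})"

print(triage("unknown-blog.example", b"...clip bytes...", detector_score=0.91))
```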

Victims of deepfake incidents are advised to immediately document evidence, report the content to the relevant platform, and notify Saudi authorities through the Kollona Amn app or the Ministry of Interior’s Cybercrime Unit. 

Financial fraud cases should also be reported to the Saudi Central Bank. Legal counsel experienced in digital rights is recommended, alongside engagement of digital forensics experts to trace the origin of the manipulated content.

Beneficial applications and the path forward

The guidelines highlight how deepfake technology, when used ethically, holds transformational potential. In healthcare, voice reconstruction has already improved quality of life for ALS patients by restoring their ability to communicate. In education, virtual tutors and remote training tools can expand access to underserved communities. In culture, the technology can preserve endangered dialects and bring historical events to life. In entertainment, consensual de-aging of actors and digital character creation are cited as legitimate and creative applications.

The document concludes with three overarching principles: the necessity of continuous learning and skills development to keep pace with AI evolution; organizational preparedness through tailored training and strategic hiring; and a commitment to ethical, positive applications that foster innovation while safeguarding public trust.

The full Deepfakes Guidelines document is available at https://sdaia.gov.sa/en/SDAIA/about/Files/File0001.pdf. 

(This report is based on an original SPA dispatch, expanded with key findings from the full SDAIA Deepfakes Guidelines document, SDAIA-P119, May 2025).