AgenticOps.ae

Glossary · Term · 2026


DIFC Data Protection Regulation 10



§ 01

What it is and when it took effect

DIFC Data Protection Regulation 10 is the AI-specific addition to the existing DIFC Data Protection Law (DIFC Law No. 5 of 2020). It was issued by the DIFC Commissioner of Data Protection in 2025 and took effect in September 2025, with a transitional grace period for in-flight deployments. It is binding on any DIFC-licensed entity that uses AI systems to process personal data within the centre.

The regulation does not invent new rights for data subjects beyond those already in the DIFC Data Protection Law. What it adds is a structured operational expectation for controllers and processors deploying AI: defined principles, defined documentation, and defined evidence that those principles are being upheld. It moves AI compliance from a self-declaration exercise to an examinable one.

For agentic AI specifically — systems that plan, take actions, and decide under uncertainty — the regulation is more consequential than for static analytical models. An agent that takes outbound actions on customer accounts, escalates exceptions, or triggers workflow steps is exactly the surface the regulation was designed to govern.


§ 02

The five demonstrable principles

Ethics. The deployment must serve a legitimate, articulated business purpose proportionate to the data-subject impact. "We deployed it because it was available" is not a defensible position. Use cases that affect access to financial services, pricing, or risk assessment require an explicit ethical review and a documented rationale.

Fairness. Outputs must not produce unjustified differential treatment across protected or sensitive characteristics. The controller is expected to evaluate for disparate impact, document the methodology used, and act on findings. Fairness is not a one-off launch check; it is a continuous monitoring obligation.

Transparency. Data subjects must be informed when they are interacting with an AI system, what categories of data feed it, and what role the system plays in decisions affecting them. Internally, the model logic, training-data lineage, and decision pathways must be documented to a level that a reasonably skilled examiner can follow.

Security. Standard information-security controls (access management, encryption in transit and at rest, vulnerability management) are baseline. Reg 10 adds AI-specific controls: prompt-injection defence, model-output filtering, and integrity checks on the decision chain. Third-party model APIs are in scope and must be contractually bound.
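Model-output filtering of the kind described above can start as a pattern gate in front of any outbound message. A minimal sketch only — the pattern list and function name are illustrative, and a real deployment layers several controls rather than relying on regex alone:

```python
import re

# Illustrative patterns only; a production filter would be far broader.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                                 # bare card-like 16-digit runs
    re.compile(r"ignore (all )?previous instructions", re.I),  # injection text echoed back
]

def filter_output(text: str):
    """Return (allowed, reason) for a candidate agent output."""
    for pat in BLOCKED_PATTERNS:
        if pat.search(text):
            return False, f"blocked by pattern {pat.pattern!r}"
    return True, "ok"

allowed, reason = filter_output("Your card 4111111111111111 is active")
# allowed is False: a 16-digit sequence leaked into the output
```

The same gate can run on inbound prompts before they reach the model, which is where prompt-injection defence usually starts.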

Accountability. A named senior owner is responsible for the deployment. There is a documented governance cadence, a risk register, an incident log, and an escalation path. The controller can produce, on request, evidence that each of the prior four principles is being upheld in practice.


§ 03

What examiners ask for in an AI examination

A DFSA-coordinated examination of an AI deployment is, in our experience, a documentation exercise more than a technical audit. The examiner does not run the model against held-out test data on the day. They ask for the artefacts that prove you ran the right tests yourself.

Model documentation. A model card or equivalent: provider, version, release date, intended use, known limitations, training-data summary at the level the provider discloses, evaluation results against your task, and a list of changes since last review. Updated whenever the underlying model version changes.
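A model card of this kind is easiest to keep current as structured data rather than free text. A minimal sketch, with illustrative field names and a hypothetical provider — Reg 10 does not prescribe a schema:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    # Field names are our own convention, not a regulatory schema.
    provider: str
    model_version: str
    release_date: str           # ISO 8601
    intended_use: str
    known_limitations: list
    training_data_summary: str  # at the level the provider discloses
    evaluation_results: dict    # metric name -> score on your task
    changes_since_last_review: list

card = ModelCard(
    provider="ExampleAI",  # hypothetical provider
    model_version="2.1",
    release_date="2025-06-01",
    intended_use="Triage of retail onboarding exceptions",
    known_limitations=["No Arabic-dialect coverage guarantees"],
    training_data_summary="Provider discloses web and licensed corpora only",
    evaluation_results={"task_accuracy": 0.94},
    changes_since_last_review=["Upgraded from 2.0 on 2025-08-12"],
)
```

Because the card is data, a check that it is refreshed whenever `model_version` changes can run in CI rather than rely on someone remembering.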

Decision-trace records. For a sample of decisions the examiner picks (typically a few dozen across edge cases and routine cases), you produce the input, the intermediate reasoning the agent followed, the tools it invoked, the personal-data fields it accessed, and the final output. The trace must be reproducible from logs, not reconstructed.
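One way to picture the artefact: a trace is a single record assembled from the logs, carrying every element listed above. A sketch with illustrative field and tool names — nothing here is a mandated format:

```python
import json

# One reproducible decision trace, assembled from immutable logs.
trace = {
    "case_id": "CASE-0001",
    "input": {"request": "increase card limit", "channel": "app"},
    "reasoning_steps": [
        "checked eligibility policy",
        "flagged income field older than 12 months",
    ],
    "tools_invoked": ["crm.lookup", "policy.check"],  # hypothetical tool names
    "personal_data_fields": ["name", "income", "account_id"],
    "final_output": "escalate_to_human",
}

def trace_is_complete(t: dict) -> bool:
    """True when the record carries every element an examiner samples for."""
    required = {"case_id", "input", "reasoning_steps",
                "tools_invoked", "personal_data_fields", "final_output"}
    return required.issubset(t)

serialized = json.dumps(trace, sort_keys=True)  # stable form for append-only storage
```

Running `trace_is_complete` over every stored trace is a cheap nightly check that the sample an examiner eventually picks will not come back with gaps.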

Evaluation harness output. The continuous-evaluation suite that runs against the deployment, with version-stamped results showing performance over time against accuracy, fairness, and safety metrics. This is what proves prompt drift is being detected and addressed.

Governance committee minutes. Records of the meetings where AI performance, incidents, and policy changes were reviewed. Names, dates, decisions, actions taken. Stale or absent minutes are a finding.

Escalation logs. The audit trail of cases where the agent escalated to a human, plus the categories of cases that should have escalated and did not. The ratio and the trend are what the examiner reads.
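The ratio the examiner reads can be computed directly from that audit trail. A sketch, assuming each logged case carries two booleans (our naming, not the regulation's):

```python
def escalation_metrics(cases):
    """cases: list of dicts with 'escalated' and 'should_have_escalated'.
    Returns the miss rate among cases policy says required a human,
    plus the overall escalation rate."""
    required = [c for c in cases if c["should_have_escalated"]]
    if not required:
        return {"missed_escalation_rate": 0.0, "escalation_rate": 0.0}
    missed = sum(1 for c in required if not c["escalated"])
    return {
        "missed_escalation_rate": missed / len(required),
        "escalation_rate": sum(1 for c in cases if c["escalated"]) / len(cases),
    }

sample = [
    {"escalated": True,  "should_have_escalated": True},
    {"escalated": False, "should_have_escalated": True},   # a miss
    {"escalated": False, "should_have_escalated": False},
    {"escalated": True,  "should_have_escalated": False},  # precautionary escalation
]
metrics = escalation_metrics(sample)
# missed_escalation_rate = 0.5, escalation_rate = 0.5 on this toy sample
```

Charting `missed_escalation_rate` per week gives the trend line the examiner is actually looking for.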


§ 04

How an agentic deployment satisfies it operationally

We design DIFC deployments around the evidence the examiner will eventually request, not around the principles in the abstract. Concretely, three operational practices cover most of the surface.

Structured logging at every tool call. Every API the agent invokes, every personal-data field it reads, every outbound message it sends — logged with timestamp, purpose code, data-subject identifier, and outcome. Logs are immutable and queryable by data-subject and by case.
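The practice above reduces to emitting one append-only line per tool invocation. A minimal sketch of what we mean, with field names that follow our own convention rather than a Reg 10 schema:

```python
import json
import time
import uuid

def log_tool_call(tool: str, purpose_code: str, data_subject_id: str,
                  fields_read: list, outcome: str) -> str:
    """Build one JSON log line per tool invocation, ready for an
    append-only store that is queryable by data subject and by case."""
    entry = {
        "event_id": str(uuid.uuid4()),       # unique ID supports immutability checks
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tool": tool,
        "purpose_code": purpose_code,        # ties the call to a lawful purpose
        "data_subject_id": data_subject_id,  # enables per-subject queries
        "fields_read": fields_read,
        "outcome": outcome,
    }
    return json.dumps(entry, sort_keys=True)

line = log_tool_call("crm.lookup", "KYC-REFRESH", "DS-1042",
                     ["name", "income"], "success")
```

One JSON object per line is deliberate: it keeps the store append-only and lets a per-subject query be a simple filter rather than a join.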

Continuous evaluation harness. A weekly run against a maintained golden set of cases plus randomly sampled live traffic. Results published to the governance committee and version-stamped against the deployed model and prompt.
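The weekly run can be sketched as scoring the deployed agent against the golden set and stamping the result with the model and prompt versions so drift is attributable. A toy illustration, with a stand-in agent in place of the live deployment:

```python
def run_weekly_eval(golden_set, predict, model_version: str, prompt_version: str):
    """Score `predict` (the deployed agent) against a maintained golden set
    and version-stamp the result for the governance committee."""
    correct = sum(1 for case in golden_set
                  if predict(case["input"]) == case["expected"])
    return {
        "accuracy": correct / len(golden_set),
        "model_version": model_version,    # stamped so regressions are attributable
        "prompt_version": prompt_version,
        "n_cases": len(golden_set),
    }

golden = [
    {"input": "routine limit query", "expected": "answer"},
    {"input": "disputed transaction", "expected": "escalate"},
]
# Stand-in for the live agent: escalates anything mentioning a dispute.
agent = lambda text: "escalate" if "disputed" in text else "answer"
result = run_weekly_eval(golden, agent, model_version="2.1", prompt_version="p14")
# result["accuracy"] == 1.0 on this two-case toy set
```

In production the same function would also take the randomly sampled live traffic, and its output is exactly the version-stamped artefact an examiner asks for.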

Monthly governance review in year one. Cadenced review of metrics, incidents, and proposed changes. Minuted, with a senior named owner. Quarterly from year two onward, unless a material incident moves it back to monthly.

For the agentic AI side specifically, see our agentic AI vs RPA comparison and the PDPL reference for the federal floor that sits underneath.


§ 05

Questions UAE business owners are actually asking

01 Does Reg 10 apply outside DIFC?

Directly, no — its territorial scope is the Dubai International Financial Centre. But practically, yes for many groups. If a DIFC-licensed entity processes personal data jointly with a mainland UAE affiliate, the DIFC entity must satisfy Reg 10 across the joint workflow, which usually pulls the mainland workflow up to the same standard. Mainland-only deployments fall under federal PDPL, which is less prescriptive on AI specifically. We treat Reg 10 as the working template for any UAE financial-services AI deployment regardless of where the licensed entity sits.

02 What if we use a third-party model API like OpenAI?

You remain the controller and the regulated entity — Reg 10 obligations do not transfer to OpenAI, Anthropic, or any other model provider. You must contractually bind the provider to the relevant security and processing terms, document the basis for cross-border transfer, and maintain decision-trace records on your side that survive even if the provider changes its log-retention policies. We require zero-data-retention contractual mode for any DIFC deployment, plus a fall-back evaluation harness that can reproduce decisions without the upstream provider in the loop.

03 Do we need a separate AI governance committee?

Reg 10 does not name a "committee" by that title, but the accountability principle effectively requires one. In practice, DFSA examiners want to see a named senior owner, a documented review cadence (we recommend monthly during the first year, quarterly thereafter), minutes of decisions taken, and an escalation path to the board for material AI risks. Most of our DIFC clients fold this into their existing risk and compliance committee with a standing AI agenda item rather than spinning up a separate body.



§ 06 — Begin

We translate this into a costed plan in 30 minutes.

One call. We tell you which workflows in your business should be agentic, which agent goes first, what the regulatory overlay looks like for your sector, and what 90 days of build looks like in practice. No deck. Free.