
Canadian Directive on Automated Decision-Making

Updated 23 April 2026
  • The Canadian Directive on Automated Decision-Making is a regulatory framework that governs AI-driven systems in the federal government by mandating risk assessment, transparency, and accountability.
  • It employs a structured, risk-tiered approach using Algorithmic Impact Assessments and categorizes systems from minimal to high risk to determine oversight levels.
  • The Directive requires both government departments and vendors to provide detailed documentation, continuous monitoring, and, for high-risk systems, independent audits.

The Canadian Directive on Automated Decision-Making (CDADM) is a regulatory and governance framework issued by the Treasury Board Secretariat (TBS) in 2019 to manage the adoption and oversight of automated decision systems (ADS) within the federal government. Its central objective is to harness the efficiency gains of AI-driven automation while safeguarding individuals’ rights, mitigating undue harm, and ensuring public sector accountability. The Directive operationalizes these goals through a structured, risk-tiered regime—mandating formal risk assessment, documentation, and ongoing monitoring at every stage of ADS development and deployment. Transparency and contestability are core stated values, though contemporary analyses highlight significant epistemic and practical limitations in its implementation (Das et al., 16 Apr 2026, Zick et al., 2024).

1. Core Structure and Key Objectives

The CDADM establishes TBS as its central steward, with ultimate accountability residing with the deputy heads of federal departments. Every procurement or material modification involving an ADS must be integrated with CDADM protocols throughout the project lifecycle. System vendors and integrators are contractually obligated to provide detailed documentation and monitoring data as specified by the Directive (Zick et al., 2024).

Key objectives include:

  • Classification of each ADS by risk category using a standardized, numeric framework.
  • Proportionate oversight scaling from minimal review to independent audit, contingent on risk tier.
  • Mandatory implementation of an Algorithmic Impact Assessment (AIA) except for minimal-risk cases.
  • Publication or documentation of key artifacts, with an emphasis on transparency.
  • Integration of human-in-the-loop controls and continuous post-deployment monitoring.

2. Risk-Categorization and Procedural Tiers

The CDADM institutes a risk-categorization methodology based on an ordinal scoring of Impact ($I$) and Likelihood of harm ($L$):

$$R = I \times L$$

where $R$ determines the procedural obligations carried by each project. The mapping between risk score, ordinal classification, and oversight tier is as follows:

| Risk Category | $R$ Range | Procedural Tier | Procedural Requirements |
|---|---|---|---|
| Minimal Risk | $R = 1$ ($I = 1$, $L = 1$) | Tier 0 | Record risk, standard documentation |
| Low Risk | $2 \leq R \leq 3$ | Tier 1 | Light AIA, departmental sign-off |
| Medium Risk | $4 \leq R \leq 6$ | Tier 2 | Full AIA, TBS notification, internal report |
| High Risk | $R \geq 7$ | Tier 3 | Full AIA, independent audit, TBS approval, public summary |

For Tier 2 and 3 systems, departments must complete comprehensive AIAs; document data provenance, demographic impacts, and fairness metrics; and prepare mitigation plans. Tier 3 further demands independent third-party review, mandatory TBS approval, and public release of a non-proprietary AIA summary (Zick et al., 2024). All but minimal-risk systems require ongoing monitoring of model drift and human-in-the-loop audits.
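The tier mapping described above can be sketched as a simple lookup. This is an illustrative reading of the risk table, not an official implementation; the function name and threshold boundaries (in particular the high-risk cutoff) are assumptions drawn from the ranges shown.

```python
def risk_tier(impact: int, likelihood: int) -> int:
    """Map ordinal Impact (I) and Likelihood (L) scores to a CDADM
    procedural tier via the risk score R = I * L.

    Illustrative sketch only: tier thresholds follow the table above.
    """
    r = impact * likelihood
    if r == 1:
        return 0  # Minimal risk: record risk, standard documentation
    if r <= 3:
        return 1  # Low risk: light AIA, departmental sign-off
    if r <= 6:
        return 2  # Medium risk: full AIA, TBS notification, internal report
    return 3      # High risk: full AIA, independent audit, TBS approval


# Example: a system with moderate impact but high likelihood of harm
# lands in Tier 2 and therefore requires a full AIA.
print(risk_tier(2, 3))  # -> 2
```

Because the tiers are defined purely by the product $R$, a low-impact but frequently triggered system can carry the same obligations as a high-impact, rarely triggered one.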

3. Algorithmic Impact Assessment and Public AI Register

The Algorithmic Impact Assessment provides an evidence-based, structured protocol to quantify and characterize harm potentials (bias, privacy infringement, safety issues) and to outline mitigation strategies. Every non-minimal-risk project must submit an AIA, with depth and scrutiny corresponding to the risk tier.

To institutionalize transparency, the Directive mandates the publication of basic system metadata into a centralized Federal AI Register. The register, operational since November 2025, comprises individual records for every federal ADS, each including identifiers, functional descriptions, primary user groups, statuses, technical capabilities, data sources, developer details, and—where relevant—personal information handling under PIPEDA. As of the Register's 2025 launch, it contained 409 systems across 42 departments (Das et al., 16 Apr 2026).

Table: Example Metadata Fields in Federal AI Register

| Attribute | Example Value | Coverage in Register |
|---|---|---|
| System Name | "Tax Document OCR Extractor" | All entries |
| Technical Capability | NLP, LLM, OCR | All entries |
| Developer | In-house, vendor (e.g., Microsoft) | ~43.3% in-house, ~38.1% vendor |
| Personal Data Usage | PIPEDA bank, unspecified | 21.4% specified, 24.4% unspecified |

The Register’s ontology emphasizes system capabilities and procurement lineage, but omits structured fields for human-in-the-loop configuration, personnel training, or dynamic uncertainty measures (Das et al., 16 Apr 2026).
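As a hypothetical illustration of that ontology (the field names below are assumptions based on the attributes listed in the table, not the Register's actual schema), a register entry might be modeled as:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RegisterRecord:
    """Hypothetical shape of a Federal AI Register entry, inferred from
    the metadata attributes described above."""
    system_name: str
    technical_capabilities: list[str]     # e.g. ["NLP", "OCR"]
    developer: str                        # "In-house" or a vendor name
    personal_data_usage: Optional[str]    # PIPEDA category, or None if unspecified


record = RegisterRecord(
    system_name="Tax Document OCR Extractor",
    technical_capabilities=["NLP", "OCR"],
    developer="In-house",
    personal_data_usage=None,  # ~24.4% of entries leave this unspecified
)
```

Note what is absent: there is no slot for the human-in-the-loop configuration, personnel training, or uncertainty-measure fields that the paragraph above identifies as omissions, so those practices cannot be recorded even by a willing registrant.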

4. Operational Limitations and Bureaucratic Silences

Critical analyses such as "Bureaucratic Silences" (Das et al., 16 Apr 2026) identify systematic biases in what is represented as "accountable" AI under the CDADM. The Register privileges technical, decontextualized dimensions: system function, vendor relationships, asset status, but rarely encodes sociotechnical levers. Three major silence categories are identified:

  • Human Discretion: The register includes no structured variable for the extent or locus of human override, interpretive discretion, or user expertise calibration, despite substantial literature emphasizing the importance of these levers in risk management.
  • Personnel Training: The Directive's intent to require mitigation and capacity-building plans for high-risk systems is not operationalized in Register metadata; there is no field for training curricula, qualification standards, retraining protocols, or competence assessments.
  • Uncertainty Management: Few registrants articulate error rates, bounds on reliability, or residual uncertainty; only about 25% flag "pilot" status, while nearly half claim "accuracy" and "consistency," with the remainder silent. Statistical confidence, model drift indicators, or known blind spots are absent, which can offload risk onto frontline users without making it visible or auditable.

This selective ontology results in performative compliance: systems satisfy the letter of disclosure requirements without surfacing the substantive practices that undergird safety, contestability, and accountability.

5. Departmental and Vendor Obligations

Both departmental implementers and external vendors are bound by explicit obligations once a system is classified above Tier 0 (Zick et al., 2024):

Departmental requirements:

  • Conduct risk assessment at project inception; classify all ADS.
  • Engage technical ML experts in assessment, documentation, and monitoring.
  • Maintain comprehensive audit trails for every step from AIA completion to post-deployment performance.
  • Upload and maintain records in the public algorithmic registry.

Vendor requirements:

  • Provide detailed summaries of training data, including fairness metrics by class.
  • Supply full model documentation, from architecture diagrams to version and change logs.
  • Generate process and audit logs covering data ingestion, pipeline, and deployment.
  • Deliver ongoing performance and incident reports post-launch.

6. Implementation Challenges and Recommendations

Three recurrent challenges have emerged in the implementation of CDADM (Zick et al., 2024):

  • Expertise Shortage: Many departments lack sufficient technical staff to evaluate and operationalize AIAs, which undermines the substantive function of the Directive.
  • Risk Framework Design: Self-reported risk assessments and financial thresholds can inadvertently exclude impactful in-house systems or allow classification to be circumvented through procurement fragmentation.
  • Transparency Gaps: Publication requirements are often met only in a partial manner, with significant details such as demographic performance, fairness assessments, or compliance logs retained internally.

Recommendations advanced for future CDADM iterations include:

  • Creating a TBS-accredited registry of AI audit experts.
  • Establishing a centralized evaluation and support unit.
  • Eliminating dollar-value thresholds for mandatory ADS registration.
  • Mandating at least Tier 0 classification for all systems, including internal builds.
  • Iterating risk reviews in tandem with AI system evolution.
  • Expanding public documentation requirements.
  • Clarifying the distribution of legal liability between government and vendors.

7. Comparative and Prospective Context

Compared to voluntary best-practice frameworks such as the World Economic Forum’s "AI Procurement in a Box," the CDADM is legally binding and operationalizes a quantitative, four-tier risk matrix with formal procedural mandates, including third-party audit for high-risk systems (Zick et al., 2024). However, the lack of structured fields for sociotechnical governance elements in the AI Register constrains its capacity to foster meaningful contestability or oversight.

A plausible implication is that the future efficacy of the Directive rests on the redesign of the Registry and reporting systems to encode not only technical but also procedural, human, and epistemic aspects of deployable AI. Only by embedding structured disclosures of human-in-the-loop roles, personnel training standards, and uncertainty management protocols can the automated decision-making ecosystem in Canada move beyond formal compliance to genuinely contestable, accountable governance (Das et al., 16 Apr 2026).
