Written By Stephen Burns, Sebastien Gittens, Matthew Flynn, Ahmed Elmallah
Artificial intelligence (AI) regulation in Canada may be around the corner and could affect all types of organizations involved with AI systems in commercial contexts. A new companion document for the Artificial Intelligence and Data Act (AIDA) was recently released by Innovation, Science and Economic Development Canada (the Companion Document). This document is an important development, as it outlines a proposed roadmap for future AI regulation.
In our previous blog, Privacy Reforms Now Back Along with New AI Regulation, we provided a comprehensive summary of Canada's new pending AI legislation: the Artificial Intelligence and Data Act. AIDA was introduced as part of Bill C-27, now at its second reading in the House of Commons.
AIDA is touted as "the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses." Broadly, AIDA addresses the regulation of two types of adverse impacts associated with high-impact AI systems:
- safety (e.g., physical harm, psychological harm, damage to property, or economic loss to an individual caused by an AI system); and
- discriminatory outputs (e.g., biased, unjustified and adverse differential impacts of AI systems based on any of the prohibited grounds of discrimination in the Canadian Human Rights Act).
Roadmap for Proposed AI Regulation
If Bill C-27 receives Royal Assent, a consultation process for supplemental AI regulations will begin. The Companion Document is therefore intended to provide a framework for that future consultation.
As noted in the Companion Document, the government intends to take an agile approach to AI regulation by developing and evaluating regulations and guidelines in close collaboration with stakeholders on a regular cycle, and adapting enforcement to the needs of the changing environment. Implementation of the initial set of AIDA regulations is expected to take the following path:
- consultation on regulations (6 months);
- development of draft regulations (12 months);
- consultation on draft regulations (3 months); and
- coming into force of initial set of regulations (3 months).
Accordingly, it is envisioned that there would be a period of at least two years after Bill C-27 receives Royal Assent before the new law comes into force. This means that the provisions of AIDA would come into force no sooner than 2025.
What Are "High-Impact AI Systems"?
AIDA will apply to "high-impact AI systems". However, AIDA does not clearly define what a "high-impact AI system" includes; the term will instead require further definition in future AI regulations.
The Companion Document proposes key factors that can be used to evaluate whether a system is "high-impact" and, therefore, regulated by AIDA. These include examining, for a given AI system:
- risk of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
- severity of potential harms;
- scale of use;
- nature of harms or adverse impacts that have already taken place;
- extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the system;
- imbalances of economic or social circumstances, or age of impacted persons; and
- the degree to which the risks are adequately regulated under another law.
Systems that raise concerns across these factors may therefore be regulated by AIDA as "high-impact systems".
Example High-Impact AI Systems
Additionally, the Companion Document provides the following examples of high-impact AI systems that are of interest for regulation under AIDA (e.g., in terms of their potential for harmful and/or biased impact):
- screening systems impacting access to services (e.g., access to credit or employment);
- biometric systems used for identification and inference;
- systems that can influence human behaviour at scale; and
- systems critical to health and safety.
The Companion Document outlines the potential for each of these systems to produce harmful and/or discriminatory outputs.
Obligations on Organizations Involved with High-Impact AI Systems
The Companion Document contemplates that, under the proposed AI regulations, organizations involved with high-impact AI systems will likely be guided by the following example principles and obligations. Such organizations will be expected to institute appropriate accountability mechanisms to ensure compliance with their obligations.
| Proposed Regulatory Requirements | Example Obligations on Organizations |
| --- | --- |
| Human Oversight & Monitoring | Identify and address risks with regard to harm and bias, document appropriate use and limitations, and adjust these measures as needed. |
| Transparency | Provide the public with appropriate information about how high-impact AI systems are being used. |
| Fairness and Equity | Build high-impact AI systems with an awareness of the potential for discriminatory outcomes. |
| Safety | Proactively assess the system to identify harms that could result from its use, including through reasonably foreseeable misuse. |
| Accountability | Put in place the governance mechanisms needed to ensure compliance with all legal obligations applicable to high-impact AI systems in the context in which they will be used. |
| Validity & Robustness | Ensure that the high-impact AI system performs consistently with its intended objectives. |
Different Obligations for Different Activities
The Companion Document envisions that an organization's obligations under future AI regulations will likely depend on the nature of its involvement with a high-impact AI system. To offer further clarity, the Companion Document provides the following examples of proposed obligations for organizations involved in different activities associated with high-impact AI systems. Organizations may be involved in one or more of the listed regulated activities.
| Proposed Regulated Activity | Example Obligations |
| --- | --- |
| Designing the system (e.g., determining the objectives of the AI system and the data needs, methodologies, or models based on those objectives) | Identify and address risks with regard to harm and bias, document appropriate use and limitations, and adjust these measures as needed. |
| Developing the system (e.g., processing datasets, training the system on those datasets, modifying parameters of the system, developing or modifying methodologies or models used in the system, or testing the system) | Identify and address risks with regard to harm and bias, document appropriate use and limitations, and adjust these measures as needed. |
| Making the system available for use (e.g., deploying a fully functional system, whether by the person who developed it, through a commercial transaction, through an application programming interface (API), or by making the working system publicly available) | Consider the system's potential uses once deployed, and take measures to ensure users are aware of any restrictions on how the system is meant to be used and understand its limitations. |
| Managing operations of the AI system (e.g., supervising the system while in use, including beginning or ceasing its operation, monitoring and controlling access to its output while in operation, and altering parameters pertaining to its operation in context) | Use the AI system as indicated, assess and mitigate risk, and ensure ongoing monitoring of the system. |
Oversight and Enforcement
Finally, the Companion Document suggests that, in the initial years after AIDA comes into force, the focus will be on education, establishing guidelines, and helping organizations come into compliance through voluntary means.
Thereafter, the focus is expected to shift to enforcement mechanisms to address non-compliance. These are envisioned to include two types of penalties for regulatory offences, as well as various true criminal offences:
- Regulatory administrative monetary penalties (AMPs): A flexible compliance tool that could be used directly by the regulator in response to any violation in order to encourage compliance with the obligations of AIDA.
- Prosecution of regulatory offences: More serious cases of non-compliance with regulatory obligations will be referred to the Public Prosecution Service of Canada.
- True criminal offences: These are separate from the regulatory obligations in AIDA and prohibit knowing or intentional conduct where a person causes serious harm. Such offences may carry stronger punishments, including imprisonment, and include, for example, knowingly using personal information obtained from a data breach to train an AI system.
The substance of a number of these enforcement mechanisms will need to be further clarified by the supplemental AI regulations. As currently drafted, however, a contravention of AIDA may result in significant consequences. Depending on the circumstances, an organization may be liable to a fine of not more than the greater of $25,000,000 and 5 percent of its gross global revenues.
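For illustration only, the short sketch below shows how this "greater of" fine cap operates arithmetically; the revenue figure is hypothetical, and the actual assessment of any penalty would be governed by AIDA, its regulations, and the circumstances of the contravention.

```python
# Illustrative sketch only: as currently drafted, AIDA caps the fine at the
# greater of a fixed amount ($25,000,000) and 5% of the organization's
# gross global revenues. The revenue figure used below is hypothetical.

FIXED_CAP = 25_000_000  # $25 million
REVENUE_RATE = 0.05     # 5 percent of gross global revenues

def maximum_fine(gross_global_revenue: float) -> float:
    """Return the maximum fine cap for a given gross global revenue."""
    return max(FIXED_CAP, REVENUE_RATE * gross_global_revenue)

# For an organization with $1 billion in gross global revenues,
# 5% is $50 million, which exceeds the $25 million floor:
print(f"${maximum_fine(1_000_000_000):,.0f}")  # prints $50,000,000
```

As the example suggests, the $25,000,000 figure operates as a floor on the cap: for organizations whose gross global revenues exceed $500,000,000, the revenue-based measure becomes the higher, governing amount.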
Next Steps
The Companion Document notes that, following Royal Assent of Bill C-27, the government intends to conduct a broad and inclusive consultation with industry, academia, civil society, and Canadian communities to inform the implementation of AIDA and its regulations.
If enacted as currently drafted, we anticipate that AIDA will have a substantial impact on the extent of regulatory scrutiny of organizations with respect to their use of artificial intelligence. As a result, organizations should undertake a comprehensive review of how they conduct business and manage AI systems.
The Bennett Jones Privacy & Data Protection group is available to discuss how the changes may affect an organization's privacy obligations.
Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors to explore how we can help you navigate your legal needs.
For permission to republish this or any other publication, contact Amrita Kochhar at kochhara@bennettjones.com.