The federal government recently released its voluntary Code of Practice (the Code) for advanced generative artificial intelligence (AI) systems. The Code identifies measures that organizations developing and managing generative AI systems are encouraged to adopt, organized around six core principles: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness.
Organizations seeking to use, develop, and manage such systems are encouraged to integrate the principles of the Code into their operations and, in doing so, to take steps to ensure that the risks associated with the use of, and reliance on, AI are appropriately identified and mitigated.
The federal government released the Code shortly after publishing a Guide on the use of Generative AI for government institutions and opening a consultation on a proposed Code of Practice for generative AI systems. Bennett Jones has previously blogged about both: see Generative Artificial Intelligence (AI): Canadian Government Continues to Clarify Use of Generative AI Systems and Artificial Intelligence—A Companion Document Offers a New Roadmap for Future AI Regulation in Canada.
Against the background of these developments, Bill C-27, which includes draft AI legislation (the Artificial Intelligence and Data Act), is expected to be passed into law relatively soon, although it is worth noting that Bill C-27 has been under consideration since June 2022. The draft legislation imposes substantial compliance obligations in connection with the design, development and deployment of AI systems in the private sector, with corresponding exposure to penalties for non-compliance. Its focus is on addressing potential harm (physical or psychological harm, damage to property, or economic loss) arising from the use of AI systems. In its current form, however, the draft legislation lacks clarity as to which AI systems will be treated as "high-impact" (the standard that triggers many of its obligations and penalties). Until Bill C-27 is passed into law, the use of AI in the private sector is governed principally by federal privacy legislation, the Personal Information Protection and Electronic Documents Act.
While the Code is voluntary, its underlying principles will likely serve as a framework for assessing regulatory compliance, and therefore provide a loose roadmap of how AI will be regulated. How those principles are ultimately interpreted, however, will be critical to defining what compliance looks like in practice. Likewise, how "high-impact" systems come to be defined will be significant in understanding the relevant compliance standards.
In short, the federal government has not yet provided a clearly defined roadmap to guide organizations in the design, development and deployment of AI. Absent such a roadmap, organizations seeking to deploy AI in their business operations may inadvertently expose themselves to regulatory scrutiny and penalties. Careful navigation is required to reap the benefits of AI while effectively managing that exposure.
The Bennett Jones Privacy and Data Protection group is available to discuss how your organization can effectively develop its AI compliance program.