Charting Canada's AI course: unveiling a code of conduct and legislative amendments to AIDA

Canada continues to lead the way in building legislative frameworks for the responsible development of AI

With the rapid evolution of generative AI and its worldwide adoption, there is no doubt that AI is here to stay. Artificial intelligence has the potential to create efficiencies across industries and to change how we work, and it can also have a significant impact on human well-being and health. Many companies are already adopting AI to create operational efficiencies, and the pace of this digital transformation makes it hard for governments and legislative frameworks to keep up. This past November, 28 nations, including Canada, signed the Bletchley Declaration on AI Safety. The signatories affirm that “AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.”

Before signing the Bletchley Declaration, Canada had already unveiled a voluntary code of conduct for generative AI as a precursor to the proposed Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022 but is not likely to come into force until 2025.

A made-in-Canada approach – AI voluntary code: a bridge to AIDA

On September 27, the Minister of Innovation, Science and Industry released a voluntary code of conduct specific to generative AI to bridge the gap while the proposed AIDA legislation makes its way through Parliament. Beyond risk mitigation, the Code of Conduct encourages its signatories to promote and build a robust and responsible AI ecosystem in Canada. The code provides a set of identified measures that anticipate the upcoming regulations under AIDA, addressing both the development of generative AI systems and the management of their operations. The development measures focus on methodology selection, collection and processing of datasets, model building, and testing, while the operations-management measures cover putting a system into operation, controlling the parameters of its operation, controlling access, and monitoring its operation.

Organizations that develop these systems or manage their operations commit to implementing responsible generative AI practices to mitigate the adverse impacts associated with advanced generative AI systems.

Code of Conduct – a two-tiered approach

The Code of Conduct establishes important benchmarks in the form of undertakings for organizations. These undertakings fall into two tiers based on their application:

(1) all advanced generative systems, and

(2) advanced generative systems that are available for public use.

The measures in both tiers are organized around six core principles applicable to all organizations: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. The first tier applies to all advanced generative systems, while the second extends the measures to organizations making these systems broadly available for use, acknowledging the heightened risks associated with public use. The code thus distinguishes what developers and managers must do for any advanced generative system from the additional steps required when the system is available for public use.

Regardless of your organization’s role, and whether or not the advanced generative system is available for public use, certain steps must be taken to align with the Code of Conduct. These include implementing risk management policies, procedures and training proportionate to the nature and risk profile of the activities; sharing best practices with organizations playing complementary roles in the ecosystem; and performing comprehensive assessments of reasonably foreseeable risks, together with implementing corresponding mitigation measures.

As part of their commitment to a robust AI ecosystem, code of conduct signatories also commit to several broader ideals, including pledges to prioritize human rights, accessibility, environmental sustainability, and global challenges when developing and deploying AI systems. Notably, the code contains no specific requirements for meaningful explainability – a prominent feature of both the proposed Consumer Privacy Protection Act and the EU’s General Data Protection Regulation. Nor does the code give users the ability to “opt out,” even though transparency is one of its core principles.

In essence, the Code of Conduct represents a proactive approach to AI governance. It provides a framework for organizations to act responsibly and collaboratively, serving as a crucial interim measure while awaiting the formalization of AIDA.

Further proposed amendments to AIDA

As AI evolves, proposed legislative frameworks need to keep pace. On October 5, the Minister of Innovation, Science and Industry, who heads Innovation, Science and Economic Development Canada (ISED), wrote a letter to the Standing Committee on Industry and Technology proposing amendments to the Artificial Intelligence and Data Act (AIDA).

The letter suggests amendments in the following areas:

  • specifying roles and obligations for different actors in the AI value chain;
  • clarifying obligations for general-purpose generative AI systems, such as ChatGPT;
  • defining classes of systems that would be considered high impact;
  • strengthening and clarifying the role of the proposed AI and Data Commissioner; and
  • aligning AIDA with the EU AI Act and the frameworks of other advanced economies.

The voluntary Code of Conduct covers the first two areas.

High-impact systems

Proposed section 7[i] of AIDA requires an assessment of an AI system to determine whether it is a “high-impact system.” This assessment is important because “high-impact system” is the threshold that triggers most of AIDA’s obligations (i.e., assessing, mitigating and monitoring risk, keeping records, publishing descriptions of AI systems in use, and providing notice of harm). However, AIDA does not define “high-impact system,” creating uncertainty as to which AI systems would actually be subject to these obligations.

In the letter, ISED does not suggest a definition of “high-impact system” but instead proposes seven classes of use for determining whether a system is high impact (a minimal illustrative sketch follows the list below). These classes are:

  • Employment – the use of an artificial intelligence system to make determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.
  • Provision of Service – the use of an artificial intelligence system to determine whether services should be provided, the type and cost of services, and how these will be prioritized.
  • Biometric Information – the use of an artificial intelligence system to process biometric information in order to identify an individual (except for authentication with consent) or to assess an individual’s behaviour or state of mind.
  • Online Content – the use of an artificial intelligence system to moderate content on online communication platforms, such as search engines and social media services, or to prioritize the presentation of such content.
  • Healthcare – the use of an artificial intelligence system in matters relating to healthcare or emergency services. This excludes a use referred to in any of paragraphs (a) to (e) of the definition of “device” in section 2 of the Food and Drugs Act that is in relation to humans.
  • Courts – the use of an artificial intelligence system by a court or administrative body to make a decision regarding an individual involved in proceedings before it.
  • Law Enforcement – the use of an artificial intelligence system to support a peace officer, as defined in section 2 of the Criminal Code, in carrying out law enforcement powers, duties, and functions.
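
For organizations triaging their systems against these proposed classes, the sketch below shows one way such a check might be recorded in code. It is a minimal, purely illustrative sketch: the enum values paraphrase the letter’s classes, while the HighImpactClass, AISystemProfile and is_high_impact names are hypothetical inventions of ours, since neither AIDA nor the letter prescribes any particular implementation.

```python
# Purely illustrative sketch: a triage record for the seven proposed
# high-impact classes. Names are hypothetical; AIDA and ISED's letter
# prescribe no implementation.
from dataclasses import dataclass, field
from enum import Enum, auto


class HighImpactClass(Enum):
    EMPLOYMENT = auto()             # determinations in respect of employment
    PROVISION_OF_SERVICE = auto()   # whether/how services are provided
    BIOMETRIC_INFORMATION = auto()  # identification or behavioural assessment
    ONLINE_CONTENT = auto()         # moderation or prioritization of content
    HEALTHCARE = auto()             # healthcare or emergency services
    COURTS = auto()                 # decisions by courts or administrative bodies
    LAW_ENFORCEMENT = auto()        # support for peace officers' duties


@dataclass
class AISystemProfile:
    """Self-reported description of where and how an AI system is used."""
    name: str
    uses: set[HighImpactClass] = field(default_factory=set)

    def is_high_impact(self) -> bool:
        # Under the proposed amendments, use within any one class would
        # be enough to attract the high-impact obligations.
        return bool(self.uses)


# Example: a resume-screening tool falls within the Employment class.
screener = AISystemProfile(name="resume-screener",
                           uses={HighImpactClass.EMPLOYMENT})
print(screener.is_high_impact())  # True
```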

The letter also states that this list can evolve and be modified by the Governor in Council as technology changes. If adopted, these amendments would replace the proposed requirement for high-impact system assessments in section 7[i] of AIDA.

International alignment of AI legislation

ISED has recommended that AIDA align with international frameworks such as the EU AI Act and the OECD AI Principles. ISED encourages adopting the OECD’s definition of “artificial intelligence,” which is:

“a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.”

ISED has also recommended that the sections of AIDA imposing risk mitigation measures (e.g., section 8, requiring measures to identify, assess and mitigate the risks of harm or biased output, and section 9, requiring monitoring of compliance with those mitigation measures) be replaced with new sections that better clarify the responsibilities of organizations developing, managing and putting into service high-impact systems.

These responsibilities have also been defined in the voluntary Code of Conduct. However, the letter mentions a new category of entity – “persons placing on the market or putting into service a high-impact system.” It remains to be seen what responsibilities will be placed on such players.

The letter also states that any organization that “substantially modifies” a high-impact system will be responsible for ensuring that pre-deployment requirements are met. The letter does not define “substantially modifies.”

The letter further sets out that organizations conducting “regulated activities”[ii] (a defined term in AIDA) must prepare an accountability framework, which the AI and Data Commissioner can request at any time. The framework must include the following (a minimal illustrative sketch follows the list):

  • the duties and obligations, as well as the hierarchy of reporting, for all staff who assist in making the system available for use or who support its operations management;
  • policies and procedures regarding risk management of the system;
  • policies and procedures on responding to individuals’ complaints about the system;
  • policies and procedures about the data used by the system;
  • the training provided to staff related to the system and the corresponding training materials;
  • anything else prescribed by regulation.
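
As a purely illustrative aid, the sketch below shows one way an organization might track whether each element of the accountability framework has been documented. Every name here (AccountabilityFramework, missing_elements, and the field names) is a hypothetical invention of ours; AIDA and the letter prescribe the content of the framework, not any particular format.

```python
# Purely illustrative sketch: tracking the elements the letter says an
# accountability framework must contain. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AccountabilityFramework:
    roles_and_reporting_lines: str = ""  # duties, obligations, reporting hierarchy
    risk_management_policies: str = ""   # policies and procedures on system risk
    complaints_procedures: str = ""      # responding to individuals' complaints
    data_policies: str = ""              # policies about the data the system uses
    training_records: list[str] = field(default_factory=list)        # training and materials
    other_prescribed_items: list[str] = field(default_factory=list)  # set by regulation

    def missing_elements(self) -> list[str]:
        """Name the required elements that have not yet been documented."""
        required = {
            "roles_and_reporting_lines": self.roles_and_reporting_lines,
            "risk_management_policies": self.risk_management_policies,
            "complaints_procedures": self.complaints_procedures,
            "data_policies": self.data_policies,
        }
        return [name for name, text in required.items() if not text]


# Example: a framework with only risk policies documented so far.
framework = AccountabilityFramework(risk_management_policies="ERM policy v2")
print(framework.missing_elements())
# ['roles_and_reporting_lines', 'complaints_procedures', 'data_policies']
```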

As Canada navigates the complex terrain of AI, its legislative initiatives and proposed amendments reflect a commitment to responsible and accountable AI development. By aligning with international frameworks and participating in global collaborations, Canada aims to contribute to a harmonized and secure future for AI. As the AI landscape evolves, Canada stands at the forefront of shaping policies that balance innovation with ethical considerations. As with the arrival of the internet, we can expect AI to revolutionize many aspects of our lives, and it is important to be ready for these changes and to have clear structures in place to build trust and a successful economy.

***

Imran Ahmad, Partner, Head of Technology, Co-Head of Information Governance, Privacy and Cybersecurity 

Imran Ahmad is the Canadian head of Norton Rose Fulbright’s technology group and the Canadian co-head of the information governance, privacy and cybersecurity practice. Imran advises clients across all industries on a wide array of complex technology-related matters, including outsourcing, cloud computing, SaaS, strategic alliances, technology development, system procurement and implementation, technology licensing and transfer, distribution, open source software, and electronic commerce. As part of his cybersecurity practice, Imran works closely with clients to develop and implement practical strategies related to cyber threats and data breaches. He advises on legal risk assessments, compliance, due diligence and risk allocation advice, security, and data breach incident preparedness and response. In addition, Imran has acted as "breach counsel" on some of the most complex cross-border and domestic cybersecurity incidents. He has extensive experience in managing complex security investigations and cross-border breaches. In his privacy law practice, he advises clients on compliance with all Canadian federal and provincial privacy and data management laws, with a particular focus on cross-border data transfer issues and enterprise-wide governance programs related to privacy.

***

Brian Chau, Partner

With a background in electrical and computer engineering, Brian Chau focuses on intellectual property, primarily patent prosecution, strategy, and portfolio management.

Brian specializes in examination and original drafting for patent applications in challenging subject matter areas, including software patents for computer science-based innovations relating to machine learning / artificial intelligence, distributed ledger, blockchain, and trading technologies, as well as mathematically intensive technologies such as power circuits, networking and cellular technologies. Brian has deep experience in protecting technologies from both foundational (e.g., machine learning architectures, cryptographic algorithms, and loss functions) and practical applied technology perspectives (e.g., electric vehicle drivetrains, 3D printers, smart order routers, physical network routers).

***

Maya Medeiros, Partner 

Maya Medeiros is an intellectual property lawyer, patent agent (Canada, US) and trademark agent (Canada, US) and has a degree in mathematics and computer science. She has extensive experience in artificial intelligence, blockchain, cybersecurity, cryptography, payments, graph theory, risk management, gaming, face recognition, communications, healthcare and medical devices, virtual and mixed reality, wearables, and other computer-related technologies. She is a key contributor to www.insidetechlaw.com on the ethical and legal implications of artificial intelligence. 

She advises on IP strategy and develops tailored IP policies and training programs. Ms. Medeiros prepares domestic and international IP registrations and manages international portfolios, including coordinating foreign associates for foreign application prosecution. She also drafts and negotiates agreements relating to IP assets, such as licences, confidentiality agreements, system access agreements, joint development, and collaboration agreements. Ms. Medeiros assists with due diligence, landscape and freedom-to-operate evaluations.

***

Suzie Suliman, Associate

Suzie Suliman is a corporate and intellectual property lawyer. Her practice focuses on assisting clients in protecting their technology assets. Suzie regularly drafts and negotiates commercial agreements relating to technology, privacy, cybersecurity and intellectual property (such as SaaS, co-development, licensing, transfer, and data sharing agreements). She also provides support on technology-related transactions and assists clients in preparing for and responding to data security incidents.


[i] Section 7: A person who is responsible for an artificial intelligence system must, in accordance with the regulations, assess whether it is a high-impact system.

[ii] Regulated activity means any of the following activities carried out in the course of international or interprovincial trade and commerce: (a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system; (b) designing, developing or making available for use an artificial intelligence system or managing its operations.
