Canadian businesses and AI: Opportunities & risks

A guide outlining the pitfalls and advantages companies should look for when adopting AI solutions

AI has taken the world by storm, opening up new possibilities for solutions that can help businesses optimize their operations. While the opportunities are vast, the risks are equally great and can expose businesses to significant threats that must be mitigated. Norton Rose Fulbright's award-winning team of lawyers outlines some of AI's advantages and pitfalls, based on extensive experience advising some of Canada's largest businesses on AI adoption.

We have all heard a lot about the advantages associated with the use and proliferation of AI. What do you see from a Canadian perspective?

In Canada, we are fortunate to be well-positioned in the near-to-medium term when it comes to developing and deploying AI and AI-related innovation. The Federal Government recently announced, as part of Budget 2024, an investment of approximately $2.4 billion (CAD) in AI to build capabilities and infrastructure for leading Canadian AI businesses and R&D firms. This will help to support a workforce exceeding 100,000 AI professionals working in Canada, a number that includes about 10% of the world’s top-tier AI researchers. This government support has contributed, in no small part, to Canada attracting significant venture capital investment in AI, including approximately $8.6 billion (CAD) in 2022 alone. A great example is Scale AI (Canada’s global innovation cluster), which earlier this year announced a major financing round, with more than $96 million (CAD) in investments towards 22 major AI projects.

What do you think Canadian business clients preparing for the post-AI world should keep in mind when contemplating the use of AI for their operations?

We’ve been having this conversation with our clients for a while, and these discussions have increased in both frequency and intensity over time. Our advice is consistent across industry sectors: businesses need to establish an AI governance framework and associated processes, both to leverage AI’s huge potential and to understand the AI footprint that may already exist within their enterprises. The first step towards establishing AI governance involves a review of existing governance frameworks to ensure that these functions do not inadvertently result in roles that operate at cross purposes. The second involves a review of existing risk assessment processes to determine the scope of updates required to comply with upcoming legislative and regulatory requirements – all of which should be consistent with the business’s existing risk appetite. The third step involves a business taking an inventory of its existing AI systems and technologies to minimize organizational silos and any potential compliance gaps.

Can you provide examples of known compliance gaps?

Canada is working on legislation and associated regulation to govern this space. While we await these changes, there are several well-known and inherent risks associated with the use of AI that should be mitigated. We will focus on two that are broad-based in nature: bias and lack of transparency. Bias is of particular concern because AI systems may generate prejudicial outputs due to biases in the training process or in the data itself. This can happen where proxy variables stand in for protected attributes such as gender and race in determining employment and credit outcomes. Income, for example, often correlates with prohibited grounds under the Canadian Human Rights Act, such as race and gender; an AI algorithm may pick up on this correlation and make determinations of creditworthiness or employability that effectively turn on those grounds. This is problematic because it perpetuates and reinforces stereotypical biases, both in real life and in the AI’s outputs. When it comes to bias, data quality and integrity are key. A business needs to ensure that its data is analyzed and scrubbed to avoid discriminatory outcomes on grounds prohibited by existing human rights codes and legislation, as well as outcomes that are unfair or could result in the unethical treatment of any individual.
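To make the proxy-variable problem concrete, here is a minimal sketch, using entirely synthetic data and the common scikit-learn library, of how a model that never sees a protected attribute can still produce disparate outcomes through a correlated proxy. All names and numbers are illustrative assumptions, not drawn from any real data set.

```python
# A minimal, illustrative sketch (not a production fairness audit) of proxy
# leakage: a model trained without the protected attribute still produces
# disparate outcomes because a proxy (income) correlates with it.
# All data below is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g., membership in a protected group).
group = rng.integers(0, 2, size=n)

# Income (in $ thousands) is generated to correlate with group membership,
# so it acts as a proxy for the protected attribute.
income = rng.normal(loc=50 + 15 * group, scale=10, size=n)

# Historical approvals were themselves skewed by group: bias baked into labels.
approved = (rng.random(n) < 0.3 + 0.4 * group).astype(int)

# The model is trained on income only; the protected attribute is excluded.
model = LogisticRegression().fit(income.reshape(-1, 1), approved)
pred = model.predict(income.reshape(-1, 1))

# Predicted approval rates still differ sharply by group,
# because income encoded the group membership the model never saw.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Running the sketch shows markedly different approval rates for the two groups even though the model was never given the protected attribute, which is precisely why data must be analyzed and scrubbed before training.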

Transparency is problematic because of the trust factor: if people do not understand, at a fundamental level, how something that affects their lives works, this can engender a lack of trust and confidence, which is clearly a suboptimal outcome for organizations deploying or using AI-based customer-facing solutions. Due to the complexity of deep learning and some machine learning systems, it can quickly become difficult to understand how they generate their decisions. This is known as the black box, or explainability, problem. The issue is receiving significant attention from a legal perspective, as it conflicts with the fundamental right of individuals to receive an explanation of decisions that may have a significant impact on their lives. An unsatisfactory explanation can further erode customer confidence in both the business under scrutiny and AI itself. Businesses should use clear, plain language when explaining their use of AI, and should always endeavour to state how the AI models they leverage align with their stated goals and objectives. Businesses that use AI in customer-facing solutions or functions should ensure that plain-language descriptions of the AI system, along with detailed descriptions of the data sets used to develop or train it, are readily accessible to consumers.
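One practical way to publish such descriptions is a "model card": a short, structured summary of what a model does, what data trained it, and who oversees it. The sketch below is a minimal, hypothetical example; every field name and value is an illustrative assumption rather than a required or prescribed format.

```python
# A minimal, hypothetical "model card" -- one way to make the plain-language
# and data-set descriptions discussed above readily accessible to consumers.
# Every value below is an illustrative placeholder, not a prescribed standard.
import json

model_card = {
    "model_name": "Credit Pre-Screening Assistant",  # hypothetical product
    "plain_language_summary": (
        "This tool provides an initial estimate of loan eligibility. "
        "A human reviewer makes every final decision."
    ),
    "intended_use": "Preliminary screening of consumer credit applications.",
    "training_data": {
        "description": "Anonymized historical loan applications, 2015-2023.",
        "known_limitations": "Under-represents applicants under age 25.",
    },
    "human_oversight": "All declined applications are reviewed by staff.",
    "contact": "ai-questions@example.com",  # placeholder address
}

# Render the card as readable text, e.g. for a customer-facing web page.
print(json.dumps(model_card, indent=2))
```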

How do these risks tie into your practice?

Although Canada has yet to enact AI-specific legislation, AI is already making its impact felt in several existing practice areas.

Data Privacy: AI-powered surveillance technologies, such as facial recognition systems and location tracking tools, raise concerns about mass surveillance and violations of individuals' privacy rights. These technologies can enable pervasive monitoring and tracking of activities, behaviours and movements, leading to a potentially significant erosion of civil liberties. AI tools can also indirectly infer sensitive information from seemingly innocuous data, a phenomenon known as 'predictive harm': complex algorithms and machine learning models can predict highly personal attributes, such as sexual orientation, political views or health status, from apparently unrelated data. AI systems are built by consuming data, some of which is personal information protected under the Personal Information Protection and Electronic Documents Act (PIPEDA). The Office of the Privacy Commissioner of Canada is currently investigating ChatGPT due to concerns about the collection, use and disclosure of personal information without consumer consent.

Cybersecurity: We can look at two areas of security threats. The first is security risks involving human targets. AI-powered social engineering represents a sophisticated threat: machine learning can be used to analyze data and mimic human behaviour, crafting personalized and convincing phishing emails. These advanced scams are harder to detect and can lead to unauthorized access to sensitive information. The second is the misuse of AI to hack into systems. AI has the potential to generate malware that could evade detection by current security filters, but only if it is trained on quality exploit data. There is a realistic possibility that highly capable states hold repositories of malware large enough to effectively train an AI model for this purpose. Cybersecurity risks can be found just about anywhere these days, and AI is no exception.

Commercial Contracting: Commercial contracts are the area to focus on when identifying business risks associated with the use of AI. In them, parties will need to address the risk mitigation measures required in the context of the transaction and the commercial relationship. Mitigation is achieved through tools such as representations, warranties and indemnities. These range from so-called ring-fencing measures, such as obligations to avoid discriminatory outcomes (whether on prohibited grounds or otherwise) and the unfair or unethical treatment of individuals, to proactive measures, such as ensuring that plain-language descriptions of AI models are readily accessible to end-use customers and that human oversight is in place to prevent the potential misuse of AI systems. Comprehensive and prudent contracting allows all concerned parties to avoid the pitfalls and potential liability associated with the use of AI for business.

Intellectual Property: AI models tend to require high volumes of training data to accurately capture underlying patterns and relationships. Natural language processing goes a step further: it requires high-quality, structured data, which necessitates either significant human involvement in data cleaning and quality evaluation, or automation. In the latter case, the technique of web scraping is often employed. Web scraping is a process by which data is collected and copied from the web into a database for later retrieval or analysis. It results in vast tracts of data being lifted from the web and fed into data lakes or other repositories for use in AI model training. Inevitably, copyrighted works are swept up in this process and may form part of the output of an AI system. Copyright owners are arguing that AI models infringe their copyright because the models are trained on copyrighted works, because the models’ outputs infringe, or both. Given the relative novelty of AI technologies, Canadian courts have not yet rendered decisions on liability for infringement arising from the use of AI, whether through the inputs used to train an AI system or through the works it generates as output.
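For readers unfamiliar with the mechanics, the following is a minimal sketch of the web-scraping step described above, written in Python with the widely used requests and BeautifulSoup libraries. The URL is a placeholder and a real training-data pipeline would be far larger; the point is simply how readily page content, copyrighted or not, can be copied into a repository.

```python
# A minimal, hypothetical web-scraping sketch. The target URL is a
# placeholder; real pipelines must also respect robots.txt, site terms of
# use, and copyright -- the legal risks discussed in this section.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/articles"  # placeholder target page

# Fetch the page and parse its HTML.
response = requests.get(url, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

# Extract the visible paragraph text -- the kind of content that ends up in
# training-data repositories, whether or not it is copyrighted.
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]

# A real pipeline would write this into a data lake; here we simply print it.
for text in paragraphs:
    print(text)
```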

Labour & Employment Law: In this context, risk arises where AI use cases include resume-screening tools, tools for monitoring and evaluating employee performance, and the automation of employee off-boarding processes. All of these activities create the risk of discriminatory outcomes that may violate employees’ rights.

There’s a lot of discussion and speculation about the regulation of AI in Canada. Is legislation imminent?

It is widely anticipated that the Artificial Intelligence and Data Act (AIDA) will come into force at some point in 2025. It is designed to regulate so-called “high-impact” AI systems, but the key to how AIDA will apply and function will be set out in the regulations, which have yet to take shape. Accordingly, the full scope of AIDA’s impact on Canadian businesses remains to be seen.

Lawyers: Imran Ahmad and Domenic Presta

Lawyer bios:

Imran Ahmad is the Canadian head of Norton Rose Fulbright’s technology group and the Canadian co-head of the cybersecurity and data privacy practice. He advises clients across all industries on a wide array of complex technology-related matters, including outsourcing, cloud computing, SaaS, strategic alliances, technology development, system procurement and implementation, technology licensing and transfer, distribution, open source software, and electronic commerce. As part of his cybersecurity practice, he works closely with clients to develop and implement practical strategies related to cyber threats and data breaches, advising on legal risk assessments, compliance, due diligence, risk allocation, security, and data breach incident preparedness and response. He has acted as "breach counsel" on some of the most complex cross-border and domestic cybersecurity incidents and has extensive experience managing complex security investigations. In his privacy law practice, he advises clients on compliance with all Canadian federal and provincial privacy and data management laws, with a particular focus on cross-border data transfer issues and enterprise-wide privacy governance programs.

***

Domenic Presta is a technology lawyer with over 20 years of experience both in-house and in private practice advising clients across all industries on a wide array of technology-related matters. Domenic has advised clients in structuring, negotiating, and drafting commercial transactions involving information technology, intellectual property, privacy, and regulatory aspects. He also advises clients in matters of data security, especially concerning complex IT services agreements for large mission-critical projects and steady-state services, cloud services, and software licensing agreements.  He has negotiated commercial outsourcing arrangements, contracts for hosted applications, and software licensing agreements for multinational financial institutions, insurance companies, and pension plans. Domenic’s practice encompasses a broad array of IT agreements, including bank-fintech collaborations, AI-driven services, data sharing, and data security for institutional clients. Notably, he advised the Department of Finance Canada on liability frameworks related to open banking. He has worked with multinational private-sector clients and Canadian public-sector entities at both federal and provincial levels. Domenic also provides M&A support focused on information technology, data privacy, and data security.