Artificial intelligence, real liability? Using AI tools in recruitment and hiring

Automating employment-related decisions warrants scrutiny due to privacy, potential bias

With the emergence of generative artificial intelligence tools like ChatGPT and its competitors, many employers are exploring the use of artificial intelligence ("AI") systems in a bid to make hiring decisions more efficient and data-driven.

The term "AI" is being used to market a wide range of technologies ranging from simple automated resume screening tools to complex machine learning systems. However, the use of any type of automated tool to make employment-related decisions warrants scrutiny due to privacy considerations and the potential for these tools to exhibit biased decision-making, which could attract legal liability for employers.

As AI tools become increasingly integrated into business operations, legislators are moving to regulate their use. This article provides an overview of the current state of legislative developments related to AI in hiring and recruitment in Ontario and federally, and outlines best practices for employers who are considering the adoption of such tools.

AI in recruitment: The opportunities

Automated candidate screening tools are frequently marketed on the promise that AI's ability to process vast amounts of data accurately leads to more effective hiring decisions. AI tools may filter applications based on keywords and conduct automated screening interviews. By handling these routine tasks, AI can allow businesses to focus on relationship-building aspects of recruitment, such as face-to-face interviews and candidate engagement.
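To make that spectrum concrete, the simplest end of it can be sketched in a few lines of code. The keywords, threshold, and scoring logic below are hypothetical and illustrative only; they are not drawn from any particular vendor's product:

    # A minimal sketch (in Python) of keyword-based application screening,
    # the simplest kind of tool often marketed as "AI". The keyword list
    # and pass threshold are invented for illustration.
    REQUIRED_KEYWORDS = {"python", "sql", "project management"}

    def keyword_score(resume_text: str) -> float:
        """Return the fraction of required keywords found in the resume."""
        text = resume_text.lower()
        return sum(kw in text for kw in REQUIRED_KEYWORDS) / len(REQUIRED_KEYWORDS)

    def passes_screen(resume_text: str, threshold: float = 0.67) -> bool:
        """Advance the candidate only if enough keywords are present."""
        return keyword_score(resume_text) >= threshold

    print(passes_screen("Led project management; experienced in Python and SQL."))  # True

Even a filter this simple makes consequential decisions automatically, which is part of why the legislative definitions discussed below may capture much more than sophisticated machine learning systems.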

Advanced AI models are also marketed as being able to identify patterns in hiring data that humans may not detect. Predictive modelling can project candidates' suitability, retention likelihood, and growth potential—insights that contribute to a stronger workforce.

AI in recruitment: The challenges

A central challenge in using AI tools is mitigating bias. AI models can inadvertently perpetuate and even amplify existing biases because they are trained on vast datasets that are likely to contain biased content. For example, if an AI tool is trained on a historical dataset from a company that has predominantly hired individuals of a certain race or gender for a particular role, the model may “learn” to favour new candidates possessing those characteristics even if they have no bearing on an individual’s ability to perform the job.
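The mechanism can be illustrated with a small synthetic example. The sketch below (in Python, using scikit-learn) trains a toy model on invented "historical" hiring data in which one group was favoured regardless of qualifications; the data, feature names, and model are entirely hypothetical:

    # Synthetic illustration of how a model trained on biased historical
    # hiring data "learns" to favour a protected characteristic.
    # All data here is invented; no real system or dataset is depicted.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # One job-relevant feature and one protected attribute (coded 0/1)
    # that should have no bearing on suitability.
    experience = rng.normal(5, 2, n)
    protected = rng.integers(0, 2, n)

    # Biased historical labels: past recruiters favoured group 1
    # largely irrespective of experience.
    hired = (0.3 * experience + 2.0 * protected + rng.normal(0, 1, n) > 2.5).astype(int)

    X = np.column_stack([experience, protected])
    model = LogisticRegression().fit(X, hired)
    print(dict(zip(["experience", "protected"], model.coef_[0].round(2))))
    # The protected attribute receives a large positive weight, so the model
    # will replicate the historical bias when ranking new candidates.

In practice the effect is often subtler: even where the protected attribute is excluded from the training data, a model may learn correlated proxies (such as postal codes or extracurricular activities) that produce the same skew.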

The development of unintentional biases in AI systems could expose employers to legal liability. Where an automated recruitment tool is found to have latent biases, an employer's continued use of that software may give rise to a discrimination claim.

Defending such a claim may be challenging because the complex algorithms that fuel the analysis, ranking and decision-making performed by an AI tool may be inscrutable to the average user, or even the system's developers. This lack of transparency (sometimes referred to as the "black box" problem)[1] puts employers in a precarious position when it comes to justifying decisions made in reliance on the AI tool's recommendations. Disproving a claimant's allegation that an employer engaged in discriminatory practices would be challenging if the employer is unable to explain how the AI arrived at a particular decision. Without insight into the AI's underlying logic or reasoning, employers may struggle to demonstrate that the tool's recommendations were free of bias and in line with fair hiring practices.

Recent legislative developments in Ontario, the federal jurisdiction, and beyond

Ontario: AI disclosure requirements in job postings

Ontario-based employers using AI in hiring will soon be required to disclose such use in job advertisements. In March 2024, the Working for Workers Four Act, 2024 made several amendments to the Employment Standards Act, 2000 (the "ESA"), including a requirement that employers disclose when AI is used "to screen, assess or select applicants" for publicly advertised job postings. These ESA amendments will come into force on a date to be announced.

In a consultation paper,[2] the Ontario government noted that the intention behind this disclosure requirement is “to strengthen transparency for job seekers given that there are many unanswered questions about the ethical, legal and privacy implications that these technologies introduce.”

While the disclosure requirement may help increase transparency, it raises new questions about the scope of the disclosure obligation. The use of "artificial intelligence" tools in hiring and recruitment can refer to a broad range of technologies, from simple keyword-based filters to “deep learning” models that aim to perform complex predictive analyses about candidates’ suitability. Some uncertainty remains when it comes to determining whether a particular tool might be captured by the new ESA provision.

In its consultation paper, the government proposes the following as a definition of AI for the purposes of the ESA:

“Artificial intelligence” means a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

This definition arguably fails to clarify what types of systems, tools, or programs constitute “artificial intelligence” for the purpose of triggering the disclosure obligation. This uncertainty may lead employers who believe a particular tool does not constitute “AI” to inadvertently omit required disclosures, or to forgo the potential efficiency gains such tools offer out of concern for contravening legal requirements.

More fundamentally, disclosure alone may have little impact on preventing potential biases or ensuring fairness in hiring. Simply notifying candidates that AI is involved does not address the underlying "black box" issue. It remains to be seen whether the government will attempt to regulate the ways in which these systems can be used by employers.

Canada: The Artificial Intelligence and Data Act

At the federal level, Parliament is currently working toward passing the Artificial Intelligence and Data Act (“AIDA”) as part of Bill C-27. The bill completed its second reading in the House of Commons in April 2023 and is currently before committee. In October 2023, Canada’s Minister of Innovation, Science and Industry proposed substantial amendments to the AIDA.

The AIDA would place a number of obligations on both the developers and end-users of an “AI system,” defined in Bill C-27 as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”

If the Minister’s amendments are adopted and the AIDA is passed, it would require employers who utilize AI systems to take certain steps toward accountability, including assessing and mitigating risks related to biased outputs and regularly evaluating the effectiveness of those mitigation measures. Additionally, employers would need to ensure that the system operates with appropriate human oversight and would need to promptly report any serious incidents that arise from the system’s use to both the system’s developer and an Artificial Intelligence and Data Commissioner. A detailed description of the system, including information about identified risks and the measures taken to mitigate them, would also need to be made publicly available.

It remains to be seen whether the AIDA will be passed into law. If enacted, this legislation has the potential to shape how AI tools are designed, deployed, and regulated in Canada.

The European Union

While Canadian legislation continues to develop, other jurisdictions are moving ahead with regulating the use of AI. The European Union’s Artificial Intelligence Act, which became law in 2024 and will come into force in phases between 2025 and 2027, establishes requirements for employers using “high-risk” AI systems in areas like recruitment and performance evaluation. Canadian employers who operate or recruit in the European Union may be required to comply with the law directly; moreover, the European Union’s approach to AI regulation is likely to influence subsequent legislative developments in Canada.

Privacy

It is important to note that privacy legislation in various Canadian jurisdictions may impose additional obligations on employers using AI in recruitment. While the ESA amendments require disclosure of AI use in job postings, this alone may not satisfy all applicable privacy law requirements. Depending on the jurisdiction and the nature of the AI tool being used, employers may need to obtain express consent from candidates regarding the collection, use, disclosure, and storage of their personal information. Employers should ensure that they understand their obligations under applicable privacy legislation before using AI tools that may collect or process candidates’ personal information.

Employers must remain vigilant about AI tools

AI tools continue to be marketed as offering employers the ability to process applications more efficiently and identify top talent. However, employers must remain vigilant about these systems' potential to perpetuate biases. 

Before deploying AI tools for use in hiring and recruitment, employers should consider conducting an “algorithmic impact assessment” to evaluate the potential for the system to make biased or unfair decisions. Such assessments should examine how the tool makes decisions, what training data it relies upon, and whether the tool may inadvertently incorporate protected characteristics (such as race, gender, or sexual orientation) or other factors prohibited by applicable law into its decision-making. Given the "black box" problem, employers should also put oversight mechanisms in place so that employment-related decisions are not made without human input and review.
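One quantitative component of such an assessment might resemble the following sketch, which compares a tool’s selection rates across demographic groups and flags disparities using the “four-fifths” (80%) heuristic familiar from U.S. employment-selection guidance. The data, group labels, and threshold are hypothetical, and a real assessment would be considerably broader:

    # A minimal sketch of an adverse-impact check: flag any group whose
    # selection rate falls below 80% of the highest group's rate.
    # Group labels and outcomes are invented for illustration.
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, selected: bool) pairs."""
        selected, total = defaultdict(int), defaultdict(int)
        for group, was_selected in outcomes:
            total[group] += 1
            selected[group] += was_selected
        return {g: round(selected[g] / total[g], 2) for g in total}

    def adverse_impact_flags(rates, threshold=0.8):
        """Flag groups selected at less than `threshold` of the top rate."""
        top = max(rates.values())
        return {g: rate / top < threshold for g, rate in rates.items()}

    rates = selection_rates([("A", True), ("A", True), ("A", False),
                             ("B", True), ("B", False), ("B", False)])
    print(rates)                        # {'A': 0.67, 'B': 0.33}
    print(adverse_impact_flags(rates))  # {'A': False, 'B': True}

A failed check of this kind does not itself establish discrimination, but it signals that the tool’s outputs warrant closer scrutiny before the employer relies on them.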

Beyond recruitment and hiring, AI use in the workplace raises myriad other legal considerations, including in the areas of intellectual property and data governance. As Canadian jurisdictions develop regulatory frameworks around AI use, employers should stay informed of their evolving obligations.

For more information or guidance in navigating the use of AI in the workplace, contact a member of Miller Thomson's Labour & Employment team.

***

Teri Treiber is a partner in the Calgary office. She provides advice and representation to employers in all areas of labour and employment law. She assists employers with a variety of issues, including employment agreements, collective agreements, workplace policies, human rights, employment standards, privacy, workplace investigations, performance management, employee discipline, terminations, wrongful and constructive dismissal claims, grievances and arbitrations, and restrictive covenants.

***

Michael Cleveland is an associate in the Toronto office. He provides practical and timely guidance to employers on a broad range of labour and employment law issues, including employment standards, labour relations, human rights, occupational health and safety, privacy, and wrongful dismissal litigation. He represents employers before courts, arbitrators, and administrative tribunals. Michael is a member of the Canadian Association of Counsel to Employers. While in law school at the University of Toronto, Michael earned Distinction Standing for Third Year and received the course prize in Taxation.

***

Daryn Tyndale is an articling student in the Toronto office. She is excited to be completing her articling term at Miller Thomson after participating in the summer student program in 2023 and graduating from law school at the University of Toronto in 2024.

 

***

David Krebs is a partner in the Saskatoon and Toronto offices. He is a business lawyer with a focus in privacy, cybersecurity, and technology law, and serves as the National Leader of the firm’s Privacy, Data Governance & Cybersecurity practice. He acts as cyber breach counsel across Canada, leveraging his proficiency in crisis management, privacy law, and technology. David advises clients on managing cybersecurity incidents, data governance, privacy risks, and compliance with data protection laws. He provides strategic counsel on M&A transactions, system design, and breach responses, and negotiates technology agreements such as SaaS, service provider arrangements, and data sharing contracts. David often acts as a trusted advisor for technology-focused businesses, guiding them through complex regulatory landscapes, including anti-bribery laws and cybersecurity threats. His knowledge and experience make him a valuable partner for organizations navigating new business models and regulatory challenges in today’s evolving digital landscape.

David has contributed to numerous industry publications, conferences, and presentations, emphasizing his thought leadership in the fields of data protection, privacy law, and cybersecurity. David’s authoritative stance and extensive legal experience make him a trusted guide for clients navigating complex legal terrain with finesse and insight.

David holds various professional memberships, and has been ranked in several esteemed industry directories year after year, reflecting his deep understanding of his field.
