The Impact of AI on Employment Law

By: Amir Ghahreman

In Part 6 of RBS’s AI & The Law: Legal Insights for the Digital Age series, we explore the impact of AI on Employment Law.

Overview

In this article, we will address various practical considerations related to the intersection of AI and employment law. Specifically, we will discuss the use of AI in the hiring process and the legislation governing it, employer policies regarding the use of AI in the workplace, and how AI-driven changes can lead to modifications of employment duties or even employee terminations.

Introduction

While it remains to be seen how AI will transform different industries, it is certain that employees at virtually all levels of an organization will be affected by AI. It is likely that certain jobs will become redundant, others will change to a small or moderate degree, and some will be substantially modified, requiring employees to be retrained and re-skilled.

But this is only one possible answer. Why don’t we ask AI what it thinks? We took the liberty of doing so. While its first response was not particularly strong – in our view, exaggerating the present state of affairs of AI in employment – after two further rounds of our feedback, Microsoft’s Copilot provided the following response to a request for a “description of the impact of AI on employment law in Canada”:

The integration of artificial intelligence (AI) into the workplace is poised to impact employment law in Canada, though many of these effects remain theoretical at this stage. AI’s potential to automate tasks and make data-driven decisions introduces new legal considerations for employers.

While one could argue that the above is not perfect (e.g., are they really “new legal considerations” or rather existing legal considerations applied to new sets of facts?), we would characterize this as a reasonable general description.

In this article, we will address certain practical considerations around the intersection of AI and employment law by focusing on the following issues, which already apply to both employers and employees:

  1. The use of AI in the hiring process;
  2. Legislation on the use of AI in the hiring process;
  3. Employer policies regarding the use of AI in the workplace; and
  4. Modifications of employment duties or employee terminations due to AI.

AI Issues in Employment Law

1. The Use of AI in the Hiring Process

Many businesses are already using AI to assist them in the hiring process, and this trend will certainly continue to gain momentum. Businesses are increasingly using AI tools to screen and analyze resumes and cover letters, search online platforms and social media networks for potential candidates, and analyze job applicants’ speech and facial expressions in interviews.[1]

From an employment law perspective, there has always been a risk of discrimination or bias in the hiring process based on human decision-making. That risk will continue to exist as AI is increasingly used in the hiring process, since AI systems are (at least for now) created by humans and therefore subject to human biases.

AI systems can exhibit bias in their decision-making for two major reasons: biased training data and programming errors.[2] First, when the data used to train an AI system overrepresents or underrepresents certain groups, the system can make biased predictions. For example, a facial recognition algorithm trained on data that overrepresents white people may produce less accurate results for people of colour, resulting in racial bias. Second, AI bias may arise from programming errors, such as where a software developer, whether intentionally or inadvertently, under-weights or over-weights certain factors in algorithmic decision-making as a result of their own biases.[3] For example, an algorithm might rely on indicators such as income or vocabulary that unintentionally discriminate against people of a certain race or gender. The use of AI creates even greater risks of discrimination because the complexity of AI models means such bias could go unnoticed for longer periods of time.
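To make the first of these two sources of bias more concrete, the short Python sketch below is a purely hypothetical illustration, using synthetic data and a generic statistical model rather than any actual hiring tool: when one group dominates the training data, a model of this kind will typically be less accurate for the under-represented group.

    # Hypothetical illustration only: synthetic data, generic model, no real hiring tool.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(seed=0)

    def make_group(n, shift):
        # Two numeric "applicant" features; the true pass/fail boundary sits in a
        # different place for each group (controlled by "shift").
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1.5 * shift).astype(int)
        return X, y

    # Group A dominates the training data; Group B is under-represented.
    X_a, y_a = make_group(2000, shift=0.0)
    X_b, y_b = make_group(100, shift=1.0)
    model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

    # Score the model separately on fresh samples from each group.
    X_a_test, y_a_test = make_group(1000, shift=0.0)
    X_b_test, y_b_test = make_group(1000, shift=1.0)
    print("Accuracy, well-represented group:", accuracy_score(y_a_test, model.predict(X_a_test)))
    print("Accuracy, under-represented group:", accuracy_score(y_b_test, model.predict(X_b_test)))

Running a sketch like this will typically show noticeably lower accuracy for the under-represented group, which is, in miniature, the same dynamic behind the real-world example discussed next.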

This possibility of AI exhibiting or learning bias may be surprising, but it has already occurred in the workplace and hiring process. In 2018, Amazon stopped using its AI recruitment tool after discovering it was biased against women.[4] The recruitment algorithm had been trained on male-dominated resumes submitted over 10 years and thus unintentionally learned to favour male candidates by downgrading applications that included the word “women’s” and penalizing graduates of women’s colleges. Amazon engineers tried to address these biases but could not guarantee neutrality, leading Amazon to cancel the tool.

The extent of this risk depends on the extent to which AI is used in the hiring process. For example, AI is reportedly being used to evaluate candidates’ facial expressions and body language in video interviews, despite there not even being a consensus in the scientific community on how to interpret the expression of emotions.[5] Considering that studies have shown that much of what was taught to law enforcement officers for decades regarding visual hallmarks and signs of deception is now known to have been incorrect,[6] this particular use of AI clearly carries risks and has the potential to reinforce existing inequalities and stereotypes in the hiring process.

While researchers are working on ways to reduce bias in AI systems, it is too early to say whether this problem will eventually be eliminated. For the time being, it is a risk that businesses must be aware of if they use AI in the hiring process, and one which they must try to minimize through human management oversight.

2. Legislation on the Use of AI in the Hiring Process

Currently, the only piece of Canadian legislation that directly addresses the use of AI in the employment sector is Ontario’s Working for Workers Four Act, 2024[7] (the “Act“). The Act specifically addresses artificial intelligence in the hiring process by amending the Ontario Employment Standards Act, 2000[8] to require employers to disclose the use of AI in the screening, assessment or selection process for applicants to a job position. The specific language in the Act reads as follows:

“Every employer who advertises a publicly advertised job posting and who uses artificial intelligence to screen, assess or select applicants for the position shall include in the posting a statement disclosing the use of the artificial intelligence.”[9]

The definition of “artificial intelligence” was not specified in the Act and was instead left to be defined in the regulations under the Act. The fact that the definition is set out in the regulations – which are easier to amend than the Act itself – suggests that the definition of “artificial intelligence” is anticipated to evolve and change periodically.

On December 2, 2024, the accompanying Ontario Regulation 476/24[10] (the “Regulation“) was issued and defined “artificial intelligence” as follows:

“artificial intelligence” means a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.[11]

The Act received Royal Assent on March 21, 2024, and the amendments related to disclosing the use of AI by employers in the hiring process will come into effect on January 1, 2026.

We expect that other provinces, including British Columbia, will pass similar (if not even broader, as AI further develops) legislation as time goes on, and that more specific requirements and/or protections regarding the use of AI will follow. The Ontario Act currently only requires the employer to disclose its use of AI, without any further details; this limited disclosure obligation does not appear to offer much protection, leading to speculation that it merely represents the foundation for more detailed legislation.

Having said all that, for now any employer using AI in the hiring process should remain alert to the possibility of bias in the system and should accordingly seek to mitigate that risk through human oversight and perhaps sample testing to check the system’s reliability and screen for biased decisions.

We note that the EU and certain US states already have more detailed AI legislation that goes beyond merely the hiring process. This is relevant not only because it may impact how Canada proceeds with its future legislation (which will undoubtedly also go beyond the hiring process), but also because such EU and US legislation will already apply to Canadian-based employers who use AI with respect to any employees they may have in those jurisdictions.

3. Employer Policies Regarding the Use of AI in the Workplace

Employers in Canada are generally allowed to enact reasonable policies, including policies that prohibit or restrict the use of certain types of technology, such as AI, in the workplace. ChatGPT, Microsoft’s Copilot, and Perplexity immediately come to mind as examples of AI tools that are now widely accessible. At the time of a 2023 survey by BlackBerry, many employers in Canada and the US had either implemented or were considering implementing bans on using these AI tools in the workplace.[12] However, given the rapid technological advancements in AI, employers should be mindful of how these AI tools may allow employees to be more productive.

From an employment law perspective, the important actions for employers will be to design reasonable policies regarding workplace AI usage and then to properly implement such policies.

When designing policies for AI use in the workplace, employers should ensure the policies contain several key features including the following:

  • Scope and Purpose: Clearly define what constitutes use of AI within the organization and the policy’s objectives.
  • Outline Risks: Describe the risks that using AI poses to employees and the company. Such risks will vary based on a myriad of factors, such as the relevant industry and even the applicable business unit of the employer; a shared understanding of risks will help to mitigate them.
  • Identify Permitted and Prohibited AI Tools and Uses: Identify the types of AI tools authorized for workplace use together with the specific permitted (and any prohibited) uses.
  • Disclosure: Establish any self-disclosure rules for when AI was used to help create work product and any internal review and/or approval protocols for such work product.
  • Data Protection and Confidentiality: Establish guidelines for safeguarding company and client data when using AI tools and set rules for inputting proprietary information into AI tools.

In terms of implementing AI policies, the same principles apply as for implementing other policies, namely:

  • Assemble a Cross-Functional Team: Consult a team of representatives from various departments, such as human resources, legal and IT, while drafting the AI policy. This ensures a well-rounded perspective regarding the potential impacts and use cases of AI within the organization.
  • Employee Education and Training: Develop comprehensive training programs to educate employees on AI ethics, data privacy, and specific AI tools, and explain the risks, potential liabilities, and consequences of non-compliance to help foster a culture of responsible AI use in the organization.
  • Establish Clear Processes: Implement mechanisms for vetting and approving new AI tools, monitoring and enforcing policy compliance, and addressing employee concerns and questions.
  • Regular Review and Updates: Consider how the AI policies are operating in practice in order to amend them to incorporate both practical experience and ongoing technological and regulatory development around AI. Employers should conduct periodic reviews of the AI policy to ensure its relevance and encourage gathering feedback from employees to improve the policy.

These are some key principles for employers to keep in mind when designing and implementing AI policies. If you have further questions about designing AI policies or want to ensure your company’s AI policy is legally compliant, consider seeking legal advice.

4. Modifications of Employment Duties or Employee Terminations Due to AI

Now for the elephant in the room: the impact of AI on jobs, and specifically the extent to which jobs may be fundamentally transformed or eliminated. Whether a given employer wishes to be an early adopter of AI or, at the other end of the spectrum, to defer adoption as long as possible, there will likely come a point when adopting AI will be required simply to remain competitive and viable. From an employment law perspective, employers will face the risk of termination payment obligations arising from either ‘constructive dismissal’ or the intentional termination of employees if AI eliminates certain jobs.

Constructive dismissal occurs when an employer makes significant changes to an employee’s job, whether in respect of compensation, hours or duties, without the employee’s consent. The effect of constructive dismissal is that the employer becomes liable to pay the employee the same amount of termination pay as if the employee had been terminated ‘without cause’.

Additionally, if an employee’s position is eliminated due to AI, the employer’s termination pay obligations for a termination ‘without cause’ will be triggered. Statutory minimums related to termination pay and/or severance will apply, and there is the possibility of increased employer liability depending on the terms of the applicable employment agreement or the absence of an enforceable written agreement.

Accordingly, written employment agreements containing enforceable termination and job modification language will be even more important in light of these AI-driven risks.

The concepts of constructive dismissal and without-cause termination have been around for many years, but where they arise from AI implementation, employers face the additional challenge of ensuring that employment modifications and termination decisions are not tainted by age discrimination based on assumptions that older employees are less capable or less willing to work with AI.

To mitigate against these risks, employers should consult an employment lawyer to ensure their written employment agreements are sufficiently robust and enforceable.

Conclusion

Compared to other areas of law, AI may be less likely to impact the practice of employment law itself, given that employment law tends to involve both fewer and briefer documents than many other areas such as business acquisitions, real estate, banking and litigation. However, since AI will likely have material impacts on most jobs, this will, in turn, affect numerous employer-employee situations within the scope of employment law.

Employers must exercise caution if using AI in the recruitment process and should carefully consider implementing policies regarding AI usage in the workplace given AI’s ever-growing prevalence. In addition, due to heightened risks of constructive dismissal or other termination scenarios from AI, the need for robust written employment agreements containing enforceable termination language and permitting job modifications is even greater.

AI’s impact on employment law will almost certainly change as AI continues to develop and as legislators and the courts have more time to address novel issues. For example, in British Columbia, an employee is exempt from Parts 4 and 5 of the Employment Standards Act, which cover hours of work, overtime entitlements and statutory holiday pay, if they qualify as a “manager”, which includes a person whose principal employment responsibilities consist of supervising or directing, or both supervising and directing, “human or other resources”. “Other resources” would seem to capture AI resources, but whether that is the case, and any conditions on AI’s eligibility under that definition, may not be conclusively known until a relevant case is adjudicated.

Our team of Employment Lawyers at RBS would be happy to advise on legal issues related to AI in employment law matters.

We would like to thank articled student Ajay Gill (2024/2025 year) for his contributions to this article.

[1] Workable, “How is AI used in human resources? 7 ways it helps HR” (December 2023)
[2] IBM, “Shedding light on AI bias with real world examples” (October 16, 2023)
[3] American Bar Association, “Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies” (April 10, 2024)
[4] Reuters, “Insight – Amazon scraps secret AI recruiting tool that showed bias against women” (October 10, 2018)
[5] The Conversation, “Facial analysis AI is being used in job interviews – it will probably reinforce inequality” (October 7, 2019)
[6] G. Bogaard, et al., “Strong, but Wrong: Lay People’s and Police Officers’ Beliefs about Verbal and Nonverbal Cues to Deception” (2016)
[7] Working for Workers Four Act, 2024, S.O. 2024, c. 3 [the Act].
[8] Employment Standards Act, 2000, S.O. 2000, c. 41.
[9] The Act, s. 8.4.
[10] Ontario Regulation 476/24 [the Regulation].
[11] The Regulation, s. 2.
[12] BlackBerry, “Why Are So Many Organizations Banning ChatGPT?” (August 8, 2023)

 
