December 23, 2024

Legal Issues in the HR Use of Artificial Intelligence

In recent years, employers have increasingly turned to artificial intelligence (AI) tools to streamline the hiring process. Proponents explain that these tools allow hiring decisions to be made more quickly and new employees to be selected more efficiently. At the same time, as the use of AI tools has grown, so have concerns over the potential legal risks associated with their use.

By leveraging talent data, AI tools can, among other things, automatically screen thousands of resumes to identify top candidates. For example, an employer may use intelligent screening software that applies machine learning to resumes to auto-screen candidates. Drawing on the employer’s talent data, the software learns which previously selected candidates went on to become leaders or successful individual contributors, and it applies that acquired knowledge to new applicants in order to rank the strongest candidates.
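To make the mechanics concrete, the following is a minimal sketch of how such a screening model might work; it is not any vendor’s actual product, and all resume text and outcome labels are invented for illustration. A classifier is trained on past resumes labeled by whether the hire was ultimately successful, then used to rank new applicants.

```python
# Hypothetical sketch of ML-based resume screening; not any vendor's actual tool.
# A classifier is trained on past resumes labeled by hiring outcome, then used
# to rank new applicants. All data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Historical "talent data": resume text plus whether the hire went on to become
# a leader or successful individual contributor (1) or not (0).
past_resumes = [
    "10 years Java, led a team of 8 engineers",
    "retail internship, basic spreadsheet experience",
    "Python and machine learning, promoted twice",
    "call center and customer service background",
]
outcomes = [1, 0, 1, 0]

# Learn which resume language historically preceded success.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(past_resumes), outcomes)

# Score new applicants by predicted probability of success and rank them.
new_resumes = [
    "5 years Python, mentored junior developers",
    "recent graduate with statistics coursework",
]
scores = model.predict_proba(vectorizer.transform(new_resumes))[:, 1]
for score, resume in sorted(zip(scores, new_resumes), reverse=True):
    print(f"{score:.2f}  {resume}")
```

Note that the model learns from whatever patterns exist in the historical labels, including biased ones, which is precisely the failure mode at issue in the Amazon example discussed below.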

Other employers are using AI to improve recruiter-to-applicant communications by deploying ‘chatbots’ that can answer questions and provide feedback quickly. A few employers are even using AI tools to digitize interviews, analyzing a candidate’s word choices, speech patterns, and physical reactions to assess the candidate’s probable organizational fit.

The move to AI in recruiting has not been without controversy. In October 2018, Reuters reported that Amazon had scrapped an AI recruiting tool after testing identified a bias against women. The experimental tool used AI to score candidates for software developer jobs on a scale of one to five stars. The problem Amazon identified was that the models scoring candidates were trained on patterns in resumes submitted to the company over a ten-year period. Because men dominated the field during that time, the system had reportedly ‘taught’ itself to prefer male candidates.

Leaders in federal agencies have also expressed concerns over the use of AI. The current Director of the Office of Federal Contract Compliance Programs (OFCCP), Jenny Yang, has written and testified about her concerns. The OFCCP is the subagency of the federal Department of Labor responsible for ensuring that employers holding contracts with the federal government comply with equal opportunity requirements and Executive Orders. Director Yang has explained her concern that AI employee selection tools risk violating nondiscrimination laws and that processes are needed to guard against that outcome. In a June 2019 article for the Urban Institute’s Next50 Forum, she stated:

At their best, AI-powered hiring models can help employers efficiently identify candidates based on specific criteria and mitigate the subjectivity that may arise with human decision-making.

But algorithms can also replicate and deepen existing inequities. Hiring algorithms trained on inaccurate, biased, or unrepresentative data can produce employment outcomes biased regarding race, sex, or other characteristics protected by antidiscrimination law.

Employers looking to use AI in recruiting should document the steps they take to minimize the potential for discriminatory results. Best practices for leveraging AI tools in employee selection while minimizing risk include:

  • Retain human review as part of the decision-making process when using AI
  • Audit AI decision processes to recognize and mitigate bias (see the sketch following this list)
  • Train decision-makers in using AI outcomes, including the appropriate weight to give those outcomes in the overall decision process
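One concrete form such an audit can take is an adverse-impact analysis under the ‘four-fifths rule’ of the EEOC’s Uniform Guidelines, which treats a group’s selection rate falling below 80 percent of the highest group’s rate as evidence of adverse impact. The sketch below is a minimal, hypothetical illustration that assumes the employer logs each AI recommendation along with the applicant’s group; a real audit should be designed with counsel.

```python
# Minimal adverse-impact audit sketch using the EEOC four-fifths rule of thumb:
# flag any group whose selection rate is below 80% of the highest group's rate.
# Group labels and decision data are hypothetical.
from collections import Counter

# Each record: (applicant's group, whether the AI tool recommended advancing them).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in decisions)
selected = Counter(group for group, advanced in decisions if advanced)
rates = {group: selected[group] / applied[group] for group in applied}

highest = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / highest
    status = "ADVERSE IMPACT FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({status})")
```

In this hypothetical, group_b’s selection rate (25%) is one third of group_a’s (75%), well below the four-fifths threshold, so the tool’s recommendations for that period would warrant closer review.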