Littler: The Emerging Workplace
Legal Risks Using AI to Make Hiring Decisions
What are some of the legal risks & pitfalls when using AI technology to make hiring decisions?
by Natalie Pierce, Co-Chair of Robotics, AI and Automation Industry Group & Tiana R. Harding, Associate
Algorithmic bias, et al
One of the concerns with the use of AI in hiring is what’s called “algorithmic bias.” This arises when biases, such as those of the people developing the program, seep into the algorithms used in the decision-making process, or when the algorithms disproportionately affect people in a particular category (such as race, gender, or economic status).
Historical data used to train models also reflects past bias in HR decision-making. These biases are often unintended, and, generally, the algorithms used in AI can be trained to be more objective than human decision-making. But the potential for bias can still raise concerns about discriminatory impact, creating legal risk. Especially in jurisdictions that have legislation in this area, Employer Inc. should be transparent about the technology used, what it measures, why it is being used, and how any data collected is stored and protected, and should request informed consent.
Employer Inc. should also be mindful that in a state like Illinois, an applicant’s right to have data such as AI-analyzed video interviews deleted within thirty days of a request may conflict with federal laws requiring retention of applicant interview records.
As additional state and national legislation arises in this area, employers will need to navigate the legal landscape with increasing care. For now, though, Employer Inc. should be mindful of the notice requirements under state statutes governing the use of technology to analyze video and biometric data, and of the risk of bias in the data and algorithms used to train these tools.
If Employer Inc. is concerned with the potential impact that using AI in its hiring process may have, what can it do to mitigate risk?
First, Employer Inc. should be concerned. We anticipate algorithmic bias class actions in growing numbers in the coming years. Employer Inc. should request to review industry-specific validation studies demonstrating that, once implemented, the technology will not adversely affect people in a particular demographic group.
When the cost of AI making the wrong decision is particularly high, collecting evidence about what happened will be important when apportioning liability in any ensuing legal fallout. AI vendors unwilling to disclose the inputs and algorithms that power their outputs, or their validation studies (perhaps on account of proprietary concerns or “black box” problems), may be more receptive to an employer’s need for indemnification language in the contract (though such language will remain hard to come by).
Hard-to-explain factors will be hard to defend. An employer unable to perform its own due diligence should fight to limit its liability for unintended consequences.
Regardless of its agreement with the AI-powered HR tool provider, Employer Inc. should test hiring algorithms against its existing hiring systems to verify that the tool introduces no bias.
Employer Inc. should also run attorney-client-privileged pilot tests of the technology on sample applications to monitor for any adverse impact, and should continue to audit periodically throughout the life cycle of the AI-powered tool. This is a solid and necessary means of reducing liability.
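To make the idea of monitoring for adverse impact concrete, here is a minimal sketch of how a pilot-test audit might quantify it using the "four-fifths rule" from the EEOC Uniform Guidelines, under which a group selection rate below 80% of the highest group's rate is a common red flag. The function names, group labels, and sample numbers are all illustrative, not from any real tool or dataset:

```python
# Illustrative adverse-impact check based on the EEOC "four-fifths rule":
# a group whose selection rate falls below 80% of the highest group's
# rate is commonly flagged as showing potential adverse impact.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Return, per group, (impact ratio vs. top group, flagged?) pairs."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {
        group: (rate / top_rate, rate / top_rate < threshold)
        for group, rate in rates.items()
    }

# Hypothetical pilot-test counts (not real data):
pilot = {"group_a": (48, 100), "group_b": (30, 100)}
flags = adverse_impact_flags(pilot)
# group_b's impact ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged.
```

A passing four-fifths check is not a legal safe harbor; it is one screening statistic an employer might track across a tool's life cycle so that deviations trigger closer review.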
It is important for employers to develop internal guidelines and procedures concerning best practices for the development and use of artificial intelligence tools in order to mitigate impermissible AI-based bias. By incorporating a systematic approach to monitor and evaluate AI-based tools throughout use, employers will be in an informed position to take corrective measures in the event of adverse impact on protected classes.
Employers will be in the strongest position to defend against claims of bias if they carefully evaluate AI tools prior to adoption, routinely monitor for deviations in impact to protected classes throughout a tool’s life cycle, and take corrective action when necessary.
The number of new technological advances that accelerate and simplify workplace processes is astounding.
Artificial intelligence is quickly becoming a staple in every department of the business environment. Human Resources is not excluded from the vast array of technological advances that can help a department become more efficient and more accurate. Though there are some legal risks, those risks are ever-present in any employment decision and Employer Inc. should not be dissuaded from taking advantage of the myriad opportunities AI offers to improve its hiring processes.