June 21, 2022
The increased regulation surrounding AI-driven hiring practices demonstrates that a more socio-technical approach is needed.
Digital tools are hardly new in the hiring process, especially as employers are increasingly forced to make hiring decisions at scale. For years, job seekers have been strategizing their presentations to boost the odds their resume will pass algorithmic muster.
But the artificial intelligence (AI) that powers hiring systems is hardly flawless; in one German test, an algorithm knocked 10 points off an applicant’s score for the sin of wearing glasses.
The biggest challenge with AI and human resources (HR) tech is systemic discrimination against historically marginalized and minoritized populations. For example, having a “Black-sounding” name or mentioning a women’s college has been shown to get applicants dinged.
These AI shortcomings are serious enough to have drawn the attention of regulators. Various US federal agencies are now warning employers against discriminatory hiring algorithms, as noted in a recent Wired article. Even at the local level, legislation is being developed to fight bias in AI hiring.
According to a piece in The National Law Review, AI-evaluated video interviews (like the one that penalizes wearers of spectacles) are a minefield of potential bias—and resulting legal problems, such as labor violations and class actions. US Equal Employment Opportunity Commission Chair Charlotte Burrows recently pointed out that AI may inappropriately screen out those with speech impediments, visible disabilities or disabilities that affect movement. As a result, the EEOC just issued guidance on using AI in hiring without violating the Americans with Disabilities Act.
Such concerns are by no means confined to the US; some believe an AI regulatory framework being drafted by the EU Commission could have “significant repercussions” in the hiring and HR arena.
We anticipated issues around AI and hiring in our “21 HR Jobs of the Future” report. One of the positions we posited, Algorithm Bias Auditor, would be charged with conducting “a methodical and rigorous investigation into every algorithm across every business unit within the organization.” The auditor, we added, “will establish guidelines and compliance methodologies that employees across the organization can easily understand and follow.” The current spotlight on AI and hiring practices demonstrates that businesses should implement this position, or something like it, sooner rather than later.
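To make the auditor's work concrete, here is a minimal sketch of one well-established check such a role might run: the “four-fifths rule” adverse-impact test referenced in US EEOC selection guidelines. The group names and numbers below are illustrative, not from the article, and a real audit would go far beyond this single statistic.

```python
# Illustrative adverse-impact check under the EEOC "four-fifths rule":
# a group is flagged if its selection rate falls below 80% of the
# highest group's selection rate. Data here is hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (number selected, number of applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return a dict mapping each group to True if its selection rate
    is below `threshold` times the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) < threshold for group, rate in rates.items()}

# Hypothetical audit sample: group_b's rate (0.30) is only 60% of
# group_a's rate (0.50), so it falls below the 0.8 threshold.
outcomes = {
    "group_a": (50, 100),
    "group_b": (30, 100),
}
print(adverse_impact(outcomes))  # {'group_a': False, 'group_b': True}
```

A flag from a check like this is a signal to investigate the algorithm and its training data, not a legal conclusion on its own.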
According to Jillian Powers, Global Head of Cognizant's Responsible AI practice, businesses should do a full risk assessment of their operations and data pipelines first, and then conduct reviews and audits of high-risk operations. Data-enabled autonomous decisioning in HR is an area of high risk because it involves sensitive data, the dignity and comfort that comes from labor and employment, and the ability for people to access work.
Powers points out that there are a lot of snake-oil salespeople in HR AI, and regulation hasn’t solidified yet. There is no one-stop technical solve; a full systemic review of an HR AI operation and the data pipelines that feed it can help companies hire and retain a diverse, talented workforce. This would require a more socio-technical approach than is often found today.