[Image: Ghost controlling a humanoid marionette during a job interview with a female candidate]

Companies are using AI to screen and even interview job applicants. What happens when out-of-control algorithms are the human resources department?

The fear: Automated systems manage every stage of the hiring process, and they don’t play fair. Trained on data rife with social biases, they blatantly discriminate when choosing which candidates to promote and which to reject. The door to your dream job is locked, and an unaccountable machine holds the key. Minority candidate? Speak with an accent? Unconventional background? You’re out of distribution!

Horror stories: Many companies and institutions use automated hiring systems, but independent researchers have found them prone to bias and outright error.

  • A 2021 study by Accenture and Harvard found that 63 percent of employers in the U.S., UK, and Germany — including 99 percent of Fortune 500 companies — used automated systems to recruit candidates or screen applications.
  • Hiring systems MyInterview and Curious Thing, which together boast thousands of corporate clients, gave high marks for English proficiency to mock candidates who spoke German during their interviews, an investigation by MIT Technology Review found.
  • A video interviewing program from Retorio scored job seekers differently depending on whether they wore glasses, donned headscarves, or displayed bookshelves in the background, Bavarian Public Broadcasting reported. The program’s users include BMW and Lufthansa.
  • A popular video interviewing system from HireVue offered to predict candidates’ aptitude for particular jobs based on face analysis. The company removed the capability after a member of its scientific advisory board resigned in protest.

Bad performance review: Automated hiring systems are facing scrutiny from lawmakers and even the companies that use them.

  • In 2023, New York City will require prospective employers to inform job applicants when hiring algorithms are used, to offer non-automated alternatives, and to conduct yearly audits for bias. Illinois passed a similar law in 2020.
  • The current draft of the European Union’s proposed AI Act requires hiring algorithms to undergo extensive human oversight. Developers who seek to sell systems in Europe must provide a risk assessment and evidence that neither the system nor its training data is unacceptably biased. UK lawmakers are considering similar restrictions.
  • The Data and Trust Alliance, a nonprofit group that seeks to reduce tech-related bias in workplaces, developed tools to assess fairness in hiring algorithms. Twenty-two companies, including IBM, Meta, and Walmart, agreed to use them.

Facing the fear: While many companies use hiring algorithms, most still keep humans in the loop, and they have a good incentive to do so: machines can process mountains of resumes, but human managers may recognize valuable traits in candidates that an algorithm would miss. Humans and machines have complementary strengths, and a careful combination may be both efficient and fair.
