Questionnaire for evaluating AI system vendors

Some of the world’s largest corporations will use standardized criteria to evaluate AI systems that influence hiring and other personnel decisions.

What’s new: The Data and Trust Alliance, a nonprofit group devoted to mitigating tech-induced bias in workplaces, introduced resources for evaluating fairness in algorithms for personnel management. Twenty-two companies worldwide, including IBM, Meta, and Walmart, have agreed to use them.

What it says: Algorithmic Bias Safeguards for Workforce includes a questionnaire for evaluating AI system vendors, a scoring system for comparing one vendor to another, and materials for educating human-resources teams about AI.

  • The questionnaire addresses three themes: the business case for a given algorithm; vetting for bias during data collection, training, and deployment; and organizational biases arising from poor governance or lack of transparency. Answers are shared only between the developer and its customer, not with other alliance members.
  • Answers are converted into scores, then added to scorecards that alliance members can use to compare vendors.
  • A primer helps human-resources staff, lawyers, and other stakeholders interpret the results of the questionnaire. Other educational materials define key vocabulary and explain how algorithmic bias affects hiring and personnel decisions.
  • Alliance members are not required to use the materials.
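To illustrate the scoring flow described above, here’s a minimal sketch of how questionnaire answers might be aggregated into comparable scorecards. The theme names, 0–4 answer scale, and equal weighting are hypothetical stand-ins, not the alliance’s actual rubric:

```python
# Hypothetical sketch: turn per-question answer scores into a vendor
# scorecard, then rank vendors by total score. The themes, scale, and
# equal weighting are illustrative assumptions, not the real rubric.

THEMES = ("business_case", "bias_vetting", "governance")

def score_vendor(answers: dict) -> dict:
    """Average each theme's answer scores (assumed 0-4 scale) into a scorecard."""
    scorecard = {}
    for theme in THEMES:
        scores = answers.get(theme, [])
        scorecard[theme] = sum(scores) / len(scores) if scores else 0.0
    # Equal-weight total across themes (an assumption for illustration).
    scorecard["total"] = sum(scorecard[t] for t in THEMES) / len(THEMES)
    return scorecard

def rank_vendors(cards: dict) -> list:
    """Order vendor names by total score, best first."""
    return sorted(cards, key=lambda v: cards[v]["total"], reverse=True)
```

A buyer could then compare, say, `rank_vendors({"vendor_a": card_a, "vendor_b": card_b})` to shortlist candidates, with the per-theme entries showing where each vendor is weakest.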

Behind the news: Algorithms for hiring and managing employees have been at the center of several high-profile controversies.

  • Earlier this year, drivers for Amazon’s Flex, the online retailer’s delivery-crowdsourcing program, criticized the company’s algorithm for scoring their performance, saying it unjustly penalized them for unavoidable delays due to bad weather and other factors outside their control. In 2018, Amazon abandoned a hiring algorithm that was found to penalize female candidates.
  • An analysis by MIT Technology Review found that hiring systems from Curious Thing and MyInterview gave high scores in English proficiency to an applicant who spoke entirely in German.
  • In January, hiring-software developer HireVue stopped using face recognition, which purportedly judged traits such as dependability from facial expressions, after the nonprofit Electronic Privacy Information Center filed a complaint challenging the company’s use of AI as unfair and deceptive.

Why it matters: Companies need ways to find and retain top talent amid widening global competition. However, worries over biased AI systems have spurred laws that limit algorithmic hiring in New York City and the United Kingdom. Similar regulations in China, the European Union, and the United States may follow.

We’re thinking: We welcome consistent standards for AI systems of all kinds. This looks like a good first step in products for human resources.
