Toward Explainable AI

Image: Explanation graph related to the Explainable Boosting Machine

Machine learning systems are infamous for making predictions that can’t readily be explained. Now Microsoft offers an open source toolkit that provides a variety of ways to interrogate models.

What's in the package: InterpretML implements the Explainable Boosting Machine, a generalized additive model that delivers both high accuracy and high explainability. The package also comes with several methods to generate explanations of model behavior for regression and binary classification models. Developers can compare explanations produced by different methods and check consistency among models.
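To make that concrete, here’s a minimal sketch of training an Explainable Boosting Machine and pulling both global and local explanations with InterpretML. The dataset and the specific calls are illustrative choices based on the library’s documented API, not code from the announcement:

```python
# Minimal sketch; assumes: pip install interpret scikit-learn
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative dataset choice; any tabular binary-classification data works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The EBM is a glassbox model: a generalized additive model whose
# per-feature shape functions can be inspected directly.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: each feature's contribution across the whole dataset.
show(ebm.explain_global())

# Local explanation: why the model scored these particular examples.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```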

Why it matters: Building models that can explain how they reach their conclusions is critical in life-and-death domains like transportation, healthcare, and law enforcement. And it’s a top priority in high-stakes industries such as finance, where decisions may be called into question. Understanding the behavior of intelligent systems is important to:

  • debug models
  • detect bias
  • meet regulatory requirements
  • defend against legal challenges
  • establish trust in a system’s output

What’s next: Principal researcher Rich Caruana and his colleagues aim to improve InterpretML’s categorical encoding and add support for multi-class classification and missing values. They’re hopeful the open source community will build on their work to illuminate what goes on inside machine learning's proliferating black boxes.
