Local Interpretable Model-Agnostic Explanations (LIME)
Bias Goes Undercover
As black-box algorithms like neural networks find their way into high-stakes fields such as transportation, healthcare, and finance, researchers have developed techniques to help explain models’ decisions. New findings show that some of these methods can be fooled into hiding a model’s biases.
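For context on the method named above: LIME explains an individual prediction by perturbing the input, querying the black-box model on the perturbed samples, and fitting a simple weighted linear model that approximates the black box near that input. Below is a minimal sketch using the open-source `lime` package with a scikit-learn classifier; the dataset and model here are illustrative placeholders, not details from the findings this article reports on.

```python
# Minimal LIME sketch, assuming the open-source `lime` package and
# scikit-learn. The breast-cancer dataset and random forest are
# illustrative stand-ins for any black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Train a "black-box" model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the instance, scores the perturbed samples with the model,
# and fits a locally weighted linear surrogate to those scores.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

explanation = explainer.explain_instance(
    X[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

This perturbation-based design is also the likely point of attack: because LIME's synthetic samples often fall off the real data distribution, an adversarially built model can, in principle, behave innocuously on those samples while remaining biased on genuine inputs.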