Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples
📝 Original Info
- Title: Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples
- ArXiv ID: 2001.02271
- Date: 2020-01-13
- Authors: Chelsea M. Myers, Evan Freed, Luis Fernando Laris Pardo, Anushay Furqan, Sebastian Risi, Jichen Zhu
📝 Abstract
AI algorithms are not immune to biases. Traditionally, non-experts have little control in uncovering potential social bias (e.g., gender bias) in the algorithms that may impact their lives. We present a preliminary design for an interactive visualization tool, CEB, to reveal biases in a commonly used AI method, Neural Networks (NN). CEB combines counterfactual examples and an abstraction of an NN decision process to empower non-experts to detect bias. This paper presents the design of CEB and initial findings of an expert panel (n=6) with AI, HCI, and social science experts.
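The core idea in the abstract is using counterfactual examples to surface bias: change only a protected attribute of an input and see whether the model's decision changes. Since the paper's CEB tool is not reproduced here, the following is a minimal, hypothetical Python sketch of that general probing technique, not the authors' implementation; the synthetic "hiring" data, feature names, and the scikit-learn MLP are all illustrative assumptions.

```python
# Hypothetical sketch of counterfactual-example bias probing (not the CEB tool):
# train a model on deliberately biased synthetic data, then flip only the
# protected attribute of one input and compare the predicted outcomes.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic hiring-style data: [gender (0/1), years_experience, test_score].
n = 2000
gender = rng.integers(0, 2, size=n)
experience = rng.normal(5, 2, size=n)
score = rng.normal(70, 10, size=n)

# Deliberately biased labels: applicants with gender == 1 get a hidden bonus.
logits = 0.4 * experience + 0.05 * score + 1.5 * gender - 7
labels = (logits + rng.normal(0, 1, size=n) > 0).astype(int)

X = np.column_stack([gender, experience, score])
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X, labels)

# Counterfactual probe: the same applicant with only the protected attribute flipped.
applicant = np.array([[0, 6.0, 75.0]])      # gender = 0
counterfactual = applicant.copy()
counterfactual[0, 0] = 1                    # gender = 1, everything else equal

p_original = model.predict_proba(applicant)[0, 1]
p_flipped = model.predict_proba(counterfactual)[0, 1]
print(f"P(hire | gender=0) = {p_original:.2f}")
print(f"P(hire | gender=1) = {p_flipped:.2f}")
# A large gap between the two probabilities suggests the decision depends on
# the protected attribute -- the kind of contrast CEB aims to make visible
# to non-experts through an interactive visualization.
```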