<aside> ✅ There is a lot of interest in interpretability
But we know little about how practitioners will react to our methods and how they will incorporate them into their existing workflows.
→ Bottom line: we don’t know how interpretability is currently practiced in the real world
→ Therefore, the paper conducts 22 semi-structured interviews with **industry practitioners**
Goals of the interviews: ← Understand how practitioners conceive of and design for interpretability when deploying ML models
Findings: 1. Model interpretability frequently involves cooperation and mental model comparison between people in different roles
→ aimed at building trust not only between people and models but also between people within the organization
→ presents design implications that address the gap between the interpretability challenges practitioners face and the approaches proposed in the literature
The paper highlights possible research directions that better address real-world needs.
</aside>
There has been research on:
proposing new methods to interpret models
rigorously defining what interpretability means
What remains unexplored: how interpretability is understood by, and how it impacts, actual ML practitioners.
→ Lack of studies aimed at empirically understanding how ML professionals perform interpretability-related tasks, and what their practices, needs, and challenges are
Therefore, this paper contributes an empirical study whose
results offer novel insights into the three roles (i.e., model builders, model breakers, and model consumers) played by different stakeholders involved in interpretability work, and how their tasks and strategies vary across three stages of the model building process (i.e., model conceptualization, model