The emergence of distinct machine learning explanation methods has raised a number of new issues to be investigated. The disagreement problem is one such issue: there are scenarios where the outputs of different explanation methods disagree with each other. Although understanding how often, when, and where explanation methods agree or disagree is important for increasing confidence in explanations, few works have been dedicated to investigating this problem. In this work, we propose Visagreement, a visualization tool designed to assist practitioners in investigating the disagreement problem. Visagreement builds upon metrics that quantitatively compare and evaluate explanations, providing visual resources to uncover where and why methods mostly agree or disagree. The tool is tailored for tabular data with binary classification and focuses on local feature importance methods. In the provided use cases, Visagreement proved effective in revealing, among other phenomena, how disagreements relate to the quality of the explanations and to the accuracy of the machine learning model, thus assisting users in deciding where and when to trust explanations. To assess the effectiveness and practical utility of Visagreement, we conducted an evaluation involving four experts, who assessed the tool's Effectiveness, Usability, and Impact on Decision-Making. The experts confirmed the tool's effectiveness and user-friendliness, making it a valuable asset for analyzing and exploring (dis)agreements.
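As an illustration of the kind of quantitative comparison the abstract refers to, the minimal sketch below computes two common agreement measures between the local feature-importance vectors produced by two explanation methods for a single instance (top-k feature overlap and rank correlation). The metric choices, function names, and example values are assumptions for illustration only; they are not necessarily the exact metrics used by Visagreement.

```python
import numpy as np
from scipy.stats import kendalltau

def top_k_agreement(imp_a: np.ndarray, imp_b: np.ndarray, k: int = 5) -> float:
    """Fraction of features shared between the top-k (by absolute importance) of each method."""
    top_a = set(np.argsort(-np.abs(imp_a))[:k])
    top_b = set(np.argsort(-np.abs(imp_b))[:k])
    return len(top_a & top_b) / k

def rank_agreement(imp_a: np.ndarray, imp_b: np.ndarray) -> float:
    """Kendall's tau between the two importance rankings (1.0 means identical ordering)."""
    tau, _ = kendalltau(np.abs(imp_a), np.abs(imp_b))
    return tau

# Usage: two hypothetical importance vectors for the same instance and feature order,
# e.g. one from a SHAP-like method and one from a LIME-like method.
imp_method_a = np.array([0.42, -0.10, 0.03, 0.25, -0.01])
imp_method_b = np.array([0.38, 0.05, -0.12, 0.20, 0.02])
print(top_k_agreement(imp_method_a, imp_method_b, k=3))
print(rank_agreement(imp_method_a, imp_method_b))
```

Scores like these, computed per instance and per pair of methods, are what make it possible to plot instances along an agreement/disagreement axis rather than inspecting explanations one at a time.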
The Visagreement visual analytics tool is designed to explore the (dis)agreement problem, which occurs when the outputs of different explanation methods differ from each other. 2) The (Dis)Agreement Space View depicts instances with disagreement (blue dots, bottom left), agreement (green dots, top right), and neutral agreement (pink dots). 3) The Feature Space View is a dimensionality reduction scatter plot depicting instances in the feature space. 4) Tabs where explanation quality, disagreement between explanation methods, model accuracy, and feature disagreement can be analyzed. In visualizations A, B, and C, the contribution of each feature to the disagreement is shown by area (the feature legend is shown in 5).
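For readers who want a feel for the Feature Space View described above, the sketch below projects a tabular dataset to 2-D and colors each instance by a per-instance agreement score. PCA, matplotlib, and the synthetic data are assumptions made for this example; Visagreement's actual projection method, color scheme, and layout may differ.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # tabular feature matrix (instances x features)
agreement = rng.uniform(size=200)     # hypothetical per-instance agreement score in [0, 1]

# Project instances to two dimensions and color them by agreement level.
coords = PCA(n_components=2).fit_transform(X)
sc = plt.scatter(coords[:, 0], coords[:, 1], c=agreement, cmap="coolwarm", s=15)
plt.colorbar(sc, label="explanation agreement")
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.title("Feature space colored by (dis)agreement")
plt.show()
```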