Feature attribution methods are widely used in machine learning to explain model behavior, yet practitioners often struggle to select the method best suited to their specific context. In this paper, we introduce Explainalytics, a Python library designed to help machine learning practitioners systematically assess, compare, and select the feature attribution method that best fits their needs. The library provides a comprehensive set of metrics and visualizations, enabling users to evaluate methods on sensibility, stability, faithfulness, fidelity, and alignment with domain knowledge. Its interactive exploratory resources make Explainalytics a human-centered analytic tool, ensuring that users can choose explanation methods aligned with their preferences and task requirements. We conducted a user study to evaluate the library, highlighting its contributions to improving interpretability in machine learning workflows. Our findings demonstrate how Explainalytics facilitates a more informed and nuanced selection of feature attribution methods, ultimately benefiting decision-makers in machine learning applications.
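To illustrate the kind of comparison the abstract describes, the sketch below evaluates one of the listed criteria, stability, for a standard attribution method. This is not Explainalytics' API (which the abstract does not specify); it is a minimal, self-contained example using scikit-learn's permutation importance, where stability is measured as the average rank correlation of feature importances across small perturbations of the data.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data and model; an attribution method is "stable" if it ranks
# features consistently when the inputs are perturbed slightly.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
importances = []
for seed in range(3):
    # Add small Gaussian noise and recompute permutation importances.
    X_noisy = X + rng.normal(scale=0.01, size=X.shape)
    result = permutation_importance(model, X_noisy, y,
                                    n_repeats=5, random_state=seed)
    importances.append(result.importances_mean)

# Stability score: mean pairwise Spearman correlation of importance vectors.
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
stability = float(np.mean([spearmanr(importances[i], importances[j])[0]
                           for i, j in pairs]))
print(round(stability, 2))
```

A library like the one described would compute such scores for several attribution methods side by side (e.g., SHAP, LIME, permutation importance) and visualize them so the user can trade stability off against faithfulness or fidelity.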