On Shapley value interpretability in concept-based learning with formal concept analysis
We propose using two power indices from cooperative game theory and public choice theory for ranking the attributes of closed sets, namely intents of formal concepts (or closed itemsets). The introduced indices are related to extensional concept stability and are likewise based on counting generators, especially those that contain a selected attribute. These indices are motivated by interpretable machine learning, which assumes that for a particular object we have not only the class-membership decision of a trained model but also a set of attributes (in the form of JSM-hypotheses or other patterns) together with importance scores for its individual attributes (or more complex constituent elements). We characterise the computation of the Shapley and Banzhaf–Penrose values of a formal concept in terms of minimal generators and their order filters, establish properties of these values that are important for computation, prove related #P-completeness results, and report experiments on both synthetic and real datasets. We also show how the approach applies in both supervised (classification) and unsupervised (pattern mining) settings.
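As an illustration of the kind of attribute ranking described above, the sketch below computes exact Shapley values for a cooperative game over a concept's intent, where a coalition of attributes "wins" iff it generates the intent (its closure equals the intent). The toy context, the characteristic function, and all names are illustrative assumptions for exposition, not the paper's actual formulation, algorithm, or data.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy formal context (object -> set of attributes); illustrative only.
CONTEXT = {
    "g1": {"a", "b"},
    "g2": {"a", "c"},
    "g3": {"a", "b", "c"},
}
ALL_ATTRS = set().union(*CONTEXT.values())

def closure(attrs):
    """Closure operator S'': attributes common to all objects possessing S."""
    extent = [g for g, row in CONTEXT.items() if attrs <= row]
    if not extent:
        return set(ALL_ATTRS)  # the intent of the empty extent is all attributes
    return set.intersection(*(CONTEXT[g] for g in extent))

def shapley_values(intent_b):
    """Exact Shapley values for the 0/1 game v(S) = [closure(S) == intent_b],
    with players restricted to the attributes of the intent."""
    players = sorted(intent_b)
    n = len(players)

    def v(coalition):
        return 1 if closure(set(coalition)) == intent_b else 0

    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for s in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n! times the marginal gain
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(set(s) | {i}) - v(set(s)))
        phi[i] = total
    return phi

intent_b = closure({"b"})          # yields the concept intent {"a", "b"}
print(shapley_values(intent_b))    # {'a': 0.0, 'b': 1.0}
```

In this toy context {"b"} is the unique minimal generator of the intent {"a", "b"}, so the attribute b receives the whole Shapley mass while a, present in every object, contributes nothing marginally; this matches the intuition that importance concentrates on attributes appearing in generators. The brute-force enumeration above is exponential in the intent size, consistent with the hardness results mentioned in the abstract.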