Enhancing Cluster Analysis With Explainable AI and Multidimensional Cluster Prototypes

Explainable Artificial Intelligence (XAI) aims to introduce transparency and intelligibility into the decision-making process of AI systems. Most often, it is applied to supervised machine learning problems such as classification and regression. Nevertheless, XAI can also deliver valuable results for unsupervised algorithms like clustering. In most cases, such applications transform the unsupervised clustering task into a supervised one and provide either generalised global explanations or local explanations based on cluster centroids. However, global explanations are often too coarse, while centroid-based local explanations lose information about cluster shape and distribution. In this paper, we present a novel approach called ClAMP (Cluster Analysis with Multidimensional Prototypes) that aids experts in cluster analysis with human-readable rule-based explanations. Its explanation mechanism is based on cluster prototypes represented by multidimensional bounding boxes. This allows arbitrarily shaped clusters to be represented and combines the strengths of local explanations with the generality of global ones. We demonstrate and evaluate our approach in a real-life industrial case study from the domain of steel manufacturing as well as on benchmark datasets. The explanations generated with ClAMP were more precise than either centroid-based or global ones.
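To make the bounding-box idea concrete, the following Python sketch shows one way such prototypes can induce rule-style cluster descriptions: each cluster is summarised by the axis-aligned bounding box of its members, which reads directly as a conjunction of per-feature range conditions. This is an illustrative sketch only, not the authors' implementation; the use of scikit-learn, KMeans, and the Iris dataset are assumptions made for the example.

    # Illustrative sketch (not the paper's code): per-cluster axis-aligned
    # bounding boxes and the rule-style explanations they induce.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    iris = load_iris()
    X, feature_names = iris.data, iris.feature_names

    # Any clustering algorithm could be used here; KMeans is an assumption.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    for k in np.unique(labels):
        members = X[labels == k]
        # The bounding-box prototype: per-feature minima and maxima.
        lo, hi = members.min(axis=0), members.max(axis=0)
        conditions = " AND ".join(
            f"{lo[i]:.2f} <= {feature_names[i]} <= {hi[i]:.2f}"
            for i in range(X.shape[1])
        )
        print(f"IF {conditions} THEN cluster {k}")

Unlike a centroid, the box retains the extent of the cluster along each dimension, so the printed rule describes the whole region a cluster occupies rather than a single representative point.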
