Data Science Asked by anonymous197364761335 on May 17, 2021
Often clustering algorithms only output a bunch of class labels, and do not provide any sort of interpretation of the classes formed by the algorithm. It seems to me not entirely unreasonable to attempt to get some sort of interpretation by using the class labels provided by the clustering algorithm as the target in a supervised classification problem.
For a concrete example, say you cluster using k-means and then use a single decision tree classifier to predict the cluster assignment from the other features. The decision tree should then give you some way of interpreting the clusters.
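For instance, something like this minimal sketch (assuming scikit-learn; the dataset and parameter choices are just placeholders for whatever you actually clustered on):

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data standing in for the features you clustered on.
data = load_iris()
X = data.data

# Unsupervised step: k-means assigns each point a cluster label.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised step: treat the cluster labels as the target for a decision tree.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)

# The tree's splits give human-readable rules describing each cluster.
print(export_text(tree, feature_names=list(data.feature_names)))
```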
However, I can’t find any literature (or blog posts…) mentioning this as a technique for interpreting the results of a clustering algorithm, which leads me to believe it is problematic. So in short:
Question: Can supervised classification algorithms be used to interpret the results of unsupervised clustering algorithms? If not, why not?
There's nothing wrong with this idea, and although I don't have literature on hand, I'm fairly confident I've seen this sort of thing done.

That said, I disagree that clustering algorithms often don't provide interpretation. There are certainly plenty that don't, but I'm not sure k-means is one of them: the centroids of your clusters should already give you the interpretability you're looking for. Passing the results of k-means into a decision tree will probably just exchange each centroid for left and right bounds on your features (although it could be interesting to see which dimensions the tree ignores in the decision process). Generative models like GMM and LDA also give you a lot of useful information directly.
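As a rough illustration of what I mean (a sketch, assuming scikit-learn; the dataset is just a stand-in), compare reading the centroids directly with checking which features a tree fitted on the same labels actually uses:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

data = load_wine()
X = StandardScaler().fit_transform(data.data)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Centroid view: each cluster is summarized by one prototype value per feature.
for i, center in enumerate(km.cluster_centers_):
    print(f"cluster {i}: " + ", ".join(
        f"{name}={val:.2f}" for name, val in zip(data.feature_names, center)))

# Tree view: fit a shallow tree on the cluster labels and see which features it
# actually uses -- dimensions with zero importance never appear in any split.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, km.labels_)
ignored = [name for name, imp in zip(data.feature_names, tree.feature_importances_)
           if imp == 0.0]
print("features the tree never splits on:", ignored)
```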
Regarding literature, although I don't think I've seen this specifically applied to clustering, there's definitely a fair amount of ongoing research into techniques for adding interpretability to "black-box" models. Consider, for example, this article: Interpretable & Explorable Approximations of Black Box Models
We propose Black Box Explanations through Transparent Approximations (BETA), a novel model agnostic framework for explaining the behavior of any black-box classifier by simultaneously optimizing for fidelity to the original model and interpretability of the explanation. To this end, we develop a novel objective function which allows us to learn (with optimality guarantees), a small number of compact decision sets each of which explains the behavior of the black box model in unambiguous, well-defined regions of feature space. Furthermore, our framework also is capable of accepting user input when generating these approximations, thus allowing users to interactively explore how the black-box model behaves in different subspaces that are of interest to the user. To the best of our knowledge, this is the first approach which can produce global explanations of the behavior of any given black box model through joint optimization of unambiguity, fidelity, and interpretability, while also allowing users to explore model behavior based on their preferences. Experimental evaluation with real-world datasets and user studies demonstrates that our approach can generate highly compact, easy-to-understand, yet accurate approximations of various kinds of predictive models compared to state-of-the-art baselines.
Poke around for "black box model interpretability" and you'll find plenty more recent research.
Answered by David Marx on May 17, 2021
You can have a look at interpretable clustering, or at the extended article.
Answered by Graph4Me Consultant on May 17, 2021