TransWikia.com

Mapping between original feature space and an interpretable feature space

Data Science Asked by iaaml on June 13, 2021

I'm reading the following really interesting paper on local interpretable model explanations: https://arxiv.org/pdf/1602.04938.pdf

On page 3, however, particularly in Section 3.3 (Sampling for Local Exploration), they mention obtaining perturbed samples $z' \in \{0,1\}^{d'}$. It then says

"we recover the sample in the original representation $z \in \mathbb{R}^{d}$ and obtain $f(z)$"

with no indication of how this is done. Surely the map is not injective? If not, how would you know you recovered the correct sample? To this end, I'm wondering how something like this might be done in practice, moving from one feature space $\mathbb{R}^{d}$ to another $\{0,1\}^{d'}$. I'd really appreciate any help.

One Answer

Welcome to the community @iaaml! I hope I understood the concept right after briefly going through your reference. This is my impression:

In Section 3.1, they say

For example, a possible interpretable representation for text classification is a binary vector indicating the presence or absence of a word.

So, I suppose the point is something like sparse representation (you may look up sparse representation to see the methods and examples). For instance, an image predicted as cat can be explained by a 3-dimensional explainer, namely nose, glasses, and mouth. Nose and mouth are evidence for cat, while glasses is a human-face feature (a very naive example). So, having a vocabulary of all explainers, you can come up with a sparse representation of each decision, in which the significant criteria are represented with 0 or 1 to help validate the prediction. This happens by examining the features which contribute the most to that class (that's why, in Figure 1, they could see that the "No Fatigue" observation argues against the prediction).
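On the recovery step the question asks about: in the paper's text setting, the interpretable space is defined relative to a single instance, so each bit of $z'$ simply says whether to keep the corresponding word of that instance, and mapping back is deterministic for that instance. A minimal sketch under that assumption (the helper name `perturb` is hypothetical):

```python
# Sketch of a LIME-style text perturbation: z' is a binary mask over
# the words of ONE instance, and z is recovered by keeping the
# masked-in words of that same instance.
import random

def perturb(text, n_samples=5, seed=0):
    """Draw binary vectors z' in {0,1}^{d'} over the instance's words
    and recover each perturbed sample z in the original (text) space."""
    rng = random.Random(seed)
    words = text.split()
    samples = []
    for _ in range(n_samples):
        z_prime = [rng.randint(0, 1) for _ in words]
        z = " ".join(w for w, keep in zip(words, z_prime) if keep)
        samples.append((z_prime, z))
    return samples

for z_prime, z in perturb("the cat sat on the mat", n_samples=3):
    print(z_prime, "->", repr(z))
```

Globally the map is indeed not injective, but that doesn't matter here: recovery is only ever done relative to the fixed instance being explained.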

To obtain such a representation, you can build the vocabulary (a set of features which are significant and together cover all or most of the space). Then you map your data onto this space, in which the presence of each vocabulary element is marked with 0 or 1.
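This mapping can be sketched in a few lines; the vocabulary below is a hypothetical set of "explainer" terms, not anything from the paper:

```python
# Minimal sketch: map raw text into a binary interpretable space
# over a hand-picked vocabulary of explainer features.
vocabulary = ["nose", "mouth", "glasses"]  # hypothetical explainers

def to_interpretable(text, vocab):
    """Return a 0/1 vector marking which vocabulary terms appear."""
    words = set(text.lower().split())
    return [1 if term in words else 0 for term in vocab]

print(to_interpretable("A cat nose and a mouth", vocabulary))  # -> [1, 1, 0]
```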

Example

I have three samples and their corresponding predictions:

a) Spanish is the main language in Buenos Aires         :Argentina
b) Apple released its new software                      :IT
c) Apple is the main agricultural product in Buenos Aires :Argentina

Using BoW to construct the feature space, Apple becomes an important feature since it is a famous IT company, but in the last sentence it contributes in the wrong direction. In addition, you can have a map of which feature contributes the most to which class (say, through mutual information or any other feature-ranking method, computed per class) and construct the matrices below:

   BuenosAires  Apple
a     1           0
b     0           1
c     1           1

while this is what should happen according to the feature ranking for each class:

   BuenosAires  Apple
a     1           0
b     0           1
c     1           0

Comparing these two flags the probably-wrong Apple in the last sentence (like "No Fatigue" in Figure 1). The first matrix is the mapping that you do; the second is the mapping that the feature ranking gives you.
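The comparison itself is a simple element-wise check of the two matrices above (variable names here are my own):

```python
# Flag features whose observed presence disagrees with what the
# per-class feature ranking says should be present.
observed = {"a": [1, 0], "b": [0, 1], "c": [1, 1]}   # mapping you do
expected = {"a": [1, 0], "b": [0, 1], "c": [1, 0]}   # from feature ranking
features = ["BuenosAires", "Apple"]

disagreements = []
for sample in observed:
    for feat, o, e in zip(features, observed[sample], expected[sample]):
        if o != e:
            disagreements.append((sample, feat))

print(disagreements)  # -> [('c', 'Apple')]
```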

Hope I understood it right!

Answered by Kasra Manshaei on June 13, 2021
