
Shapley values without intercept (or without `expected_value`)

Data Science · Asked on December 22, 2020

I have a model and I want to make it interpretable by using feature contributions. In the end, I want a contribution per feature such that the sum of the contributions equals the model's prediction.

One approach may be to use Shapley values. The issue with Shapley values, and their implementation in the `shap` Python library, is that they also come with an `expected_value`, which I'll call $E$. To obtain the model's prediction, one must add up all the Shapley values and $E$. Is there a way to derive feature contributions without the need for $E$?
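
For concreteness, here is a minimal sketch of the decomposition `shap` produces (the model and synthetic data are illustrative, not from my actual setup):

```python
# Minimal sketch: shap decomposes each prediction into an expected_value (E)
# plus one contribution per feature. Model and data here are illustrative.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
phi = explainer.shap_values(X)             # shape (n_samples, n_features)
E = np.ravel(explainer.expected_value)[0]  # scalar base value

# Additivity: prediction = E + sum of per-feature contributions
assert np.allclose(E + phi.sum(axis=1), model.predict(X))
```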


My solution

I’ve derived a solution but I’m not sure it makes sense. Let’s say I have features $x_1, \dots, x_n$ with Shapley values $\phi_1, \dots, \phi_n$ for the model $f$. Then, I have
$$
f(x) = \phi_1 + \dots + \phi_n + E \quad\Rightarrow\quad f(x) = \left(1 + \frac{E}{\sum_i \phi_i}\right)(\phi_1 + \dots + \phi_n)
$$

(by using a simple math trick). Then, I claim that I have Shapley values $\hat{\phi}_1, \dots, \hat{\phi}_n$ where
$$
\hat{\phi}_j = \left(1 + \frac{E}{\sum_i \phi_i}\right)\phi_j
$$

These values satisfy
$$f(x) = \hat{\phi}_1 + \dots + \hat{\phi}_n.$$
But does it make sense to call them Shapley values?
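
In code, the proposed rescaling is a one-liner (a sketch; the name `rescale_shap` is mine):

```python
import numpy as np

def rescale_shap(phi, E):
    """Fold E into the Shapley values so they alone sum to f(x).

    phi: 1-D array of Shapley values for a single prediction.
    Note: undefined when phi.sum() == 0 (division by zero).
    """
    return (1.0 + E / phi.sum()) * phi
```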

Any criticism is more than welcome.

One Answer

This is very similar to fitting a linear regression without an intercept, and I think the two will face similar issues.

To be very concrete, consider an example with $f(x)=1$, $E=1$, $\phi_1=1$, $\phi_2=-1$. Then your scaling factor is undefined: $\sum_i \phi_i = 0$, so you are dividing by zero. Well, OK, you won't often get such exact numbers. Let's tweak them to
$$f(x)=1.01, \quad E=1, \quad \phi_1=1.01, \quad \phi_2=-1.$$
Now the scaling factor is $1 + 1/0.01 = 101$, so that $\hat{\phi}_1=102.01$ and $\hat{\phi}_2=-101$. This isn't too bad locally, because it still indicates the relative importance of the two variables in making this prediction. But if you want to compare this to other points $x$ (where, say, $f(x)=2$, $\phi_1=1$, $\phi_2=0$, so that $\hat{\phi}_1=2$, $\hat{\phi}_2=0$), things will look quite strange, and aggregating importances over your entire sample may be very misleading.
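
You can reproduce these two points numerically (a standalone sketch; `rescale` just implements the question's formula):

```python
import numpy as np

def rescale(phi, E):
    # The question's transformation: phi_hat_j = (1 + E / sum(phi)) * phi_j
    return (1.0 + E / phi.sum()) * phi

# Near-degenerate point: sum(phi) = 0.01, so the scale factor blows up to 101
print(rescale(np.array([1.01, -1.0]), E=1.0))  # approx [ 102.01, -101. ]

# Ordinary point: sum(phi) = 1, so the scale factor is only 2
print(rescale(np.array([1.0, 0.0]), E=1.0))    # [2., 0.]
```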

For the connection to linear regression and more, see this excellent description of Shapley values.

Correct answer by Ben Reiniger on December 22, 2020
