Do neural networks have explainability like decision trees do?

Data Science Asked by navya on March 13, 2021

In decision trees, we can understand the output from the tree structure and we can also visualize how the tree makes its decisions. So decision trees have explainability (their output can be explained easily).

Do we have explainability in Neural Networks like with Decision Trees?

3 Answers

No. Neural networks are generally difficult to understand: you trade explainability for predictive power and model complexity. While it's possible to visualize a network's weights graphically, they don't tell you exactly how a decision is made. Good luck trying to understand a deep network that way.

There is a popular Python package (with an accompanying paper) that can approximate a NN locally with a simpler, interpretable model. You may want to take a look.
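The package isn't named above, but the general "local surrogate" idea it implements can be sketched directly: perturb the instance you care about, query the black-box network on the perturbations, and fit a small weighted linear model on that neighbourhood. The snippet below is a minimal illustration of that idea using scikit-learn; the model sizes, noise scale, and kernel are illustrative assumptions, not any particular package's API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Ridge

# Black-box model: a small neural network on synthetic data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                          random_state=0).fit(X, y)

x0 = X[0]  # the instance we want to explain

# Sample perturbations around x0 and query the black box
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(2000, X.shape[1]))
p = black_box.predict_proba(Z)[:, 1]

# Weight samples by proximity to x0 (RBF kernel), then fit a linear surrogate
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 2.0)
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)

# The surrogate's coefficients give a local, feature-level explanation
for i, c in enumerate(surrogate.coef_):
    print(f"feature {i}: local effect {c:+.3f}")
```

The surrogate is only valid near x0; its coefficients say how each feature pushes the network's prediction for inputs close to that instance, not globally.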

Answered by SmallChess on March 13, 2021

I disagree with the previous answer and with your suggestion for two reasons:

1) Decision trees are based on simple logical decisions which, combined together, can make more complex decisions. But if your input has 1000 dimensions and the features learned are highly non-linear, you get a really big and heavy decision tree which you won't be able to read or understand just by looking at the nodes.

2) Neural networks are similar in the sense that the function they learn is understandable only if they are very small. When they get big, you need other tricks to understand them. As @SmallChess suggested, you can read the article Visualizing and Understanding Convolutional Networks, which explains, for the particular case of convolutional neural networks, how you can read the weights to understand things like "it detected a car in this picture mainly because of the wheels, not the rest of the components".

These visualizations have helped a lot of researchers understand weaknesses in their neural architectures and improve their training algorithms.
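The deconvnet technique from that paper is more involved, but one simple related trick is plotting a trained network's first-layer convolutional filters, which typically look like oriented edge and colour detectors. A minimal sketch, assuming PyTorch, torchvision ≥ 0.13, matplotlib, and a connection to download the pretrained ResNet-18 weights:

```python
# Visualize the first-layer convolutional filters of a pretrained CNN.
# This is a simple weight-level visualization, not the paper's deconvnet method.
import matplotlib.pyplot as plt
import torchvision
from torchvision.utils import make_grid

model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()          # shape: (64, 3, 7, 7)

# Arrange the 64 RGB filters in a grid, rescaling each to [0, 1] for display
grid = make_grid(filters, nrow=8, normalize=True, scale_each=True)
plt.imshow(grid.permute(1, 2, 0).numpy())
plt.axis("off")
plt.title("First-layer filters of ResNet-18")
plt.show()
```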

Answered by Robin on March 13, 2021

https://arxiv.org/abs/1704.02685 provides a NN-specific local explanation tool: DeepLIFT. It works by propagating the difference in activation between the instance you want to explain and a reference instance. Choosing a reference is a bit tricky, but the tool appears to be interpretable and scalable overall. It can also be used on tabular data.
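A minimal numpy sketch of the core idea (the "rescale" rule for a single ReLU layer), with random weights and a zero reference as illustrative assumptions; this is a toy rendition of propagating activation differences against a reference, not the official DeepLIFT code:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # hidden layer
w2, b2 = rng.normal(size=8), 0.1                       # output layer
relu = lambda z: np.maximum(z, 0.0)

def forward(x):
    z = W1 @ x + b1
    return z, relu(z), w2 @ relu(z) + b2

x = rng.normal(size=4)      # instance to explain
x_ref = np.zeros(4)         # reference input (choosing it is the tricky part)

z, a, y = forward(x)
z_ref, a_ref, y_ref = forward(x_ref)

# Rescale rule: each ReLU gets the multiplier delta(output) / delta(input)
dz = z - z_ref
safe_dz = np.where(np.abs(dz) < 1e-9, 1.0, dz)
m_relu = np.where(np.abs(dz) < 1e-9,
                  (z > 0).astype(float),       # fall back to the gradient
                  (a - a_ref) / safe_dz)

# Chain multipliers back to the inputs: contribution of each input to y - y_ref
contributions = ((w2 * m_relu) @ W1) * (x - x_ref)

print("delta output        :", y - y_ref)
print("sum of attributions :", contributions.sum())   # matches delta output
print("per-feature scores  :", contributions)
```

The attributions sum to the change in output relative to the reference (the "completeness" property), which is what makes this style of explanation easy to sanity-check.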

Answered by lcrmorin on March 13, 2021
