
Comparison between addition and multiplication functions in a deep neural network?

Data Science Asked by amir Maleki on November 7, 2021

I designed a Convolutional Neural Network for an image processing task. At several points in the network, two tensors have to be merged into a single tensor before being fed to the next layer, and there are several candidate operations for this merge: addition, multiplication, etc. The results of the network are a bit better when I use addition in the pyramid pooling module (the second image, between two convolutions) and multiplication in the last step of the network. I used tf.math.add and tf.math.multiply, which perform the operations element-wise. The whole network is shown in the first image.

[Figure 1: overall network architecture]

The second image shows the pyramid pooling module, which pools the feature map at several scales.

[Figure 2: pyramid pooling module with multiple pooling scales]
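For reference, a minimal sketch (shapes and branch names are assumed for illustration, not taken from the actual network) of the three merge variants discussed here: element-wise addition, element-wise multiplication, and channel concatenation of two same-shaped feature tensors.

```python
import tensorflow as tf

# Assumed placeholder outputs of two branches; the real network's shapes differ.
conv1 = tf.random.normal((1, 32, 32, 64))
conv2 = tf.random.normal((1, 32, 32, 64))

merged_add = tf.math.add(conv1, conv2)           # element-wise sum, shape unchanged
merged_mul = tf.math.multiply(conv1, conv2)      # element-wise product, shape unchanged
merged_cat = tf.concat([conv1, conv2], axis=-1)  # channel concat, doubles the channels

print(merged_add.shape, merged_mul.shape, merged_cat.shape)
# (1, 32, 32, 64) (1, 32, 32, 64) (1, 32, 32, 128)
```

Note that addition and multiplication require the two tensors to have matching shapes, while concatenation changes the channel count and therefore the parameter count of the following convolution.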

I am looking for an explanation of the properties of the addition and multiplication operations when they are used to merge features in a deep neural network.

The question is:

Why does the addition function (between conv1 and conv2) give better final performance in accuracy (precision) and mean Intersection over Union (mIoU) than multiplication and concatenation when I merge two tensors into one?

One Answer

The observation you report is very interesting, since concatenation and addition are practically equivalent: a concatenation followed by a learned linear layer (such as a 1x1 convolution) can represent any addition of the two inputs. A nice explanation can be found at https://distill.pub/2018/feature-wise-transformations/.
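To illustrate this point, here is a minimal sketch (shapes are assumed, and this is not code from the original answer) showing that a 1x1 convolution applied to the concatenated tensors reproduces element-wise addition exactly when its kernel is two stacked identity matrices:

```python
import numpy as np
import tensorflow as tf

C = 4                                   # assumed channel count
a = tf.random.normal((1, 8, 8, C))      # first feature map
b = tf.random.normal((1, 8, 8, C))      # second feature map

# Element-wise addition, as used in the question.
added = tf.math.add(a, b)

# Concatenation followed by a 1x1 convolution whose kernel stacks two
# identity matrices; this linear layer computes exactly a + b.
concat = tf.concat([a, b], axis=-1)                       # shape (1, 8, 8, 2C)
kernel = np.concatenate([np.eye(C), np.eye(C)], axis=0)   # shape (2C, C)
kernel = kernel.reshape(1, 1, 2 * C, C).astype(np.float32)
conv_out = tf.nn.conv2d(concat, kernel, strides=1, padding="SAME")

print(np.allclose(added.numpy(), conv_out.numpy(), atol=1e-5))  # True
```

In practice the concatenation variant has more parameters and must learn this mapping, which may explain small performance differences between the two merges.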

Answered by Andreas Look on November 7, 2021
