Data Science Asked by Mario Ishac on August 24, 2020
In an LSTM there are 4 gates: input, output, forget, and a gate whose output is element-wise multiplied with the output of the input gate, the product then being added to the cell state (I don't know the name of this gate, but it's the one labeled $\tilde{C}_t$ in the usual LSTM diagram).
Why is the $\tilde{C}_t$ gate required in the model? To allow the input gate to subtract from the cell state, we could change the activation function that produces $i_t$ from sigmoid to $\tanh$ and remove the $\tilde{C}_t$ gate.
My reasoning is that the input gate already has a weight matrix $W_i$ that is multiplied with the gate's input, so it already does filtering. Multiplying $\tilde{C}_t$ by $i_t$ therefore seems like an unnecessary second filter.
My proposed input gate would then be $i_t = \tanh(W_i \cdot [h_{t-1}, x_t] + b_i)$, and $i_t$ would be added directly to the cell state ($C_t = f_t * C_{t-1} + i_t$ rather than $C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$).
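For concreteness, here is a minimal NumPy sketch of the two update rules side by side (toy dimensions and random weights; the helper names `standard_update` and `proposed_update` are my own, not from any library):

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_x = 4, 3                              # toy sizes for h and x
concat = rng.normal(size=d_h + d_x)          # stacked [h_{t-1}, x_t]
C_prev = rng.normal(size=d_h)                # previous cell state C_{t-1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix and bias per gate: forget, input, candidate.
W_f, W_i, W_c = (rng.normal(size=(d_h, d_h + d_x)) for _ in range(3))
b_f, b_i, b_c = (np.zeros(d_h) for _ in range(3))

def standard_update(C_prev, concat):
    """Standard LSTM: C_t = f_t * C_{t-1} + i_t * C_tilde_t."""
    f_t = sigmoid(W_f @ concat + b_f)        # (0, 1): how much history to keep
    i_t = sigmoid(W_i @ concat + b_i)        # (0, 1): importance of new content
    C_tilde = np.tanh(W_c @ concat + b_c)    # (-1, 1): candidate new content
    return f_t * C_prev + i_t * C_tilde

def proposed_update(C_prev, concat):
    """Proposed variant: C_t = f_t * C_{t-1} + tanh(W_i [h_{t-1}, x_t] + b_i)."""
    f_t = sigmoid(W_f @ concat + b_f)
    i_t = np.tanh(W_i @ concat + b_i)        # one gate carries sign, magnitude, and filtering
    return f_t * C_prev + i_t

print("standard:", standard_update(C_prev, concat))
print("proposed:", proposed_update(C_prev, concat))
```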
Here is my hypothesis: $i_t$ adds explainability to the model, since the value of the sigmoid indicates how important a particular word is in altering the cell state $C$ ($i_t$ lies between 0 and 1). Having a single $W$ do both the filtering and the feature transform of $[h_{t-1}, x_t]$ not only puts more stress on the matrix (it has to do two things at once), it also loses this explainability.
Example: two vectors may require the same transformation $W \cdot v$, but unless a separate sigmoid function assigns each an importance, their contributions to the cell state will be the same.
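To make that concrete, here is a tiny numeric sketch (the 2-dimensional weights and inputs are made up purely for illustration): two inputs that produce the identical candidate $\tilde{C}_t$ still contribute differently to the cell state, because the separate sigmoid gate scores their importance differently.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_c = np.array([[1.0, 1.0]])       # candidate transform (plays the role of W)
W_i = np.array([[2.0, -2.0]])      # input-gate transform (the separate filter)

v1 = np.array([1.0, 0.0])          # W_c @ v1 == W_c @ v2 == [1.0]
v2 = np.array([0.0, 1.0])

for v in (v1, v2):
    C_tilde = np.tanh(W_c @ v)             # identical for both inputs: ~0.762
    i_t = sigmoid(W_i @ v)                 # different importances: ~0.881 vs ~0.119
    print(C_tilde, i_t, i_t * C_tilde)     # contributions to the cell state differ
```

Without the separate gate, both inputs would push the cell state by the same amount.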
Answered by Sridhar Thiagarajan on August 24, 2020