Data Science Asked on April 20, 2021
In convolutional neural networks, there is a common understanding that earlier layers learn fine features such as lines and edges, while deeper layers learn more complex shapes.
Is there any comparable understanding for the layers in RNNs (such as stacked LSTMs)? For example, do lower layers capture grammar while higher layers capture the fuller meaning of sentences, assuming the LSTM is used for a natural language task like text summarization?
It's not the case that the network just understands grammar. In LSTMs, the network tries to preserve the hidden state over time; by doing this it learns long-term dependencies in the language and relationships between words at varying distances. The LSTM does this by using its three famous gates.
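To make that concrete, these are the standard LSTM gate equations (not part of the original answer, just the usual formulation): the forget gate $f_t$, input gate $i_t$, and output gate $o_t$ decide what to discard from, write to, and expose from the cell state $c_t$, with $x_t$ the input, $h_t$ the hidden state, and $W$, $U$, $b$ learned parameters:

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$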
Answered by ashutosh singh on April 20, 2021
RNNs/LSTMs are designed for sequential data (data with time steps), such as a sentence, where different parts of the input depend on each other. In English, some words in a sentence depend on earlier words. RNNs/LSTMs were introduced to carry that dependency information, and to ignore unimportant information, until the end of the sentence. A sketch of this is shown below.
If you use another variant of deep neural network, such as an MLP, on sequential data, the network forgets the dependency information.
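Here is a minimal sketch of that idea (assuming PyTorch; the vocabulary size, dimensions, and toy token ids below are made up for illustration): a 2-layer stacked LSTM processes a short token sequence, and the hidden and cell states carried from step to step are what preserve dependency information across the sentence, unlike an MLP applied to each token independently.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only.
vocab_size, embed_dim, hidden_dim, num_layers = 1000, 32, 64, 2

embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)

# A toy "sentence" of 5 token ids (batch of 1).
tokens = torch.tensor([[4, 27, 311, 9, 56]])

x = embedding(tokens)          # (1, 5, embed_dim)
outputs, (h_n, c_n) = lstm(x)  # outputs: (1, 5, hidden_dim)

# h_n and c_n hold the final hidden/cell state of each stacked layer,
# shape (num_layers, 1, hidden_dim). These states are what carry information
# about earlier words forward to the end of the sequence.
print(outputs.shape, h_n.shape, c_n.shape)
```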
Answered by Ta_Req on April 20, 2021