Data Science Asked on May 4, 2021
I’m beginning to learn and understand recurrent neural networks. As far as I can tell, an RNN is like multiple feed-forward neural networks with one neuron per layer placed side by side and connected from left to right, where each neuron is connected not just to the neuron below it, but also to the one on its left, from the previous timestep. I’m not sure whether that’s the right way to think about it, but so far it’s my first impression.
Some things are unclear though.
In the illustration above there’s an $A_0$. Where does it come from? I would assume at least two timesteps are needed to make a prediction, so in my understanding an $x_0$ is missing from the left side of the diagram. Am I right?
I’ve been reading an article which says "Let's train a 2-layer LSTM with 512 hidden nodes". Does that mean two layers of activations and 512 timesteps?
As far as I can tell, an RNN is like multiple feed-forward neural networks with one neuron per layer placed side by side and connected from left to right, where each neuron is connected not just to the neuron below it, but also to the one on its left, from the previous timestep.
Not really. Each cyan box in your image represents the exact same cell, applied once per timestep. That cell can be many things; take a look at the LSTM cell (its h and c correspond to your A), but it can also be any network that takes $A_i$ and $X_{i+1}$ as input and returns $A_{i+1}$ and $Y_{i+1}$ as output.
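To make the "same cell repeated at every timestep" point concrete, here is a minimal NumPy sketch of a vanilla (non-LSTM) RNN cell being unrolled. All names, dimensions, and the weight initialization are made up for illustration; note that $A_0$ is just the initial state, commonly set to zeros.

```python
import numpy as np

def rnn_cell(A_prev, x, Wa, Wx, b, Wy, by):
    """One step of a vanilla RNN: the SAME weights are reused at every timestep."""
    A = np.tanh(Wa @ A_prev + Wx @ x + b)  # next state A_{i+1}
    y = Wy @ A + by                        # output Y_{i+1}
    return A, y

# Toy dimensions, chosen arbitrarily for this sketch.
hidden, n_in, n_out, T = 4, 3, 2, 5
rng = np.random.default_rng(0)
Wa = rng.normal(size=(hidden, hidden))
Wx = rng.normal(size=(hidden, n_in))
b = rng.normal(size=hidden)
Wy = rng.normal(size=(n_out, hidden))
by = rng.normal(size=n_out)

A = np.zeros(hidden)               # A_0: the initial state, often just zeros
xs = rng.normal(size=(T, n_in))    # a sequence of T inputs x_1 .. x_T
for x in xs:                       # "unrolling" = applying the one cell T times
    A, y = rnn_cell(A, x, Wa, Wx, b, Wy, by)
```

The loop is the whole trick: the diagram draws T boxes, but there is only one set of weights (`Wa`, `Wx`, `Wy`), reused at each step with the previous state fed back in.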
Correct answer by YuseqYaseq on May 4, 2021