
Google Trax's GRU layer

Data Science, asked by Bharathi A on January 1, 2021

I am learning about Trax for the implementation of GRU and LSTMs.

Their documentation says that a GRU layer in Trax can only accept a number of hidden units equal to the embedding size of the input words, i.e., the number of elements in each word's embedding vector.
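For concreteness, here is a minimal sketch of how such a model is typically assembled with Trax's combinators; the vocabulary size and model dimension are made-up values, and the point is only that the GRU's n_units is set to the same number as the embedding's d_feature, per the constraint described above.

```python
import trax.layers as tl

vocab_size = 10000   # assumed vocabulary size (illustrative)
d_model = 512        # embedding size; also used as the GRU hidden size

# (batch, seq_len) token ids -> (batch, seq_len, d_model) embeddings -> GRU outputs
model = tl.Serial(
    tl.Embedding(vocab_size=vocab_size, d_feature=d_model),
    tl.GRU(n_units=d_model),   # hidden units set equal to the embedding size
    tl.Dense(vocab_size),
    tl.LogSoftmax(),
)

print(model)  # prints the layer stack
```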

Having studied traditional implementations of RNNs and GRUs, my question is: shouldn't the number of hidden units in the GRU layer be equal to the input dimension, so as to propagate the hidden-state values from, say, the first word of a sentence up to the last word through the recurrence?

If the number of hidden units in the GRU is fixed to the embedding size, how does the recurrence part work?
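To make the distinction I'm asking about explicit, here is a rough NumPy sketch (not Trax's internal code, and with the GRU gates omitted): the recurrence runs over time steps (words), while the hidden-unit count only fixes the size of each hidden-state vector. The names seq_len and d_model are illustrative.

```python
import numpy as np

seq_len, d_model = 5, 8                 # assumed sentence length and embedding size
xs = np.random.randn(seq_len, d_model)  # embedded words x_1 ... x_T
W = np.random.randn(d_model, d_model)   # input-to-hidden weights
U = np.random.randn(d_model, d_model)   # hidden-to-hidden (recurrent) weights

h = np.zeros(d_model)                   # initial hidden state
for x_t in xs:                          # recurrence over words, not over units
    h = np.tanh(x_t @ W + h @ U)        # simplified cell update (gates omitted)

print(h.shape)                          # (d_model,): same size at every time step
```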

There seems to be no guide on this on the internet.

Can someone please help me understand how this setting works?
