Data Science Asked on December 29, 2020
I was trying some TensorFlow tutorials and saw that all of them use layers.Embedding to learn word embeddings, but how are these embeddings learned? With a neural network? Which architecture? Or word2vec?
Thanks
The Keras embedding layer is initialized with random weights and learns an embedding for every word in the training dataset. The output vectors are not computed from the input by any mathematical operation; instead, each input integer is used as an index into a table that contains all possible vectors.
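The lookup-table behavior can be seen directly: a minimal sketch with an assumed vocabulary size of 1000 and 64-dimensional vectors.

```python
import numpy as np
import tensorflow as tf

# An Embedding layer is a trainable lookup table with input_dim rows,
# each an output_dim-dimensional vector, initialized randomly.
embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=64)

# Each integer index simply selects one row of the table.
token_ids = np.array([[4, 7, 7, 2]])  # batch of 1 sequence, length 4
vectors = embedding(token_ids)
print(vectors.shape)  # (1, 4, 64)

# Repeated indices return the identical row.
assert np.allclose(vectors[0, 1], vectors[0, 2])

# The table itself is the layer's single trainable weight matrix,
# updated by backpropagation along with the rest of the model.
table = embedding.get_weights()[0]
print(table.shape)  # (1000, 64)
assert np.allclose(vectors[0, 0], table[4])
```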
You could also initialize the layer with pretrained word2vec embeddings instead of random weights.
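A sketch of using pretrained vectors: here a random matrix stands in for a real pretrained table (in practice you would load one, e.g. word2vec vectors via gensim's KeyedVectors, and align its rows with your tokenizer's indices).

```python
import numpy as np
import tensorflow as tf

vocab_size, embedding_dim = 1000, 64

# Placeholder for a pretrained embedding matrix (assumed shape:
# one row per vocabulary index); random numbers stand in for it here.
pretrained = np.random.rand(vocab_size, embedding_dim).astype("float32")

# Initialize the layer from the pretrained table and freeze it so the
# vectors are not updated during training.
embedding = tf.keras.layers.Embedding(
    input_dim=vocab_size,
    output_dim=embedding_dim,
    embeddings_initializer=tf.keras.initializers.Constant(pretrained),
    trainable=False,
)

# Lookups now return the pretrained rows unchanged.
out = embedding(np.array([3, 8]))
assert np.allclose(out.numpy(), pretrained[[3, 8]])
```

Setting `trainable=True` instead lets the pretrained vectors be fine-tuned on your task.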
Refer to this to learn more.
Correct answer by prashant0598 on December 29, 2020