Data Science Asked by Benyamin Jafari on December 23, 2020
I’m using GloVe pre-trained word vectors (glove.6b.50d.txt, glove.6b.300d.txt) as word embedding.
I have a conceptual question:
GloVe creates word vectors that capture meaning in vector space from global co-occurrence count statistics. Its training objective is to learn word vectors such that their dot product equals the logarithm of the words' probability of co-occurrence. While optimizing this objective, you can choose any dimensionality for the word vectors; in the original paper, the authors trained models with 25, 50, 100, 200, and 300 dimensions. The individual dimensions are not interpretable. After training, each word is represented by a d-dimensional vector that captures many of that word's properties. Increasing the dimension lets the vector capture more information, but the computational cost also increases. Please go through this blog.
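To make the dimensionality point concrete, here is a minimal sketch of loading one of those pre-trained files: each line of a GloVe text file is a word followed by its d floats, so the vector length you get back is simply the d of the file you chose (50 for glove.6b.50d.txt, 300 for glove.6b.300d.txt). The file path below is illustrative.

```python
import numpy as np

def load_glove(path):
    """Parse a GloVe text file into a {word: vector} dict.

    Each line has the format: word v1 v2 ... vd
    """
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

# Illustrative usage (assumes the file has been downloaded):
# vecs = load_glove("glove.6b.50d.txt")
# vecs["king"].shape  # (50,) for the 50-dimensional file
```

Swapping in the 300-dimensional file changes nothing in the loading code; only the length of each vector (and the memory and compute needed downstream) grows.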
Answered by Uday on December 23, 2020