Data Science Asked by Sanyo Mn on September 5, 2021
I’ve written the code below to try gensim’s word2vec implementation. I have two questions:
Thanks.
import nltk
from nltk.tokenize import sent_tokenize
from nltk.corpus import gutenberg
import gensim
from gensim.models import Word2Vec
from gensim.parsing.preprocessing import remove_stopwords
from nltk.tokenize import RegexpTokenizer
text = gutenberg.raw('austen-emma.txt')
text = remove_stopwords(text)
tokenizer = RegexpTokenizer(r'\w+')

data = []
for i in sent_tokenize(text):
    temp = []
    for j in tokenizer.tokenize(i):
        temp.append(j.lower())
    data.append(temp)

# gensim < 4.0 parameter names; in gensim >= 4.0, 'size' is called 'vector_size'
model = gensim.models.Word2Vec(data, min_count=1, size=32, window=2)
model.wv.most_similar(positive='friend', topn=10)
[('mind', 0.9998476505279541),
('present', 0.9998302459716797),
('till', 0.9998292326927185),
('herself', 0.9998183250427246),
('highbury', 0.999806821346283),
('the', 0.9998062252998352),
('place', 0.9998047351837158),
('house', 0.999799907207489),
('her', 0.9997915029525757),
('me', 0.9997879266738892)]
In general, word2vec needs some manual tuning of its hyperparameters before the results start subjectively making sense (unless your dataset is very large). Here, min_count=1 keeps every word, including ones seen only once, and the near-identical similarity scores (all around 0.9998) suggest the vectors have not been trained enough to differentiate words.
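As a rough sketch of that kind of tuning (the specific values below are illustrative assumptions, not part of the original answer), raising min_count, widening the context window, and training for more epochs will usually make a word's neighbours more sensible on a corpus of this size:

# Illustrative retraining; hyperparameter values are assumptions, not from the answer.
model = gensim.models.Word2Vec(
    data,              # same tokenized sentences as above
    min_count=5,       # ignore words seen fewer than 5 times; rare words get unreliable vectors
    vector_size=100,   # called 'size' in gensim < 4.0
    window=5,          # wider context window than the original window=2
    epochs=30,         # called 'iter' in gensim < 4.0; more passes over a small corpus
)
model.wv.most_similar(positive='friend', topn=10)

Even tuned this way, a single novel is a small training set, so the neighbours will stay rough compared with models trained on much larger corpora.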
Correct answer by hssay on September 5, 2021