Data Science Asked by Luca Di Mauro on August 18, 2020
I am trying to understand neural networks in an easy and visual way, specifically neural networks used for text classification and analysis.
I know that there are several ways to build a NN, and an easy way to think of it is as follows:

[figure]

or, in a more schematic way, as follows:

[figure]
What I still do not understand is the layer(s). Let’s suppose I have a text and I want to find similarity between words: I would use algorithms such as cosine similarity or Jaccard similarity, or word2vec if I am interested in synonyms.
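To make sure I understand the similarity measures themselves, here is a minimal sketch of how I picture cosine and Jaccard similarity (the vectors and token sets below are made up, purely for illustration):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors:
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def jaccard_similarity(a, b):
    # Jaccard similarity between two sets of tokens:
    # |intersection| / |union|
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Toy word vectors (invented numbers, just to show the computation)
v_basketball = [0.9, 0.1, 0.3]
v_football   = [0.8, 0.2, 0.4]
print(cosine_similarity(v_basketball, v_football))

# Token sets from two sentences
s1 = "I like play basketball".lower().split()
s2 = "My mom likes basketball too".lower().split()
print(jaccard_similarity(s1, s2))
```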
Now each of them takes an input, for example one or more sentences: "I like play basketball" and/or "My mom is an English teacher at the University of Cambridge". If I want to test the similarity of words within the same sentence, I would first tokenize the sentence, then ‘do something’ that I do not know yet (I hope you can tell me more about this step), and finally apply an algorithm which says, for example:
and so on.
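For concreteness, here is roughly how I picture the whole pipeline in code: tokenize, turn each token into a vector (the ‘doing something’ step I am unsure about, so the vectors below are invented toy numbers rather than real word2vec output), and then compare every pair of words with cosine similarity:

```python
from itertools import combinations
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Step 1: tokenize the sentence (here just lowercasing and splitting on spaces)
sentence = "I like play basketball"
tokens = sentence.lower().split()

# Step 2: the part I am unsure about -- turn each token into a vector.
# These vectors are invented for illustration; in practice they would come
# from something like word2vec or an embedding layer of a neural network.
toy_embeddings = {
    "i":          [0.1, 0.9, 0.2],
    "like":       [0.7, 0.3, 0.5],
    "play":       [0.8, 0.2, 0.6],
    "basketball": [0.9, 0.1, 0.7],
}

# Step 3: compare every pair of words with cosine similarity
for w1, w2 in combinations(tokens, 2):
    sim = cosine_similarity(toy_embeddings[w1], toy_embeddings[w2])
    print(f"{w1} - {w2}: {sim:.2f}")
```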
My output should be the value of this similarity comparison, should it not?
My question, therefore, is the following:
what is a neural network, and how can I think of it when I apply it to such problems with ‘hidden layers’ (a term I may be using improperly here as an example), where an algorithm that I cannot see (because others have already built it) is applied?
Could you please provide an easy textual/numerical example to make it easier for me to understand, in case I am wrong in the example mentioned above?
Thanks a mill.