Data Science Asked on January 20, 2021
I’m trying to understand Deep Neural Networks for YouTube Recommendations.
Their candidate generation step outputs the top N items.
My guess was that $v_j$ (what gets passed from the softmax layer to the nearest-neighbor index) represents the top N videos you obtain via softmax, expressed in the original encoding, i.e., the same encoding used for the input embedded video watches.
Apparently, though, the $v_j$ are in a different encoding from the input encodings.
The paper says: "The softmax layer outputs a multinomial distribution over the same 1M video classes with a dimension of 256 (which can be thought of as a separate output video embedding)."
I’m trying to understand what they mean by interpreting the softmax output as a separate output video embedding. I thought a softmax layer that outputs 1M classes has a dimension of 1M, so where does 256 come from? (It’s the same question as How to create a multi-dimensional softmax output in Tensorflow? and I don’t think it has been answered there.)
The user vector $u$ is the output of the final ReLU unit, although I’m not sure what this user vector is used for.
I guess that at serving time, to pick the top N for a given user, the user vector $u$ is used for nearest-neighbor search. But my understanding of nearest neighbor is that, for a given vector, it finds the nearest vectors in the same space (such as: given a movie, find the nearest movies). Here, however, you are given a user and need to find the top N videos. How does that work?
My best guess is that, for a given user, you get a user vector as the ReLU output, then find the user-user nearest neighbors and combine their top N items obtained at training time. But it’s just a guess.
$v_j$ is the learned vector of weights (dimension 256) that connects the last hidden layer (dimension 256) to the output node for video class $j$. The output of the last hidden layer is the user embedding vector $u$.
The paper uses a vocabulary $V$ of 1M video classes, so the deep neural net learns 1M vectors of weights that connect the last hidden layer to each class: $[v_1, \ldots, v_{1M}]$. Negative sampling is used to train a model with so many classes.
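As a rough sketch of what that training step looks like (toy sizes and plain numpy; the class count, sample count, and variable names are my own illustration, not the paper's code), negative sampling means each step only touches the weight rows $v_j$ for the true class plus a handful of sampled negatives:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 1000, 256   # toy stand-ins for ~1M classes, 256-dim layer

V = rng.normal(scale=0.01, size=(num_classes, dim))  # row j is v_j
u = rng.normal(size=dim)       # user vector from the last ReLU layer

true_class = 42
negatives = rng.choice(num_classes, size=20, replace=False)  # sampled negatives

# Compute logits only for the true class plus the sampled negatives,
# instead of all num_classes rows as a full softmax would require.
# (Simplified: a real sampled softmax also excludes the true class from
# the negatives and corrects for the sampling distribution.)
candidates = np.concatenate(([true_class], negatives))
logits = V[candidates] @ u                 # dot products v_j . u
probs = np.exp(logits - logits.max())
probs /= probs.sum()
loss = -np.log(probs[0])                   # cross-entropy on the true class
```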
Below is the softmax function from page 2, Section 3.1, with $v_j$ and $u$:
$$P(w_t=i|U,C) = \frac{e^{v_i u}}{\sum_{j \in V} e^{v_j u}}$$
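Concretely, the equation just says: take the dot product of $u$ with every output embedding $v_j$, exponentiate, and normalize. A minimal numpy sketch with toy sizes (my own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, dim = 1000, 256             # toy stand-ins for ~1M classes, 256 dims

V = rng.normal(size=(num_classes, dim))  # output video embeddings, row j is v_j
u = rng.normal(size=dim)                 # user vector u (last hidden layer output)

logits = V @ u                           # v_i . u for every class i
p = np.exp(logits - logits.max())        # subtract max for numerical stability
p /= p.sum()                             # p[i] = P(w_t = i | U, C); sums to 1
```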
I believe $v_j$ is "thought of as a separate output video embedding" because $v_j$ can be interpreted as a compressed representation of video $j$ in a 256-dimensional vector space.
Since the denominator is the same for every class and $e^x$ is monotonically increasing, ranking videos by probability is equivalent to ranking them by the dot product $v_i u$ in the numerator of the equation above. This is why "the scoring problem reduces to a nearest neighbor search in the dot product space": we simply find the output embeddings $v_j$ nearest to $u$.
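So at serving time there is no need to exponentiate or normalize at all: scoring a user reduces to taking the N largest dot products $v_j u$. Here is a sketch of the exact version (`top_n_videos` is my own name; at YouTube scale the paper uses an approximate nearest-neighbor search instead of a full scan):

```python
import numpy as np

def top_n_videos(V, u, n=10):
    """Exact top-N by dot product. At YouTube scale the paper uses an
    approximate nearest-neighbor search instead of this full scan."""
    scores = V @ u                             # v_j . u for every video j
    idx = np.argpartition(scores, -n)[-n:]     # unsorted top-n indices, O(|V|)
    return idx[np.argsort(scores[idx])[::-1]]  # sorted by descending score

# Example: 1000 candidate videos with 256-dim embeddings (toy sizes)
rng = np.random.default_rng(2)
V = rng.normal(size=(1000, 256))
u = rng.normal(size=256)
print(top_n_videos(V, u, n=5))
```

This also answers the user-to-video question above: $u$ and every $v_j$ live in the same 256-dimensional space, so a "nearest neighbor" of a user vector can be a video vector, and there is no need for a user-user step.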
Answered by Johnson on January 20, 2021