TransWikia.com

One-hot encoding to embedded vector - BigGAN

Data Science Asked by Bartek Wójcik on June 22, 2021

I am trying to replicate the BigGAN architecture in TensorFlow but fail to understand the exact nature of its inputs.
The BigGAN generator has 2 inputs: a noise vector z of shape [batch_size, 120] drawn from a (truncated) normal distribution, and a vector Embedded(y) of shape [batch_size, 128], where y is a vector of class labels. (See the BigGAN 128x128 architecture diagram.)
Looking at the TF Hub code example:

import tensorflow as tf
import tensorflow_hub as hub

# Load BigGAN 128 module (hub.Module is the TF1-style Hub API).
module = hub.Module('https://tfhub.dev/deepmind/biggan-128/2')

# Sample random noise (z) and ImageNet label (y) inputs.
batch_size = 8
truncation = 0.5  # scalar truncation value in [0.02, 1.0]
z = truncation * tf.random.truncated_normal([batch_size, 120])  # noise sample
y_index = tf.random.uniform([batch_size], maxval=1000, dtype=tf.int32)
y = tf.one_hot(y_index, 1000)  # one-hot ImageNet label

# Call BigGAN on a dict of the inputs to generate a batch of images with shape
# [8, 128, 128, 3] and range [-1, 1].
samples = module(dict(y=y, z=z, truncation=truncation))

My understanding is that y is the one-hot form of the labels and Embedded(y) is a learned mapping of those class labels into a 128-dimensional space.
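Concretely, such a mapping is just a weight matrix of shape [num_classes, 128]: multiplying a one-hot row by it selects the corresponding row. A minimal NumPy sketch (the random matrix here is a stand-in for BigGAN's learned embedding weights):

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, embed_dim = 1000, 128

# Hypothetical embedding matrix standing in for BigGAN's learned one.
W = rng.standard_normal((num_classes, embed_dim)).astype(np.float32)

y_index = np.array([3, 17, 999])
y_onehot = np.eye(num_classes, dtype=np.float32)[y_index]

# One-hot matmul and direct row lookup give identical vectors.
via_matmul = y_onehot @ W   # shape [3, 128]
via_lookup = W[y_index]     # what an embedding-table lookup does
assert np.allclose(via_matmul, via_lookup)
```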

  1. What kind of function is it?
  2. How can I encode all possible classes into such an embedding?
  3. Is it equivalent to tf.keras.layers.Embedding?
  4. If yes, can it be implemented in TensorFlow as explained here?
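Question 3 can be checked mechanically: tf.keras.layers.Embedding stores a [num_classes, embed_dim] weight table and looks up rows by integer index, which produces the same result as a one-hot matmul against that table. A sketch with untrained weights (BigGAN's actual embedding is learned jointly with the generator, so only the mechanism, not the values, carries over):

```python
import numpy as np
import tensorflow as tf

num_classes, embed_dim = 1000, 128

# Embedding layer: integer index -> row of a learned weight table.
embed = tf.keras.layers.Embedding(num_classes, embed_dim)
y_index = tf.constant([3, 17, 999])
e_lookup = embed(y_index)  # shape [3, 128]; building the layer creates weights

# Same result via an explicit one-hot matmul against the layer's own weights.
onehot = tf.one_hot(y_index, num_classes)
e_matmul = tf.matmul(onehot, embed.embeddings)
assert np.allclose(e_lookup.numpy(), e_matmul.numpy(), atol=1e-5)
```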

