Using AutoEncoder for Greedy Layer-Wise Pretraining [Convolutional Neural Networks]

Data Science · Asked by Stefan Radonjic on August 15, 2020

I am trying to implement greedy layer-wise pretraining for a Convolutional Neural Network binary classifier using AutoEncoders. However, I am a little confused about the logic of the implementation. If I understood correctly, I need to (see the rough code sketch after this list):

  1. Build the CNN architecture and flag all layers as NOT trainable (i.e. `model.layers[idx].trainable = False`).
  2. Iterate through the layers: make the layer in the current iteration trainable while the others remain NOT trainable, and fit the model for a couple of epochs using an MSE reconstruction loss.
  3. Repeat step 2 until all layers, except the one meant for classification, have been trained.
  4. Fine-tune the network on its original task.
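To make the procedure concrete, here is a minimal sketch of what I have in mind. The layer sizes, the one-layer throwaway decoder, the random data, and the epoch counts are all placeholder assumptions, not a tested recipe:

```python
import numpy as np
from tensorflow.keras import layers, models

# Placeholder data: 64x64 grayscale images with binary labels.
x_train = np.random.rand(32, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(32, 1))

# Step 1: build the CNN and freeze every layer.
conv_layers = [
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
]
inputs = layers.Input(shape=(64, 64, 1))
x = inputs
for lyr in conv_layers:
    x = lyr(x)
outputs = layers.Dense(1, activation="sigmoid")(layers.GlobalAveragePooling2D()(x))
cnn = models.Model(inputs, outputs)
for lyr in cnn.layers:
    lyr.trainable = False

# Steps 2-3: unfreeze one conv layer at a time and train it as the
# encoder of a small autoencoder with an MSE reconstruction loss.
for i, lyr in enumerate(conv_layers):
    lyr.trainable = True
    ae_in = layers.Input(shape=(64, 64, 1))
    h = ae_in
    for prev in conv_layers[: i + 1]:  # encoder up to the current layer
        h = prev(h)
    recon = layers.Conv2D(1, 3, padding="same")(h)  # throwaway decoder
    ae = models.Model(ae_in, recon)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(x_train, x_train, epochs=2, verbose=0)
    lyr.trainable = False  # re-freeze before moving to the next layer

# Step 4: unfreeze everything and fine-tune on the real task.
for lyr in cnn.layers:
    lyr.trainable = True
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(x_train, y_train, epochs=2, verbose=0)
```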

Is this correct? If so, can anyone tell me how greedy layer-wise pretraining differs from training a complete AutoEncoder architecture and then using the encoder weights to initialize the CNN that is fine-tuned later (second sketch below)?
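For clarity, this is what I mean by the second approach, again with assumed layer sizes and placeholder data, and hypothetical layer names ("conv1", "conv2") used only to match weights between the two models:

```python
import numpy as np
from tensorflow.keras import layers, models

x_train = np.random.rand(32, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(32, 1))

# Train a complete autoencoder end-to-end with an MSE loss.
ae_in = layers.Input(shape=(64, 64, 1))
h = layers.Conv2D(16, 3, padding="same", activation="relu", name="conv1")(ae_in)
h = layers.Conv2D(32, 3, padding="same", activation="relu", name="conv2")(h)
recon = layers.Conv2D(1, 3, padding="same")(h)
autoencoder = models.Model(ae_in, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=2, verbose=0)

# Build the classifier with identically shaped conv layers and
# initialize them from the trained encoder.
inp = layers.Input(shape=(64, 64, 1))
c1 = layers.Conv2D(16, 3, padding="same", activation="relu", name="conv1")
c2 = layers.Conv2D(32, 3, padding="same", activation="relu", name="conv2")
out = layers.Dense(1, activation="sigmoid")(layers.GlobalAveragePooling2D()(c2(c1(inp))))
cnn = models.Model(inp, out)
c1.set_weights(autoencoder.get_layer("conv1").get_weights())
c2.set_weights(autoencoder.get_layer("conv2").get_weights())

# Fine-tune the whole classifier on the binary task.
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(x_train, y_train, epochs=2, verbose=0)
```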
