Data Science: Asked by Stefan Radonjic on August 15, 2020
I am trying to implement greedy layer-wise pretraining for a Convolutional Neural Network binary classifier using AutoEncoders. However, I am a little bit confused about the logic of the implementation. If I understood correctly, I need to:

1. Train the first encoder layer as part of a shallow AutoEncoder, reconstructing the input with an MSE loss.
2. Freeze the trained layer (model.layer[idx].trainable = False).
3. Add the next layer, train the new AutoEncoder again with MSE loss, and repeat until all encoder layers are pretrained.
4. Use the pretrained encoder layers to build the CNN classifier and fine-tune it on the binary labels.

Is this correct? If so, can anyone tell me how greedy layer-wise pretraining differs from training a complete AutoEncoder architecture and then using the Encoder weights to initialize a CNN architecture which is later fine-tuned?