Data Science Asked by David Pitts on December 20, 2020
I’m training a denoising autoencoder to reduce the dimension of a feature vector I designed from dim 58 to a latent space of dim 10, or less hopefully. I’m having a hard time understanding what the following error-decrease pattern implies:
(image: training-loss curve showing long flat stretches separated by sudden sharp drops)
My intuition is that gradient descent "crawls" toward a valley for a few epochs; once it reaches that valley it improves quickly, then starts crawling again. If that's correct, does it mean my learning rate is too low? If I'm wrong, I'd appreciate some help.
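The "crawling" intuition can be sanity-checked on a toy problem. This is a minimal sketch, not my actual loss surface: plain gradient descent on a hypothetical quadratic f(x) = x², run with two step sizes, to show that a too-small learning rate produces exactly this slow, steady decrease.

```python
# Gradient descent on f(x) = x^2 with two learning rates.
# Illustrates how a too-small step size makes the loss "crawl".

def descend(lr, steps=50, x0=10.0):
    x = x0
    losses = []
    for _ in range(steps):
        grad = 2 * x          # f'(x) = 2x
        x -= lr * grad        # gradient-descent update
        losses.append(x * x)  # current loss f(x)
    return losses

slow = descend(lr=0.001)  # crawls: loss barely moves in 50 steps
fast = descend(lr=0.1)    # converges quickly
```

After 50 steps the small-step run is still near the starting loss of 100, while the larger step has driven the loss close to zero; on a real network the picture is noisier, but the same effect applies along flat directions of the loss.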
PyTorch code for the AE architecture:
import torch.nn as nn

LATENT_SPACE_DIM = 10  # target latent dimension (10 in my current runs)

class Denoising_AE(nn.Module):
    def __init__(self):
        super(Denoising_AE, self).__init__()
        # encoder: 58 -> 45 -> 30 -> 15 -> latent
        self.enc0 = nn.Linear(in_features=58, out_features=45)
        self.enc1 = nn.Linear(in_features=45, out_features=30)
        self.enc2 = nn.Linear(in_features=30, out_features=15)
        self.enc21 = nn.Linear(in_features=15, out_features=LATENT_SPACE_DIM)
        # decoder: latent -> 15 -> 30 -> 45 -> 58 (mirror of the encoder)
        self.dec0 = nn.Linear(in_features=LATENT_SPACE_DIM, out_features=15)
        self.dec1 = nn.Linear(in_features=15, out_features=30)
        self.dec2 = nn.Linear(in_features=30, out_features=45)
        self.dec3 = nn.Linear(in_features=45, out_features=58)
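One concrete way to test the learning-rate hypothesis, sketched below under assumptions not in the question (Adam optimizer, MSE loss, and a stand-in `nn.Linear` in place of the autoencoder): start from a higher learning rate and let `torch.optim.lr_scheduler.ReduceLROnPlateau` shrink it automatically whenever the epoch loss stalls.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a stand-in model; in practice this would be
# Denoising_AE(). Optimizer and loss are assumptions, not from the question.
model = nn.Linear(58, 58)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the learning rate after 5 epochs with no improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=5)

x = torch.randn(16, 58)  # dummy batch of 58-dim feature vectors
for epoch in range(3):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction loss
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # scheduler watches the epoch loss
```

If the crawling phases shorten once the initial rate is raised, the plateau-then-drop pattern was a step-size symptom rather than a property of the architecture.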