Data Science Asked by PDPDPDPD on June 25, 2021
I am attempting to train an autoencoder on data that is extremely sparse. Each datapoint consists only of zeros and ones, and roughly 3% of the values are 1s. Because the data is mostly zero, the autoencoder learns to predict zero every time. Is there a way to prevent this from happening? For context, the data is extremely sparse considering that the number of features is over 865,000.
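For illustration, one commonly suggested mitigation for this kind of collapse is to re-weight the reconstruction loss so the rare 1s contribute much more than the abundant 0s. Below is a minimal sketch assuming PyTorch (the question does not name a framework); the bottleneck size, the reduced feature count, and the `pos_weight` value are illustrative assumptions only, not the asker's setup.

```python
# Minimal sketch (assumed framework: PyTorch): an autoencoder on sparse 0/1
# data whose reconstruction loss up-weights the rare positives, so that
# predicting all zeros is no longer a low-loss solution.
import torch
import torch.nn as nn

n_features = 10_000   # the question's data has >865,000 features; kept small so the sketch runs
hidden_dim = 256      # hypothetical bottleneck size

model = nn.Sequential(
    nn.Linear(n_features, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, n_features),  # outputs logits; sigmoid is applied inside the loss
)

# ~3% ones -> weight positives by roughly (1 - 0.03) / 0.03 ≈ 32 (assumed value)
pos_weight = torch.full((n_features,), 32.0)
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

x = (torch.rand(8, n_features) < 0.03).float()  # fake sparse batch for demonstration
logits = model(x)
loss = loss_fn(logits, x)
loss.backward()
```

The key design choice in this sketch is `pos_weight`: with it set near the inverse positive rate, missing a 1 costs about as much as wrongly predicting many 0s, so the all-zero solution stops being attractive.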