
DNN predicting the same value for train and test data

Data Science · Asked on May 6, 2021

I have trained a deep neural network for a regression problem, and after hundreds of epochs the model predicts the same output for every sample, on both the training and the test data.

When I reduce the batch size, at least I no longer get the same value for all samples.

My guess is that the model is not actually learning and that the gradients are dying somewhere along the way. If that is correct, I am not sure why it is happening in my case, since my architecture has just 3 CNN layers followed by 3 dense layers.
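
To check this hypothesis, here is a minimal diagnostic sketch (assuming a PyTorch model built from nn.ReLU modules; the function name dead_relu_fractions is my own) that measures the fraction of zero outputs after each ReLU. A fraction close to 1.0 in some layer would mean most of its units are dead, which can force a near-constant prediction:

    import torch
    import torch.nn as nn

    def dead_relu_fractions(model, batch):
        """Return, per ReLU layer, the fraction of activations that are zero."""
        fractions = {}
        hooks = []

        def make_hook(name):
            def hook(module, inputs, output):
                fractions[name] = (output == 0).float().mean().item()
            return hook

        # Attach a forward hook to every nn.ReLU module in the network.
        for name, module in model.named_modules():
            if isinstance(module, nn.ReLU):
                hooks.append(module.register_forward_hook(make_hook(name)))

        with torch.no_grad():
            model(batch)

        for h in hooks:
            h.remove()
        return fractions

    # Usage (model and train_loader are placeholders for your own objects):
    # print(dead_relu_fractions(model, next(iter(train_loader))[0]))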

  1. I am using ReLU as the activation function; would switching to LeakyReLU be beneficial? (See the sketch after this list.)
  2. How does the batch size contribute to this?
  3. Are there any other probable causes of this problem?
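
For reference, here is a minimal sketch (PyTorch assumed; the layer sizes, kernel sizes, and the choice of Conv1d are illustrative, not taken from my actual code) of a "3 CNN + 3 dense" regression network with ReLU replaced by LeakyReLU. LeakyReLU keeps a small non-zero gradient for negative inputs, so its units cannot die the way plain ReLU units can:

    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.01),
        nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.LeakyReLU(0.01),
        nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.LeakyReLU(0.01),
        nn.Flatten(),
        nn.LazyLinear(128), nn.LeakyReLU(0.01),  # input size inferred on first forward pass
        nn.Linear(128, 64), nn.LeakyReLU(0.01),
        nn.Linear(64, 1),                        # linear output for regression
    )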

Regards.
