Data Science Asked on January 25, 2021
I trained the same model twice on the same dataset with the same hyperparameters (epochs, batch size, learning rate, etc.), yet the two trained models show different train accuracy as well as different test accuracy on the same dataset. (The code is identical for both runs.)
Why do the two models show different accuracy?
Train the model:
Test accuracy = 87.98%
Train the model again:
Test accuracy = 67.18%
Somewhere in the code a parameter is being initialised randomly; the value that controls this randomness is usually called the random seed. The difference could come from the random initialisation of your neural network weights, or from the k-fold splits varying between runs if they are drawn at random. The size of the gap between the two runs suggests that you might not have enough data to reliably learn a good model, or that your model is under-specified and gets stuck in different local minima. Consider changing the learning rate and other hyperparameters and see whether the effect persists.
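As an illustration, here is a minimal sketch of how you might pin down the usual sources of randomness before training, so that repeated runs become comparable. The question does not name a framework, so this assumes PyTorch; the same idea applies to TensorFlow/Keras (`tf.random.set_seed`) or scikit-learn (the `random_state` argument to `KFold` and similar utilities).

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Fix every common source of randomness so repeated runs match."""
    random.seed(seed)                 # Python's built-in RNG (e.g. shuffling)
    np.random.seed(seed)              # NumPy (data splits, augmentation)
    torch.manual_seed(seed)           # PyTorch CPU weight initialisation
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs
    # Force deterministic cuDNN kernels; may slow training slightly
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
# ... build and train the model as before; two runs should now agree
```

Note that even with all seeds fixed, some GPU kernels are non-deterministic by default, so tiny run-to-run differences can remain. A gap as large as roughly 20 percentage points, however, points at the data-size or model issues described above rather than floating-point noise.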
Answered by LuckyLuke on January 25, 2021