Data Science Asked by dohm on February 20, 2021
I have a DenseNet121 implemented in PyTorch for image classification. For now, the training set-up is pretty straightforward.
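For reference, a minimal sketch of what such a baseline setup typically looks like; the optimizer, loss function, hyperparameters, and dummy data below are illustrative assumptions, not the original code:

```python
import torch
import torch.nn as nn
import torchvision

# Assumed baseline: DenseNet121, cross-entropy loss, plain SGD with a fixed
# learning rate, trained for a fixed number of steps.
model = torchvision.models.densenet121(num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Dummy batch standing in for a real DataLoader over the image dataset.
inputs = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 10, (8,))

for step in range(2):               # in practice: loop over the DataLoader
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```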
I was wondering how to extend this into a proper statistical experiment set-up, with the goal of getting stable accuracy and loss results, i.e. being able to say that my model consistently gives the same results. I was thinking along the lines of running the testing step, say, 10 times and averaging the error (see the sketch below). Or do you see some deficiency in the training that I could fix to improve stability?
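A sketch of that repeat-and-average protocol, assuming a hypothetical run_experiment() placeholder for the existing train/evaluate pipeline (here it just simulates a noisy accuracy so the script runs end to end):

```python
import random
import statistics

def run_experiment(seed: int) -> float:
    """Placeholder for: fix the seeds, build DenseNet121, train, evaluate."""
    random.seed(seed)
    return 0.90 + random.uniform(-0.02, 0.02)  # simulated test accuracy

# Repeat the full experiment with several seeds and report mean +/- std,
# which is the usual way to show that results are stable across runs.
accuracies = [run_experiment(seed) for seed in range(10)]
mean = statistics.mean(accuracies)
std = statistics.stdev(accuracies)
print(f"test accuracy over {len(accuracies)} runs: {mean:.4f} +/- {std:.4f}")
```

Reporting a mean together with a standard deviation (or a confidence interval) over independent runs is what lets you claim the results are consistent rather than a single lucky run.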
Thanks in advance
The learning rate is one of the first and most important hyperparameters of a model, and one you need to think about almost as soon as you start building it. It controls how large the update steps your model takes are, and therefore how quickly it learns.
There is a learning-rate scheduling technique called cyclical learning rates.
Training with cyclical learning rates instead of a fixed value often improves classification accuracy in fewer iterations, without the need to tune the learning rate by hand. See the paper Cyclical Learning Rates for Training Neural Networks, and for a PyTorch implementation with a more detailed walkthrough, the blog post Adaptive - and Cyclical Learning Rates using PyTorch.
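A minimal sketch of how this can be wired up with PyTorch's built-in torch.optim.lr_scheduler.CyclicLR; the model, dummy data, and the base_lr/max_lr/step_size_up values are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.densenet121(num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# Cycle the learning rate between base_lr and max_lr in a triangular pattern.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,       # lower bound of each cycle
    max_lr=1e-2,        # upper bound of each cycle
    step_size_up=200,   # batches taken to climb from base_lr to max_lr
    mode="triangular",
)

# Dummy batch standing in for a real DataLoader over the image dataset.
inputs = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 10, (8,))

for step in range(3):               # in practice: loop over the DataLoader
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()                # CyclicLR is stepped once per batch
    print(step, scheduler.get_last_lr())
```

Note that CyclicLR is stepped after every batch rather than every epoch, and a sensible max_lr is usually found with a short learning-rate range test as described in the paper.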
With this small trick, you can build a more stable version of your model. Best of luck.
Correct answer by AIFahim on February 20, 2021