Data Science Asked by Faruk on May 16, 2021
Currently, I am working on my thesis, which is built on LSTM networks, and I am using the PyTorch library. However, I am struggling with the conceptual problem of archiving trained models.
To make the question clearer: I am saving models under names of the form /models/model-with-loss-2.634221, but with this scheme it is hard to determine which model is which. I tried a more detailed form like 1-layered-100-epoch-128-batchsize-...-etc, but that is also hard to read and compare.
What approach do you find most productive for handling this?
By the way, I am not sure this is the correct site to ask this question on; please drop a comment if it is not.
One option is to give each model a unique identifier (e.g., a hash value or nickname) and store all the metadata in a separate file.
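A minimal sketch of that approach, assuming placeholder hyperparameter names (n_layers, epochs, batch_size) and a models/ directory; none of these are a fixed convention:

    import hashlib
    import json
    import os
    import torch

    def save_model(model, hparams, loss, model_dir="models"):
        os.makedirs(model_dir, exist_ok=True)
        # Derive a short, stable identifier from the hyperparameters.
        key = json.dumps(hparams, sort_keys=True)
        model_id = hashlib.sha1(key.encode()).hexdigest()[:8]
        # Save only the weights; the metadata file records everything else.
        torch.save(model.state_dict(), os.path.join(model_dir, f"{model_id}.pt"))
        # Append this run's metadata to a single index file.
        index_path = os.path.join(model_dir, "metadata.json")
        index = {}
        if os.path.exists(index_path):
            with open(index_path) as f:
                index = json.load(f)
        index[model_id] = {"loss": loss, **hparams}
        with open(index_path, "w") as f:
            json.dump(index, f, indent=2)
        return model_id

    # Example usage with placeholder values:
    # save_model(model, {"n_layers": 1, "epochs": 100, "batch_size": 128}, loss=2.634221)

This keeps filenames short while the metadata file stays greppable, so you can look up any run by its hash.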
Another option is to use the torch-model-archiver tool, which ships alongside TorchServe and packages a serialized model together with its metadata into a single versioned .mar file.
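A hypothetical invocation might look like the following; the flag names follow the torch-model-archiver documentation, but the file names are placeholders, and eager-mode models additionally need a --model-file pointing at the model class definition:

    pip install torch-model-archiver
    mkdir -p model_store
    torch-model-archiver --model-name lstm_thesis \
        --version 1.0 \
        --serialized-file models/lstm_thesis.pt \
        --handler my_handler.py \
        --export-path model_store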
Correct answer by Brian Spiering on May 16, 2021