Data Science Asked by Maurits van Roozendaal on December 19, 2020
I am dabbling in active learning and was wondering how to combine it with searching for the best architecture for the network.
In my understanding, active learning uses a heuristic for selecting the best instances to label in order to learn as quickly as possible. However, the way these instances are chosen depends on the model itself.
Is there a way to handle this model dependency?
It seems to me that a model’s architecture depends on the training set size, for example. If that’s the case, wouldn’t it be beneficial to allow the model to change its architecture during active learning?
Is there a way to do this, or do we have to be very careful when selecting a model before performing active learning?
Another possibility I see is to perform a grid search over the network’s architecture on all the labeled data after every so many queries. But then again, the model that comes out on top still depends on the initially chosen model…
Training machine learning models involves nested loops. The outer loop selects the hyperparameters for the model, including the architecture. The inner loop trains the model's parameters while holding the hyperparameters constant.
It is possible to try many options in the outer loop and make the choice among them dependent on the results of the inner loop.
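As a rough illustration (not part of the original answer), here is a minimal sketch of interleaving the two loops with scikit-learn: an uncertainty-sampling query loop (inner) with a periodic grid search over the architecture on all labeled data so far (outer). The synthetic dataset, the hidden-layer grid, and the re-tuning interval are illustrative assumptions, not recommendations.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Small labeled seed set; the rest is the unlabeled pool.
labelled = list(rng.choice(len(X), size=20, replace=False))
pool = [i for i in range(len(X)) if i not in set(labelled)]

param_grid = {"hidden_layer_sizes": [(16,), (32,), (32, 16)]}  # assumed grid
retune_every = 25  # re-run the outer loop after this many queries (assumed)
n_queries = 100

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X[labelled], y[labelled])

for q in range(n_queries):
    # Inner-loop model picks the next instance: least-confident sampling.
    probs = model.predict_proba(X[pool])
    idx_in_pool = int(np.argmin(probs.max(axis=1)))
    labelled.append(pool.pop(idx_in_pool))  # the "oracle" label is just y here

    if (q + 1) % retune_every == 0:
        # Outer loop: re-select the architecture on all labeled data so far.
        search = GridSearchCV(
            MLPClassifier(max_iter=500, random_state=0), param_grid, cv=3
        )
        search.fit(X[labelled], y[labelled])
        model = search.best_estimator_
    else:
        # Otherwise just refit the current architecture (inner loop only).
        model.fit(X[labelled], y[labelled])
```

This keeps the architecture choice dependent on the labels acquired so far rather than on the initially chosen model, at the cost of rerunning the grid search periodically.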
Answered by Brian Spiering on December 19, 2020