How to re-train a model from false positives

Data Science Asked on July 4, 2021

I'm still a bit new to deep learning. What I'm still struggling with is the best practice for re-training a good model over time.

I've trained a deep model for my binary classification problem (fire vs. non-fire) in Keras. I have 4K fire images and 8K non-fire images (they are video frames), and I train with a 0.2/0.8 validation/training split. When I test the model on some videos, I find some false positives. I add those frames to my negative (non-fire) set, load the best previous model, and retrain for 100 epochs. Among those 100 epochs I keep the checkpoint with the lowest val_loss. But when I test it on the same video, the old false positives are gone, yet new ones are introduced! This never ends, and I don't know if I'm missing something or doing something wrong.
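To give an idea of the mining step, this is roughly how I collect the false positives from a test video that contains no fire (a sketch, not my exact script; the paths, frame step, class index and preprocessing are illustrative):

import os
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("checkpoints/model.h5")
cap = cv2.VideoCapture("test_videos/no_fire.mp4")    # video known to contain no fire

frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 10 == 0:                           # sample every 10th frame
        # assuming the same preprocessing as training (resize to 300x300, scale to [0, 1])
        x = cv2.resize(frame, (300, 300)) / 255.0
        prob_fire = model.predict(x[np.newaxis], verbose=0)[0][1]   # assuming index 1 = "fire"
        if prob_fire > 0.5:                           # predicted fire on a non-fire video -> false positive
            cv2.imwrite(os.path.join("data/non_fire", f"fp_{frame_idx}.jpg"), frame)
            saved += 1
    frame_idx += 1

cap.release()
print(f"collected {saved} false positives")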

How should I know which of the resulting models is the best? What is the best practice in training/re-training a good model? How should I evaluate my models?

Here is my simple model architecture if it helps:

from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                                     Flatten, Dense, Dropout)

def create_model():
  model = Sequential()
  model.add(Conv2D(32, kernel_size = (3, 3), activation='relu', input_shape=(300, 300, 3)))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Dropout(0.2))
  model.add(Flatten())
  model.add(Dense(256, activation='relu'))
  model.add(Dropout(0.2))
  model.add(Dense(64, activation='relu'))
  model.add(Dense(2, activation = 'softmax'))

  return model

#....
if not retrain_from_prior_model:
    # first round: build and compile a fresh model
    model = create_model()
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
else:
    # later rounds: resume from the best previous checkpoint (already compiled)
    model = load_model("checkpoints/model.h5")
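And this is roughly how I run the retraining and pick the lowest-val_loss checkpoint (again a sketch; the directory layout, generator settings and callback are illustrative, not my exact script):

from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(
    "data", target_size=(300, 300), batch_size=32,
    class_mode='categorical', subset='training')
val_gen = datagen.flow_from_directory(
    "data", target_size=(300, 300), batch_size=32,
    class_mode='categorical', subset='validation')

# save_best_only keeps only the checkpoint with the lowest validation loss
checkpoint = ModelCheckpoint("checkpoints/model.h5",
                             monitor='val_loss', mode='min',
                             save_best_only=True, verbose=1)

model.fit(train_gen, validation_data=val_gen,
          epochs=100, callbacks=[checkpoint])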

One Answer

In most cases, you shouldn't retrain a trained network on only the new data. Instead, train the network from scratch on the old and new data combined.

Adding new data and retraining the model on just that new set will most likely make the model fit only that data, forgetting the general features it learned from the data it was originally trained on (catastrophic forgetting).
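In the setup above, that could look something like this (a sketch; the folder names are assumptions, and create_model() plus the generators/checkpoint are the ones from the question):

import glob
import os
import shutil

# merge the newly mined false positives into the existing non-fire class folder,
# so the next training run sees the old and new negatives together
for path in glob.glob("new_false_positives/*.jpg"):
    shutil.copy(path, os.path.join("data/non_fire", os.path.basename(path)))

# rebuild the model with fresh weights instead of loading the previous checkpoint
model = create_model()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# train from scratch on the combined (old + new) data
model.fit(train_gen, validation_data=val_gen, epochs=100, callbacks=[checkpoint])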

Also, instead of selecting your final model based only on validation loss, select it based on a validation metric that reflects what you actually care about. In your case that could be accuracy, precision, or recall, for example.
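For example, with a Keras setup like the one in the question (a sketch; the metric names and checkpoint path are illustrative), you can track validation precision and recall during training and checkpoint on one of them instead of val_loss:

import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint

# class_id=1 assumes index 1 is the "fire" class in the softmax output
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy',
                       tf.keras.metrics.Precision(class_id=1, name='precision'),
                       tf.keras.metrics.Recall(class_id=1, name='recall')])

# keep the checkpoint with the highest validation precision instead of the lowest val_loss
checkpoint = ModelCheckpoint("checkpoints/best_by_precision.h5",
                             monitor='val_precision', mode='max',
                             save_best_only=True, verbose=1)

model.fit(train_gen, validation_data=val_gen, epochs=100, callbacks=[checkpoint])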

Answered by Sid on July 4, 2021
