Asked by 1b15 on March 10, 2021
I am training an LSTM for time series forecasting and it has produced an extremely high loss value during one epoch:
Epoch 00043: saving model to /...
904/904 - 2s - loss: 0.7537 - mean_absolute_error: 0.5772 - val_loss: 1.4430 - val_mean_absolute_error: 0.7124
Epoch 00044: saving model to /...
904/904 - 2s - loss: 240372339275.7649 - mean_absolute_error: 56354.0078 - val_loss: 4.6229 - val_mean_absolute_error: 1.5681
Epoch 00045: saving model to /...
904/904 - 2s - loss: 1.3348 - mean_absolute_error: 0.7894 - val_loss: 2.2875 - val_mean_absolute_error: 1.1510
My model:
from tensorflow import keras

model = keras.Sequential()
# Single LSTM layer, light dropout, one-unit regression head
model.add(keras.layers.LSTM(360, activation='relu', input_shape=(N_STEPS, n_features)))
model.add(keras.layers.Dropout(0.1))
model.add(keras.layers.Dense(1, activation='relu'))
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
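For reference, a fit call of roughly this shape would produce per-epoch log lines like the ones above; the array names X_train, y_train, X_val, y_val and the checkpoint path are placeholders, not taken from the original post:

checkpoint = keras.callbacks.ModelCheckpoint('/path/to/model.h5', verbose=1)  # prints "saving model to ..."
history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=100,
    verbose=2,               # one summary line per epoch, as in the log above
    callbacks=[checkpoint],
)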
What is the cause of this?
Theoretically, it shouldn't be able to produce such a high loss unless its outputs were very large during that epoch, which is strange because the model's output makes sense during the other epochs.
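As a rough sanity check on that claim, a minimal sketch using only the numbers logged for epoch 44:

import numpy as np

mse = 240372339275.7649   # logged training loss (MSE) for epoch 44
mae = 56354.0078          # logged mean absolute error for epoch 44

print(np.sqrt(mse))   # ~4.9e5: root-mean-square error implied by the logged MSE
print(mae ** 2)       # ~3.2e9: what the MSE would be if every error were roughly the MAE,
                      # far below the logged value, so a few predictions must be enormous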
Sorry, I couldn't comment as that requires 50 reputation. At epoch 44 there is a huge spike in the loss. It is entirely possible that the model came across new data in that epoch and had to adjust sharply. Try plotting training and validation loss against epoch to see whether it underfits or overfits.
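A minimal sketch of that diagnostic plot, assuming the return value of model.fit was stored in a variable named history (not shown in the question):

import matplotlib.pyplot as plt

# Keras records per-epoch metrics in history.history
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MSE loss')
plt.legend()
plt.show()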
Answered by Justice_Lords on March 10, 2021