
Strange behavior of CNN when forecasting time series

Data Science Asked on March 25, 2021

I have a time series containing 5 features. I tried to use an LSTM to predict the next 112 periods in the series, but I got very bad results, so I tried a CNN instead.

At first, the CNN did not work well when trained on a large amount of data: it could not capture the characteristics of the series. After reducing the data to only 224 samples for training and 224 for validation, the CNN can find the pattern in the data, but it overfits (final epoch: loss: 4.7838e-15, val_loss: 0.0971).

The strange behavior is that even though the model overfits, it generalizes and predicts future values better than when I add some kind of regularizer. Does anyone have an explanation for this? Any tips to improve the predictions in this case?
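(For context, the model below expects input of shape (n_steps_in, n_features), so the 5-feature series has to be windowed into supervised samples first. Below is a minimal sketch of one way this could be done, assuming the target is the first column and the forecast horizon is the 112 periods mentioned above; the function name and window sizes are illustrative, not taken from the original code.)

import numpy as np

def make_windows(series, n_steps_in, n_steps_out):
    # series: array of shape (timesteps, n_features)
    # Returns X of shape (samples, n_steps_in, n_features) and
    # y of shape (samples, n_steps_out), using column 0 as the target.
    X, y = [], []
    for i in range(len(series) - n_steps_in - n_steps_out + 1):
        X.append(series[i:i + n_steps_in])
        y.append(series[i + n_steps_in:i + n_steps_in + n_steps_out, 0])
    return np.array(X), np.array(y)

# e.g. X, y = make_windows(train_data, n_steps_in, n_steps_out=112)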

The code:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense
from keras.callbacks import EarlyStopping

# X has shape (samples, n_steps_in, n_features); y has shape (samples, n_output)
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=2, activation='sigmoid', input_shape=(n_steps_in, n_features)))
model.add(Conv1D(filters=32, kernel_size=2, activation='sigmoid'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(32, activation='sigmoid'))
model.add(Dense(32, activation='sigmoid'))
model.add(Dense(n_output))
model.compile(optimizer='adam', loss='mse')

# stop when the training loss has not improved for 30 epochs
stop = EarlyStopping(monitor='loss', min_delta=1e-12, patience=30)

# fit model
history = model.fit(X, y, epochs=1000, verbose=1, callbacks=[stop], validation_data=(test_X, test_y))
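For comparison, here is a minimal sketch of the same architecture with the kind of regularization the question alludes to (L2 weight decay on the layers plus dropout after pooling). The rates are illustrative assumptions, not values from the original experiment, and the activations are kept as in the question.

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout
from keras.regularizers import l2

# Illustrative regularized variant: the 1e-4 L2 factor and 0.2 dropout
# rate are assumptions, not values from the original post.
reg_model = Sequential()
reg_model.add(Conv1D(filters=32, kernel_size=2, activation='sigmoid',
                     kernel_regularizer=l2(1e-4),
                     input_shape=(n_steps_in, n_features)))
reg_model.add(Conv1D(filters=32, kernel_size=2, activation='sigmoid',
                     kernel_regularizer=l2(1e-4)))
reg_model.add(MaxPooling1D(pool_size=2))
reg_model.add(Dropout(0.2))
reg_model.add(Flatten())
reg_model.add(Dense(32, activation='sigmoid', kernel_regularizer=l2(1e-4)))
reg_model.add(Dense(32, activation='sigmoid', kernel_regularizer=l2(1e-4)))
reg_model.add(Dense(n_output))
reg_model.compile(optimizer='adam', loss='mse')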

