Data Science Asked on April 5, 2021
Given a time-series prediction with a Recurrent Neural Network (doesn’t matter if LSTM/GRU/…), a forecast might look like this:
to_predict (orange) was fed to the model, predicted (purple) is the forecast produced by the RNN model, and correct (dashed blue) is how it should have been forecasted.
As can be seen, to_predict (as well as all the training data) is quite “spiky”, while the forecast is much smoother. The smoothness is presumably a result of the model’s architecture and so on; however, my question aims at something else (even though it is connected to this):
Is a smooth forecast that more or less averages out the zigzag of peaks and valleys in the correct data a hint of…
I intentionally did not mention any criteria such as MAE or MAPE, since I am only concerned with this graphical interpretation.
Maybe a little bit late to the party, but this might help you in the future:
Neural networks in general are heavily overparameterised functions fitted to match the training data. This is true for simple MLPs as well as for more complex RNNs. These functions never capture the "true" underlying function; they only approximate it.
In your data, the spikes are simply outliers (meaning that they cannot be explained by the given, underlying input data). This means that a function of your given inputs is not enough to approximate those outliers; instead, you end up "averaging out" or ignoring these spikes as outliers during training (I assume you have a large dataset to train on).
Look at it this way: the input data given to your approximated function is not enough to explain the full "world" that your time series exists in.
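To make the "averaging out" concrete, here is a minimal sketch. It uses plain least squares instead of an RNN (the objective is the same mean-squared error a typical network minimises) and entirely made-up data: a smooth signal plus spikes that carry no relation to the inputs. The fit recovers the smooth conditional mean, not the spikes.

```python
import numpy as np

# Made-up toy series: a smooth signal plus spikes that the available
# inputs (functions of t) cannot explain -- pure noise w.r.t. t.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 400)
smooth = np.sin(t)
spikes = np.zeros_like(t)
spike_idx = rng.choice(len(t), size=20, replace=False)
spikes[spike_idx] = rng.normal(0, 2.0, size=20)
y = smooth + spikes

# Least-squares fit: the same MSE objective an MSE-trained network
# minimises. Since the spikes are independent of t, the minimiser
# converges toward the conditional mean E[y | t], i.e. the smooth part.
X = np.column_stack([np.sin(t), np.cos(t), t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef

# The prediction tracks the smooth signal, not the spiky targets:
print("RMSE vs. spiky targets :", np.sqrt(np.mean((y_hat - y) ** 2)))
print("RMSE vs. smooth signal :", np.sqrt(np.mean((y_hat - smooth) ** 2)))
```

An RNN trained with MSE behaves the same way: wherever the spikes look like unpredictable noise given the inputs, the loss is minimised by predicting their average, which is exactly the smooth curve you observe.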
Answered by Dorian on April 5, 2021