Data Science Asked on January 16, 2021
My model's structure is:

                             Output
                                ^
                                |
                        -----------------
                        | Dense Network |
                        -----------------
                                ^
                                ||
                                ||
|---------------------|         ||         |-----------------------|
| RNN on features     |========>||<========| Dense Network on non  |
| changing with time  |      [concat]      | time series data      |
|---------------------|                    |-----------------------|
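For reference, here is a minimal sketch of this two-branch layout in the Keras functional API. The input shapes, layer sizes, and the binary sigmoid output are hypothetical placeholders (the post does not specify them); only the overall RNN-plus-dense concatenation structure follows the diagram above.

```python
# Minimal sketch of the two-branch model described above (Keras functional API).
# All shapes and layer sizes below are hypothetical placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model

# Branch 1: RNN over the features that change with time
# (assumed shape: 30 time steps x 8 features per step)
ts_input = layers.Input(shape=(30, 8), name="time_series_input")
rnn_out = layers.LSTM(32)(ts_input)

# Branch 2: dense network over the non time-series (static) features
# (assumed: 16 static features)
static_input = layers.Input(shape=(16,), name="static_input")
static_out = layers.Dense(32, activation="relu")(static_input)

# Concatenate both branches and feed them into the dense head
merged = layers.Concatenate()([rnn_out, static_out])
hidden = layers.Dense(64, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid", name="output")(hidden)

model = Model(inputs=[ts_input, static_input], outputs=output)
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)
model.summary()
```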
These are the training and validation set metric outputs of my model. Why are the values fluctuating so much for the validation set? Any ideas?
[Figure 1: training and validation curves for loss, accuracy, precision, and recall]
Update:

As suggested in the comments, I have tried increasing the validation set size. The split ratio is now 49.6%-50.4%.
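For illustration only, a roughly even split like that could be produced along the lines below; the arrays, shapes, and the choice of which side is the validation set are assumptions, since the post does not show how the split was actually made.

```python
# Illustrative sketch of a roughly 49.6% / 50.4% split; the arrays below are
# dummy data standing in for the actual time-series and static features.
import numpy as np
from sklearn.model_selection import train_test_split

X_time_series = np.random.rand(1000, 30, 8)   # samples x time steps x features
X_static = np.random.rand(1000, 16)           # samples x static features
y = np.random.randint(0, 2, size=1000)        # binary labels (assumed)

# test_size=0.504 is an assumption about which side is the validation set
(X_ts_train, X_ts_val,
 X_st_train, X_st_val,
 y_train, y_val) = train_test_split(
    X_time_series, X_static, y,
    test_size=0.504, random_state=42, stratify=y,
)
```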
I have also made the model much simpler by using fewer layers. The new graphs look like this:
[Figure 2: training and validation curves for loss, accuracy, precision, and recall for the simpler model]
Is this acceptable as ‘okay’ fluctuation?
Thanks for updating the post. This level of fluctuation in the validation set is much less dramatic than before and looks similar to the regular fluctuation I have seen in my experience. Kudos for also managing to prevent the model from overfitting.
Correct answer by shepan6 on January 16, 2021