Artificial Intelligence Asked by Zahra on December 7, 2020
I’m trying to implement a soft actor-critic (SAC) algorithm on financial data (stock prices), but I’m having trouble with the losses: no matter which combination of hyperparameters I use, they do not converge, and the episode rewards are poor as well. It seems the agent is not learning at all.
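For reference, the losses I mean are the two standard SAC loss terms (critic and actor). The snippet below is only a minimal sketch of those two updates, not my actual code: it assumes PyTorch, uses a single critic for brevity (real SAC uses twin critics, target-network updates, and often a learned temperature), omits the tanh log-prob correction, and all names and dimensions are hypothetical placeholders.

```python
import torch
import torch.nn as nn

state_dim, action_dim, hidden = 8, 1, 64

# Simple Gaussian policy: outputs mean and log-std of the action.
class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * action_dim))
    def sample(self, s):
        mean, log_std = self.net(s).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())
        a = dist.rsample()                              # reparameterized sample
        log_prob = dist.log_prob(a).sum(-1, keepdim=True)
        return torch.tanh(a), log_prob                  # tanh correction omitted for brevity

# Q-network: state and action in, scalar value out.
def make_q():
    return nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, 1))

policy, q1, q1_target = Policy(), make_q(), make_q()
q1_target.load_state_dict(q1.state_dict())
alpha, gamma = 0.2, 0.99

# Fake batch standing in for replay-buffer samples.
s  = torch.randn(32, state_dim)
a  = torch.rand(32, action_dim) * 2 - 1
r  = torch.randn(32, 1)
s2 = torch.randn(32, state_dim)
done = torch.zeros(32, 1)

# Critic loss: TD target uses the next action sampled from the current
# policy, with the entropy bonus (alpha * log-prob) subtracted.
with torch.no_grad():
    a2, logp2 = policy.sample(s2)
    target = r + gamma * (1 - done) * (q1_target(torch.cat([s2, a2], -1)) - alpha * logp2)
critic_loss = nn.functional.mse_loss(q1(torch.cat([s, a], -1)), target)

# Actor loss: maximize Q plus entropy, i.e. minimize alpha * log-prob - Q.
a_new, logp = policy.sample(s)
actor_loss = (alpha * logp - q1(torch.cat([s, a_new], -1))).mean()
print(critic_loss.item(), actor_loss.item())
```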
I have already tried tuning some of the hyperparameters (the learning rate of each network and the number of hidden layers), but I always get similar results.
The two plots below show the loss of my policy and of one of the value functions during the last episode of training.
My question is: could this be due to the data itself (the nature of the data), or is it something related to the logic of the code?
I would say it is the nature of the data. Generally speaking, you are trying to predict what is essentially a random sequence, especially if you use the historical data as input and try to get a future value as output.
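One quick way to see this in your own data is to check how much linear structure the return series actually has, for example the lag-1 autocorrelation of the log returns; for near-random price data it is close to zero. The snippet below is only an illustrative sketch: the file name and column name are hypothetical placeholders for your dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names; replace with your own dataset.
prices = pd.read_csv("prices.csv")["close"].to_numpy()
log_returns = np.diff(np.log(prices))

# Lag-1 autocorrelation of the return series.
r = log_returns
autocorr = np.corrcoef(r[:-1], r[1:])[0, 1]
print(f"lag-1 autocorrelation of returns: {autocorr:.4f}")
```

If that number is close to zero, the next return carries almost no linear information from the previous one, which is roughly the situation your agent is facing when it only sees price history.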
Correct answer by oleg.mosalov on December 7, 2020