Artificial Intelligence Asked on August 24, 2021
I’m looking at some baseline implementations of RL agents on the Pendulum environment. My guess was to use a relatively small neural net (~100 parameters).
I’m comparing my solution with some baselines, e.g. the top entry on the Pendulum leaderboard. The models for these solutions are typically huge, on the order of 120k parameters, and they use very large replay buffers as well, around 1M transitions. Such model sizes seem warranted for Atari-like environments, but for something as small as the Pendulum, this seems like complete overkill to me.
Are there examples of agents that use a more modest number of parameters on Pendulum (or similar environments)?
Actually, after inspecting the entries further down the leaderboard, there are in fact more modest architectures, e.g. this one, which uses 3 hidden layers with 8 units each.
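For a sense of scale, the parameter counts of these fully connected nets can be computed directly. Below is a minimal sketch that compares the 3×8 architecture mentioned above against a larger net with the 400/300 hidden sizes popularized by the DDPG paper (the exact layer sizes of the leaderboard entries are assumptions here; Pendulum has a 3-dimensional observation and a 1-dimensional action):

```python
def mlp_param_count(layer_sizes):
    """Total weights + biases of a fully connected net with the given layer sizes."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# Modest net: 3 hidden layers of 8 units each (as in the leaderboard entry above).
small = mlp_param_count([3, 8, 8, 8, 1])
print(small)  # 185

# Larger net: assumed 400/300 hidden layers, a common DDPG default.
large = mlp_param_count([3, 400, 300, 1])
print(large)  # 122201
```

This illustrates the gap in the question: the assumed 400/300 actor alone lands at roughly 120k parameters, while the small architecture needs only a couple hundred.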
Answered by Kris on August 24, 2021