
Understanding policy gradient theorem - What does it mean to take gradients of reward wrt policy parameters?

Data Science: Asked by MiloMinderbinder on September 4, 2021

I am looking for a little clarity on what the policy gradient theorem means. My confusion lies in the fact that the reward $R$ in reinforcement learning is non-differentiable in the policy parameters. Given that, how does the central objective of policy gradients, finding the gradient of the reward $R$ with respect to the policy parameters, even make sense?

One Answer

We want to find the gradient of the policy "return" $V$ with respect to the parameters of the policy, $\theta$. The return can be written as "how good an action is" ($Q$) times the probability of taking that action ($\pi$), summed over actions:

$$V = \sum_a \pi_\theta(a \mid s) \, Q_{\pi_\theta}(s, a)$$

Applying the product rule gives the policy gradient:

$$\nabla_\theta V = \sum_a \left( Q \, \nabla_\theta \pi + \pi \, \nabla_\theta Q \right)$$

The first term tells us to adjust each action's probability in proportion to how good that action is. To me it reads as "if an action yields a good outcome, take it more often". That is, move the peak of $\pi$ to match the peak of $Q$, which is a reasonable thing to do. But of course, since $Q$ cannot directly guide us toward its peak, it is up to $\pi$ to luckily stumble upon the high peak of $Q$. This emphasizes the importance of the exploratory nature of $\pi$.
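Here is a small numeric sketch of just this first term, $\sum_a Q \, \nabla_\theta \pi$, for a single state. The softmax policy and the $Q$ values are made up for illustration; in practice $Q$ would be estimated:

    import numpy as np

    theta = np.zeros(3)                # policy parameters (softmax logits)
    Q = np.array([1.0, 3.0, 0.5])      # hypothetical action values for this state

    def pi(theta):
        e = np.exp(theta - theta.max())
        return e / e.sum()

    # For a softmax, d pi(a) / d theta(j) = pi(a) * (1{a=j} - pi(j))
    p = pi(theta)
    jac = np.diag(p) - np.outer(p, p)  # row a holds grad_theta pi(a)
    first_term = jac.T @ Q             # sum_a Q(a) * grad_theta pi(a)

    theta += 0.5 * first_term          # one gradient ascent step
    print(pi(theta))                   # probability mass shifts toward the highest-Q action

One ascent step visibly moves probability toward the highest-$Q$ action, i.e. the peak of $\pi$ moves toward the peak of $Q$.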

The second term is the reverse: move the peak of $Q$ to match the peak of $\pi$. This is a much harder task because $Q$ is a function of both the action and the policy, $Q_{\pi_\theta}(s, a)$. We clearly don't have this in a differentiable form, i.e. we don't have a universal $Q$ function over the space of all possible $\pi$.

We now have a partial gradient from the first term but we have yet to estimate the second term.

It turns out that the second term can be rewritten recursively, solely in the form of the first term, but over subsequent states and actions.
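To make the recursion explicit, here is a sketch of the standard unrolling step, assuming a discount $\gamma$ and a transition kernel $P$, neither of which depends on $\theta$:

$$\nabla_\theta Q(s, a) = \nabla_\theta \Big( r(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V(s') \Big) = \gamma \sum_{s'} P(s' \mid s, a) \, \nabla_\theta V(s')$$

Expanding $\nabla_\theta V(s')$ with the product rule again reproduces the first term one step later, plus another $\nabla_\theta Q$ term to unroll, and so on: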

$$ \nabla_\theta V_0 = \sum Q_0 \, \nabla_\theta \pi_0 + \sum Q_1 \, \nabla_\theta \pi_1 + \sum Q_2 \, \nabla_\theta \pi_2 + \dots $$

That is, to compute the policy gradient we only need to move the peaks of $\pi$ to match the peaks of $Q$, not only for the first (state, action) pair but for all subsequent (state, action) pairs. This yields the same result as differentiating through $Q$.
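In code, this is essentially what REINFORCE does. Here is a minimal sketch on a hypothetical one-state, three-action problem, where sampled returns stand in for $Q$; the reward means and step size are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.zeros(3)                       # softmax logits
    reward_means = np.array([1.0, 3.0, 0.5])  # hypothetical expected rewards

    def pi(theta):
        e = np.exp(theta - theta.max())
        return e / e.sum()

    for _ in range(2000):
        p = pi(theta)
        a = rng.choice(3, p=p)               # pi must explore to find Q's peak
        G = reward_means[a] + rng.normal()   # sampled return, a stand-in for Q(a)
        grad_log_pi = -p                     # softmax: grad log pi(a) = one_hot(a) - pi
        grad_log_pi[a] += 1.0
        theta += 0.05 * G * grad_log_pi      # sampled version of Q * grad pi

    print(pi(theta))                         # concentrates on the best action

The update uses the log-derivative trick, $\nabla_\theta \pi = \pi \, \nabla_\theta \log \pi$, so sampling actions from $\pi$ and weighting by the observed return gives an unbiased estimate of the first-term sums above, one (state, action) at a time.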

Answered by Phizaz on September 4, 2021
