Off-Policy Prediction via Importance Sampling

 

All learning control methods face a dilemma: they seek to learn action values conditional on subsequent optimal behavior, yet they must behave non-optimally in order to explore all actions. One approach is to use two policies: one that is learned about and becomes the optimal policy, and another that is more exploratory and is used to generate behavior.

  • The policy being learned about is called the target policy.
  • The policy that is used to generate behavior is called the behavior policy.

Here, because learning is from data “off” the target policy, we call the process OFF-POLICY LEARNING. The off-policy prediction problem can be summarized as follows:

  • Both the target policy and the behavior policy are fixed and given, and we wish to estimate \(v_{\pi}\) or \(q_{\pi}\).

  • All of the episodes we have follow another policy \(b\), where \(\pi\) is the target policy, \(b\) is the behavior policy, and \(b \neq \pi\).

  • To use episodes from \(b\) to estimate values for \(\pi\), every action taken under \(\pi\) must also be taken, at least occasionally, under \(b\). That is, we require that \(\pi(a \mid s) > 0\) implies \(b(a \mid s) > 0\).

This is called the assumption of coverage. It follows from coverage that \(b\) must be stochastic in states where it is not identical to \(\pi\). In control, the target policy is typically a deterministic greedy policy with respect to the current estimate of the action-value function, while the behavior policy remains stochastic and more exploratory, for example \(\epsilon\text{-greedy}\).
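To make the coverage assumption concrete, the Python sketch below builds a deterministic greedy target policy and an \(\epsilon\text{-greedy}\) behavior policy from the same action-value estimates and checks that \(\pi(a \mid s) > 0\) implies \(b(a \mid s) > 0\). The states, actions, and Q-values (`Q`, `ACTIONS`, `EPSILON`) are invented purely for illustration.

```python
# Illustrative coverage check: a deterministic greedy target policy pi and an
# epsilon-greedy behavior policy b, both built from the same (made-up) Q-values.
ACTIONS = [0, 1, 2]
Q = {                                    # hypothetical action-value estimates
    "s0": {0: 1.0, 1: 0.5, 2: -0.2},
    "s1": {0: 0.1, 1: 0.9, 2: 0.4},
}
EPSILON = 0.1

def pi(a, s):
    """Target policy: greedy w.r.t. Q, so pi(a|s) is 1 for the greedy action, else 0."""
    greedy = max(Q[s], key=Q[s].get)
    return 1.0 if a == greedy else 0.0

def b(a, s):
    """Behavior policy: epsilon-greedy w.r.t. Q, so every action has b(a|s) > 0."""
    greedy = max(Q[s], key=Q[s].get)
    explore = EPSILON / len(ACTIONS)
    return explore + (1.0 - EPSILON) if a == greedy else explore

# Coverage assumption: pi(a|s) > 0 must imply b(a|s) > 0 for every state-action pair.
assert all(b(a, s) > 0 for s in Q for a in ACTIONS if pi(a, s) > 0)
```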

Importance Sampling

Importance sampling is a general technique for estimating expected values under one distribution given samples from another.
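As a minimal, self-contained sketch of this general idea (the distributions `p` and `q`, the function `f`, and the sample count are all made up for illustration), the snippet below estimates \(\mathbb{E}_{p}[f(X)]\) using samples drawn from a different distribution \(q\), weighting each sample by \(p(x)/q(x)\):

```python
import random

# Hypothetical discrete distributions over {0, 1, 2}, chosen only for illustration.
p = {0: 0.1, 1: 0.2, 2: 0.7}   # target distribution we care about
q = {0: 0.4, 1: 0.4, 2: 0.2}   # distribution we actually sample from

def f(x):
    """Arbitrary function whose expectation under p we want."""
    return x * x

random.seed(0)
samples = random.choices(list(q), weights=list(q.values()), k=100_000)  # X ~ q

# Importance-sampling estimate of E_p[f(X)] from samples drawn from q:
# weight each sample by the ratio p(x) / q(x).
estimate = sum(p[x] / q[x] * f(x) for x in samples) / len(samples)
exact = sum(p[x] * f(x) for x in p)

print(f"IS estimate: {estimate:.3f}   exact E_p[f(X)]: {exact:.3f}")
```

With enough samples the weighted average approaches the exact expectation under \(p\), even though no sample was ever drawn from \(p\).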

Importance sampling is applied to off-policy learning by weighting returns according to the relative probability of their trajectories occurring under the target and behavior policies, called the importance sampling ratio. Given a starting state \(S_{t}\), the probability of the subsequent state-action trajectory \(A_{t}, S_{t+1}, A_{t+1}, S_{t+2}, \cdots, S_{T}\) occurring under policy \(\pi\) is:

\[\begin{align} \text{Pr} \left\{A_{t}, S_{t+1}, A_{t+1}, \cdots, S_{T} \mid S_{t}, A_{t:T-1} \sim \pi \right\} &= \pi(A_{t} \mid S_{t})\, p(S_{t+1} \mid S_{t}, A_{t})\, \pi(A_{t+1} \mid S_{t+1}) \cdots p(S_{T} \mid S_{T-1}, A_{T-1}) \notag \\ &= \prod_{k=t}^{T-1} \pi(A_{k} \mid S_{k})\, p(S_{k+1} \mid S_{k}, A_{k}) \notag \end{align}\]

where \(p\) is the state-transition probability function.
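A direct translation of this product into code might look as follows; `policy` and `p` are assumed here to be callables returning \(\pi(a \mid s)\) and \(p(s' \mid s, a)\), an assumption made only for this sketch:

```python
def trajectory_prob(states, actions, policy, p):
    """Probability of the state-action trajectory under `policy`.

    states  -- [S_t, S_{t+1}, ..., S_T]
    actions -- [A_t, A_{t+1}, ..., A_{T-1}]
    policy(a, s)    -- probability of taking action a in state s
    p(s_next, s, a) -- probability of transitioning to s_next from (s, a)
    """
    prob = 1.0
    for k in range(len(actions)):
        prob *= policy(actions[k], states[k]) * p(states[k + 1], states[k], actions[k])
    return prob
```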
The importance sampling ratio is then the relative probability of this trajectory under the target and behavior policies:

\[\begin{align} \rho_{t:T-1} &\doteq \frac{\prod_{k=t}^{T-1} \pi(A_{k} \mid S_{k})\, p(S_{k+1} \mid S_{k}, A_{k})}{\prod_{k=t}^{T-1} b(A_{k} \mid S_{k})\, p(S_{k+1} \mid S_{k}, A_{k})} \notag \\ &= \prod_{k=t}^{T-1} \frac{\pi(A_{k} \mid S_{k})}{b(A_{k} \mid S_{k})} \notag \end{align}\]

Because the state-transition probabilities appear in both the numerator and the denominator, they cancel, so the importance sampling ratio depends only on the two policies and the sequence of states and actions, not on the MDP dynamics.
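Since the dynamics cancel, the ratio can be computed from the logged states and actions alone. A minimal sketch, assuming the same tabular-policy callables `pi(a, s)` and `b(a, s)` as in the earlier coverage example:

```python
def importance_sampling_ratio(states, actions, pi, b):
    """rho_{t:T-1}: the product over k of pi(A_k | S_k) / b(A_k | S_k).

    The transition probabilities cancel, so only the logged states and
    actions and the two policies are needed -- not the MDP dynamics.
    """
    rho = 1.0
    for s, a in zip(states, actions):   # pairs S_k with A_k for k = t, ..., T-1
        rho *= pi(a, s) / b(a, s)       # coverage guarantees b(a, s) > 0 here
    return rho
```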

In importance sampling we wish to estimate expected returns under the target policy, but the returns \(G_{t}\) we observe are generated by the behavior policy, so their expectation is \(\mathbb{E}[G_{t} \mid S_{t} = s] = v_{b}(s)\), which is the wrong quantity and cannot converge to \(v_{\pi}\). Multiplying each return by the importance sampling ratio corrects the expectation to the value we want:

\[\mathbb{E}[\rho_{t:T-1} G_{t} \mid S_{t} = s] = v_{\pi}(s).\]
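In practice this correction amounts to averaging the ratio-weighted returns of episodes that start in \(s\) but were generated by \(b\) (the simple average below is the ordinary importance sampling estimator). A sketch reusing `importance_sampling_ratio` from above, with a hypothetical episode format `(states, actions, G)` assumed only for illustration:

```python
def ordinary_importance_sampling(episodes, pi, b):
    """Estimate v_pi(s) from episodes that start in s but were generated by b.

    Each episode is assumed to be a tuple (states, actions, G), where G is the
    observed return from the first state; averaging rho * G over episodes
    approximates E[rho_{t:T-1} G_t | S_t = s] = v_pi(s).
    """
    weighted = [
        importance_sampling_ratio(states, actions, pi, b) * G
        for states, actions, G in episodes
    ]
    return sum(weighted) / len(weighted)
```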