Bellman Equation
$$
\begin{aligned}
v_\pi(s) &= \mathbb{E}\left[G_t \mid S_t=s\right] \\
&= \mathbb{E}\left[R_{t+1}+\gamma G_{t+1} \mid S_t=s\right] \\
&= \mathbb{E}\left[R_{t+1} \mid S_t=s\right]+\gamma\, \mathbb{E}\left[G_{t+1} \mid S_t=s\right] \\
&= \sum_a \pi(a \mid s)\, \mathbb{E}\left[R_{t+1} \mid S_t=s, A_t=a\right] + \gamma \sum_{s^{\prime}} \mathbb{E}\left[G_{t+1} \mid S_t=s, S_{t+1}=s^{\prime}\right] p\left(s^{\prime} \mid s\right) \\
&= \sum_a \pi(a \mid s) \sum_r p(r \mid s, a)\, r + \gamma \sum_{s^{\prime}} v_\pi\left(s^{\prime}\right) p\left(s^{\prime} \mid s\right) \\
&= \sum_a \pi(a \mid s) \sum_r p(r \mid s, a)\, r + \gamma \sum_{s^{\prime}} v_\pi\left(s^{\prime}\right) \sum_a p\left(s^{\prime} \mid s, a\right) \pi(a \mid s) \\
&= \underbrace{\sum_a \pi(a \mid s) \sum_r p(r \mid s, a)\, r}_{\text{mean of immediate rewards}} + \underbrace{\gamma \sum_a \pi(a \mid s) \sum_{s^{\prime}} p\left(s^{\prime} \mid s, a\right) v_\pi\left(s^{\prime}\right)}_{\text{mean of future rewards}} \\
&= \sum_a \pi(a \mid s) \underbrace{\left[\sum_r p(r \mid s, a)\, r + \gamma \sum_{s^{\prime}} p\left(s^{\prime} \mid s, a\right) v_\pi\left(s^{\prime}\right)\right]}_{q_\pi(s, a)} \\
&= \sum_a \pi(a \mid s) \underbrace{\mathbb{E}\left[G_t \mid S_t=s, A_t=a\right]}_{q_\pi(s, a)} \\
&= \sum_a \pi(a \mid s)\, q_\pi(s, a), \quad \forall s \in \mathcal{S}.
\end{aligned}
$$

The step from $\mathbb{E}\left[G_{t+1} \mid S_t=s, S_{t+1}=s^{\prime}\right]$ to $v_\pi(s^{\prime})$ uses the Markov property: given $S_{t+1}=s^{\prime}$, the future return no longer depends on $S_t$, so $\mathbb{E}\left[G_{t+1} \mid S_t=s, S_{t+1}=s^{\prime}\right] = \mathbb{E}\left[G_{t+1} \mid S_{t+1}=s^{\prime}\right] = v_\pi(s^{\prime})$.
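Stacking the equation over all states gives the matrix-vector form $v_\pi = r_\pi + \gamma P_\pi v_\pi$, where $[r_\pi]_s = \sum_a \pi(a \mid s) \sum_r p(r \mid s, a)\, r$ and $[P_\pi]_{s s^{\prime}} = \sum_a \pi(a \mid s)\, p(s^{\prime} \mid s, a)$. It can be solved in closed form as $v_\pi = (I - \gamma P_\pi)^{-1} r_\pi$, or by the fixed-point iteration $v_{k+1} = r_\pi + \gamma P_\pi v_k$. The NumPy sketch below illustrates both routes on a small, made-up 3-state MDP; `P_pi`, `r_pi`, and `gamma` are illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical 3-state MDP under a fixed policy pi (illustrative values only).
# P_pi[s, s'] = sum_a pi(a|s) p(s'|s, a): state-transition matrix under pi.
# r_pi[s]     = sum_a pi(a|s) sum_r p(r|s, a) r: expected immediate reward under pi.
gamma = 0.9
P_pi = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
])
r_pi = np.array([0.0, 1.0, 2.0])

# Closed-form solution: v_pi = (I - gamma * P_pi)^{-1} r_pi.
v_closed = np.linalg.solve(np.eye(3) - gamma * P_pi, r_pi)

# Iterative solution: v_{k+1} = r_pi + gamma * P_pi v_k (converges for gamma < 1).
v = np.zeros(3)
for _ in range(1000):
    v = r_pi + gamma * P_pi @ v

print(v_closed)
print(v)
```

Both computations return the same vector: the Bellman equation has a unique solution $v_\pi$ for a given policy.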
Bellman Optimality Equation
$$
\begin{aligned}
v(s) &= \max_\pi \sum_a \pi(a \mid s)\left(\sum_r p(r \mid s, a)\, r + \gamma \sum_{s^{\prime}} p\left(s^{\prime} \mid s, a\right) v\left(s^{\prime}\right)\right) \\
&= \max_\pi \sum_a \pi(a \mid s)\, q(s, a), \quad \forall s \in \mathcal{S}
\end{aligned}
$$
Because the maximum over $\pi(\cdot \mid s)$ is attained by placing all probability on a greedy action, the right-hand side reduces to $\max_a q(s, a)$. By the contraction mapping theorem, the Bellman optimality equation has a unique solution for $v$ (the optimal state value), although the optimal policy that attains it is not necessarily unique.
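The contraction property also suggests an algorithm: starting from an arbitrary $v_0$, repeatedly compute $q_k(s,a)$ from $v_k$ and set $v_{k+1}(s) = \max_a q_k(s,a)$. The iterates converge to the unique $v^*$, and a greedy policy with respect to the final $q$ is optimal; this is value iteration. A minimal sketch for a small, made-up 2-state, 2-action MDP (the arrays `P`, `R`, and the discount `gamma` are illustrative assumptions, not from the text):

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative values only).
# P[s, a, s'] = p(s'|s, a), R[s, a] = sum_r p(r|s, a) r.
gamma = 0.9
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],   # transitions from state 0 under actions 0, 1
    [[0.5, 0.5], [0.0, 1.0]],   # transitions from state 1 under actions 0, 1
])
R = np.array([
    [1.0, 0.0],
    [0.5, 2.0],
])

# Value iteration: v_{k+1}(s) = max_a [ R(s,a) + gamma * sum_{s'} p(s'|s,a) v_k(s') ].
# The contraction mapping theorem guarantees convergence to the unique v* from any v_0.
v = np.zeros(2)
for _ in range(500):
    q = R + gamma * np.einsum("sap,p->sa", P, v)  # q_k(s, a)
    v_new = q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

pi_star = q.argmax(axis=1)  # a greedy deterministic optimal policy
print(v, pi_star)
```

Ties in the final `argmax` are exactly where multiple optimal policies can arise, while $v^*$ itself is unique.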
References
强化学习的数学原理——从零开始透彻理解强化学习
Book-Mathematical-Foundation-of-Reinforcement-Learning