16. Hidden Markov Models (HMM): A Powerful Tool for Sequence Modeling


I. Overview

1. Introduction

Dynamic models can be contrasted with static models such as the Gaussian mixture model (GMM). The defining feature of a GMM is "mixture"; a dynamic model adds "time" on top of the "mixture". Dynamic models include several families:

$$Dynamic\; Model\left\{\begin{matrix} HMM \\ Kalman\; Filter\\ Particle\; Filter \end{matrix}\right.$$

The hidden Markov model is one kind of dynamic model; its state space is discrete, while the other two dynamic models have continuous state spaces.

2. The Model

The probabilistic graphical model of the HMM is shown below:

(Figure: the probabilistic graphical model of the HMM)

In the figure, $t$ denotes time; the shaded nodes form the observation sequence $O$, and the unshaded nodes form the state sequence $I$. We also define $V$ as the set of possible observation values and $Q$ as the set of possible state values:

$$O=o_{1},o_{2},\cdots ,o_{t}\rightarrow V=\left \{v_{1},v_{2},\cdots ,v_{m}\right \}\\ I=i_{1},i_{2},\cdots ,i_{t}\rightarrow Q=\left \{q_{1},q_{2},\cdots ,q_{n}\right \}$$

The parameters of the hidden Markov model are written as $\lambda$:

$$\lambda =(\pi ,A,B)$$

where $\pi$ is the initial state distribution (a vector), $A$ is the state transition matrix, and $B$ is the emission matrix:

$$\pi =(\pi _{1},\pi _{2},\cdots ,\pi _{N}),\quad \sum_{i=1}^{N}\pi _{i}=1\\ A=[a_{ij}],\quad a_{ij}=P(i_{t+1}=q_{j}|i_{t}=q_{i})\\ B=[b_{j}(k)],\quad b_{j}(k)=P(o_{t}=v_{k}|i_{t}=q_{j})$$

3. Two Assumptions

  • Homogeneous Markov assumption

The state at any time depends only on the state at the previous time:

$$P(i_{t+1}|i_{t},i_{t-1},\cdots ,i_{1},o_{t},o_{t-1},\cdots ,o_{1})=P(i_{t+1}|i_{t})$$

  • Observation independence assumption

The observation at any time depends only on the state at that time:

$$P(o_{t}|i_{t},i_{t-1},\cdots ,i_{1},o_{t-1},\cdots ,o_{1})=P(o_{t}|i_{t})$$

4. Three Problems

  • Evaluation

Given the model parameters $\lambda =(\pi ,A,B)$, compute the probability of an observation sequence:

$$P(O|\lambda )$$

  • Learning

Given the observation sequence, estimate the parameters $\lambda$ using the EM algorithm:

$$\lambda _{MLE}=\underset{\lambda }{argmax}\; P(O|\lambda )$$

  • Decoding

Given the observation sequence $O$ and the parameters $\lambda$, find the state sequence $I$ that maximizes $P(I|O)$:

$$\hat{I}=\underset{I}{argmax}\; P(I|O)$$

II. The Evaluation Problem

For a hidden Markov model with known parameters $\lambda$, the Evaluation problem is to compute $P(O|\lambda )$.


1. The Forward Algorithm

First, we have:

$$P(O|\lambda )=\sum _{I}P(I,O|\lambda )=\sum _{I}P(O|I,\lambda )P(I|\lambda )$$

For the term $P(I|\lambda )$ above:

$$P(I|\lambda )=P(i_{1},i_{2},\cdots ,i_{T}|\lambda )=P(i_{T}|i_{1},i_{2},\cdots ,i_{T-1},\lambda )\cdot P(i_{1},i_{2},\cdots ,i_{T-1}|\lambda )$$

By the homogeneous Markov assumption:

$$P(i_{t}|i_{1},i_{2},\cdots ,i_{t-1},\lambda )=P(i_{t}|i_{t-1})=a_{i_{t-1}i_{t}}$$

Therefore:

$$P(I|\lambda )=\pi (i_{1})\prod_{t=2}^{T}a_{i_{t-1}i_{t}}$$

For the term $P(O|I,\lambda )$, by the observation independence assumption:

$$P(O|I,\lambda )=\prod_{t=1}^{T}b_{i_{t}}(o_{t})$$

Therefore:

$$P(O|\lambda )=\sum _{I}\pi (i_{1})\prod_{t=2}^{T}a_{i_{t-1}i_{t}}\prod_{t=1}^{T}b_{i_{t}}(o_{t})=\underset{\text{complexity: }O(N^{T})}{\underbrace{\sum _{i_{1}}\sum _{i_{2}}\cdots \sum _{i_{T}}\pi (i_{1})\prod_{t=2}^{T}a_{i_{t-1}i_{t}}\prod_{t=1}^{T}b_{i_{t}}(o_{t})}}$$

The sum above runs over every possible state sequence $I$, so the complexity is $O(N^{T})$.

Now define:

$$\alpha _{t}(i)=P(o_{1},o_{2},\cdots ,o_{t},i_{t}=q_{i}|\lambda )$$

Hence:

$$\alpha _{T}(i)=P(O,i_{T}=q_{i}|\lambda )$$

Therefore:

$$P(O|\lambda )=\sum_{i=1}^{N}P(O,i_{T}=q_{i}|\lambda )=\sum_{i=1}^{N}\alpha _{T}(i)$$

For $\alpha _{t+1}(j)$:

$$\alpha _{t+1}(j)=P(o_{1},\cdots ,o_{t},o_{t+1},i_{t+1}=q_{j}|\lambda )\\ =\sum_{i=1}^{N}P(o_{1},\cdots ,o_{t},o_{t+1},i_{t+1}=q_{j},i_{t}=q_{i}|\lambda )\\ =\sum_{i=1}^{N}{\color{Red}{P(o_{t+1}|o_{1},\cdots ,o_{t},i_{t}=q_{i},i_{t+1}=q_{j},\lambda )}}P(o_{1},\cdots ,o_{t},i_{t}=q_{i},i_{t+1}=q_{j}|\lambda )\\ =\sum_{i=1}^{N}{\color{Red}{P(o_{t+1}|i_{t+1}=q_{j},\lambda )}}P(o_{1},\cdots ,o_{t},i_{t}=q_{i},i_{t+1}=q_{j}|\lambda )\\ =\sum_{i=1}^{N}{\color{Red}{b_{j}(o_{t+1})}}{\color{Blue}{P(i_{t+1}=q_{j}|o_{1},\cdots ,o_{t},i_{t}=q_{i},\lambda )}}{\color{DarkOrange}{P(o_{1},\cdots ,o_{t},i_{t}=q_{i}|\lambda )}}\\ =\sum_{i=1}^{N}{\color{Red}{b_{j}(o_{t+1})}}{\color{Blue}{P(i_{t+1}=q_{j}|i_{t}=q_{i},\lambda )}}{\color{Orange}{\alpha _{t}(i)}}\\ =\sum_{i=1}^{N}{\color{Red}{b_{j}(o_{t+1})}}{\color{Blue}{a_{ij}}}{\color{Orange}{\alpha _{t}(i)}}$$

Using the two assumptions, we obtained a recurrence. This is the forward algorithm, with complexity $O(TN^{2})$.
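As a concrete illustration, here is a minimal NumPy sketch of the forward recursion; the parameters $\pi$, $A$, $B$ and the observation sequence below are made-up toy values, not taken from the text:

```python
import numpy as np

# Toy HMM with N = 2 hidden states and M = 2 observation symbols (illustrative).
pi = np.array([0.6, 0.4])                 # initial distribution
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])                # transition matrix a_ij
B = np.array([[0.5, 0.5],
              [0.1, 0.9]])                # emission matrix b_j(k)
obs = [0, 1, 0]                           # observation sequence o_1, ..., o_T

def forward(pi, A, B, obs):
    """Return alpha, where alpha[t, i] = P(o_1..o_{t+1}, i_{t+1} = q_i | lambda)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(o_1)
    for t in range(1, T):
        # alpha_{t+1}(j) = b_j(o_{t+1}) * sum_i alpha_t(i) * a_ij
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
    return alpha

alpha = forward(pi, A, B, obs)
prob = alpha[-1].sum()                    # P(O | lambda) = sum_i alpha_T(i)
```

Each step is a single matrix-vector product, so the total cost is $O(TN^{2})$, matching the analysis above.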

2. The Backward Algorithm

Define:

$$\beta _{t}(i)=P(o_{t+1},\cdots ,o_{T}|i_{t}=q_{i},\lambda )$$

Then:

$$P(O|\lambda )=P(o_{1},\cdots ,o_{T}|\lambda )\\ =\sum_{i=1}^{N}P(o_{1},\cdots ,o_{T},i_{1}=q_{i}|\lambda )\\ =\sum_{i=1}^{N}P(o_{1},\cdots ,o_{T}|i_{1}=q_{i},\lambda )\underset{\pi _{i}}{\underbrace{P(i_{1}=q_{i}|\lambda )}}\\ =\sum_{i=1}^{N}P(o_{1}|o_{2},\cdots ,o_{T},i_{1}=q_{i},\lambda )\underset{\beta _{1}(i)}{\underbrace{P(o_{2},\cdots ,o_{T}|i_{1}=q_{i},\lambda )}}\pi _{i}\\ =\sum_{i=1}^{N}P(o_{1}|i_{1}=q_{i},\lambda )\beta _{1}(i)\pi _{i}\\ =\sum_{i=1}^{N}b_{i}(o_{1})\beta _{1}(i)\pi _{i}$$

Thus, if we can find a recurrence between $\beta _{t}(i)$ and $\beta _{t+1}(j)$, we can obtain $\beta _{1}(i)$ by recursion and thereby compute $P(O|\lambda )$:

$$\beta _{t}(i)=P(o_{t+1},\cdots ,o_{T}|i_{t}=q_{i},\lambda )\\ =\sum_{j=1}^{N}P(o_{t+1},\cdots ,o_{T},i_{t+1}=q_{j}|i_{t}=q_{i},\lambda )\\ =\sum_{j=1}^{N}{\color{Red}{P(o_{t+1},\cdots ,o_{T}|i_{t+1}=q_{j},i_{t}=q_{i},\lambda)}}{\color{Blue}{P(i_{t+1}=q_{j}|i_{t}=q_{i},\lambda )}}\\ =\sum_{j=1}^{N}{\color{Red}{P(o_{t+1},\cdots ,o_{T}|i_{t+1}=q_{j},\lambda)}}{\color{Blue}{a_{ij}}}\\ =\sum_{j=1}^{N}{\color{Orange}{P(o_{t+1}|o_{t+2},\cdots ,o_{T},i_{t+1}=q_{j},\lambda)}}{\color{Orchid}{P(o_{t+2},\cdots ,o_{T}|i_{t+1}=q_{j},\lambda)}}{\color{Blue}{a_{ij}}}\\ =\sum_{j=1}^{N}{\color{Orange}{P(o_{t+1}|i_{t+1}=q_{j},\lambda)}}{\color{Orchid}{\beta _{t+1}(j)}}{\color{Blue}{a_{ij}}}\\ =\sum_{j=1}^{N}{\color{Orange}{b_{j}(o_{t+1})}}{\color{Blue}{a_{ij}}}{\color{Orchid}{\beta _{t+1}(j)}}$$

The step in red uses the head-to-tail structure of directed graphical models:


A head-to-tail structure satisfies:

$$A\perp C\mid B \Leftrightarrow \text{once } B \text{ is observed, the path is blocked}$$

This gives us the recurrence. This is the backward algorithm, whose complexity is also $O(TN^{2})$.
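A matching sketch of the backward recursion (the toy parameters are again hypothetical); as a sanity check, it must yield the same $P(O|\lambda )$ as the forward algorithm:

```python
import numpy as np

# Toy HMM (illustrative numbers, same shape as in the text: pi, A, B).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],
              [0.1, 0.9]])
obs = [0, 1, 0]

def backward(pi, A, B, obs):
    """Return beta, where beta[t, i] = P(o_{t+2}..o_T | i_{t+1} = q_i, lambda)."""
    T, N = len(obs), len(pi)
    beta = np.ones((T, N))                # beta_T(i) = 1 by convention
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j b_j(o_{t+1}) * a_ij * beta_{t+1}(j)
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

beta = backward(pi, A, B, obs)
# P(O | lambda) = sum_i b_i(o_1) * beta_1(i) * pi_i
prob = np.sum(pi * B[:, obs[0]] * beta[0])
```

With the same toy parameters, this `prob` agrees with the forward-algorithm value, as the two derivations require.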

III. The Learning Problem

The goal of the Learning problem is to estimate the parameters $\lambda$, using the Baum-Welch algorithm (which is the EM algorithm).

The EM iteration is:

$$\theta ^{(t+1)}=\underset{\theta }{argmax}\int _{Z}\log P(X,Z|\theta )\cdot P(Z|X,\theta ^{(t)})\,\mathrm{d}Z$$

In the hidden Markov model, the latent variable $Z$ is $I$, the observed variable $X$ is $O$, and the parameter $\theta$ is $\lambda$, so the EM iteration for the HMM can be written as:

$$\lambda ^{(t+1)}=\underset{\lambda}{argmax}\sum _{I}\log P(O,I|\lambda )\cdot P(I|O,\lambda ^{(t)})$$

Here $P(I|O,\lambda ^{(t)})=\frac{P(O,I|\lambda ^{(t)})}{P(O|\lambda ^{(t)})}$. Since the observation sequence $O$ is known in the Learning problem, $P(O|\lambda ^{(t)})$ is a constant and does not affect the argmax, so the iteration can be written as:

$$\lambda ^{(t+1)}=\underset{\lambda}{argmax}\sum _{I}\log P(O,I|\lambda )\cdot P(O,I|\lambda ^{(t)})$$

Using the earlier factorization, we rearrange the $Q$ function:

$$Q(\lambda ,\lambda ^{(t)})=\sum _{I}\log P(O,I|\lambda )\cdot P(O,I|\lambda ^{(t)})\\ =\sum _{I}\left[\log \left(\pi (i_{1})\prod_{t=2}^{T}a_{i_{t-1}i_{t}}\prod_{t=1}^{T}b_{i_{t}}(o_{t})\right)\cdot P(O,I|\lambda ^{(t)})\right]\\ =\sum _{I}\left[\left(\log \pi (i_{1})+\sum_{t=2}^{T}\log a_{i_{t-1}i_{t}}+\sum_{t=1}^{T}\log b_{i_{t}}(o_{t})\right)\cdot P(O,I|\lambda ^{(t)})\right]$$

As an example, we derive the update for $\pi ^{(t+1)}$:

$$\pi ^{(t+1)}=\underset{\pi }{argmax}\; Q(\lambda ,\lambda ^{(t)})\\ =\underset{\pi }{argmax}\sum _{I}\log \pi (i_{1})\cdot P(O,I|\lambda ^{(t)})\\ =\underset{\pi }{argmax}\sum _{i_{1}}\sum _{i_{2}}\cdots \sum _{i_{T}}\log \pi (i_{1})\cdot P(O,i_{1},i_{2},\cdots ,i_{T}|\lambda ^{(t)})\\ =\underset{\pi }{argmax}\sum _{i_{1}}\log \pi (i_{1})\cdot P(O,i_{1}|\lambda ^{(t)})\\ =\underset{\pi }{argmax}\sum _{i=1}^{N}\log \pi _{i}\cdot P(O,i_{1}=q_{i}|\lambda ^{(t)})$$

Incorporating the constraint $\sum_{i=1}^{N}\pi _{i}=1$, we construct the Lagrangian:

$$L(\pi ,\eta )=\sum _{i=1}^{N}\log \pi _{i}\cdot P(O,i_{1}=q_{i}|\lambda ^{(t)})+\eta \left(\sum_{i=1}^{N}\pi _{i}-1\right)$$

Then differentiate with respect to $\pi _{i}$:

$$\frac{\partial L}{\partial \pi _{i}}=\frac{1}{\pi _{i}}P(O,i_{1}=q_{i}|\lambda ^{(t)})+\eta =0 \\ \Rightarrow P(O,i_{1}=q_{i}|\lambda ^{(t)})+\pi _{i}\eta =0\\ \Rightarrow \sum_{i=1}^{N}\left[P(O,i_{1}=q_{i}|\lambda ^{(t)})+\pi _{i}\eta \right]=0\\ \Rightarrow P(O|\lambda ^{(t)})+\eta =0\\ \Rightarrow \eta =-P(O|\lambda ^{(t)})$$

Substituting back into $P(O,i_{1}=q_{i}|\lambda ^{(t)})+\pi _{i}\eta =0$:

$$\pi ^{(t+1)}_{i}=\frac{P(O,i_{1}=q_{i}|\lambda ^{(t)})}{P(O|\lambda ^{(t)})}$$

Similarly, $A^{(t+1)}$ and $B^{(t+1)}$ can be derived in the same way; iterating until convergence yields the model parameters.
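To make the $\pi$ update concrete: since $P(O,i_{1}=q_{i}|\lambda ^{(t)})=\alpha _{1}(i)\beta _{1}(i)$, one E-step's worth of the update can be computed from a single forward and backward pass. A minimal sketch with hypothetical toy parameters (not a full Baum-Welch loop):

```python
import numpy as np

pi = np.array([0.6, 0.4])                 # current estimate pi^(t)
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
obs = [0, 1, 0]
T, N = len(obs), len(pi)

# Forward pass: alpha[t, i] = P(o_1..o_{t+1}, state q_i | lambda^(t))
alpha = np.zeros((T, N))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)

# Backward pass: beta[t, i] = P(o_{t+2}..o_T | state q_i, lambda^(t))
beta = np.ones((T, N))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

# pi^(t+1)_i = P(O, i_1=q_i | lambda^(t)) / P(O | lambda^(t))
#            = alpha_1(i) * beta_1(i) / P(O | lambda^(t))
prob = alpha[-1].sum()
pi_new = alpha[0] * beta[0] / prob
```

The division by $P(O|\lambda ^{(t)})$ is exactly the normalization the Lagrange multiplier produced, so `pi_new` sums to one by construction.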

IV. The Decoding Problem

The Decoding problem is: given the observation sequence $O$ and the parameters $\lambda$, find the state sequence $I$ that maximizes $P(I|O)$, i.e.:

$$\hat{I}=\underset{I}{argmax}\; P(I|O)$$

We solve this with dynamic programming (the Viterbi algorithm). First define:

$$\delta _{t}(i)={\color{Red}{\underset{i_{1},i_{2},\cdots ,i_{t-1}}{max}}}P(o_{1},o_{2},\cdots ,o_{t},i_{1},i_{2},\cdots ,i_{t-1},i_{t}=q_{i})$$

Since the parameters $\lambda$ are known, we omit $\lambda$ for brevity. Next we need a recurrence between $\delta _{t+1}(j)$ and $\delta _{t}(i)$:

$$\delta _{t+1}(j)=\underset{i_{1},i_{2},\cdots ,i_{t}}{max}P(o_{1},o_{2},\cdots ,o_{t+1},i_{1},i_{2},\cdots ,i_{t},i_{t+1}=q_{j})\\ ={\color{Red}{\underset{1\leq i\leq N}{max}}}\delta _{t}(i)a_{ij}b_{j}(o_{t+1})$$

This gives the dynamic-programming recurrence. We also need to record the path (the global optimum value is unique, but the optimal path is not necessarily unique), so we define:

$$\psi _{t+1}(j)={\color{Red}{\underset{1\leq i\leq N}{argmax}}}\; \delta _{t}(i)a_{ij}$$

Therefore:

$$max\; P(I|O)=\underset{1\leq i\leq N}{max}\; \delta _{T}(i)$$

The index $i$ maximizing $\delta _{T}(i)$ gives the final state $i_{T}=q_{i}$; then $\psi _{T}(i)$ gives the state $i_{T-1}$ at time $T-1$, and we continue backwards through $i_{T-2}$ and so on, finally recovering the entire sequence $I$.
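The recursion and backtracking above can be sketched as follows (the toy parameters are made up for illustration):

```python
import numpy as np

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
obs = [0, 1, 0]

def viterbi(pi, A, B, obs):
    """Return the most probable state sequence for obs."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))               # delta[t, j]: best path prob ending in state j
    psi = np.zeros((T, N), dtype=int)      # psi[t, j]: backpointer to previous state
    delta[0] = pi * B[:, obs[0]]           # delta_1(i) = pi_i * b_i(o_1)
    for t in range(1, T):
        # delta_{t+1}(j) = max_i delta_t(i) * a_ij * b_j(o_{t+1})
        scores = delta[t - 1][:, None] * A # scores[i, j] = delta_t(i) * a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # Backtrack from the best final state using the recorded path.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

path = viterbi(pi, A, B, obs)
```

Like the forward algorithm, each step touches every $(i,j)$ pair once, so the cost is $O(TN^{2})$ instead of the brute-force $O(N^{T})$.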

V. Summary


The HMM is a dynamic model, formed by combining a mixture model with temporal structure (roughly GMM + time). For state space models like the HMM, besides the learning task (solved with EM), there are also inference tasks.

Writing $X$ for the observation sequence, $Z$ for the hidden sequence, and $\lambda$ for the parameters, the problems this class of models must solve fit the following framework:

$$\left\{\begin{matrix} Learning:\lambda _{MLE}=\underset{\lambda }{argmax}\; P(X|\lambda )\;{\color{Blue}{[Baum\text{-}Welch\; Algorithm\; (EM)]}}\\ Inference\left\{\begin{matrix} Decoding:Z=\underset{Z}{argmax}\; P(Z|X,\lambda )\;{\color{Blue}{[Viterbi\; Algorithm]}}\\ Prob\; of\; evidence:P(X|\lambda )\;{\color{Blue}{[Forward/Backward\; Algorithm]}}\\ Filtering:P(z_{t}|x_{1},x_{2},\cdots ,x_{t},\lambda )\;{\color{Blue}{[Forward\; Algorithm]}}\\ Smoothing:P(z_{t}|x_{1},x_{2},\cdots ,x_{T},\lambda )\;{\color{Blue}{[Forward\text{-}Backward\; Algorithm]}}\\ Prediction:\begin{Bmatrix} P(z_{t+1}|x_{1},x_{2},\cdots ,x_{t},\lambda )\\ P(x_{t+1}|x_{1},x_{2},\cdots ,x_{t},\lambda ) \end{Bmatrix}{\color{Blue}{[Forward\; Algorithm]}} \end{matrix}\right. \end{matrix}\right.$$

Below we make some remarks on the Filtering, Smoothing, and Prediction problems. We write $x_{1:t}$ for $x_{1},x_{2},\cdots ,x_{t}$ and omit the known parameter $\lambda$.

1. The Filtering Problem

$$P(z_{t}|x_{1:t})=\frac{P(x_{1:t},z_{t})}{P(x_{1:t})}=\frac{P(x_{1:t},z_{t})}{\sum _{z_{t}}P(x_{1:t},z_{t})} \propto P(x_{1:t},z_{t})=\alpha _{t}$$

Therefore the Forward Algorithm solves the Filtering problem.

Filtering typically arises in online learning: each time a new observation arrives, we can compute $P(z_{t}|x_{1:t})$.
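In code, filtering is just the forward recursion followed by a normalization of $\alpha _{t}$ (a sketch with hypothetical toy parameters):

```python
import numpy as np

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
obs = [0, 1, 0]

# Forward recursion, keeping only the current alpha_t (suits online updates).
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = B[:, o] * (alpha @ A)

# P(z_t | x_{1:t}) = alpha_t / sum_{z_t} alpha_t
filtered = alpha / alpha.sum()
```

Because only the current $\alpha _{t}$ is kept, each new observation costs a single $O(N^{2})$ update, which is what makes the online setting practical.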

2. The Smoothing Problem

$$P(z_{t}|x_{1:T})=\frac{P(x_{1:T},z_{t})}{P(x_{1:T})}=\frac{P(x_{1:T},z_{t})}{\sum _{z_{t}}P(x_{1:T},z_{t})}$$

where:

$$P(x_{1:T},z_{t})=P(x_{1:t},x_{t+1:T},z_{t})\\ ={\color{Red}{P(x_{t+1:T}|x_{1:t},z_{t})}}\cdot \underset{\alpha _{t}}{\underbrace{P(x_{1:t},z_{t})}}\\ =\underset{\beta _{t}}{\underbrace{{\color{Red}{P(x_{t+1:T}|z_{t})}}}}\cdot \alpha _{t}\\ =\alpha _{t}\beta _{t}$$

The step in red uses D-separation in directed graphs (see Post 9, Probabilistic Graphical Models). Here let the set $A$ be $x_{1:t}$, $B$ be $x_{t+1:T}$, and $C$ be $z_{t}$; D-separation gives $x_{A}\perp x_{B}|x_{C}$, i.e., $x_{t+1:T}$ and $x_{1:t}$ are conditionally independent given $z_{t}$.

From the expressions above we conclude:

$$P(z_{t}|x_{1:T})\propto P(x_{1:T},z_{t})=\alpha _{t}\beta _{t}$$

Hence the algorithm that solves the Smoothing problem is called the Forward-Backward Algorithm.

Smoothing typically arises in offline learning: with all observations in hand, we compute $P(z_{t}|x_{1:T})$.
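A smoothing sketch combining both passes, computing $\alpha _{t}\beta _{t}$ and normalizing for every $t$ (toy parameters are hypothetical):

```python
import numpy as np

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
obs = [0, 1, 0]
T, N = len(obs), len(pi)

# Forward pass for alpha_t.
alpha = np.zeros((T, N))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)

# Backward pass for beta_t.
beta = np.ones((T, N))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

# P(z_t | x_{1:T}) proportional to alpha_t * beta_t; normalize each row.
gamma = alpha * beta
gamma = gamma / gamma.sum(axis=1, keepdims=True)
```

Each row of `gamma` is a valid distribution over states, since $\sum_{z_{t}}\alpha _{t}\beta _{t}=P(x_{1:T})$ for every $t$.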

3. The Prediction Problem

$$P(z_{t+1}|x_{1:t})=\sum _{z_{t}}P(z_{t+1},z_{t}|x_{1:t})\\ =\sum _{z_{t}}P(z_{t+1}|z_{t},x_{1:t})\cdot P(z_{t}|x_{1:t})\\ =\sum _{z_{t}}P(z_{t+1}|z_{t})\cdot \underset{Filtering}{\underbrace{P(z_{t}|x_{1:t})}}$$

The above uses the homogeneous Markov assumption to reduce the prediction of $P(z_{t+1}|x_{1:t})$ to the transition probabilities and the solution of the Filtering problem.

$$P(x_{t+1}|x_{1:t})=\sum _{z_{t+1}}P(x_{t+1},z_{t+1}|x_{1:t})\\ =\sum _{z_{t+1}}P(x_{t+1}|z_{t+1},x_{1:t})\cdot P(z_{t+1}|x_{1:t})\\ =\sum _{z_{t+1}}P(x_{t+1}|z_{t+1})\cdot \underset{Prediction}{\underbrace{P(z_{t+1}|x_{1:t})}}$$

The above uses the observation independence assumption to reduce the prediction of $P(x_{t+1}|x_{1:t})$ to the emission probabilities and the solution of the previous Prediction problem.
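Both prediction quantities follow from the filtered distribution with one matrix product each (a sketch with hypothetical toy parameters):

```python
import numpy as np

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
obs = [0, 1, 0]

# Filtering first: P(z_t | x_{1:t}) via the normalized forward variable.
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = B[:, o] * (alpha @ A)
filtered = alpha / alpha.sum()

# P(z_{t+1} | x_{1:t}) = sum_{z_t} P(z_{t+1} | z_t) * P(z_t | x_{1:t})
z_next = filtered @ A
# P(x_{t+1} | x_{1:t}) = sum_{z_{t+1}} P(x_{t+1} | z_{t+1}) * P(z_{t+1} | x_{1:t})
x_next = z_next @ B
```

Since the rows of $A$ and $B$ each sum to one, both `z_next` and `x_next` are valid probability distributions.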
