Linear Classification Explained: Naive Bayes Classifier [Whiteboard Derivation Series Notes]



Naive Bayes makes an assumption about the relationship between the data attributes: the attribute dimensions are mutually independent (given the class).

 

In NB we assume $X$ is discrete and follows a multinomial distribution (which includes the Bernoulli). In GDA, $X$ can be modeled with a multivariate Gaussian, but in NB we cannot directly use a single multinomial distribution over the whole vector. We use a spam classifier to illustrate the idea behind NB.

In this classifier we can use a word vector as the input feature. Concretely, if our dictionary contains 50000 words in total, the vector $x$ for an email can be

$$x=\left[\begin{matrix}1\\0\\0\\\vdots\\1\\\vdots\\0\end{matrix}\right]\begin{matrix}a\\aardvark\\aardwolf\\\vdots\\buy\\\vdots\\zen\end{matrix}$$

$x$ is a $50000$-dimensional vector: if a word from the dictionary appears in the email, the corresponding entry is set to $1$; otherwise it is $0$.

If we modeled $p(x|y)$ directly with a multinomial distribution, $p(x|y)$ would take $2^{50000}$ distinct values, so we would need at least $2^{50000}-1$ parameters to make them sum to $1$. Estimating this many parameters is infeasible, so we make a strong assumption to simplify the probability model.

Since each dimension can take one of the two values $0$ and $1$, there are $2^{50000}$ possible combinations.

Author: rushshi

Link: 高斯判别分析(GDA)和朴素贝叶斯(NB)_rushshi的博客-CSDN博客 (Gaussian Discriminant Analysis (GDA) and Naive Bayes (NB), rushshi's CSDN blog)
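The feature construction above can be sketched in a few lines of Python; the tiny vocabulary here is a hypothetical stand-in for the 50000-word dictionary:

```python
# Build a binary bag-of-words vector for one email, as in the example above.
vocab = ["a", "aardvark", "aardwolf", "buy", "zen"]  # toy stand-in vocabulary
word_to_index = {w: i for i, w in enumerate(vocab)}

def email_to_vector(email_words, word_to_index):
    """Return x with x[j] = 1 iff the j-th vocabulary word appears in the email."""
    x = [0] * len(word_to_index)
    for w in email_words:
        j = word_to_index.get(w)
        if j is not None:
            x[j] = 1  # presence, not count: repeated words still give 1
    return x

x = email_to_vector(["buy", "a", "buy", "unknownword"], word_to_index)
# x == [1, 0, 0, 1, 0]; words outside the vocabulary are ignored
```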

 

$$\left\{(x_{i},y_{i})\right\}_{i=1}^{N},\quad x_{i}\in \mathbb{R}^{p},\ y_{i}\in \left\{0,1\right\}$$

Naive Bayes assumes that every dimension is independent given the class, so we have

$$\begin{aligned} p(x_{1},\cdots ,x_{p}|y)&=p(x_{1}|y)p(x_{2}|y,x_{1})\cdots p(x_{p}|y,x_{1},\cdots ,x_{p-1})\\ &=p(x_{1}|y)p(x_{2}|y)\cdots p(x_{p}|y)\quad\text{(dimensions independent, by the naive Bayes assumption)}\\ &=\prod\limits_{j=1}^{p}p(x_{j}|y) \end{aligned}$$

Here we first need to assume

$$\begin{aligned} y &\sim B(1,\phi_{y})\\ &\Rightarrow p(y)=\phi_{y}^{y}(1-\phi_{y})^{1-y}\\ p(x_{j}=1|y=0)&=\phi_{j|y=0}\\ p(x_{j}=1|y=1)&=\phi_{j|y=1}\\ \phi_{j|y}&=\phi_{j|y=1}^{y}\,\phi_{j|y=0}^{1-y}\\ p(x_{j}|y)&=\phi_{j|y}^{x_{j}}(1-\phi_{j|y})^{1-x_{j}} \end{aligned}$$
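As a sanity check on the last line, the per-dimension Bernoulli likelihood can be evaluated in log space (a minimal sketch; the parameter values below are made up):

```python
import math

def log_p_x_given_y(x, phi_cond):
    """log p(x|y) = sum_j [ x_j * log(phi_{j|y}) + (1 - x_j) * log(1 - phi_{j|y}) ]."""
    return sum(
        xj * math.log(p) + (1 - xj) * math.log(1 - p)
        for xj, p in zip(x, phi_cond)
    )

# Hypothetical parameters for one class and a two-dimensional x.
phi = [0.5, 0.25]
x = [1, 0]
# p(x|y) = 0.5 * (1 - 0.25) = 0.375
p_x = math.exp(log_p_x_given_y(x, phi))
```

Working in log space avoids underflow, which matters once $p$ is in the tens of thousands as in the spam example.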

The log-likelihood function is

$$\begin{aligned} L(\phi_{y},\phi_{j|y=0},\phi_{j|y=1})&=\log \prod\limits_{i=1}^{N}p(x_{i},y_{i})\\ &=\log \prod\limits_{i=1}^{N}p(x_{i}|y_{i})p(y_{i})\\ &=\log \prod\limits_{i=1}^{N}\left(\prod\limits_{j=1}^{p}p(x_{ij}|y_{i})\right) p(y_{i})\\ &=\sum\limits_{i=1}^{N}\left[\log p(y_{i})+\sum\limits_{j=1}^{p}\log p(x_{ij}|y_{i})\right]\\ &=\sum\limits_{i=1}^{N}\left[\underbrace{y_{i}\log \phi_{y}+(1-y_{i})\log (1-\phi_{y})}_{(1)}+\underbrace{\sum\limits_{j=1}^{p}[(x_{ij}\log \phi_{j|y_{i}})+(1-x_{ij})\log (1-\phi_{j|y_{i}})]}_{(2)}\right] \end{aligned}$$

For $\phi_{j|y=0}$: term $(2)$ appears once for every sample $i$, and only the samples with $y_{i}=0$ involve $\phi_{j|y=0}$. Summing over the samples and differentiating with respect to $\phi_{j|y=0}$ for a fixed $j$ gives

$$\begin{aligned} \frac{\partial L}{\partial \phi_{j|y=0}}&=\sum\limits_{i=1}^{N}\left[x_{ij} \frac{1}{\phi_{j|y=0}}-\left(1-x_{ij}\right) \frac{1}{1-\phi_{j|y=0}}\right]1\left\{y_{i}=0\right\}=0\\ 0&=\sum\limits_{i=1}^{N}\left[(x_{ij}-\phi_{j|y=0})\,1\left\{y_{i}=0\right\}\right]\\ 0&=\sum\limits_{i=1}^{N}1\left\{x_{ij}=1\land y_{i}=0\right\}-\phi_{j|y=0}\sum\limits_{i=1}^{N}1\left\{y_{i}=0\right\}\\ \widehat{\phi_{j|y=0}}&=\frac{\sum\limits_{i=1}^{N}1\left\{x_{ij}=1 \land y_{i}=0\right\}}{\sum\limits_{i=1}^{N}1\left\{y_{i}=0\right\}} \end{aligned}$$

 

Indicator function:

$$1_{A}(x)=\begin{cases}1, & x \in A\\0, & x \notin A\end{cases}$$

It can also be written as $I_{A}(x)$ or $\chi_{A}(x)$.

The indicator function here has an analogous substitute in GDA, namely

$$\begin{gathered}C_{1}=\left\{x_{i}\mid y_{i}=1,\ i=1,2,\cdots,N\right\},\quad|C_{1}|=N_{1}\\C_{0}=\left\{x_{i}\mid y_{i}=0,\ i=1,2,\cdots,N\right\},\quad|C_{0}|=N_{0}\\\sum\limits_{x_{i}\in C_{1}},\quad\sum\limits_{x_{i}\in C_{0}}\end{gathered}$$

 

$\widehat{\phi_{j|y=0}}$ can be read as the number of samples with $y=0$ whose $j$-th dimension equals $1$, divided by the total number of samples with $y=0$.

Similarly, we obtain $\widehat{\phi_{j|y=1}}$:

$$\widehat{\phi_{j|y=1}}=\frac{\sum\limits_{i=1}^{N}1\left\{x_{ij}=1\land y_{i}=1\right\}}{\sum\limits_{i=1}^{N}1\left\{y_{i}=1\right\}}$$
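These two estimators are just counting, so they translate directly into code. A minimal sketch, using a made-up toy dataset:

```python
def estimate_phi(X, y, label):
    """MLE for phi_{j|y=label}: among samples with y_i == label, the fraction
    whose j-th feature equals 1 (the sums run over samples i, not features j)."""
    n_label = sum(1 for yi in y if yi == label)
    p = len(X[0])
    return [
        sum(1 for xi, yi in zip(X, y) if yi == label and xi[j] == 1) / n_label
        for j in range(p)
    ]

# Toy data: four samples, two binary features.
X = [[1, 0], [1, 1], [0, 0], [1, 0]]
y = [0, 0, 1, 1]
# Among the two y=0 samples, feature 0 is always 1 and feature 1 is 1 once.
# estimate_phi(X, y, 0) == [1.0, 0.5]
```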

For $\phi_{y}$:

$$\begin{aligned} (1)&=\sum\limits_{i=1}^{N}[y_{i}\log \phi_{y}+(1-y_{i})\log (1-\phi_{y})]\\ \frac{\partial (1)}{\partial \phi_{y}}&=\sum\limits_{i=1}^{N}\left[y_{i} \frac{1}{\phi_{y}}-\left(1-y_{i}\right) \frac{1}{1-\phi_{y}}\right]=0\\ 0&=\sum\limits_{i=1}^{N}[y_{i}(1-\phi_{y})-(1-y_{i})\phi_{y}]\\ 0&=\sum\limits_{i=1}^{N}(y_{i}-\phi_{y})\\ \hat{\phi_{y}}&=\frac{\sum\limits_{i=1}^{N}1\left\{y_{i}=1\right\}}{N} \end{aligned}$$

Here we assumed each $x_{j}$ can only equal $0$ or $1$, but in practice $x$ often follows a categorical distribution. The reasoning is the same; there are simply more parameters to estimate, so we omit that derivation here.
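Putting the estimates together gives a complete, if minimal, Bernoulli naive Bayes classifier. This sketch adds Laplace smoothing, which is not part of the derivation above, to avoid log(0) for feature values never seen in a class:

```python
import math

def fit_bernoulli_nb(X, y):
    """Estimate phi_y and phi_{j|y=c} from binary data, with add-one smoothing."""
    N, p = len(X), len(X[0])
    phi_y = sum(y) / N  # hat{phi_y} = (# samples with y=1) / N
    phi = {}
    for c in (0, 1):
        n_c = sum(1 for yi in y if yi == c)
        # Smoothed counting version of hat{phi_{j|y=c}}.
        phi[c] = [
            (sum(xi[j] for xi, yi in zip(X, y) if yi == c) + 1) / (n_c + 2)
            for j in range(p)
        ]
    return phi_y, phi

def predict(x, phi_y, phi):
    """argmax over y of log p(y) + sum_j log p(x_j|y)."""
    scores = {}
    for c, prior in ((0, 1 - phi_y), (1, phi_y)):
        scores[c] = math.log(prior) + sum(
            xj * math.log(pj) + (1 - xj) * math.log(1 - pj)
            for xj, pj in zip(x, phi[c])
        )
    return max(scores, key=scores.get)

# Toy data: class 0 tends to have the first feature on, class 1 the last.
X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]
y = [0, 0, 1, 1]
phi_y, phi = fit_bernoulli_nb(X, y)
# predict([1, 1, 0], phi_y, phi) == 0 and predict([0, 0, 1], phi_y, phi) == 1
```

For the categorical extension mentioned above, each `phi[c]` would hold one probability per feature value rather than a single Bernoulli parameter, but the counting logic is unchanged.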