Machine Learning Notes: A Smooth SVM Derivation and Implementation, from Hard Margin to Soft Margin to Kernel Functions

Preface

I finally started the machine learning lab I had been putting off forever, and went through the SVM derivation from definition to solution, including deriving the optimization objective, forming the dual problem, using the KKT conditions to recover the solution of the original problem, and the SMO algorithm.

Note: in a few places, such as the KKT conditions and the SMO algorithm, this post does not give proofs (too tedious) and simply uses the results; interested readers can consult the references.

References:

Video walkthrough of the derivation: (系列六) 支持向量机1-硬间隔SVM-模型定义_哔哩哔哩_bilibili

Conditions for Lagrangian duality: 拉格朗日对偶性详解(手推笔记)-CSDN博客

A geometric view of the KKT conditions and duality: 机器学习笔记(8)-对偶关系和KKT条件 - Epir - 博客园 (cnblogs.com)

Code reference: 统计学习:线性可分支持向量机(Cvxpy实现) - orion-orion - 博客园 (cnblogs.com)

I. The Optimization Problem

When a classification problem is linearly separable, we want to find the optimal hyperplane that separates the classes. This optimal hyperplane should satisfy the following property:

The margin between the hyperplane and the sample points closest to it should be as large as possible.

A simple example:

For a binary classification problem whose samples have two features (so each point can be drawn in the 2-D plane), the hyperplane is a one-dimensional straight line. We then want the line that separates the two classes and whose distance to the points closest to it is as large as possible.

The optimization problem can thus be stated as:

Given sample points $X = (x_1, x_2, ..., x_n)^T$, $X \in R^{n \times d}$, $x_i \in R^d$,
and labels $y = (y_1, y_2, ..., y_n)^T$, $y \in R^n$, $y_i \in \{-1, 1\}$ indicating the class of sample $x_i$,
find an optimal hyperplane $w^T x + b = 0$ that separates the samples with different values of $y_i$, i.e. solve

$$
\max_{w, b} \min_{x_i} \frac{| w^T x_i + b |}{\| w \|} \quad s.t. \quad y_i(w^T x_i + b) > 0
$$

Since $y_i(w^T x_i + b) = | w^T x_i + b |$ under the constraint, the problem is equivalent to

$$
\max_{w, b} \min_{x_i} \frac{y_i(w^T x_i + b)}{\| w \|} \quad s.t. \quad y_i(w^T x_i + b) > 0
$$

Since the inner minimization is over $x_i$, the factor $\| w \|$ does not depend on it and can be pulled out front:

$$
\max_{w, b} \frac{1}{\| w \|} \min_{x_i} y_i(w^T x_i + b) \quad s.t. \quad y_i(w^T x_i + b) > 0
$$

Since $y_i(w^T x_i + b) > 0$, there must exist some $\gamma > 0$ such that $\min_{x_i} y_i(w^T x_i + b) = \gamma$, i.e. $y_i(w^T x_i + b) \ge \gamma$.

Since rescaling $w$ and $b$ does not change the geometric distance from a sample point to the hyperplane, we can take $\gamma = 1$ for convenience.

Substituting $\min_{x_i} y_i(w^T x_i + b) = 1$ into the optimization problem above and updating the constraint accordingly, we get:

$$
\max_{w, b} \frac{1}{\| w \|} \quad s.t. \quad y_i(w^T x_i + b) \ge 1
$$

which is equivalent to

$$
\min_{w, b} \frac{1}{2} w^T w \quad s.t. \quad y_i(w^T x_i + b) \ge 1
$$

$$
\min_{w, b} \frac{1}{2} w^T w \quad s.t. \quad 1 - y_i(w^T x_i + b) \le 0
$$
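As a side note, this primal form can already be handed to an off-the-shelf convex solver. Below is a minimal sketch using cvxpy on a toy, linearly separable 2-D dataset; the data and variable names are purely illustrative and are not part of the derivation.

```python
import numpy as np
import cvxpy as cp

# toy linearly separable data: two well-separated blobs (illustrative only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, size=(20, 2)),
               rng.normal([-2, -2], 0.5, size=(20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])

# min 1/2 ||w||^2  s.t.  y_i (w^T x_i + b) >= 1
w, b = cp.Variable(2), cp.Variable()
objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w + b) >= 1]
cp.Problem(objective, constraints).solve()
print("w* =", w.value, "b* =", b.value)
```

The dual form derived next is what the implementation in Section V actually solves.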

II. The Dual Problem

1. Deriving the dual problem

Form the generalized Lagrangian of the problem above:

$$
\mathcal{L}(w, b, \alpha) = \frac{1}{2} w^T w + \sum_{i=1}^n \alpha_i (1 - y_i(w^T x_i + b)) \quad (\alpha_i \ge 0)
$$

By Lagrangian duality:

Lagrangian duality

For the problem

$$
\min_x f(x) \quad s.t. \quad c_i(x) \le 0, \ i = 1, ..., k; \quad h_j(x) = 0, \ j = 1, ..., l
$$

construct the generalized Lagrangian

$$
\mathcal{L}(x, \alpha, \beta) = f(x) + \sum_{i=1}^k \alpha_i c_i(x) + \sum_{j=1}^l \beta_j h_j(x) \quad (\alpha_i \ge 0)
$$

Then the original problem is equivalent to

$$
\min_x \max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta)
$$

Proof:

$$
\max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta) = \begin{cases} f(x), & x \text{ satisfies the constraints} \\ \infty, & x \text{ violates the constraints} \end{cases}
$$

$$
\min_x \max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta) \ \Leftrightarrow \ \min_x f(x) \ \text{ over } x \text{ satisfying the constraints}
$$

For any min-max problem $\min_a \max_b f(a, b)$,

the corresponding max-min problem $\max_b \min_a f(a, b)$ is its weak dual, i.e.

$$
\max_b \min_a f(a, b) \le \min_a \max_b f(a, b)
$$

Proof:

Since $\min_a f(a, b) \le f(a, b) \le \max_b f(a, b)$,

$$
\min_a f(a, b) \le \max_b f(a, b)
$$

Since this holds for every $a$ and $b$, it still holds when the left side is maximized over $b$ and the right side is minimized over $a$:

$$
\max_b \min_a f(a, b) \le \min_a \max_b f(a, b)
$$
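As a quick illustration that the inequality can be strict (a toy example of my own, not taken from the referenced notes): let $a, b \in \{1, 2\}$ with $f(1,1) = f(2,2) = 0$ and $f(1,2) = f(2,1) = 1$. Then

$$
\max_b \min_a f(a, b) = \max\{0, 0\} = 0 < 1 = \min\{1, 1\} = \min_a \max_b f(a, b)
$$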

When the problem satisfies the following two conditions, the inequality holds with equality, i.e. we have strong duality:

  1. The primal problem is a convex optimization problem
  2. The primal problem satisfies Slater's condition

Convex optimization problem: the objective function is convex and the feasible set is convex
Slater's condition: the inequality constraint functions are convex and there exists a point that strictly satisfies all of them

Since $\min_x \max_{\alpha, \beta} \mathcal{L}(x, \alpha, \beta)$ is an optimization problem over $x$,
while the dual $\max_{\alpha, \beta} \min_x \mathcal{L}(x, \alpha, \beta)$ is a problem over $\alpha$ and $\beta$,
when strong duality holds we can use the KKT conditions to map the solution of the dual problem back to a solution of the original problem.

The KKT conditions are stated as follows:

Let $\alpha^*, \beta^*$ be the solution of the dual problem and $x^*$ the solution of the primal problem; then they satisfy:

1. Feasibility

$$
c_i(x^*) \le 0, \ i = 1, ..., k; \qquad h_j(x^*) = 0, \ j = 1, ..., l; \qquad \alpha_i^* \ge 0
$$

2. Complementary slackness

$$
\alpha_i^* c_i(x^*) = 0, \quad i = 1, ..., k
$$

3. Stationarity (zero gradient)

$$
\nabla_x \mathcal{L}(x, \alpha^*, \beta^*) \big|_{x = x^*} = 0
$$

Putting this together, the dual of our original problem is

$$
\max_{\alpha} \min_{w, b} \mathcal{L}(w, b, \alpha)
$$

For the inner problem $\min_{w, b} \mathcal{L}(w, b, \alpha)$, setting the partial derivatives with respect to $w$ and $b$ to zero gives

$$
\nabla_w \mathcal{L}(w, b, \alpha) = w - \sum_{i=1}^n \alpha_i y_i x_i = 0
$$

$$
\nabla_b \mathcal{L}(w, b, \alpha) = -\sum_{i=1}^n \alpha_i y_i = 0
$$

so

$$
w = \sum_{i=1}^n \alpha_i y_i x_i
$$

$$
\sum_{i=1}^n \alpha_i y_i = 0
$$

Substituting these two equations into the dual problem gives

$$
\max_{\alpha} \ -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^n \alpha_i \quad s.t. \quad \sum_{i=1}^n \alpha_i y_i = 0, \quad \alpha_i \ge 0
$$

This maximization is equivalent to the following minimization:

$$
\min_{\alpha} \ \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j - \sum_{i=1}^n \alpha_i \quad s.t. \quad \sum_{i=1}^n \alpha_i y_i = 0, \quad \alpha_i \ge 0
$$
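Writing $Q_{ij} = y_i y_j x_i^T x_j$ (the label-weighted Gram matrix) and $\mathbf{1}$ for the all-ones vector, the same dual can be stated compactly as the standard quadratic program

$$
\min_{\alpha} \ \frac{1}{2} \alpha^T Q \alpha - \mathbf{1}^T \alpha \quad s.t. \quad y^T \alpha = 0, \quad \alpha \ge 0
$$

which is the form that is convenient to hand to a QP or convex solver.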

2. Solving it

Solve the problem above for the optimal $\alpha^*$ (typically with the SMO algorithm).

Then, by the KKT conditions, we have

Stationarity:

$$
\nabla_w \mathcal{L}(w^*, b^*, \alpha^*) = w^* - \sum_{i=1}^n \alpha_i^* y_i x_i = 0
$$

$$
w^* = \sum_{i=1}^n \alpha_i^* y_i x_i
$$

Complementary slackness:

$$
\alpha_i^* (1 - y_i((w^*)^T x_i + b^*)) = 0, \quad i = 1, 2, ..., n
$$

αi=0\forall \alpha_i^* = 0,由上,那么w=0w^* = 0,而w=0w^* = 0不是原问题的解,因而假设不成立。存在一个αj>0\alpha_j^* > 0

那么,取任一αj>0\alpha_j^* > 0,存在

yj((w)Txj+b)=1y_j((w^*)^T x_j + b^*) = 1

由于yj2=1y_j^2 = 1,等式左右乘以yjy_j,得到

(w)Txj+b=yj(w^*)^T x_j + b^* = y_j

b=yj(w)Txj=yjΣi=1nαiyixiTxjb^* = y_j - (w^*)^T x_j = y_j - \Sigma_{i=1}^n \alpha_i y_i x_i^T x_j

Hence the optimal hyperplane can be written as

$$
\sum_{i=1}^n \alpha_i^* y_i x_i^T x + b^* = 0
$$

and the decision function as

$$
f(x) = \mathrm{sign}\left(\sum_{i=1}^n \alpha_i^* y_i x_i^T x + b^*\right)
$$
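Assuming the dual has already been solved, recovering $w^*$, $b^*$ and evaluating the decision function takes only a few lines of numpy. The names below (`alpha_star`, `X` of shape `(n, d)`, `y` with entries in $\{-1, +1\}$) are illustrative; this is just the two formulas above written out:

```python
import numpy as np

def recover_hyperplane(alpha_star, X, y, tol=1e-6):
    # w* = sum_i alpha_i^* y_i x_i
    w_star = (alpha_star * y) @ X
    # any support vector (alpha_j^* > 0) satisfies y_j (w*^T x_j + b*) = 1
    j = int(np.argmax(alpha_star > tol))
    b_star = y[j] - X[j] @ w_star
    return w_star, b_star

def decision(x, w_star, b_star):
    # f(x) = sign(w*^T x + b*)
    return np.sign(w_star @ x + b_star)
```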

III. Soft Margin

1. Deriving the optimization problem

When samples of the two classes are slightly mixed together and the data is no longer linearly separable, the previous optimization problem has no solution. We therefore modify it to tolerate some error:

$$
\min_{w, b} \frac{1}{2} w^T w + loss
$$

If we define the $loss$ as

$$
loss = \begin{cases} 1, & y_i(w^T x_i + b) < 1 \\ 0, & y_i(w^T x_i + b) \ge 1 \end{cases}
$$

it is discontinuous and mathematically ill-behaved.

So instead we define the $loss$ as

$$
loss = \begin{cases} 1 - y_i(w^T x_i + b), & y_i(w^T x_i + b) < 1 \\ 0, & y_i(w^T x_i + b) \ge 1 \end{cases}
$$

i.e.

$$
loss = \max\{0, \ 1 - y_i(w^T x_i + b)\}
$$

ξi=1yi(wTxi+b)\xi_i = 1 - y_i(w^T x_i + b),由于样本无法完全满足原问题的约束yi(wTxi+b)>=1y_i(w^T x_i + b) >= 1,修改其约束为:

yi(wTxi+b)>=1ξi,ξi>=0y_i(w^T x_i + b) >= 1 - \xi_i, \quad \xi_i >= 0

因此,软间隔的SVM的优化问题为:

minw,b,ξ12wTw+CΣi=1nξis.t.yi(wTxi+b)>=1ξi,i=1,2,...,nξi>=0,i=1,2,...,nmin_{w, b, \xi} \frac{1}{2} w^T w + C \Sigma_{i=1}^n \xi_i \\ s.t. \quad y_i(w^T x_i + b) >= 1 - \xi_i, \quad i = 1, 2, ..., n\\ \quad \quad \quad \xi_i >= 0, \quad i = 1, 2, ..., n
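As with the hard-margin case, this primal can be fed to cvxpy directly. A minimal sketch (the toy data and the chosen value of `C` are illustrative only):

```python
import numpy as np
import cvxpy as cp

# toy, slightly overlapping two-class data (illustrative only)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([1, 1], 1.0, size=(20, 2)),
               rng.normal([-1, -1], 1.0, size=(20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])
C = 1.0

n, d = X.shape
w, b, xi = cp.Variable(d), cp.Variable(), cp.Variable(n)
# min 1/2 ||w||^2 + C * sum(xi)  s.t.  y_i (w^T x_i + b) >= 1 - xi_i, xi_i >= 0
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi, xi >= 0]
cp.Problem(objective, constraints).solve()
print("w* =", w.value, "b* =", b.value)
```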

2. Solving it

As before, form the generalized Lagrangian:

$$
\mathcal{L}(w, b, \xi, \alpha, \beta) = \frac{1}{2} w^T w + C \sum_{i=1}^n \xi_i - \sum_{i=1}^n \alpha_i (y_i(w^T x_i + b) - 1 + \xi_i) - \sum_{i=1}^n \beta_i \xi_i \quad (\alpha_i \ge 0, \ \beta_i \ge 0)
$$

w,b,ξiw, b, \xi_i求偏导,得

wL=wΣi=1nαiyixi=0\nabla_w \mathcal{L} = w - \Sigma_{i=1}^n \alpha_i y_i x_i = 0

bL=Σi=1nαiyi=0\nabla_b \mathcal{L} = - \Sigma_{i=1}^n \alpha_i y_i = 0

ξiL=Cαiβi=0\nabla_{\xi_i} \mathcal{L} = C - \alpha_i - \beta_i = 0

Substituting these into the dual problem $\max_{\alpha, \beta} \min_{w, b, \xi} \mathcal{L}$ gives

$$
\max_{\alpha} \ -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^n \alpha_i \quad s.t. \quad \sum_{i=1}^n \alpha_i y_i = 0, \quad C - \alpha_i - \beta_i = 0, \quad \alpha_i \ge 0, \quad \beta_i \ge 0
$$

From $C - \alpha_i - \beta_i = 0$ and $\beta_i \ge 0$ we get $C - \alpha_i \ge 0$, i.e. $\alpha_i \le C$.

The dual problem can therefore be written as

$$
\max_{\alpha} \ -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j x_i^T x_j + \sum_{i=1}^n \alpha_i \quad s.t. \quad \sum_{i=1}^n \alpha_i y_i = 0, \quad 0 \le \alpha_i \le C
$$

Solve this for the optimal $\alpha^*$.

As before, by the KKT conditions, pick any $\alpha_j^*$ satisfying $0 < \alpha_j^* < C$;

then we obtain

$$
w^* = \sum_{i=1}^n \alpha_i^* y_i x_i
$$

$$
b^* = y_j - (w^*)^T x_j = y_j - \sum_{i=1}^n \alpha_i^* y_i x_i^T x_j
$$

IV. Kernel Functions

When the samples are not linearly separable, we can map the low-dimensional data into a higher-dimensional space, where a separating hyperplane may be found.

In the dual problem we only need the inner products between pairs of sample points, but when the feature dimension (after the mapping) is large, computing these inner products directly is expensive.

Hence we introduce a kernel function that directly yields the inner product of two samples after they have been mapped into the high-dimensional space:

$$
K(x_i, x_j) = \Phi(x_i)^T \Phi(x_j)
$$

The objective in the dual problem can then be written as

$$
W(\alpha) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^n \alpha_i
$$

the optimal hyperplane as

$$
\sum_{i=1}^n \alpha_i^* y_i K(x_i, x) + b^* = 0
$$

and the decision function as

$$
f(x) = \mathrm{sign}\left(\sum_{i=1}^n \alpha_i^* y_i K(x_i, x) + b^*\right)
$$
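As a concrete example, a Gaussian (RBF) kernel is just a function of two sample vectors. Written as below, it could be passed to the `KF` parameter of the cvxpy implementation in the next section; this pairing is my own suggestion, since the runs shown later only use the linear kernel.

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.001):
    # K(x1, x2) = exp(-gamma * ||x1 - x2||^2)
    diff = x1 - x2
    return np.exp(-gamma * np.dot(diff, diff))

# usage sketch: kernelSVM = SVM(C=0.1, KF=rbf_kernel)
```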

V. Code Implementation

1. cvxpy implementation

import copy

import numpy as np
import cvxpy as cp
from sklearn import datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from random import choice


class SVM:
    def __init__(self, C=None, KF=None):
        # when self.C is not None, this is the soft-margin SVM
        self.C = C
        self.KF = KF

    def K(self, i, j):
        if self.KF and callable(self.KF):
            return self.KF(self.X_train[i], self.X_train[j])
        else:
            return self.X_train[i].T @ self.X_train[j]
    def object_func(self, alpha):
        sum = 0
        for i in range(self.n):
            for j in range(self.n):
                print("calculate entry: (%d, %d)" % (i, j))
                sum += alpha[i] * alpha[j] * self.y_train[i] * self.y_train[j] * self.K(i, j)
        return 0.5 * sum - cp.sum(alpha)

    def fit(self, X_train, y_train):
        self.X_train = copy.deepcopy(X_train)
        self.y_train = copy.deepcopy(y_train)
        self.n = self.X_train.shape[0]
        print("begin to construct the convex problem...")
        alpha = cp.Variable(self.n)
        objective = cp.Minimize(self.object_func(alpha))
        constraint = []
        if self.C:
            constraint = [alpha >= 0, alpha <= self.C, self.y_train @ alpha == 0]
        else:
            constraint = [alpha >= 0, self.y_train @ alpha == 0]

        print("convex problem have built...")
        prob = cp.Problem(objective, constraint)
        prob.solve(solver='CVXOPT')
        self.alpha_star = alpha.value

        print("dual problem have been solved!")
        # recover w and b via the KKT conditions
        self.w = np.zeros(self.X_train.shape[1])
        for i in range(self.n):
            self.w += X_train[i] * (self.alpha_star[i] * y_train[i])

        S_with_idx = None
        if self.C:
            S_with_idx = [(alpha_star_i, idx)
                          for idx, alpha_star_i in enumerate(self.alpha_star) if 0 < alpha_star_i < self.C]
        else:
            S_with_idx = [(alpha_star_i, idx)
                          for idx, alpha_star_i in enumerate(self.alpha_star) if alpha_star_i > 0]

        (_, s) = choice(S_with_idx)
        self.b = y_train[s]
        for i in range(self.n):
            self.b -= self.alpha_star[i] * y_train[i] * (X_train[i].T @ X_train[s])
    def pred(self, x):
        if self.KF and callable(self.KF):
            # kernel case: f(x) = sum_i alpha_i* y_i K(x_i, x) + b
            y = 0.0
            for i in range(self.n):
                y += self.alpha_star[i] * self.y_train[i] * self.KF(self.X_train[i], x)
            y += self.b
            return y
        else:
            return self.w.T @ x + self.b
    def acc(self, X_test, y_test):
        y_pred = []
        for x in X_test:
            y_hat = np.sign(self.pred(x))
            y_pred.append(y_hat)
        y_pred = np.array(y_pred)
        acc = accuracy_score(y_pred, y_test)
        return acc

Load a dataset and create the SVM instances:

X, y = datasets.load_digits(return_X_y=True)
# X, y = datasets.load_breast_cancer(return_X_y=True)
# X, y = datasets.load_wine(return_X_y=True)
# X, y = datasets.load_iris(return_X_y=True)


y = np.where(y == 1, y, -1)

print(X.shape, y.shape)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

classicSVM = SVM()
classicSVM.fit(X_train, y_train)
print("acc of classic SVM: ", classicSVM.acc(X_test, y_test))

C = 0.1
softSVM = SVM(C=C)
softSVM.fit(X_train, y_train)
print("acc of soft SVM: ", softSVM.acc(X_test, y_test))


# search for the best C
# for i in range(-10, 10):
#     C = pow(10, i)
#     softSVM = SVM(C=C)
#     softSVM.fit(X_train, y_train)
#     print("when C = %e, acc of softSVM SVM: %.4f"
#           % (C, softSVM.acc(X_test, y_test)))

I did not implement SMO, so this runs painfully slowly; I gave up on tuning it further.
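If you want to keep the cvxpy route but avoid the worst of the slowness, one option (a sketch I have not benchmarked, shown only to illustrate the idea) is to precompute the Gram matrix in numpy and give cvxpy a single quadratic form, rather than building the objective term by term in a Python double loop. `cp.psd_wrap` here is a cvxpy helper (available in recent versions) that skips the exact positive-semidefiniteness check, which can otherwise fail on Gram matrices due to numerical noise:

```python
    def object_func(self, alpha):
        # precompute K_ij = K(x_i, x_j) as a plain numpy matrix
        K = np.array([[self.K(i, j) for j in range(self.n)]
                      for i in range(self.n)])
        # Q_ij = y_i y_j K_ij, so the objective is 1/2 a^T Q a - 1^T a
        Q = np.outer(self.y_train, self.y_train) * K
        return 0.5 * cp.quad_form(alpha, cp.psd_wrap(Q)) - cp.sum(alpha)
```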

2. sklearn implementation

Yes, here I am just calling the library.

import numpy as np
from sklearn import datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_digits(return_X_y=True)
# X, y = datasets.load_breast_cancer(return_X_y=True)
# X, y = datasets.load_wine(return_X_y=True)
# X, y = datasets.load_iris(return_X_y=True)
y = np.where(y == 1, y, -1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# create the SVM models
svm_classic = SVC(kernel='linear')  # classic SVM
svm_soft = SVC(kernel='linear', C=0.1)  # soft-margin SVM
svm_kernel = SVC(kernel='rbf', gamma=0.001)  # SVM with an RBF kernel

# fit the models
svm_classic.fit(X_train, y_train)
svm_soft.fit(X_train, y_train)
svm_kernel.fit(X_train, y_train)

y_test_classic = svm_classic.predict(X_test)
y_test_soft = svm_soft.predict(X_test)
y_test_kernel = svm_kernel.predict(X_test)

accuracy_classic = accuracy_score(y_test_classic, y_test)
accuracy_soft = accuracy_score(y_test_soft, y_test)
accuracy_kernel = accuracy_score(y_test_kernel, y_test)

print("accuracy of classic svm: %.2f\n"
      "accuracy of soft svm: %.2f\n"
      "accuracy of kernel svm: %.2f\n"
      % (accuracy_classic, accuracy_soft, accuracy_kernel))

The results:

D:\.py\PythonProject\ml2024\svm\mySVM>python svm.py
accuracy of classic svm: 0.98
accuracy of soft svm: 0.99
accuracy of kernel svm: 1.00