Deep Learning: What Is a Loss Function?


Loss Functions

1. Classification loss functions

1) Cross-entropy loss 2) Binary cross-entropy loss

2. Regression loss functions

1) MAE, mean absolute error loss 2) MSE, mean squared error loss 3) Smooth L1 loss

# Imports
import torch
# Cross-entropy loss

# Targets as one-hot probability distributions (each row must sum to 1)
y_true = torch.tensor([[1, 0, 0], [0, 0, 1]], dtype=torch.float)
# Predictions are raw scores (logits); CrossEntropyLoss applies softmax internally
y_pred = torch.tensor([[0.1, 0.7, 0.2], [0.1, 0.3, 0.6]])

loss_fn = torch.nn.CrossEntropyLoss()

loss = loss_fn(y_pred, y_true)

print(loss, 'cross-entropy loss')
tensor(1.1106) cross-entropy loss
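Under the hood, CrossEntropyLoss is log-softmax followed by the negative log-likelihood of the target class. A minimal sketch checking that equivalence, using the same illustrative logits as above but with targets given as class indices (the more common usage):

```python
import torch

# same illustrative logits as above; targets as class indices this time
logits = torch.tensor([[0.1, 0.7, 0.2], [0.1, 0.3, 0.6]])
targets = torch.tensor([0, 2])

# manual: pick -log_softmax at each sample's target class, then average
log_probs = torch.log_softmax(logits, dim=1)
manual = -log_probs[torch.arange(len(targets)), targets].mean()

builtin = torch.nn.CrossEntropyLoss()(logits, targets)
assert torch.allclose(manual, builtin)
```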
# Binary cross-entropy loss
# Ground-truth labels
y_true = torch.tensor([0, 1, 0], dtype=torch.float)
# Predicted probabilities (autograd enabled); BCELoss expects values in [0, 1]
y_pred = torch.tensor([0.1, 0.7, 0.2], requires_grad=True, dtype=torch.float32)
# Compute the loss
loss_fn = torch.nn.BCELoss()

loss = loss_fn(y_pred, y_true)

print(loss, 'binary cross-entropy loss')
tensor(0.2284, grad_fn=<BinaryCrossEntropyBackward0>) binary cross-entropy loss
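In practice, BCELoss is usually replaced by BCEWithLogitsLoss, which fuses the sigmoid and the cross-entropy into one numerically stable operation on raw scores. A small sketch, with the logits chosen so that sigmoid maps them back to the probabilities used above:

```python
import torch

y_true = torch.tensor([0., 1., 0.])
# raw scores whose sigmoid equals the probabilities 0.1, 0.7, 0.2
logits = torch.logit(torch.tensor([0.1, 0.7, 0.2]))

# fused sigmoid + binary cross-entropy (numerically stable)
loss = torch.nn.BCEWithLogitsLoss()(logits, y_true)

# equivalent two-step version: sigmoid first, then BCELoss
manual = torch.nn.BCELoss()(torch.sigmoid(logits), y_true)
assert torch.allclose(loss, manual)
```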
# Regression loss functions
## 1. MAE, mean absolute error loss
# Ground-truth values
y_true = torch.tensor([1, 1, 1], dtype=torch.float32)
# Predictions
y_pred = torch.tensor([1.2, 1.7, 1.9], requires_grad=True)

loss_fn = torch.nn.L1Loss()

loss = loss_fn(y_pred, y_true)

print(loss, 'MAE, mean absolute error')
tensor(0.6000, grad_fn=<MeanBackward0>) MAE, mean absolute error
## 2. MSE, mean squared error loss
# Ground-truth values
y_true = torch.tensor([1, 1, 1], dtype=torch.float32)
# Predictions
y_pred = torch.tensor([1.2, 1.7, 1.9], requires_grad=True)

loss_fn = torch.nn.MSELoss()

loss = loss_fn(y_pred, y_true)

print(loss, 'MSE, mean squared error')
tensor(0.4467, grad_fn=<MseLossBackward0>) MSE, mean squared error
## 3. Smooth L1 loss: a blend of L1 and L2, quadratic (smooth) near zero and linear for large errors, so gradients stay stable
# Ground-truth values
y_true = torch.tensor([1, 1, 1], dtype=torch.float32)
# Predictions
y_pred = torch.tensor([1.2, 1.7, 1.9], requires_grad=True)

loss_fn = torch.nn.SmoothL1Loss()

loss = loss_fn(y_pred, y_true)

print(loss, 'Smooth L1')
tensor(0.2233, grad_fn=<SmoothL1LossBackward0>) Smooth L1
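Smooth L1 can be written piecewise: 0.5·x²/beta when |x| < beta, and |x| − 0.5·beta otherwise, where beta defaults to 1.0. A sketch verifying this definition against nn.SmoothL1Loss on the same values as above:

```python
import torch

def smooth_l1(pred, true, beta=1.0):
    # quadratic (L2-like) for |diff| < beta, linear (L1-like) beyond
    diff = torch.abs(pred - true)
    loss = torch.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return loss.mean()

y_true = torch.tensor([1., 1., 1.])
y_pred = torch.tensor([1.2, 1.7, 1.9])

manual = smooth_l1(y_pred, y_true)
builtin = torch.nn.SmoothL1Loss()(y_pred, y_true)
assert torch.allclose(manual, builtin)
```

Here every |diff| is below beta, so the loss is purely in the quadratic regime, which is why the result matches 0.5 × MSE × (the MSE value above) scaled accordingly.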