0. Paper Background
The standard particle swarm optimization (PSO) algorithm was introduced and reproduced in an earlier post, so it is not repeated here. Building on standard PSO, this paper proposes a binary discrete particle swarm optimizer in which each variable's search domain contains only the values 0 and 1.
Kennedy J, Eberhart R C. A discrete binary version of the particle swarm algorithm[C]//1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation. IEEE, 1997, 5: 4104-4108. It also comes from a well-known conference; the paper is old, but that does not diminish the originality of its idea.
1. Paper Idea
The velocity update of PSO:

v_id = w * v_id + c1 * r1 * (pbest_id - x_id) + c2 * r2 * (gbest_d - x_id)

Here i indexes the particle and d indexes the dimension, and every variable can only take the value 0 or 1. The formula is exactly the same as in standard PSO; the only thing that changes is that the variables are binary and discrete.
The position update of PSO:

x_id = 1 if rand() < S(v_id), otherwise x_id = 0, where S(v) = 1 / (1 + e^(-v))

S(v) is just a sigmoid function whose role is to map the velocity into (0, 1): if v is confined to [-6, 6], then S(v) stays roughly within 0.0025 to 0.9975.
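This transfer step can be sketched in Python (a minimal sketch, not the paper's code; `bpso_position_update` and the +/-6 clip value are illustrative):

```python
import numpy as np

def bpso_position_update(v, rng, v_clip=6.0):
    """Binary PSO position rule: squash each velocity through the sigmoid S(v),
    then draw the corresponding bit as a Bernoulli trial with probability S(v)."""
    v = np.clip(v, -v_clip, v_clip)      # keep S(v) inside roughly (0.0025, 0.9975)
    s = 1.0 / (1.0 + np.exp(-v))         # sigmoid transfer
    return (rng.random(v.shape) < s).astype(int)

rng = np.random.default_rng(0)
bits = bpso_position_update(np.array([-6.0, 0.0, 6.0]), rng)
```

A large positive velocity makes the bit almost certainly 1, a large negative one almost certainly 0, and the clip keeps some randomness in every bit.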
2. Personal Thoughts
What would happen if the binary discrete idea were applied to standard PSO? That is, binary-encode the variables of the continuous problem, then optimize them with the binary PSO. Compared with standard PSO, this adds only two steps: encoding and decoding.
The experiments below reproduce this idea:
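The extra encode/decode step can be sketched in Python (illustrative names; the grid matches a 10-bit code on [-32, 32]):

```python
def decode_bits(bits, lower, upper):
    """Map a bit list (MSB first) to a real value on a uniform grid in [lower, upper)."""
    n = len(bits)
    value = int("".join(str(b) for b in bits), 2)     # integer in [0, 2^n - 1]
    return lower + (upper - lower) * value / 2 ** n   # grid step: (upper - lower) / 2^n

def encode_value(x, lower, upper, n):
    """Quantize a real value in [lower, upper) back to its n-bit code (MSB first)."""
    value = int((x - lower) / (upper - lower) * 2 ** n)
    value = min(value, 2 ** n - 1)                    # clamp x == upper into the top cell
    return [int(b) for b in format(value, "0%db" % n)]

# 10 bits on [-32, 32]: resolution 64 / 1024 = 0.0625
x = decode_bits(encode_value(0.0, -32, 32, 10), -32, 32)
```

Note the decoded values live on a discrete grid, so the best reachable point is only ever within half a grid step of the true optimum.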
First, the standard particle swarm optimization (PSO) baseline: 3000 iterations, repeated 100 times, with the results averaged:
% Ackley function; global minimum 0 at the origin, domain D = [-32, 32]
function [Out] = F2(X)
T1 = -20*exp(-0.2*(sqrt(sum(X.^2,2)*1./size(X,2))));
T2 = exp(sum(cos(X*2*pi),2)*1./size(X,2));
Out = T1-T2+20+exp(1);
end
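As a sanity check, the same Ackley function written in Python evaluates to its known global minimum of 0 at the origin (a sketch; `ackley` is an illustrative name):

```python
import numpy as np

def ackley(x):
    """Ackley function, row-wise over a 2-D array of points (same as F2 above)."""
    d = x.shape[1]
    t1 = -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2, axis=1) / d))
    t2 = np.exp(np.sum(np.cos(2.0 * np.pi * x), axis=1) / d)
    return t1 - t2 + 20.0 + np.e

val = ackley(np.zeros((1, 5)))   # global minimum: 0 at the origin
```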
clc;clear;clearvars;
% 5 initial particles, 5 variables each
num_initial = 5;
num_vari = 5;
% search interval
upper_bound = 32;
lower_bound = -32;
iter = 3000;
w = 1;
% generate 5 random samples and evaluate them
sample_x = lhsdesign(num_initial, num_vari).*(upper_bound - lower_bound) + lower_bound.*ones(num_initial, num_vari);
sample_y = F2(sample_x);
Fmin = zeros(iter, 1);
aver_Fmin = zeros(iter, 1);
for n = 1 : 100
    k = 1;
    % initialize personal bests from the initial samples
    pbestx = sample_x;
    pbesty = sample_y;
    % current positions presentx
    presentx = lhsdesign(num_initial, num_vari).*(upper_bound - lower_bound) + lower_bound.*ones(num_initial, num_vari);
    vx = sample_x;   % initial velocities (reusing the sample positions)
    [fmin, gbest] = min(pbesty);
    fprintf("iter 0 fmin: %.4f\n", fmin);
    for i = 1 : iter
        % independent random factors for the cognitive and social terms
        r1 = rand(num_initial, num_vari);
        r2 = rand(num_initial, num_vari);
        % PSO velocity update; clamp anything outside the search range to the boundary
        vx = w.*vx + 2*r1.*(pbestx - presentx) + 2*r2.*(pbestx(gbest, :) - presentx);
        vx(vx > upper_bound) = upper_bound;
        vx(vx < lower_bound) = lower_bound;
        presentx = presentx + vx;
        presentx(presentx > upper_bound) = upper_bound;
        presentx(presentx < lower_bound) = lower_bound;
        presenty = F2(presentx);
        % update each particle's personal best (no re-evaluation needed)
        improved = presenty < pbesty;
        pbestx(improved, :) = presentx(improved, :);
        pbesty(improved) = presenty(improved);
        % update the global best
        [fmin, gbest] = min(pbesty);
        if mod(i,100) == 0
            fprintf("iter %d fmin: %.4f\n", i, fmin);
        end
        Fmin(k, 1) = fmin;
        k = k + 1;
    end
    aver_Fmin = aver_Fmin + Fmin;
end
aver_Fmin = aver_Fmin ./ 100;
disp(pbestx(gbest, :));
plot(aver_Fmin);
Only the n = 100 run is shown below:
iter 0 fmin: 20.9684
iter 100 fmin: 19.8594
iter 200 fmin: 19.8594
iter 300 fmin: 19.8594
iter 400 fmin: 19.8594
iter 500 fmin: 19.8594
iter 600 fmin: 19.8594
iter 700 fmin: 19.8594
iter 800 fmin: 19.8594
iter 900 fmin: 19.8594
iter 1000 fmin: 19.8594
iter 1100 fmin: 19.8594
iter 1200 fmin: 19.8594
iter 1300 fmin: 19.8594
iter 1400 fmin: 19.8594
iter 1500 fmin: 19.8594
iter 1600 fmin: 19.8594
iter 1700 fmin: 19.8594
iter 1800 fmin: 19.8594
iter 1900 fmin: 19.8594
iter 2000 fmin: 19.8594
iter 2100 fmin: 19.8594
iter 2200 fmin: 19.8594
iter 2300 fmin: 19.8594
iter 2400 fmin: 19.8594
iter 2500 fmin: 19.8594
iter 2600 fmin: 19.8594
iter 2700 fmin: 19.8594
iter 2800 fmin: 19.8594
iter 2900 fmin: 19.8594
iter 3000 fmin: 19.8594
-32 32 0 0 -32
Average convergence over the 100 runs:
Next, the Bare Bones Particle Swarm (BBPS) experiment: 3000 iterations, repeated 100 times, with the results averaged:
clc;clear;clearvars;
% 5 initial particles, 5 variables each
num_initial = 5;
num_vari = 5;
% search interval
upper_bound = 32;
lower_bound = -32;
iter = 3000;
% generate 5 random samples and evaluate them
sample_x = lhsdesign(num_initial, num_vari).*(upper_bound - lower_bound) + lower_bound.*ones(num_initial, num_vari);
sample_y = F2(sample_x);
Fmin = zeros(iter, 1);
aver_Fmin = zeros(iter, 1);
for n = 1 : 100
    k = 1;
    % initialize personal bests from the initial samples
    pbestx = sample_x;
    pbesty = sample_y;
    [fmin, gbest] = min(pbesty);
    fprintf("iter 0 fmin: %.4f\n", fmin);
    for i = 1 : iter
        % bare-bones sampling: elementwise, with probability 0.5 keep the
        % personal-best component, otherwise draw from a Gaussian centered
        % between the personal and global bests
        mu = (pbestx + pbestx(gbest, :)) ./ 2;
        sigma = abs(pbestx - pbestx(gbest, :));
        x = normrnd(mu, sigma);
        keep = rand(num_initial, num_vari) > 0.5;
        x(keep) = pbestx(keep);
        % clamp anything outside the search range to the boundary
        x(x > upper_bound) = upper_bound;
        x(x < lower_bound) = lower_bound;
        y = F2(x);
        % update each particle's personal best (no re-evaluation needed)
        improved = y < pbesty;
        pbestx(improved, :) = x(improved, :);
        pbesty(improved) = y(improved);
        % update the global best
        [fmin, gbest] = min(pbesty);
        if mod(i,100) == 0
            fprintf("iter %d fmin: %.4f\n", i, fmin);
        end
        Fmin(k, 1) = fmin;
        k = k + 1;
    end
    aver_Fmin = aver_Fmin + Fmin;
end
aver_Fmin = aver_Fmin ./ 100;
disp(pbestx(gbest, :));
plot(aver_Fmin);
Only the n = 100 run is shown below:
iter 0 fmin: 20.7250
iter 100 fmin: 19.9354
iter 200 fmin: 19.9354
iter 300 fmin: 19.9354
iter 400 fmin: 19.9354
iter 500 fmin: 19.9352
iter 600 fmin: 19.9352
iter 700 fmin: 19.9352
iter 800 fmin: 19.9351
iter 900 fmin: 19.9351
iter 1000 fmin: 19.9351
iter 1100 fmin: 19.9351
iter 1200 fmin: 19.9351
iter 1300 fmin: 19.9351
iter 1400 fmin: 19.9351
iter 1500 fmin: 19.9351
iter 1600 fmin: 19.9348
iter 1700 fmin: 19.9348
iter 1800 fmin: 19.9347
iter 1900 fmin: 19.9347
iter 2000 fmin: 19.9347
iter 2100 fmin: 19.9347
iter 2200 fmin: 19.9347
iter 2300 fmin: 19.9347
iter 2400 fmin: 19.9347
iter 2500 fmin: 19.9347
iter 2600 fmin: 19.9347
iter 2700 fmin: 19.9347
iter 2800 fmin: 19.9347
iter 2900 fmin: 19.9347
iter 3000 fmin: 19.9347
-32.0000 -32.0000 -0.0002 -32.0000 32.0000
Average convergence over the 100 runs:
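For reference, the elementwise bare-bones sampling step used above can be sketched in Python (names are illustrative; the Gaussian is centered between the personal and global bests with a spread equal to their separation):

```python
import numpy as np

def bare_bones_step(pbest, gbest_row, rng, p_keep=0.5):
    """With probability p_keep keep the personal-best component; otherwise draw
    from N(mean(pbest, gbest), |pbest - gbest|), elementwise."""
    mu = (pbest + gbest_row) / 2.0
    sigma = np.abs(pbest - gbest_row)
    x = rng.normal(mu, sigma)                 # sigma may be 0 (the gbest particle itself)
    keep = rng.random(pbest.shape) < p_keep
    return np.where(keep, pbest, x)

rng = np.random.default_rng(0)
pbest = rng.uniform(-32, 32, size=(5, 5))
x_new = bare_bones_step(pbest, pbest[0], rng)
```

Once all personal bests collapse onto the global best, the spread goes to zero and the swarm stops moving, which matches the early stagnation visible in the log above.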
Next, the Binary Particle Swarm (BPS) experiment: 3000 iterations, repeated 100 times, with the results averaged:
function sample_x = init_sample(num_initial, num_vari, len_vari)
% randomly initialize every bit to 0 or 1 with equal probability
sample_x = double(rand(num_initial, num_vari, len_vari) > 0.5);
end
function decode_x = decode(x, num_initial, num_vari, len_vari, upper_bound, lower_bound)
% decode each len_vari-bit string (MSB first) to a real value on a uniform
% grid over [lower_bound, upper_bound); with 10 bits on [-32, 32] the
% resolution is 64/1024 = 0.0625
num = 2.0^len_vari;
decode_x = zeros(num_initial, num_vari);
for i = 1 : num_initial
    for j = 1 : num_vari
        tmp = 0.0;
        m = 1;
        for k = 1 : len_vari
            tmp = tmp + x(i, j, len_vari + 1 - k) * m;
            m = m * 2;
        end
        decode_x(i, j) = (upper_bound - lower_bound) * (tmp / num) + lower_bound;
    end
end
end
clc;clear;clearvars;
% 5 initial particles, 5 variables each
num_initial = 5;
num_vari = 5;
% number of bits encoding each variable
len_vari = 10;
% search interval
upper_bound = 32;
lower_bound = -32;
iter = 3000;
w = 1;
% generate 5 random bit strings and evaluate them
sample_x = init_sample(num_initial, num_vari, len_vari);
decode_sample_x = decode(sample_x, num_initial, num_vari, len_vari, upper_bound, lower_bound);
sample_y = F2(decode_sample_x);
Fmin = zeros(iter, 1);
aver_Fmin = zeros(iter, 1);
for n = 1 : 100
    k = 1;
    % initialize personal bests from the initial samples
    pbestx = sample_x;
    pbesty = sample_y;
    % current positions presentx
    presentx = init_sample(num_initial, num_vari, len_vari);
    vx = sample_x;   % initial velocities (reusing the sample bits)
    [fmin, gbest] = min(pbesty);
    fprintf("iter 0 fmin: %.4f\n", fmin);
    for i = 1 : iter
        % independent random factors for the cognitive and social terms
        r1 = rand(num_initial, num_vari, len_vari);
        r2 = rand(num_initial, num_vari, len_vari);
        % PSO velocity update on the bit velocities
        vx = w.*vx + 2*r1.*(pbestx - presentx) + 2*r2.*(pbestx(gbest, :, :) - presentx);
        % clamp vx so that s stays strictly inside (0, 1)
        vx(vx > 10) = 10;
        vx(vx < -10) = -10;
        s = 1 ./ (1 + exp(-vx));
        % sample each bit as a Bernoulli trial with probability s
        r3 = rand(num_initial, num_vari, len_vari);
        presentx(r3 < s) = 1;
        presentx(r3 >= s) = 0;
        % decode and evaluate
        decode_presentx = decode(presentx, num_initial, num_vari, len_vari, upper_bound, lower_bound);
        presenty = F2(decode_presentx);
        % update each particle's personal best (no re-evaluation needed)
        improved = presenty < pbesty;
        pbestx(improved, :, :) = presentx(improved, :, :);
        pbesty(improved) = presenty(improved);
        % update the global best
        [fmin, gbest] = min(pbesty);
        if mod(i,100) == 0
            fprintf("iter %d fmin: %.4f\n", i, fmin);
        end
        Fmin(k, 1) = fmin;
        k = k + 1;
    end
    aver_Fmin = aver_Fmin + Fmin;
end
aver_Fmin = aver_Fmin ./ 100;
decode_pbestx = decode(pbestx, num_initial, num_vari, len_vari, upper_bound, lower_bound);
disp(decode_pbestx(gbest, :));
plot(aver_Fmin);
Only the n = 100 run is shown below:
iter 0 fmin: 20.5643
iter 100 fmin: 2.7118
iter 200 fmin: 2.4018
iter 300 fmin: 2.4018
iter 400 fmin: 2.4018
iter 500 fmin: 2.4018
iter 600 fmin: 2.4018
iter 700 fmin: 2.4018
iter 800 fmin: 2.4018
iter 900 fmin: 2.4018
iter 1000 fmin: 2.4018
iter 1100 fmin: 2.4018
iter 1200 fmin: 2.4018
iter 1300 fmin: 2.4018
iter 1400 fmin: 2.4018
iter 1500 fmin: 2.4018
iter 1600 fmin: 2.4018
iter 1700 fmin: 2.4018
iter 1800 fmin: 2.4018
iter 1900 fmin: 2.4018
iter 2000 fmin: 2.4018
iter 2100 fmin: 2.4018
iter 2200 fmin: 2.4018
iter 2300 fmin: 2.4018
iter 2400 fmin: 2.4018
iter 2500 fmin: 2.4018
iter 2600 fmin: 2.4018
iter 2700 fmin: 2.4018
iter 2800 fmin: 2.4018
iter 2900 fmin: 2.4018
iter 3000 fmin: 2.4018
-0.0625 -0.9375 -0.0625 -0.9375 0
Average convergence over the 100 runs:
The comparison shows that the binary PSO dramatically outperforms both the Bare Bones Particle Swarm and standard PSO here, in convergence speed as well as in the final optimum reached, at least under this setup (5 particles, w = 1, a single Ackley benchmark).