1 Introduction
To address the tendency of the PSO algorithm to fall into local optima and its relatively slow convergence when solving optimization problems, an improved initialization strategy is introduced and the simulated annealing (SA) algorithm is combined with PSO, yielding a new algorithm. The search is divided into two stages: to keep the algorithm fast, the early stage uses standard PSO, while the later stage applies the simulated annealing idea to adapt the PSO parameters in the search for the optimal solution. Finally, the algorithm is applied to eight classic unimodal/multimodal benchmark functions. Simulation results show that it effectively avoids premature convergence and improves convergence speed, thereby improving the global optimization performance of PSO.
In the real world, the individual behavioral capability of animals such as birds in a flock or bees in a swarm is very limited (in a particle swarm, a single individual is mapped to a particle), and an individual can hardly survive on its own in nature. The intelligence formed by flocks and swarms, however, is remarkably powerful, and this power is not a simple superposition of individual abilities; it arises from the exchange of information between individuals. Although this capability cannot yet be described by an explicit mathematical formula, research on such collective behavior has already played a major role in function optimization, neural network learning, machine learning, and related fields. It is therefore worthwhile to study the particle swarm optimization algorithm and to adapt it to different problems so that it can be used to best effect.

PSO fully satisfies the five basic principles of swarm intelligence proposed by Mark Millonas in 1994, one of which is the principle of adaptability: a swarm should change its behavior pattern when the payoff of doing so is worth the energy investment. The main idea of this article, following the improvement strategy of [1], is that in standard PSO the inertia weight, the cognitive (personal) coefficient, and the social coefficient play the key roles in convergence, so the standard PSO update formula is retained and these coefficients are adjusted adaptively. In standard PSO the inertia weight usually decreases linearly with the iteration count, but to better model the real world it should not be allowed to approach zero. In the early stage of the algorithm the cognitive component dominates and the social component is kept small, so that the swarm searches the whole space and global convergence is preserved. As the iterations proceed, the cognitive component gradually shrinks while the social component grows, which drives convergence and improves its accuracy. A simulated annealing mechanism is introduced to adapt the cognitive and social components, improving the performance of the algorithm and achieving an effective balance between global and local convergence.
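To make the scheme concrete, the following minimal MATLAB sketch illustrates the coefficient schedule and the SA acceptance test described above. All names and constants in it (maxIt, wMax, wMin, alpha, and the linear schedules for c1 and c2) are illustrative assumptions, not values taken from the implementation in Section 2.

% Minimal sketch of the adaptive coefficient schedule with an SA-style
% acceptance test. The constants below are illustrative assumptions.
maxIt = 200;   % number of iterations
wMax  = 0.9;   % initial inertia weight
wMin  = 0.3;   % lower bound: the inertia weight must not approach zero
T     = 1.0;   % simulated-annealing temperature
alpha = 0.98;  % geometric cooling rate
for it = 1:maxIt
    ratio = it / maxIt;
    % inertia decreases linearly but stays bounded away from zero
    w = max(wMin, wMax - (wMax - wMin) * ratio);
    % weight shifts from the cognitive term (c1) to the social term (c2)
    c1 = 2.0 * (1 - ratio) + 0.5 * ratio;
    c2 = 0.5 * (1 - ratio) + 2.0 * ratio;
    % Metropolis test: a worse candidate (dE > 0) is accepted with
    % probability exp(-dE/T); dE is a random stand-in for the cost increase
    dE = rand();
    accept = rand() < exp(-dE / T);
    T = alpha * T;   % cool the temperature
end
fprintf('w=%.2f, c1=%.2f, c2=%.2f, T=%.3f, last move accepted: %d\n', ...
    w, c1, c2, T, accept);

In the full implementation below, the same roles are played by inertia, c1, c2, frac, and fracTemp.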
2 Partial Code
function returns = PSOSA(args, paramsPSO,paramsSA)
%% PSO args
dmat = args.dmat; % Distance Matrix
nVar = args.nVar; % Number of Decision Variables
varSize = args.varSize; % Size of Decision Variables
%% PSO params
MaxIt = paramsPSO.maxIterations; % Max Number of Iterations
nPop = paramsPSO.nPop; % Swarm Size
nMove = paramsPSO.nMove; % Number of Neighbors per Individual
%% Initialize Coefficients
inertia = paramsPSO.inertia * 0.8; % Chances of utilizing velocity vector
c1 = 0.8 - inertia; % Personal Acceleration Coefficient
c2 = 1 - (c1 + inertia); % Social Acceleration Coefficient
wDamp = paramsPSO.wDamp; % Dampening Coefficient
% randVel = paramsPSO.randVel; % Randomize the velocity vector
%% SA params
MaxSubIt = paramsSA.maxSubIterations; % Max Number of Subiterations
T = -1.0/log(paramsSA.tStart); % Start temperature from the initial acceptance probability tStart
tEnd = -1.0/log(paramsSA.tEnd); % Final temperature from the final acceptance probability tEnd (unused below)
frac = (0.2/inertia)^(1.0/(MaxIt+(MaxIt/10))); % Ramping Coefficient
fracTemp = @(best,curr) 1.0 + ((best-curr)/best);
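% fracTemp scales the temperature by 1 + (best - current)/best: when the
% swarm's average cost lags behind the global best, the factor drops below
% one, so the temperature cools faster the further the swarm is from the best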
% tempFunction = paramsSA.tempFunction;
%idx = linspace(0,4,maxIterations); % Set the boundaries for tanh/sig Temp function
%% Initialization Empty Swarm
% Create Empty Structure for Individuals
emptySwarm.position = [];
emptySwarm.velocity = [];
emptySwarm.cost = [];
% emptySwarm.prevCost = [];
% emptySwarm.c1 = c1;
% emptySwarm.c2 = c2;
% emptySwarm.inertia = inertia;
emptySwarm.best.position = [];
emptySwarm.best.cost = [];
% Create Population Array
swarm = repmat(emptySwarm,nPop,1);
% Initialize Other Parameters
globalBest.cost = inf;
globalBest.position = [];
[N,dims] = size(dmat);
n = N - 2; % Separate Start and End Cities
steps = round(MaxIt/10); % Step Size
%% Initialize Population
for i = 1:nPop
% Initialize Position and Cost
swarm(i).position = randperm(n) + 1; % Random tour over the interior cities 2..N-1
swarm(i).cost = distanceCalc(swarm(i).position,dmat,N);
swarm(i).velocity = randi([1,n],n,2);
swarm(i).best.position = swarm(i).position;
swarm(i).best.cost = swarm(i).cost;
% Update Global Best
if swarm(i).best.cost < globalBest.cost
globalBest = swarm(i).best;
end
end
%% Array to Hold Best Cost Values
bestCosts = zeros(MaxIt, 1); % Best cost of each iteration
bestAvgCosts = zeros(MaxIt, 1); % Avg cost of each iteration
averageTemp = zeros(MaxIt, 1); % Temp of each iteration
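% deltaEavg keeps a running mean of the cost increases of accepted uphill
% moves and normalizes the Metropolis exponent; numAccept counts accepted moves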
deltaEavg = 0.0; numAccept = 1; avgTemp = 0;
%% SA Main Loop
for it = 1:MaxIt
for subit = 1:MaxSubIt
newpop=repmat(emptySwarm,nPop,nMove);
% Create and Evaluate New Solutions
for i=1:nPop
for j=1:nMove
% Create Neighbor
newpop(i,j).position = posUpdate(...
swarm(i).position,swarm(i).velocity,inertia);
% Evaluate Neighbor Cost
newpop(i,j).cost = distanceCalc(...
newpop(i,j).position,dmat,N);
end
end
% Sort all nPop*nMove neighbors by cost; the nPop best are paired with the
% swarm members in the acceptance loop below
[~, SortOrder] = sort([newpop.cost]);
newpop = newpop(SortOrder);
avgCost = 0;
%% Evaluate SA of population and members
for i = 1:nPop
%% SA Equilibrium Loop
deltaE = abs(newpop(i).cost - swarm(i).cost);
if newpop(i).cost > swarm(i).cost
% Sets DeltaE_avg for first iteration
if (it == 1 && subit == 1)
deltaEavg = deltaE;
end
P = exp(-deltaE / (deltaEavg * T));
if rand < P
SAcond = true;
else
SAcond = false;
end
else
SAcond = true;
end
if SAcond == true
newpop(i).best = swarm(i).best;
swarm(i) = newpop(i);
% Update the personal best only when the accepted move improves on it
if swarm(i).best.cost > newpop(i).cost
swarm(i).best.cost = newpop(i).cost;
swarm(i).best.position = newpop(i).position;
end
numAccept = numAccept + 1;
deltaEavg = (deltaEavg * (numAccept-1)...
+ deltaE) / numAccept;
end
%% Update Global Best Solution
if swarm(i).best.cost < globalBest.cost
globalBest = swarm(i).best;
end
%% Accumulate Cost for the Iteration Average
avgCost = avgCost + swarm(i).cost;
%% Test for Velocity Randomization
if rand() < wDamp; PSOcond = true; else; PSOcond = false; end
% Randomize the velocity after dampening or SA acceptance; otherwise damp it
if PSOcond == true || SAcond == true
swarm(i).velocity = randi([1,n],n,2);
else
swarm(i).velocity = velUpdate(swarm(i).velocity,wDamp,n,PSOcond);
end
end
% Update Temperature
avgTemp = avgTemp + T;
frac2 = fracTemp(globalBest.cost,avgCost/nPop);
T = frac2 * T;
end
%% Move Each Particle Toward Its Personal and Global Best Positions
for i = 1:nPop
swarm(i).position = posUpdateSoc(...
c1,swarm(i).position,swarm(i).best.position);
swarm(i).position = posUpdateSoc(...
c2,swarm(i).position,globalBest.position);
end
%% Acceleration/Inertia Coefficient Adjustments
inertiaNew = inertia * frac; % Decay the inertia weight
delta = inertia - inertiaNew;
inertia = inertiaNew;
c1 = c1 + (0.2 * delta); % Give 20% of the freed weight to the cognitive term
c2 = c2 + (0.8 * delta); % and 80% to the social term
%% Store Progress
bestAvgCosts(it) = avgCost/nPop;
bestCosts(it) = globalBest.cost;
averageTemp(it) = avgTemp/MaxSubIt; % Mean temperature over this iteration
avgTemp = 0; % Reset the accumulator for the next iteration
%% Display Step Progress
if mod(it,steps) == 0
fprintf('Iteration %3d: Best Cost: %.4f\n',it, bestCosts(it))
end
end
%% Store Progress
returns.globalBest = globalBest;
returns.bestCosts = bestCosts;
returns.avgCosts = bestAvgCosts;
%% Plotting
figure;
subplot(1,3,1)
semilogx(bestCosts,'linewidth',2,'Color','r');
xlabel('Iteration'); ylabel('Best Cost');
subplot(1,3,2)
semilogx(bestAvgCosts,'linewidth',2,'Color','b');
xlabel('Iteration'); ylabel('Avg Cost');
subplot(1,3,3)
semilogx(averageTemp,'linewidth',2,'Color','g');
xlabel('Iteration'); ylabel('Avg Temperature');
end
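The excerpt calls four helpers that are not shown here: distanceCalc, posUpdate, posUpdateSoc, and velUpdate. As an illustration only, a minimal sketch of what distanceCalc might look like is given below, assuming a permutation encoding in which city 1 is the fixed start and city N the fixed end; it is followed by a hypothetical driver script showing how PSOSA could be called. Neither piece is the original code, and all parameter values are assumptions.

% Hypothetical sketch of the distanceCalc helper (file distanceCalc.m),
% assuming the tour starts at city 1 and ends at city N.
function d = distanceCalc(position, dmat, N)
tour = [1, position, N];   % prepend the start city, append the end city
d = 0;
for k = 1:numel(tour)-1
    d = d + dmat(tour(k), tour(k+1));   % sum consecutive leg lengths
end
end

% Hypothetical driver script (separate file); parameter values are
% illustrative assumptions only.
xy = rand(20,2);                 % 20 random city coordinates
N = size(xy,1);
dmat = zeros(N);
for a = 1:N
    for b = 1:N
        dmat(a,b) = norm(xy(a,:) - xy(b,:));   % Euclidean distance matrix
    end
end
args.dmat = dmat;
args.nVar = N;                   % passed through, unused in the excerpt
args.varSize = [1 N];            % passed through, unused in the excerpt
paramsPSO.maxIterations = 200;
paramsPSO.nPop = 50;
paramsPSO.nMove = 5;
paramsPSO.inertia = 0.7;         % scaled by 0.8 inside PSOSA
paramsPSO.wDamp = 0.1;
paramsSA.maxSubIterations = 10;
paramsSA.tStart = 0.7;           % initial acceptance probability
paramsSA.tEnd = 0.001;           % final acceptance probability
out = PSOSA(args, paramsPSO, paramsSA);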
3 Simulation Results
4 References
[1] 于海平, 刘会超, 吴志健. 基于模拟退火的自适应粒子群优化算法的改进策略[J]. 计算机应用研究, 2012, 29(12): 3.