## Description

The hybrid Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO) algorithm is a low-level hybrid because the functionalities of the two algorithms are merged. Both algorithms run in parallel.

- First, declare the PSOGWO function:

```matlab
[Best_score,Best_pos,GWO_cg_curve]=PSOGWO(SearchAgents_no,Max_iteration,lb,ub,dim,fobj)
```

In this statement the main input parameter is a benchmark function, represented by `fobj`; `lb` and `ub` are the lower-bound and upper-bound limits. Three leader positions are initialized: the Alpha, Beta and Delta positions. The velocity and inertia weight parameters are calculated by the formulas given below:

```matlab
velocity = 0.3*randn(SearchAgents_no,dim);
w = 0.5 + rand()/2;
```

- Initialize the positions of the search agents by calling

```matlab
Positions=initialization(SearchAgents_no,dim,ub,lb);
```

This call uses the upper- and lower-bound limits: each search agent's position is sampled randomly within its bounds, which may differ per dimension. In other words, we initialize the algorithm parameters, generate and evaluate the initial positions, and record the best solution among them.
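The initialization step above can be sketched in Python; the function name `initialization` and the velocity formula mirror the MATLAB lines, while `random.gauss(0, 1)` stands in for MATLAB's `randn`:

```python
import random

def initialization(n_agents, dim, ub, lb):
    """Sketch of the initialization step: each agent gets a random
    position drawn uniformly between its per-dimension bounds."""
    return [[lb[d] + random.random() * (ub[d] - lb[d]) for d in range(dim)]
            for _ in range(n_agents)]

def init_velocity(n_agents, dim):
    """Mirrors velocity = 0.3*randn(SearchAgents_no, dim)."""
    return [[0.3 * random.gauss(0.0, 1.0) for _ in range(dim)]
            for _ in range(n_agents)]

# Inertia weight as in the text: w = 0.5 + rand()/2, so w lies in [0.5, 1.0)
w = 0.5 + random.random() / 2
```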

- Call the benchmark function

The benchmark function is represented by `fobj` and is used to find the initial best fitness value. The `fobj` function contains all the information about the benchmark: there are 23 benchmark cases (F1 ... F23), each with its own dimension and upper/lower bound limits, and any one of them can be selected as the objective function.

- Start the main while loop (t < max number of iterations)

The main loop then runs for the maximum number of iterations and updates the positions of the search agents. After each update, the upper and lower bound limits are enforced by

```matlab
Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
```
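The `Flag4ub`/`Flag4lb` expression clips any coordinate that escapes the search space back onto the violated bound. A minimal Python sketch of the same clamping (function name is ours, not from the source):

```python
def clip_to_bounds(position, lb, ub):
    """Sketch of the Flag4ub/Flag4lb step: coordinates above ub are reset
    to ub, coordinates below lb are reset to lb, others pass through."""
    clipped = []
    for x, lo, hi in zip(position, lb, ub):
        if x > hi:
            clipped.append(hi)   # Flag4ub case: above the upper bound
        elif x < lo:
            clipped.append(lo)   # Flag4lb case: below the lower bound
        else:
            clipped.append(x)    # inside the search space, unchanged
    return clipped
```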

- Evaluate the fitness of each search agent with

```matlab
fitness=fobj(Positions(i,:));
```

The fitness value is obtained by using equation 4. The three leader positions, Alpha, Beta and Delta, are then updated:

```matlab
if fitness < Alpha_score
    Alpha_score = fitness;
    Alpha_pos = Positions(i,:);
end
if fitness > Alpha_score && fitness < Beta_score
    Beta_score = fitness;
    Beta_pos = Positions(i,:);
end
if fitness > Alpha_score && fitness > Beta_score && fitness < Delta_score
    Delta_score = fitness;
    Delta_pos = Positions(i,:);
end
```

These three positions are the new positions of the leading wolves. Having obtained the three best-fit positions, we next update them stochastically.
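The three-way update above can be expressed as a small Python function; the `(score, pos)` tuple representation is our own convention, and lower score means better (minimization, as in the GWO tradition):

```python
def update_leaders(fitness, position, alpha, beta, delta):
    """Sketch of the Alpha/Beta/Delta update: each leader is a
    (score, pos) tuple; a new fitness replaces the best leader slot
    it qualifies for, exactly mirroring the three conditions above."""
    a_score, b_score, d_score = alpha[0], beta[0], delta[0]
    if fitness < a_score:
        alpha = (fitness, position[:])
    if fitness > a_score and fitness < b_score:
        beta = (fitness, position[:])
    if fitness > a_score and fitness > b_score and fitness < d_score:
        delta = (fitness, position[:])
    return alpha, beta, delta
```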

- Update the positions of the first three agents

Using equations 7, 8 and 9, compute the three leader-guided estimates X1, X2 and X3:

```matlab
X1 = Alpha_pos(j) - A1*D_alpha;
X2 = Beta_pos(j)  - A2*D_beta;
X3 = Delta_pos(j) - A3*D_delta;
```
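One leader-guided term can be sketched in Python; we assume the standard GWO coefficient formulas here (A = 2*a*r1 - a, C = 2*r2, D = |C*leader - x|, with a decreased from 2 to 0 over the iterations), which is presumably what equations 7-9 define:

```python
import random

def leader_pull(leader_j, x_j, a):
    """Sketch of one X_k term (eqs. 7-9), assuming the standard GWO
    coefficients: A = 2*a*r1 - a, C = 2*r2, D = |C*leader - x|."""
    r1, r2 = random.random(), random.random()
    A = 2 * a * r1 - a
    C = 2 * r2
    D = abs(C * leader_j - x_j)
    return leader_j - A * D

# For dimension j, the three estimates would be:
#   X1 = leader_pull(alpha_pos[j], x[j], a)
#   X2 = leader_pull(beta_pos[j],  x[j], a)
#   X3 = leader_pull(delta_pos[j], x[j], a)
```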

- Update the velocity and position as shown in equations 13 and 14.

In MATLAB code the equations are written as

```matlab
velocity(i,j) = w*(velocity(i,j) + C1*r1*(X1-Positions(i,j)) ...
                                 + C2*r2*(X2-Positions(i,j)) ...
                                 + C3*r3*(X3-Positions(i,j)));
Positions(i,j) = Positions(i,j) + velocity(i,j);
```
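The same PSO-style update in Python, per dimension; here the three leader estimates play the role that pbest/gbest play in plain PSO. A single shared acceleration coefficient `c` is our simplification (the MATLAB code uses C1, C2, C3):

```python
import random

def pso_step(x_j, v_j, X1, X2, X3, w, c=0.5):
    """Sketch of the velocity and position update: the new velocity is
    the inertia-weighted sum of the old velocity and three stochastic
    pulls toward the leader estimates X1, X2, X3."""
    r1, r2, r3 = (random.random() for _ in range(3))
    v_new = w * (v_j + c * r1 * (X1 - x_j)
                     + c * r2 * (X2 - x_j)
                     + c * r3 * (X3 - x_j))
    return x_j + v_new, v_new
```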

- Write another main script which operates on the benchmark functions:

- Initialize the search agents, iterations and benchmark function
- Call the benchmark function details
- Call the PSOGWO function, which initializes the swarm and computes the fitness value for the selected function
- Call the plotting function for the graphs and the convergence curve of the benchmark function. The curve shows the best position and best fitness value obtained by the hybrid PSOGWO algorithm.
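A heavily simplified, self-contained skeleton of that driver flow in Python: it uses the sphere function as a stand-in for F1, keeps only the Alpha leader, and replaces the full PSOGWO position update with a crude random pull toward Alpha, just to show the control flow (initialize, evaluate, update, record the convergence curve):

```python
import random

def sphere(x):
    """Stand-in benchmark (our assumption for F1): sum of squares."""
    return sum(v * v for v in x)

def psogwo_sketch(n_agents=10, max_iter=50, lb=-10.0, ub=10.0, dim=3, fobj=sphere):
    """Skeleton of the main script: not the real PSOGWO update, only
    the surrounding loop structure and convergence-curve bookkeeping."""
    pos = [[lb + random.random() * (ub - lb) for _ in range(dim)]
           for _ in range(n_agents)]
    alpha_score, alpha_pos = float('inf'), None
    curve = []                         # best score per iteration
    for _ in range(max_iter):
        for p in pos:
            for j in range(dim):       # enforce the bound limits
                p[j] = min(max(p[j], lb), ub)
            f = fobj(p)
            if f < alpha_score:        # keep only the Alpha leader here
                alpha_score, alpha_pos = f, p[:]
        for p in pos:                  # crude pull toward the Alpha wolf
            for j in range(dim):
                p[j] += random.random() * (alpha_pos[j] - p[j])
        curve.append(alpha_score)
    return alpha_score, alpha_pos, curve
```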

After updating the positions of the search agents and the particle velocities, the best fitness value is saved in `Alpha_score`. A convergence curve is then plotted over the search space for the chosen benchmark function. The results obtained are better than those of plain GWO, so the PSOGWO hybrid approach works well as a low-level hybrid.
