### Particle Swarm Optimization (PSO)

##### Description

Particle Swarm Optimization (PSO) is a stochastic optimization technique somewhat similar to evolutionary algorithms but different in an important way. It’s modeled not after evolution per se, but after swarming and flocking behaviors in animals. Unlike other population-based methods, PSO does not resample populations to produce new ones: it has no selection of any kind.

This implementation of the algorithm relies on five parameters:

**alpha**: how much of the original velocity is retained.

**beta**: how much of the personal best is mixed in. If $\beta$ is large, particles tend to move more towards their own personal bests rather than towards global bests. This breaks the swarm into many separate hill-climbers rather than a joint searcher.

**gamma**: how much of the informants' best is mixed in. The effect here is a middle ground between $\beta$ and $\delta$. The number of informants is also a factor (assuming they are picked at random): more informants makes the behavior more like the global best and less like the particle's personal best.

**delta**: how much of the global best is mixed in. If $\delta$ is large, particles tend to move more towards the best known region. This converts the algorithm into one large hill-climber rather than separate hill-climbers. Perhaps because this threatens to make the system highly exploitative, $\delta$ is often set to 0 in modern implementations.

**epsilon**: how fast the particle moves. If $\epsilon$ is large, the particles make big jumps towards the better areas and can jump past them by accident. Thus a big $\epsilon$ lets the system move quickly to best-known regions but makes fine-grained optimization hard, just as in hill-climbing. Most commonly, $\epsilon$ is set to 1.
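The roles of these five parameters can be illustrated with a minimal sketch of the per-dimension velocity and position updates (the function names and defaults below are our own, for illustration only):

```python
import random

def update_velocity(v, x, x_personal, x_informant, x_global,
                    alpha=0.9, beta=1.0, gamma=1.0, delta=0.0):
    """Per-dimension PSO velocity update (illustrative sketch)."""
    new_v = []
    for i in range(len(v)):
        b = random.uniform(0.0, beta)    # pull towards personal best
        c = random.uniform(0.0, gamma)   # pull towards informants' best
        d = random.uniform(0.0, delta)   # pull towards global best
        new_v.append(alpha * v[i]
                     + b * (x_personal[i] - x[i])
                     + c * (x_informant[i] - x[i])
                     + d * (x_global[i] - x[i]))
    return new_v

def move(x, v, epsilon=1.0):
    """Position update: x <- x + epsilon * v."""
    return [xi + epsilon * vi for xi, vi in zip(x, v)]
```

With $\alpha = 1$ and $\beta = \gamma = \delta = 0$ the particle simply coasts on its current velocity; raising $\beta$, $\gamma$, or $\delta$ mixes in the corresponding attraction.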

##### Mutation Function

##### Parameters

$FES = d \cdot 20000$, $N = 20$, and $CR = 0.8$

##### References

##### Pseudo Code

\begin{algorithm}
\caption{PSO Algorithm}
\begin{algorithmic}
\STATE $\textit{swarmsize} \gets \text{desired swarm size}$
\STATE $\alpha \gets \text{proportion of velocity to be retained}$
\STATE $\beta \gets \text{proportion of personal best to be retained}$
\STATE $\gamma \gets \text{proportion of the informants' best to be retained}$
\STATE $\delta \gets \text{proportion of global best to be retained}$
\STATE $\epsilon \gets \text{jump size of a particle}$
\STATE $P \gets \{\}$
\FOR{$i$ from $1$ to $\textit{swarmsize}$}
\STATE $P \gets P \cup \{\text{new random particle } \overrightarrow{x} \text{ with a random initial velocity } \overrightarrow{v}\}$
\ENDFOR
\STATE $\overrightarrow{Best} \gets \text{Null}$
\WHILE{$\overrightarrow{Best}$ is not the ideal solution and we have time remaining}
\FOR{each particle $\overrightarrow{x} \in P$} \COMMENT{Assess fitness}
\IF{$\overrightarrow{Best} = \text{Null}$ \OR $\text{Fitness}(\overrightarrow{x}) > \text{Fitness}(\overrightarrow{Best})$}
\STATE $\overrightarrow{Best} \gets \overrightarrow{x}$
\ENDIF
\ENDFOR
\FOR{each particle $\overrightarrow{x} \in P$ with velocity $\overrightarrow{v}$} \COMMENT{Determine how to mutate}
\STATE $\overrightarrow{x}^* \gets \text{previous fittest location of } \overrightarrow{x}$
\STATE $\overrightarrow{x}^+ \gets \text{previous fittest location of informants of } \overrightarrow{x}$ \COMMENT{Including $\overrightarrow{x}$ itself}
\STATE $\overrightarrow{x}^! \gets \text{previous fittest location of any particle}$
\FOR{each dimension $i$}
\STATE $b \gets$ random number from $0.0$ to $\beta$ inclusive
\STATE $c \gets$ random number from $0.0$ to $\gamma$ inclusive
\STATE $d \gets$ random number from $0.0$ to $\delta$ inclusive
\STATE $v_i \gets \alpha v_i + b (x^*_i - x_i) + c (x^+_i - x_i) + d (x^!_i - x_i)$
\ENDFOR
\ENDFOR
\FOR{each particle $\overrightarrow{x} \in P$ with velocity $\overrightarrow{v}$} \COMMENT{Mutate}
\STATE $\overrightarrow{x} \gets \overrightarrow{x} + \epsilon \overrightarrow{v}$
\ENDFOR
\ENDWHILE
\RETURN $\overrightarrow{Best}$
\end{algorithmic}
\end{algorithm}
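The pseudocode above can be sketched as a self-contained Python implementation. This is a minimal sketch assuming fitness maximization; the parameter defaults, the informant-selection scheme, and the initialization bounds are illustrative assumptions, not part of the algorithm's definition:

```python
import random

def pso(fitness, dim, swarmsize=20, n_informants=4,
        alpha=0.7, beta=1.5, gamma=1.5, delta=0.0, epsilon=1.0,
        iters=200, bounds=(-5.0, 5.0)):
    """Minimal PSO sketch (maximizes `fitness`); names follow the text above."""
    lo, hi = bounds
    # random initial positions and zero initial velocities
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarmsize)]
    V = [[0.0] * dim for _ in range(swarmsize)]
    pbest = [x[:] for x in X]                    # personal bests
    pfit = [fitness(x) for x in X]
    g = max(range(swarmsize), key=lambda j: pfit[j])
    gbest, gfit = pbest[g][:], pfit[g]           # global best

    for _ in range(iters):
        for j in range(swarmsize):
            # informants: a random subset, always including the particle itself
            informants = random.sample(range(swarmsize), n_informants) + [j]
            k = max(informants, key=lambda m: pfit[m])
            for i in range(dim):
                b = random.uniform(0.0, beta)
                c = random.uniform(0.0, gamma)
                d = random.uniform(0.0, delta)
                V[j][i] = (alpha * V[j][i]
                           + b * (pbest[j][i] - X[j][i])
                           + c * (pbest[k][i] - X[j][i])
                           + d * (gbest[i] - X[j][i]))
        for j in range(swarmsize):
            X[j] = [xi + epsilon * vi for xi, vi in zip(X[j], V[j])]
            f = fitness(X[j])
            if f > pfit[j]:
                pbest[j], pfit[j] = X[j][:], f
                if f > gfit:
                    gbest, gfit = X[j][:], f
    return gbest, gfit
```

For example, maximizing the negated sphere function `lambda x: -sum(v * v for v in x)` drives the swarm towards the origin.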