This article is part of a series.

This article concludes the initial discussion of the particle swarm optimizer. In this third part I will talk about the swarm structure and the simplest implementation of the optimizer itself.

## The Swarm - Nothing Special Here

To recapitulate, here is the definition of the swarm:

```
type Swarm = {
    GlobalBest : Solution;
    Particles : Particle list
}
```

There is really not much exciting going on here, so the logic used around it isn't too surprising either. It boils down to creation and update functions, plus the logic to determine the size of the swarm, which I took from a well-known paper. In the end this looks something like the following:

```
module Swarm =
    let private typicalSwarmSize dimension =
        10 + 2 * (dimension |> float |> Math.Sqrt |> int)

    let private valueFromSolution solution =
        let (_, value) = solution
        value

    let private bestParticle particles =
        particles
        |> Seq.minBy (fun p -> p.LocalBest |> valueFromSolution)

    let create problem =
        let particles =
            [1 .. (typicalSwarmSize problem.Dimension)]
            |> List.map (fun _ -> Particle.create problem)
        let initialBestParticle = bestParticle particles
        {
            GlobalBest = initialBestParticle.LocalBest;
            Particles = particles
        }

    let update (swarm : Swarm) (proposal : Solution) : Swarm =
        let (_, oldBest) = swarm.GlobalBest
        let (_, proposedBest) = proposal
        match proposedBest with
        | better when better < oldBest -> { swarm with GlobalBest = proposal }
        | _ -> swarm
```
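To get a feeling for the swarm-size heuristic: it grows only with the square root of the dimension, so even high-dimensional problems get modest swarms. A quick sketch of the formula in isolation (the sample dimensions are just illustrative):

```
open System

// 10 + 2 * floor(sqrt(dimension)), as in Swarm.typicalSwarmSize
let typicalSwarmSize dimension =
    10 + 2 * (dimension |> float |> Math.Sqrt |> int)

// typicalSwarmSize 2   evaluates to 12
// typicalSwarmSize 25  evaluates to 20
// typicalSwarmSize 100 evaluates to 30
```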

The actually interesting logic follows now. Drum roll...

## Sequential Optimizer - Step by Step

Let's recapitulate what an optimizer implementation has to accomplish. The modules for particles and swarms care only about exactly those things, so it's the optimizer's task to put everything together. This can be summarized in the type definition `type Optimizer = OptimizationProblem -> Solution`. We take a problem and get a solution; everything else has to happen inside that logic. For now this seems the most convenient approach. Below is a condensed version of the sequential optimizer to illustrate the core functionality. The actual current implementation adds some logging.

```
let solve problem : Solution =
    let iterParticleOnProblem = problem |> Particle.itterate
    let updateSingleParticle particle { GlobalBest = currentBest } =
        iterParticleOnProblem currentBest particle
    let updateSingleAndApplyToSwarm swarm particle =
        let updatedSingleParticle = particle |> updateSingleParticle <| swarm
        let updatedSwarm = Swarm.update swarm updatedSingleParticle.LocalBest
        { updatedSwarm with Particles = (updatedSingleParticle :: swarm.Particles) }
    let singleIterationOverWholeSwarm { GlobalBest = globalBest; Particles = particles } : Swarm =
        Seq.fold updateSingleAndApplyToSwarm { GlobalBest = globalBest; Particles = List.empty } particles
    let iterationWithIndex swarm _ =
        singleIterationOverWholeSwarm swarm
    let swarm = Swarm.create problem
    let swarmAfterMaxIterations =
        Seq.fold iterationWithIndex swarm [1 .. 1000]
    swarmAfterMaxIterations.GlobalBest
```

Here we basically have a number of functions, derived from functions in the particle and swarm modules, tailored to the given problem. This is done by partial application, in my opinion one of the biggest strengths of functional languages. For example, take `let iterParticleOnProblem = problem |> Particle.itterate`. This takes a function with the signature `OptimizationProblem -> Solution -> Particle -> Particle` and resolves the *OptimizationProblem* parameter, resulting in `Solution -> Particle -> Particle`.
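If partial application is new to you, a minimal standalone sketch of the idea (using a hypothetical two-argument function, not one from the actual code base):

```
// A curried two-argument function: int -> int -> int
let add (a : int) (b : int) = a + b

// Supplying only the first argument yields a new function int -> int.
let addTen = add 10

// addTen 5 evaluates to 15
```

`iterParticleOnProblem` is obtained the same way: `problem |> Particle.itterate` pipes the problem in as the first argument, leaving a function that still expects a solution and a particle.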

The same is afterwards done for...

- applying the solution to a single particle
- updating a particle and...
- immediately updating the swarm using that particle
- updating the complete swarm

Now we can get concrete: we create an actual swarm state and iterate over it a thousand times. After that we simply return the best solution found.
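The iteration driver itself is just a fold over a dummy index list; a stripped-down sketch of that pattern, with an integer state standing in for the swarm:

```
// Repeatedly apply a step function, ignoring the index,
// exactly as solve does with singleIterationOverWholeSwarm.
let step state _ = state + 1

let finalState = Seq.fold step 0 [1 .. 1000]
// finalState evaluates to 1000
```

In `solve`, `step` is `iterationWithIndex` and the initial state is the freshly created swarm; the index list merely fixes the number of iterations.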

This is a really basic implementation, and we will expand on it at some later point.

## Not as social as suspected - looking ahead

When I read the name Particle Swarm Optimizer, pictures of a hive mind formed in my head, but the particles in this algorithm merely share one common state, at least in this simple implementation. My personal goal for this series is to implement, explain, and compare more sophisticated structures for this swarm. The topic of Particle Swarm Optimization is far from settled, so there should be interesting things ahead.