Particle Swarm Optimizer - An Introduction
This article is part of a series:
Introduction
Since I started programming, and especially after I started working as a developer, I have been eager to work on actual problems while trying out new technologies. One recurring algorithm I tend to implement is the Particle Swarm Optimizer (from now on just called PSO).
Many Moving Parts - The Algorithm
The main concept behind this optimization algorithm is a number of particles that are let loose on a numerical problem. Friendly as these particles are with each other, they attract one another and share common knowledge about the best solutions to the problem they are trying to solve. With the information from their own search and the rumors heard from the other particles in the swarm, the particles move through the problem space.
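This pull toward a particle's own best find and the swarm's best find boils down to the canonical velocity update rule. Here is a minimal sketch of a single dimension of that update; the coefficient names (w, c1, c2) are the usual ones from the literature, not from this article's model:

```fsharp
// Canonical PSO update for one dimension - a sketch, not the article's
// implementation. w = inertia, c1 = cognitive (own best) weight,
// c2 = social (swarm best) weight.
let rnd = System.Random ()

let step w c1 c2 (position : float) velocity localBest globalBest =
    // r1 and r2 add the random jitter that keeps the swarm exploring.
    let r1, r2 = rnd.NextDouble (), rnd.NextDouble ()
    let velocity' =
        w * velocity
        + c1 * r1 * (localBest - position)
        + c2 * r2 * (globalBest - position)
    position + velocity', velocity'

// A particle at 2.0 whose best known positions are both at 0.0 is
// pulled back toward the origin.
let newPosition, newVelocity = step 0.7 1.5 1.5 2.0 0.1 0.0 0.0
```

Both correction terms point from the current position toward a best known position, which is exactly the gravity-like attraction described above.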
This concept appealed to me because I found that it is quick to implement, easy to test, and has potential to be parallelized or tinkered with in some other way.
I will go over the details of the algorithm again, but for now everybody interested in the inner workings should go to the Wikipedia page and look it up.
One very interesting aspect of this algorithm is that it doesn't rely on a well-behaved function, as it doesn't work with e.g. the gradient of the function that's being looked at.
The other advantage, for me at least, is that the results and the way to get them are easily visualized.
F# - The current language of choice
My main focus in recent years, language-wise, has been C#. For some time now, though, I have tried to get a better understanding of functional programming concepts, in contrast to the OOP one gets used to with C#. After looking into things like Elixir and JavaScript I settled on F# as the most convenient choice.
First things first - The Model
When you try to learn about F#, you're going to be told that it is very good for following a DDD (Domain Driven Design) way of building your program. So let's do that:
type Optimizer = OptimizationProblem -> Solution
This seems obvious, and it's not special to this algorithm; it's just the general definition of an optimization-machine-like thingy. Let's work our way down from this beginning.
type OptimizationProblem = {
    Function : TargetFunction
    SearchSpace : float * float
    Dimension : int
}
Still nothing that actually touches on the special nature of the PSO. An optimization problem is made up of an actual function, which has a defined dimension and whose input parameters typically lie in an interval of values (the search space).
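To make this concrete, here is a sketch of one problem instance. The TargetFunction alias is not shown in the article, so I am assuming it maps a parameter set to a fitness value; the sphere function is just a classic, easy-to-minimize example:

```fsharp
// Assumed alias: a target function maps parameters to a fitness value.
type TargetFunction = float array -> float

type OptimizationProblem = {
    Function : TargetFunction
    SearchSpace : float * float
    Dimension : int
}

// The classic sphere function - its minimum (0.0) sits at the origin.
let sphere : TargetFunction = Array.sumBy (fun x -> x * x)

// A three-dimensional problem over the cube [-10, 10]^3.
let problem = {
    Function = sphere
    SearchSpace = -10.0, 10.0
    Dimension = 3
}
```

Note how the record says nothing about how the optimization is carried out; it only describes what is being optimized.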
type Solution = ParameterSet * Fitnesse
type ParameterSet = float array
type Fitnesse = float
These are just the underlying parts of the model we are working with here. A solution is a certain parameter set and its resulting function value. A parameter set is a set of numbers, and the fitness of a solution is a single number. Nothing really to see here. Now we come to the interesting part.
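First, though, these aliases can be exercised directly. A small sketch of turning a parameter set into a Solution; the evaluate helper is my own name, not part of the article's model:

```fsharp
// The article's aliases, restated so this sketch is self-contained.
type ParameterSet = float array
type Fitnesse = float
type Solution = ParameterSet * Fitnesse

// Hypothetical helper: pair a parameter set with its function value.
let evaluate (f : ParameterSet -> Fitnesse) (parameters : ParameterSet) : Solution =
    parameters, f parameters

// Evaluating the sum of absolute values at (-1, 2, -3).
let solution = evaluate (Array.sumBy abs) [| -1.0; 2.0; -3.0 |]
```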
type Particle = {
    Position : ParameterSet
    Velocity : float array
    LocalBest : Solution
}
Every particle has a position in the search space. This position is bound to change over the course of an optimization.
To move, a particle has to have a velocity associated with it. This velocity changes because the particles talk to each other and remember their own best known state.
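A particle needs to start somewhere. A common choice - and an assumption on my part, not something the article prescribes - is a random position inside the search space, zero velocity, and the starting point as the first local best:

```fsharp
// Self-contained restatement of the model for this sketch.
type ParameterSet = float array
type Solution = ParameterSet * float

type Particle = {
    Position : ParameterSet
    Velocity : float array
    LocalBest : Solution
}

let rnd = System.Random ()

// Assumed initialization scheme: uniform position, particle at rest.
let createParticle (f : ParameterSet -> float) (low, high) dimension =
    let position =
        Array.init dimension (fun _ -> low + rnd.NextDouble () * (high - low))
    { Position = position
      Velocity = Array.zeroCreate dimension     // start at rest
      LocalBest = (position, f position) }      // best seen so far: the start

let p = createParticle (Array.sumBy (fun x -> x * x)) (-10.0, 10.0) 2
```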
type Swarm = {
    GlobalBest : Solution
    Particles : Particle list
}
The swarm itself just consists of the current population of particles and their shared, and therefore globally best, solution to the optimization problem at hand.
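The global best falls straight out of the local bests. A sketch of building a swarm from already-initialized particles, assuming we are minimizing; createSwarm and mkParticle are my own helpers, not the article's:

```fsharp
// Self-contained restatement of the model for this sketch.
type ParameterSet = float array
type Solution = ParameterSet * float

type Particle = {
    Position : ParameterSet
    Velocity : float array
    LocalBest : Solution
}

type Swarm = {
    GlobalBest : Solution
    Particles : Particle list
}

// The global best is simply the fittest of all local bests
// (minimization assumed: smaller fitness is better).
let createSwarm (particles : Particle list) =
    { GlobalBest = particles |> List.map (fun p -> p.LocalBest) |> List.minBy snd
      Particles = particles }

// Hypothetical helper to build a particle by hand for the example.
let mkParticle pos fitness =
    { Position = pos
      Velocity = Array.zeroCreate (Array.length pos)
      LocalBest = (pos, fitness) }

let swarm = createSwarm [ mkParticle [| 1.0 |] 1.0; mkParticle [| 0.2 |] 0.04 ]
```

Keeping the global best in the record, instead of recomputing it, means each update step only has to compare new solutions against one cached value.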
Wrap Up
We have now looked at a simple model which we can use to implement a working Particle Swarm Optimizer. In the next article we will look into the behavior of the particles and the gravity-like attraction they have to each other.