Differential evolution

In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found.

DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.[1]

DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution and the gradient is therefore not needed.
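
For illustration, the only interface DE needs from the problem is such a measure of quality. A minimal sketch in Java, using the sphere function as an assumed example objective (this particular objective is not part of the article), could look as follows:

//A black-box objective as seen by DE: it maps a candidate solution to a quality score.
//The sphere function below is only an assumed example; any computable measure of quality will do.
public class SphereObjective {
	//cost of a candidate solution (lower is better when minimizing)
	public static double cost(double[] candidate) {
		double sum = 0.0;
		for (double x : candidate) {
			sum += x * x;
		}
		return sum;
	}
}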

DE is originally due to Storn and Price.[2][3] Books have been published on theoretical and practical aspects of using DE in parallel computing, multiobjective optimization and constrained optimization; these books also contain surveys of application areas.[4][5][6][7] Surveys of the multi-faceted research on DE can be found in journal articles.[8][9]

Algorithm

A basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population. If the new position of an agent is an improvement it is accepted and forms part of the population, otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.

Formally, let f : R^n → R be the cost function which must be minimized, or the fitness function which must be maximized. The function takes a candidate solution as argument in the form of a vector of real numbers and produces a real number as output which indicates the fitness of the given candidate solution. The gradient of f is not known. The goal is to find a solution m for which f(m) ≤ f(p) for all p in the search-space, which would mean m is the global minimum. Maximization can be performed by considering the function h := −f instead.

Let x ∈ R^n designate a candidate solution (agent) in the population, and let CR ∈ [0,1] denote the crossover rate. The basic DE algorithm can then be described as follows:

  1. Choose the parameters: the population size NP ≥ 4, the crossover probability CR ∈ [0,1] and the differential weight F ∈ [0,2].
  2. Initialize all agents x with random positions in the search-space.
  3. Until a termination criterion is met (e.g. a maximum number of iterations or an adequate fitness value), repeat the following for each agent x in the population:
     - Pick three agents a, b and c from the population at random; they must be distinct from each other as well as from x.
     - Pick a random index R ∈ {1, ..., n}, where n is the dimensionality of the problem.
     - Compute the agent's potentially new position y = (y1, ..., yn): for each index i, pick a uniformly distributed random number ri ∈ [0,1); if ri < CR or i = R, set yi = ai + F·(bi − ci), otherwise set yi = xi.
     - If f(y) < f(x), replace agent x in the population with the candidate y.
  4. Pick the agent from the population that has the best fitness and return it as the best found candidate solution.

Note that F ∈ [0,2] is called the differential weight and CR ∈ [0,1] is called the crossover probability; both of these parameters are selectable by the practitioner, along with the population size NP (see below).
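
To make these parameters concrete, the core mutation-and-crossover step for a single agent can be sketched in Java as below; the class and method names, the array-based representation and the caller-supplied agents a, b and c are illustrative assumptions rather than part of the original description.

//One DE/rand/1/bin step for a single agent x, using three distinct agents a, b and c.
//F is the differential weight, CR the crossover probability; the random index R guarantees
//that at least one component is taken from the mutant vector.
public class DEStep {
	static double[] mutateAndCrossover(double[] x, double[] a, double[] b, double[] c,
	                                   double F, double CR, java.util.Random rng) {
		int n = x.length;
		double[] y = x.clone();
		int R = rng.nextInt(n);
		for (int i = 0; i < n; i++) {
			if (i == R || rng.nextDouble() < CR) {
				y[i] = a[i] + F * (b[i] - c[i]);
			}
		}
		return y;
	}
}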

Parameter selection

Performance landscape showing how the basic DE performs in aggregate on the Sphere and Rosenbrock benchmark problems when varying the two DE parameters NP and F, and keeping CR fixed at 0.9.

The choice of the DE parameters F, CR and NP can have a large impact on optimization performance. Selecting the DE parameters that yield good performance has therefore been the subject of much research. Rules of thumb for parameter selection were devised by Storn et al.[3][4] and Liu and Lampinen.[10] Mathematical convergence analysis regarding parameter selection was done by Zaharie.[11] Meta-optimization of the DE parameters was done by Pedersen[12][13] and Zhang et al.[14]
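
As an illustration only, commonly cited rule-of-thumb starting points are a population size of roughly ten times the problem dimensionality, F around 0.8 and CR around 0.9; the exact values in the sketch below are assumptions, and good settings remain problem-dependent.

//Illustrative rule-of-thumb starting values; good settings are problem-dependent and the
//cited studies should be consulted for detailed guidance.
public class DEParameters {
	int dimensionality = 10;                  //n, the number of decision variables (assumed example)
	int populationSize = 10 * dimensionality; //NP: roughly 10*n is a common starting point
	double F = 0.8;                           //differential weight, often chosen in [0.4, 1]
	double CR = 0.9;                          //crossover probability, often set high for non-separable problems
}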

Variants

Variants of the DE algorithm are continually being developed in an effort to improve optimization performance. Many different schemes for performing crossover and mutation of agents are possible in the basic algorithm given above, see e.g.[3] More advanced DE variants are also being developed, a popular research trend being to perturb or adapt the DE parameters during optimization, see e.g. Price et al.,[4] Liu and Lampinen,[15] Qin and Suganthan,[16] Civicioglu[17] and Brest et al.[18] There is also some work on hybrid optimization methods that combine DE with other optimizers.[19]
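
To make the idea of different mutation schemes concrete, the sketch below contrasts the classic DE/rand/1 mutation used above with the DE/best/1 scheme, in which the base vector is the best agent found so far; the class and method names are illustrative and not taken from the cited works.

//Two common DE mutation schemes producing a mutant vector from existing agents.
public class DEMutation {
	//DE/rand/1: mutant = a + F * (b - c), where a, b and c are random distinct agents
	static double[] mutateRand1(double[] a, double[] b, double[] c, double F) {
		double[] v = new double[a.length];
		for (int i = 0; i < a.length; i++) {
			v[i] = a[i] + F * (b[i] - c[i]);
		}
		return v;
	}

	//DE/best/1: mutant = best + F * (b - c), which biases the search toward the current best agent
	static double[] mutateBest1(double[] best, double[] b, double[] c, double F) {
		double[] v = new double[best.length];
		for (int i = 0; i < best.length; i++) {
			v[i] = best[i] + F * (b[i] - c[i]);
		}
		return v;
	}
}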

Sample code

The following is a specific pseudocode implementation of differential evolution, written in a style similar to the Java language. For more generalized pseudocode, see the listing in the Algorithm section above.

import java.util.LinkedList;
import java.util.Random;

//definition of one individual in the population
class Individual {
	//DE normally operates on floating-point variables
	float data1, data2;
	//but using integers is possible too
	int data3;

	//create an independent copy of this individual
	public Individual copy() {
		Individual c = new Individual();
		c.data1 = data1;
		c.data2 = data2;
		c.data3 = data3;
		return c;
	}
}

public class DifferentialEvolution {
	//Variables
	//list that holds the population
	LinkedList<Individual> population = new LinkedList<Individual>();
	//random number generator
	Random random = new Random();
	int populationSize = 20;

	//differential weight, typically chosen in [0,2]
	float F = 1;
	//crossover probability, in [0,1]
	float CR = 0.5f;
	//dimensionality of the problem, i.e. how many variables the problem has; here 3 (data1, data2, data3)
	int N = 3;
	//number of generations to run, used here as the stopping criterion
	int maxGenerations = 1000;

	//This function tells how well a given individual performs on the given problem.
	//Higher values mean better fitness; the body is problem-specific and therefore omitted.
	public float fitnessFunction(Individual in) {
		...
		return fitness;
	}

	//Uniform random value inside the search space. The bounds used here (-10 to 10) are only
	//placeholders and should be replaced by the bounds of the actual problem.
	float uniformNoise() {
		return -10 + 20 * random.nextFloat();
	}

	//main routine of the program: runs the optimization and returns the best individual found
	public Individual run() {
		//Initialize the population with individuals whose variables are drawn uniformly at random
		//from the search space
		int i = 0;
		while (i < populationSize) {
			Individual individual = new Individual();
			individual.data1 = uniformNoise();
			individual.data2 = uniformNoise();
			//integer variables cannot take floating-point values, so they are rounded down
			individual.data3 = (int) Math.floor(uniformNoise());
			population.add(individual);
			i++;
		}

		i = 0;
		int j;
		//main loop of the evolution
		while (i < maxGenerations) {
			i++;
			j = 0;
			while (j < populationSize) {
				//calculate a new candidate solution

				//pick a random agent x from the population
				int x = random.nextInt(population.size());
				int a, b, c;

				//pick three further agents, all distinct from x and from each other
				do {
					a = random.nextInt(population.size());
				} while (a == x);
				do {
					b = random.nextInt(population.size());
				} while (b == x || b == a);
				do {
					c = random.nextInt(population.size());
				} while (c == x || c == a || c == b);

				//pick a random index in [0, N-1]
				int R = random.nextInt(N);

				//compute the agent's new position
				Individual original = population.get(x);
				Individual candidate = original.copy();

				Individual individual1 = population.get(a);
				Individual individual2 = population.get(b);
				Individual individual3 = population.get(c);

				//for each dimension k:
				//  if (k == R || rand() < CR)  candidate[k] = a[k] + F*(b[k] - c[k])
				//  else                        candidate[k] = x[k]
				if (R == 0 || random.nextFloat() < CR) {
					candidate.data1 = individual1.data1 + F * (individual2.data1 - individual3.data1);
				} //no else branch is needed because candidate starts as a copy of original
				if (R == 1 || random.nextFloat() < CR) {
					candidate.data2 = individual1.data2 + F * (individual2.data2 - individual3.data2);
				}
				//integer variables work the same way as floating-point ones but need to be rounded
				if (R == 2 || random.nextFloat() < CR) {
					candidate.data3 = (int) Math.floor(individual1.data3 + F * (individual2.data3 - individual3.data3));
				}

				//if the candidate is better than the original, replace it (higher fitness is better here)
				if (fitnessFunction(original) < fitnessFunction(candidate)) {
					population.set(x, candidate);
				}
				j++;
			}
		}

		//find the best candidate solution in the final population
		Individual best = population.get(0);
		i = 1;
		while (i < populationSize) {
			Individual individual = population.get(i);
			if (fitnessFunction(best) < fitnessFunction(individual)) {
				best = individual;
			}
			i++;
		}

		//your solution
		return best;
	}
}
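
A minimal driver for the class above, assuming the fitnessFunction body has been filled in for a concrete problem, might look like this:

public class Example {
	public static void main(String[] args) {
		DifferentialEvolution de = new DifferentialEvolution();
		Individual best = de.run();
		System.out.println("best: " + best.data1 + ", " + best.data2 + ", " + best.data3);
	}
}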

References

  1. Rocca, P.; Oliveri, G.; Massa, A. (2011). "Differential Evolution as Applied to Electromagnetics". IEEE Antennas and Propagation Magazine. 53 (1): 38-49. doi:10.1109/MAP.2011.5773566.
  2. Storn, R.; Price, K. (1997). "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces". Journal of Global Optimization. 11: 341-359. doi:10.1023/A:1008202821328.
  3. Storn, R. (1996). "On the usage of differential evolution for function optimization". Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS). pp. 519-523.
  4. Price, K.; Storn, R.M.; Lampinen, J.A. (2005). Differential Evolution: A Practical Approach to Global Optimization. Springer. ISBN 978-3-540-20950-8.
  5. Feoktistov, V. (2006). Differential Evolution: In Search of Solutions. Springer. ISBN 978-0-387-36895-5.
  6. G. C. Onwubolu and B V Babu, "New Optimization Techniques in Engineering". Retrieved 17 September 2016.
  7. Chakraborty, U.K., ed. (2008), Advances in Differential Evolution, Springer, ISBN 978-3-540-68827-3
  8. S. Das and P. N. Suganthan, "Differential Evolution: A Survey of the State-of-the-art", IEEE Trans. on Evolutionary Computation, Vol. 15, No. 1, pp. 4-31, Feb. 2011, DOI: 10.1109/TEVC.2010.2059031.
  9. S. Das, S. S. Mullick, P. N. Suganthan, "Recent Advances in Differential Evolution - An Updated Survey," Swarm and Evolutionary Computation, doi:10.1016/j.swevo.2016.01.004, 2016.
  10. Liu, J.; Lampinen, J. (2002). "On setting the control parameter of the differential evolution method". Proceedings of the 8th International Conference on Soft Computing (MENDEL). Brno, Czech Republic. pp. 11-18.
  11. Zaharie, D. (2002). "Critical values for the control parameters of differential evolution algorithms". Proceedings of the 8th International Conference on Soft Computing (MENDEL). Brno, Czech Republic. pp. 62-67.
  12. Pedersen, M.E.H. (2010). Tuning & Simplifying Heuristical Optimization (PDF) (PhD thesis). University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group.
  13. Pedersen, M.E.H. (2010). "Good parameters for differential evolution" (PDF). Technical Report HL1002. Hvass Laboratories.
  14. Zhang, X.; Jiang, X.; Scott, P.J. (2011). "A Minimax Fitting Algorithm for Ultra-Precision Aspheric Surfaces". The 13th International Conference on Metrology and Properties of Engineering Surfaces.
  15. Liu, J.; Lampinen, J. (2005). "A fuzzy adaptive differential evolution algorithm". Soft Computing. 9 (6): 448-462. doi:10.1007/s00500-004-0363-x.
  16. Qin, A.K.; Suganthan, P.N. (2005). "Self-adaptive differential evolution algorithm for numerical optimization". Proceedings of the IEEE congress on evolutionary computation (CEC). pp. 1785-1791.
  17. Civicioglu, P. (2012). "Transforming geocentric cartesian coordinates to geodetic coordinates by using differential search algorithm". Computers & Geosciences. 46: 229-247. doi:10.1016/j.cageo.2011.12.011.
  18. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. (2006). "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark functions". IEEE Transactions on Evolutionary Computation. 10 (6): 646-657. doi:10.1109/tevc.2006.872133.
  19. Zhang, Wen-Jun; Xie, Xiao-Feng (2003). DEPSO: hybrid particle swarm with differential evolution operator. IEEE International Conference on Systems, Man, and Cybernetics (SMCC), Washington, DC, USA: 3816-3821.
