Some Simulation Experiments
with Global Optimization

Here we present some of our experiments with different methods of global optimization.

Download

Differential Evolution, Repulsive Particle Swarm and Barter Algorithm FORTRAN Programs
Fitting a Logarithmic Spiral to Empirical Data with Displaced Origin

Logarithmic spirals are abundantly observed in nature. Gastropods/cephalopods (such as the nautilus, cowrie, grove snail, thatcheria, etc.) in the phylum Mollusca have spiral shells, many of which vividly exhibit logarithmic spirals. Spider webs show a similar pattern. The low-pressure area over Iceland and the Whirlpool Galaxy resemble logarithmic spirals. Many materials develop spiral cracks, either due to imposed torsion (twist), as in the spiral fracture of the tibia, or due to geometric constraints, as in the fracture of pipes. Spiral cracks may, however, arise in situations where no obvious twisting is applied; the symmetry is broken spontaneously. It has been found that the rank-size pattern of the cities of the USA approximately follows a logarithmic spiral.
 
The usual procedure of curve-fitting fails miserably in fitting a spiral to empirical data. The difficulties are greatly intensified when the observed points z = (x, y) are not measured from the origin (0, 0) but are shifted away from it by (cx, cy). In this paper we devise a method to fit a logarithmic spiral to empirical data measured with a displaced origin. The method is also tested on numerical data.
 
It appears that our method is successful in estimating the parameters of a logarithmic spiral. However, it may be noted that the range specification is very important: we have observed that the larger the guessed range in which the shift parameters lie, the less efficient the method becomes. Moreover, the estimated values of the parameters of the logarithmic spiral (a and b in r = a*exp(b*(theta + 2*pi*k))) are highly sensitive to the precision with which the shift parameters (cx and cy) are estimated.
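To make the estimation problem concrete, the following minimal Fortran sketch evaluates the sum of squared radial residuals that a global optimizer would minimize over (a, b, cx, cy). It illustrates the objective only and is not the distributed program; the winding index k is fixed at 0 here, and all names are ours.

    ! Least-squares objective for fitting r = a*exp(b*theta) to points (x, y)
    ! measured from an origin displaced by (cx, cy). Illustrative sketch only:
    ! the winding index k in r = a*exp(b*(theta + 2*pi*k)) is taken as 0, so
    ! multi-turn data would need k to be recovered as well.
    function spiral_sse(a, b, cx, cy, x, y, n) result(sse)
      implicit none
      integer, intent(in) :: n
      double precision, intent(in) :: a, b, cx, cy, x(n), y(n)
      double precision :: sse, r, theta, rhat
      integer :: i
      sse = 0.0d0
      do i = 1, n
         r     = sqrt((x(i)-cx)**2 + (y(i)-cy)**2)   ! observed radius about the shifted origin
         theta = atan2(y(i)-cy, x(i)-cx)             ! observed angle in (-pi, pi]
         rhat  = a*exp(b*theta)                      ! radius predicted by the spiral
         sse   = sse + (r - rhat)**2                 ! accumulate the squared residual
      end do
    end function spiral_sse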

Download

Experiments on Estimation of the Parameters of Gielis Super-formula by Simulated Annealing Method of Optimization

In this paper an attempt has been made to estimate the parameters of the Gielis superformula (modified by various functions). Simulated data have been used for this purpose. The estimation has been done by the method of simulated annealing. It has been found that the simulated annealing method is quite successful in fitting the modified Gielis curves to observed data and, on the problem at hand, it performs better than the Genetic Algorithm (Download the paper). Our experiments have shown that it also performs better than the Generalized Simulated Annealing method of Tsallis and Stariolo (Download the paper: A Comparative Study on Fitting of Gielis Curves by Classical versus Generalized Simulated Annealing Methods). We have also found that the (Repulsive) Particle Swarm method of global optimization is more or less comparable to the simulated annealing method (Download the paper: Some Experiments on Fitting of Gielis Curves by Simulated Annealing and Particle Swarm Methods of Global Optimization). An attempt has also been made to estimate the parameters of the Chacon-Gielis superformula, which is a generalization of the original Gielis superformula; Chacon suggested using Jacobian elliptic functions in place of the trigonometric functions in the superformula (see Least Squares Fitting of Chacon-Gielis Curves by the Particle Swarm Method of Optimization). However, the lack of empirical uniqueness of the Gielis parameters has been corroborated. Consequently, it is quite unlikely that one can estimate the true parameters of the Gielis superformula, more so when it is modified by an unknown function, or seek a scientific explanation behind them.
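For reference, the (unmodified) Gielis superformula gives the radius of a supershape as a function of the polar angle. A minimal Fortran sketch follows; the parameter names (a, b, m, n1, n2, n3) follow the standard formulation and are not necessarily the notation of the paper.

    ! r(theta) = ( |cos(m*theta/4)/a|**n2 + |sin(m*theta/4)/b|**n3 )**(-1/n1)
    function gielis_r(theta, a, b, m, n1, n2, n3) result(r)
      implicit none
      double precision, intent(in) :: theta, a, b, m, n1, n2, n3
      double precision :: r, t1, t2
      t1 = abs(cos(m*theta/4.0d0)/a)**n2   ! first symmetry term
      t2 = abs(sin(m*theta/4.0d0)/b)**n3   ! second symmetry term
      r  = (t1 + t2)**(-1.0d0/n1)          ! radius of the supershape at angle theta
    end function gielis_r

The fitting exercise searches the (a, b, m, n1, n2, n3) space, by simulated annealing or particle swarm, so as to minimize the discrepancy between this radius and the observed one.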
Keywords: Gielis super-formula, supershapes, simulated annealing, nonlinear programming, multiple sub-optima, global and local optima, genetic algorithm, MJ Box algorithm, Nelder-Mead, fit, data, empirical, estimation, parameters, curve fitting, particle swarm, elliptic function

Download

Global Optimization by Particle Swarm Method: A Fortran Program

Programs that work very well in optimizing convex functions very often perform poorly when the problem has multiple local minima or maxima; they are often caught or trapped in these local optima. Several methods have been developed to escape such traps, and the Particle Swarm method of global optimization is one such method.
 
A swarm of birds or insects, or a school of fish, searches for food, protection, etc. in a very characteristic manner. If one member of the swarm finds a desirable path, the rest of the swarm follows quickly. Every member of the swarm searches for the best in its locality and learns from its own experience. Additionally, each member learns from the others, typically from the best performer among them. Even human beings show a tendency to learn from their own experience, their immediate neighbours and the ideal performers. The Particle Swarm method of optimization mimics this behaviour. Every individual of the swarm is considered as a particle in a multidimensional space that has a position and a velocity. These particles fly through hyperspace and remember the best position that they have seen. Members of a swarm communicate good positions to each other and adjust their own position and velocity based on these good positions.
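The canonical update that implements this learning, for one particle, is sketched below in Fortran. It illustrates the general scheme rather than the distributed program; the repulsive variant used in our program adds a repulsion term not shown here, and the weights w, c1, c2 are assumptions.

    ! Canonical particle swarm update for one particle of dimension m.
    ! Illustrative sketch: the Repulsive Particle Swarm adds a repulsion
    ! term to the velocity rule that is omitted here.
    subroutine pso_update(x, v, pbest, gbest, m, w, c1, c2)
      implicit none
      integer, intent(in) :: m                       ! dimension of the search space
      double precision, intent(inout) :: x(m), v(m)  ! position and velocity of the particle
      double precision, intent(in) :: pbest(m)       ! best position this particle has seen
      double precision, intent(in) :: gbest(m)       ! best position seen by the swarm
      double precision, intent(in) :: w, c1, c2      ! inertia weight and learning rates
      double precision :: r1(m), r2(m)
      call random_number(r1)                         ! uniform (0,1) draws
      call random_number(r2)
      v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)  ! learn from self and from the best
      x = x + v                                        ! fly to the new position
    end subroutine pso_update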
 
The Particle Swarm method of optimization testifies to the success of bounded rationality and decentralized decision-making in reaching the global optimum. It has been used successfully to optimize extremely difficult multimodal functions. Here we give a FORTRAN program to find the global optimum by the Repulsive Particle Swarm method.
 
The FORTRAN computer program for the particle swarm method has been tested on some 60 test functions, some newly introduced and others well established as benchmark functions in the extant literature. See the papers: Performance of Repulsive Particle Swarm Method in Global Optimization of Some Important Test Functions: A Fortran Program (Download), Some New Test Functions for Global Optimization and Performance of Repulsive Particle Swarm Method (Download) and Repulsive Particle Swarm Method on Some Difficult Test Problems of Global Optimization (Download).

Download

Global Optimization by Differential Evolution and Particle Swarm Methods: Evaluation on Some Benchmark Functions

In this paper we compare the performance of the Differential Evolution (DE) and the Repulsive Particle Swarm (RPS) methods of global optimization. To this end, seventy test functions have been chosen. Among these test functions, some are new while others are well known in the literature; some are unimodal, the others multimodal; some are small in dimension (the number of variables, x in f(x)) while the others are large in dimension; some are algebraic polynomial equations while the others are transcendental, etc. FORTRAN programs for DE and RPS have been appended.
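For concreteness, the classic DE/rand/1/bin step that creates one trial vector is sketched below in Fortran; the appended program may use other crossover and mutation schemes, and the names, F and CR here are illustrative assumptions.

    ! DE/rand/1/bin: build a trial vector for target member i from three
    ! other randomly chosen members of the population. Illustrative sketch.
    subroutine de_trial(pop, np, m, i, f, cr, trial)
      implicit none
      integer, intent(in) :: np, m, i            ! population size, dimension, target index
      double precision, intent(in) :: pop(np,m)  ! current population
      double precision, intent(in) :: f, cr      ! scale factor F and crossover rate CR
      double precision, intent(out) :: trial(m)
      integer :: r1, r2, r3, j, jrand
      double precision :: u
      do                                          ! pick three distinct members, none equal to i
         call random_number(u); r1 = 1 + int(u*np)
         call random_number(u); r2 = 1 + int(u*np)
         call random_number(u); r3 = 1 + int(u*np)
         if (r1 /= r2 .and. r2 /= r3 .and. r1 /= r3 .and. &
             r1 /= i .and. r2 /= i .and. r3 /= i) exit
      end do
      call random_number(u); jrand = 1 + int(u*m) ! one coordinate always comes from the mutant
      do j = 1, m
         call random_number(u)
         if (u < cr .or. j == jrand) then
            trial(j) = pop(r3,j) + f*(pop(r1,j) - pop(r2,j))  ! mutant coordinate
         else
            trial(j) = pop(i,j)                               ! inherited from the target
         end if
      end do
    end subroutine de_trial

The caller then keeps the trial vector only if it improves the objective value of member i; that greedy selection completes one DE step.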
 
Among the 70 functions, a few have been run for small as well as large dimensions, so that 73 optimization exercises have been done in total. DE has succeeded in 65 cases while RPS has succeeded in 54 cases. In almost all cases, DE has converged faster and given much more accurate results. The convergence of RPS is much slower, even under less stringent accuracy requirements. Some test functions have been hard for both methods: the Zero-Sum (30D), Perm#1, Perm#2, Power-sum and Bukin functions.
 
From what we find, one cannot reach the definite conclusion that DE performs better or worse than RPS in all circumstances; neither can claim supremacy over the other. Each faltered in some cases and succeeded in others, although DE was unquestionably faster, more accurate and more frequently successful than RPS on this test bed. It may be argued, nevertheless, that alternative choices of the adjustable parameters could have yielded better results for either method, and the protagonists of either method could suggest as much. Our purpose is not to side with one or the other. We simply want to highlight that in certain cases both succeed, in certain other cases both fail, and each has some selective preference for particular types of surfaces. What is needed is to identify the structures and surfaces that suit a particular method best, and to find criteria to classify the problems that suit (or do not suit) a particular method. Such a classification would highlight the comparative advantage of using a particular method on a particular class of problems.
 
Keywords: Global optimization, Stochastic search, Repulsive particle swarm, Differential Evolution, Clustering algorithm, Simulated annealing, Genetic algorithm, Tabu search, Ant Colony algorithm, Monte Carlo method, Box algorithm, Nelder-Mead, Nonlinear programming, FORTRAN computer program, local optima, Benchmark, test functions.
 
Download

The Barter Method: A New Heuristic for Global Optimization and its Comparison with the Particle Swarm and the Differential Evolution Methods

The objective of this paper is to introduce a new population-based (stochastic) heuristic to search the global optimum of a (continuous) multi-modal function and to assess its performance (on a fairly large number of benchmark functions) vis-à-vis that of two other well-established and very powerful methods, namely, the Particle Swarm (PS) and the Differential Evolution (DE) methods of global optimization. We will call this new method the Barter Method of global optimization.
 
This method is based on the well-known proposition in welfare economics that competitive equilibria, under fairly general conditions, tend to be Pareto optimal. In its simplest version, implementation of this proposition may be outlined as follows:
 
Let there be a fairly large number of individuals in a population, and let each individual own (or draw from the environment) an m-element real vector of resources, xi = (xi1, xi2, ..., xim). For every xi there is a (single-valued) function f(x) that may be used as a measure of the worth of xi, which the individual would like to optimize. The optimand function f(.) is unique and common to all the individuals. Now, let the individuals in the (given) population enter into a barter of their resources, with the conditions that (i) a transaction is feasible across different persons and different resources only, and (ii) the resources change hands (the transaction materializes) only if it is beneficial to (more desired by) both parties in the barter. The choice of the individuals (i, k) and the resources (j, l) in every transaction, as well as the quantum of the transaction, is stochastic in nature. If such transactions are allowed a large number of times, then at the end of the session: (a) every individual will be better off than at the initial position, and (b) at least one individual will reach the global optimum.
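A minimal Fortran sketch of one such transaction, under our reading of the outline above, follows. The particular exchange rule, the assumption that f(.) is to be maximized, and all names are illustrative; this is not the distributed program.

    ! One barter transaction: individuals i and k exchange a random quantum
    ! of resources j and l, and the deal stands only if it raises the worth
    ! f(.) of both parties (maximization assumed). Illustrative sketch only.
    subroutine barter_step(x, np, m, f)
      implicit none
      integer, intent(in) :: np, m               ! population size and resource dimension
      double precision, intent(inout) :: x(np,m) ! resource vectors of the population
      interface
         function f(v, m) result(w)              ! common worth (optimand) function
           integer, intent(in) :: m
           double precision, intent(in) :: v(m)
           double precision :: w
         end function f
      end interface
      integer :: i, k, j, l
      double precision :: u, q, xi(m), xk(m), dj, dl
      call random_number(u); i = 1 + int(u*np)
      do                                          ! a transaction crosses different persons...
         call random_number(u); k = 1 + int(u*np)
         if (k /= i) exit
      end do
      call random_number(u); j = 1 + int(u*m)
      do                                          ! ...and different resources only
         call random_number(u); l = 1 + int(u*m)
         if (l /= j) exit
      end do
      call random_number(q)                       ! stochastic quantum of the exchange
      xi = x(i,:);  xk = x(k,:)
      dj = xi(j);  dl = xk(l)                     ! holdings before the exchange
      xi(j) = dj + q*(dl - dj)                    ! tentative exchange of resource j ...
      xk(l) = dl + q*(dj - dl)                    ! ... against resource l
      if (f(xi,m) > f(x(i,:),m) .and. f(xk,m) > f(x(k,:),m)) then
         x(i,:) = xi                              ! both parties gain: the deal materializes
         x(k,:) = xk
      end if
    end subroutine barter_step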
 
We have used 75 test functions. The DE succeeds in 70 cases, the RPS succeeds in 60 cases, while the Barter method succeeds in a modest 52 cases. Both the DE and the Barter method are unstable for stochastic functions (the Yao-Liu#7 and Fletcher-Powell functions). In eight cases the Barter method could not converge within 10000 iterations (owing to its slow convergence rate), while in 4 cases the MRPS could not converge. Seen thus, the Barter method is inferior to the other two methods; additionally, its convergence rate is slower than those of the DE and the MRPS. However, the DE and the RPS have a history of a full decade behind them, and they have been improved many times. In the present exercise, the RPS is a modified version (MRPS) that has an extra ability for local search, and the DE version used here employs the latest (available) schemes of crossover, mutation and recombination. In comparison, the Barter method is a nascent one. We need a thorough investigation into the nature and performance of the Barter method. For a comparison of the Barter method with the Simulated Annealing method, download the paper Performance of the Barter, the Differential Evolution and the Simulated Annealing Methods of Global Optimization on Some New and Some Old Test Functions.
 
Download