Evolution strategy (SRES) (analysis)
- Analysis title
- Evolution strategy (SRES)
- Provider
- Institute of Systems Biology
- Class
- ru.biosoft.analysis.optimization.methods.SRESOptMethod
- Plugin
- ru.biosoft.analysis.optimization (Common methods of data optimization analysis plug-in)
Stochastic ranking evolution strategy (SRES)1
In the (μ, λ)-ES algorithm, each individual i is a pair of real-valued vectors (xi, σi), ∀ i ∈ {1,...,λ}. The initial population of x is generated according to a uniform n-dimensional probability distribution over the search space S. Let δx be an approximate measure of the expected distance to the global optimum; then the initial setting for the "mean step sizes" should be
σi, j = δxj / √n,  i = 1,...,λ, j = 1,...,n,
where σi, j denotes the jth component of the vector σi. We use these initial values as upper bounds on σ.
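As a rough illustration (not the ru.biosoft plug-in code), the following Python sketch sets up the initial population and step sizes under the assumption that the search space S is a box given by lower and upper parameter bounds and that δxj is taken as the width of the jth search interval:

```python
import numpy as np

def initialize(lower, upper, lam, rng=None):
    """Initialize lam individuals uniformly over the search space S and set the
    initial "mean step sizes" to delta_x_j / sqrt(n) for each component j.

    lower, upper -- arrays of length n with the parameter bounds (assumed here
    to define both S and the expected distance delta_x to the optimum).
    """
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = lower.size
    x = rng.uniform(lower, upper, size=(lam, n))        # uniform over S
    delta_x = upper - lower                             # proxy for expected distance
    sigma = np.tile(delta_x / np.sqrt(n), (lam, 1))     # initial mean step sizes
    sigma_max = delta_x / np.sqrt(n)                    # also the upper bounds on sigma
    return x, sigma, sigma_max
```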
A bubble-sort-like stochastic ranking procedure is used to rank the individuals in a population, and the best (highest-ranked) μ individuals out of λ are selected for the next generation. The truncation level is set at μ / λ ≈ 1/7.
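The ranking procedure itself is detailed in the cited paper; the sketch below follows that description, treating the comparison probability p_f (not stated on this page; 0.45 in the paper) as an assumed parameter, with f and phi as hypothetical arrays of objective and penalty (constraint violation) values:

```python
import numpy as np

def stochastic_rank(f, phi, p_f=0.45, rng=None):
    """Bubble-sort-like stochastic ranking of lambda individuals.

    When both individuals are feasible, or with probability p_f otherwise,
    neighbours are compared by objective value f; otherwise by penalty phi.
    Returns the population indices sorted from best to worst.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = len(f)
    idx = list(range(lam))
    for _ in range(lam):                        # at most lambda sweeps
        swapped = False
        for j in range(lam - 1):
            a, b = idx[j], idx[j + 1]
            both_feasible = phi[a] == 0 and phi[b] == 0
            if both_feasible or rng.random() < p_f:
                worse = f[a] > f[b]             # compare by objective
            else:
                worse = phi[a] > phi[b]         # compare by penalty
            if worse:
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx

# Truncation selection: keep the best mu of lambda (mu / lambda ~ 1/7), e.g.
# ranked = stochastic_rank(f, phi); parents = [population[i] for i in ranked[:mu]]
```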
Variation of strategy parameters is performed before the modification of objective variables. We generate λ new strategy parameters from the μ old ones so that they can later be used to generate λ offspring. The "mean step sizes" are updated according to the log-normal update rule:
σ′h, j = σi, j · exp(τ′·N(0,1) + τ·Nj(0,1)),  i = 1,...,μ, h = 1,...,λ, j = 1,...,n, (1)
where N(0,1) is a normally distributed one-dimensional random variable with an expectation of 0 and variance 1, Nj(0,1) indicates that the variable is sampled anew for each component j, and k ∈ {1,...,μ} is an index generated at random and anew for each j, used when recombining the strategy parameters. The "learning rates" τ and τ′ are set equal to (4n)−¼ and (2n)−½, respectively. Recombination is performed on the self-adaptive parameters before applying the update rule given by (1).
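One possible reading of this step in Python, assuming (as in the cited paper) that the recombination of strategy parameters is intermediate averaging with the randomly drawn index k, and that the initial step sizes act as the upper bounds on σ mentioned above:

```python
import numpy as np

def vary_step_sizes(sigma_parents, lam, sigma_max, rng=None):
    """Generate lam offspring step-size vectors from mu parent vectors.

    Intermediate recombination with a random parent index k (drawn anew for
    each component j) is applied first, then the log-normal update (1).
    sigma_max keeps the initial values as upper bounds on sigma.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu, n = sigma_parents.shape
    tau = (4.0 * n) ** -0.25        # "learning rate" tau  = (4n)^(-1/4)
    tau_p = (2.0 * n) ** -0.5       # "learning rate" tau' = (2n)^(-1/2)
    offspring = np.empty((lam, n))
    for h in range(lam):
        i = h % mu                                   # each parent makes ~lam/mu offspring
        k = rng.integers(0, mu, size=n)              # random index k, anew for each j
        recombined = 0.5 * (sigma_parents[i] + sigma_parents[k, np.arange(n)])
        global_step = tau_p * rng.standard_normal()  # tau' * N(0,1), one draw per offspring
        offspring[h] = recombined * np.exp(global_step + tau * rng.standard_normal(n))
    return np.minimum(offspring, sigma_max)
```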
Having varied the strategy parameters, each individual (xi, σi), ∀ i ∈ {1,...,μ}, creates λ/μ offspring on average, so that a total of λ offspring are generated:
x′h, j = xi, j + σ′h, j·Nj(0,1),  h = 1,...,λ, j = 1,...,n.
Recombination is not used in the variation of objective variables. When an offspring is generated outside the parametric bounds defined by the problem, the mutation (variation) of the objective variable is retried until the variable is within its bounds. To save computation time, the mutation is retried only ten times and then abandoned, leaving the objective variable in its original state within the parameter bounds.
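The bounded-retry mutation could look like the following sketch, where max_retries, lower and upper are hypothetical names for the retry limit and the parametric bounds:

```python
import numpy as np

def mutate_within_bounds(x_parent, sigma_child, lower, upper, max_retries=10, rng=None):
    """Create one offspring's objective variables componentwise.

    Each component is mutated by adding sigma'_{h,j} * N(0,1); if the result
    falls outside its bounds, the mutation is retried up to ten times and
    otherwise the component keeps its (in-bounds) parent value.
    """
    rng = np.random.default_rng() if rng is None else rng
    child = x_parent.copy()
    for j in range(x_parent.size):
        for _ in range(max_retries):
            trial = x_parent[j] + sigma_child[j] * rng.standard_normal()
            if lower[j] <= trial <= upper[j]:
                child[j] = trial
                break
        # if no in-bounds trial was found, child[j] keeps the parent value
    return child
```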
References
- T. P. Runarsson and X. Yao, "Stochastic Ranking for Constrained Evolutionary Optimization", IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 284-294, Sept. 2000.