Fitness function

A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims. Fitness functions are used in software architecture and in evolutionary algorithms (EA), such as genetic programming and genetic algorithms, to guide simulations towards optimal design solutions.[1]

Software developers know that they should not release bad code, but that priority competes with many other priorities for busy developers. That is why fitness functions are used to keep the software in check.[2]

In the field of EAs, each design solution is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing, or simulation, the idea is to delete the n worst design solutions, and to breed n new ones from the best design solutions. Each design solution, therefore, needs to be awarded a figure of merit, to indicate how close it came to meeting the overall specification, and this is generated by applying the fitness function to the test, or simulation, results obtained from that solution. [3]
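As a minimal sketch of this loop (assuming a toy "OneMax" problem in which the fitness function simply counts the ones in a bit string; all names and parameters here are illustrative):

```python
import random

def fitness(chromosome):
    """Figure of merit: how close the solution comes to the all-ones target."""
    return sum(chromosome)

def evolve(pop_size=20, length=16, generations=50, n_replace=5):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the population by the fitness of each design solution.
        population.sort(key=fitness, reverse=True)
        # Delete the n worst solutions and breed n new ones from the best.
        parents = population[:pop_size - n_replace]
        offspring = []
        for _ in range(n_replace):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)           # point mutation
            child[i] ^= 1
            offspring.append(child)
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```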

Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases. [4] [5] Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run.

A fitness function does not necessarily have to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases,[6] such as in tournament selection or Pareto optimization.
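A short sketch of tournament selection under this assumption: the hypothetical better(a, b) predicate only reports which of two candidates wins, and no absolute fitness value is ever computed.

```python
import random

def tournament_select(population, better, k=2):
    """Pick k candidates at random and return the one that wins all comparisons."""
    contenders = random.sample(population, k)
    winner = contenders[0]
    for challenger in contenders[1:]:
        if better(challenger, winner):
            winner = challenger
    return winner

# Example: candidates are compared head-to-head, so only "a beats b" is
# ever known; the comparison below is a stand-in for any pairwise test.
population = list(range(10))
beats = lambda a, b: a > b
parent = tournament_select(population, beats, k=3)
```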

Requirements of evaluation and fitness function

The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation. It implements Darwin's principle of "survival of the fittest". Without fitness-based mechanisms for mate selection and offspring acceptance, the EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must be aware that it is about more than just describing the desired target state: the evolutionary search on the way to the optimum should also be supported as much as possible (see also the section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is designed badly, the algorithm will either converge on an inappropriate solution or have difficulty converging at all.

Defining the fitness function is not straightforward in many cases and is often performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, normally humans.

Computational efficiency

The fitness function should not only correlate closely with the designer's goal, but it should also be computationally efficient. Speed of execution is very important, as a typical genetic algorithm must be iterated many times in order to produce a usable result for a non-trivial problem.

Fitness approximation[7][8] may be appropriate, especially in the following cases (see the sketch after this list):

- the fitness computation time of a single solution is extremely high,
- a precise model for the fitness computation is missing, or
- the fitness function is uncertain or noisy.[9]
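A minimal sketch of one form of fitness approximation, assuming an expensive simulation whose results are archived and reused: most queries are answered by the nearest already-simulated candidate. Real surrogates are usually regression models;[7][8] the stand-in simulator and the re-evaluation rate below are illustrative assumptions.

```python
import math
import random

archive = []  # (candidate, true_fitness) pairs from real simulations

def expensive_simulation(x):
    return -sum(xi * xi for xi in x)   # stand-in for a costly simulation

def approximate_fitness(x, exact_every=10):
    # Occasionally (or when the archive is empty) run the real simulation.
    if not archive or random.randrange(exact_every) == 0:
        f = expensive_simulation(x)
        archive.append((x, f))
        return f
    # Otherwise return the fitness of the nearest archived candidate.
    _nearest, f = min(archive, key=lambda rec: math.dist(x, rec[0]))
    return f

print(approximate_fitness([0.5, -1.0]))
```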

Alternatively, or in addition to fitness approximation, the fitness calculations can be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel.[10][11][12]
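A sketch of the second option, assuming a pure and picklable fitness function: the offspring of one generation are evaluated concurrently in worker processes.

```python
from multiprocessing import Pool

def fitness(candidate):
    return sum(candidate)   # placeholder for an expensive evaluation

def evaluate_generation(offspring):
    with Pool() as pool:                      # one worker per CPU core
        return pool.map(fitness, offspring)   # fitness of each child

if __name__ == "__main__":
    offspring = [[1, 0, 1], [0, 0, 1], [1, 1, 1]]
    print(evaluate_generation(offspring))
```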

Multi-objective optimization

Practical applications usually aim at optimizing multiple and at least partially conflicting objectives. Two fundamentally different approaches are often used for this purpose: Pareto optimization and optimization based on fitness calculated as a weighted sum.[13]

Weighted sum and penalty functions

When optimizing with the weighted sum, the individual values of the objectives are first normalized so that they can be compared. This can be done with the help of costs or by specifying target values and determining the current value as the degree of fulfillment. Costs or degrees of fulfillment can then be compared with each other and, if required, can also be mapped to a uniform fitness scale. Without loss of generality, fitness is assumed to represent a value to be maximized. Each of the $O$ objectives $o_i$ is assigned a weight $w_i$ in the form of a percentage value so that the overall raw fitness $f_{raw}$ can be calculated as a weighted sum:

$$f_{raw} = \sum_{i=1}^{O} o_i \cdot w_i \quad \text{with} \quad \sum_{i=1}^{O} w_i = 1$$
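A small sketch of this calculation, with illustrative target values used for normalization to degrees of fulfillment:

```python
def fulfilment(value, target):
    """Map an objective value to a degree of fulfilment in [0, 1]."""
    return max(0.0, min(1.0, value / target))

def raw_fitness(objective_values, targets, weights):
    # Weights are percentage values and must sum to one.
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * fulfilment(v, t)
               for v, t, w in zip(objective_values, targets, weights))

f_raw = raw_fitness([80.0, 30.0], targets=[100.0, 40.0], weights=[0.7, 0.3])
```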

A violation of restrictions can be included in the fitness determined in this way in the form of penalty functions. For this purpose, a function $pf_j$ can be defined for each of the $R$ restrictions which returns a value between $0$ and $1$ depending on the degree of violation, with the result being $1$ if there is no violation. The previously determined raw fitness is multiplied by the penalty function(s) and the result is then the final fitness $f_{final}$:[14]

$$f_{final} = f_{raw} \cdot \prod_{j=1}^{R} pf_j$$
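Continuing the sketch, each restriction contributes a penalty factor between 0 and 1 (1 meaning no violation) that scales the raw fitness; the linear decay below is an illustrative choice.

```python
def penalty(violation, tolerance):
    """Returns 1 if there is no violation, decaying toward 0 as it grows."""
    return max(0.0, 1.0 - violation / tolerance)

def final_fitness(f_raw, violations, tolerances):
    for v, tol in zip(violations, tolerances):
        f_raw *= penalty(v, tol)
    return f_raw
```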

This approach is simple and has the advantage of being able to combine any number of objectives and restrictions. The disadvantage is that different objectives can compensate for one another and that the weights have to be defined before the optimization.[13] In addition, certain solutions may not be obtained; see the section on the comparison of both types of optimization below.

Pareto optimization

A solution is called Pareto-optimal if the improvement of one objective is only possible with a deterioration of at least one other objective. The set of all Pareto-optimal solutions, also called the Pareto set, represents the set of all optimal compromises between the objectives. The figures in the comparison section below show an example of the Pareto set of two objectives $f_1$ and $f_2$ to be maximized; the elements of the set form the Pareto front (green line). From this set, a human decision maker must subsequently select the desired compromise solution.[13] Constraints are included in Pareto optimization in that solutions without constraint violations are per se better than those with violations. If two solutions to be compared each have constraint violations, the respective extent of the violations decides.[15]
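The dominance relation underlying this definition can be stated compactly; the following sketch (for objectives to be maximized) extracts the Pareto set from a list of evaluated objective vectors by brute force:

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b everywhere
    and strictly better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_set(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

points = [(1, 5), (2, 4), (2, 3), (4, 1), (3, 3)]
print(pareto_set(points))
# -> [(1, 5), (2, 4), (4, 1), (3, 3)]; (2, 3) is dominated by (2, 4)
```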

It was recognized early on that EAs, which consider a whole set of solutions simultaneously, are well suited to finding solutions in one run that cover the Pareto front sufficiently well.[15][16] Besides SPEA2,[17] NSGA-II[18] and NSGA-III[19][20] have established themselves as standard methods.

The advantage of Pareto optimization is that, in contrast to the weighted sum, it provides all alternatives that are equivalent in terms of the objectives as an overall solution. The disadvantage is that a visualization of the alternatives becomes problematic or even impossible with four or more objectives. Furthermore, the effort increases exponentially with the number of objectives.[14] If there are more than three or four objectives, some have to be combined using the weighted sum or other aggregation methods.[13]

Comparison of both types of assessment

[Figure] Relationship between the Pareto front and the weighted sum: the set of feasible solutions $Z$ is partially bounded by the Pareto front (green).
[Figure] Example of a non-convex Pareto front.

With the help of the weighted sum, the entire Pareto front can be obtained by a suitable choice of weights, provided that it is convex.[21] This is illustrated by the first figure above: a point on the green Pareto front is reached by a suitable pair of weights $w_1$ and $w_2$, provided that the EA converges to the optimum. The direction with the largest fitness gain in the solution set is shown by the drawn arrows.

In the case of a non-convex front, however, the non-convex front sections are not reachable by the weighted sum. In the second figure above, this is the section between the two marked points. This can be remedied to a limited extent by using an extension of the weighted sum, the cascaded weighted sum.[14]
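This reachability limit can be checked numerically. In the sketch below, all five candidate points are Pareto-optimal ($f_1$ increases while $f_2$ decreases), but the middle three lie in a non-convex dent and are never selected by any weight combination; the point coordinates are illustrative.

```python
candidates = [(0.0, 1.0), (0.4, 0.45), (0.5, 0.4), (0.6, 0.35), (1.0, 0.0)]

reachable = set()
for i in range(101):
    w1 = i / 100   # sweep the weight of the first objective from 0 to 1
    best = max(candidates, key=lambda p: w1 * p[0] + (1 - w1) * p[1])
    reachable.add(best)

print(sorted(reachable))   # only the extremes: [(0.0, 1.0), (1.0, 0.0)]
```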

Comparing both assessment approaches, the use of Pareto optimization is certainly advantageous when little is known about the possible solutions of a task and when the number of optimization objectives can be narrowed down to three, at most four. However, in the case of repeated optimization of variations of one and the same task, the desired lines of compromise are usually known and the effort to determine the entire Pareto front is no longer justified. This is also true when no human decision is desired or possible after optimization, such as in automated decision processes. [14]

Auxiliary objectives

[Figure] Example of two schedules of an order consisting of the five work steps a to e that should meet a latest completion time.

In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. A scheduling task is used for illustration. The optimization goals include not only the generally fast processing of all orders but also compliance with a latest completion time. The latter is especially necessary for the scheduling of rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the figure above. A subsequent mutation does not change this, but schedules work step d earlier, which is a necessary intermediate step for an earlier start of the last work step e of the order. As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order. This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach and another example can be found in [22].
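A sketch of this idea under simplified assumptions: the primary objective only registers whether the last step meets the deadline, while the auxiliary objective sums the delays of all steps, so a mutation that merely schedules an intermediate step earlier already improves fitness. All times and weights below are illustrative.

```python
def fitness(finish_times, earliest, deadline, aux_weight=0.1):
    # Primary objective: does the final step e meet the deadline?
    primary = 1.0 if finish_times[-1] <= deadline else 0.0
    # Auxiliary objective: penalize how late each step runs versus its
    # earliest possible finish, rewarding intermediate progress.
    total_delay = sum(f - e for f, e in zip(finish_times, earliest))
    return primary - aux_weight * total_delay

earliest = [1, 2, 3, 4, 5]   # earliest possible finish of steps a..e
before   = [1, 2, 3, 8, 9]   # step d late, deadline 7 missed
mutated  = [1, 2, 3, 5, 9]   # d earlier: primary unchanged, fitness improves
print(fitness(before, earliest, 7), fitness(mutated, earliest, 7))
```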

References

  1. Eiben, A.E.; Smith, J.E. (2015). "Evaluation Function (Fitness Function)". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. p. 30. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
  2. Richards, Mark; Ford, Neal (2020). Fundamentals of Software Architecture: An Engineering Approach. O'Reilly Media. ISBN 978-1492043454.
  3. Eiben, A.E.; Smith, J.E. (2015). "What Is an Evolutionary Algorithm?". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. pp. 25–48. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
  4. Popovici, Elena; Bucci, Anthony; Wiegand, R. Paul; De Jong, Edwin D. (2012). "Coevolutionary Principles". In Rozenberg, Grzegorz; Bäck, Thomas; Kok, Joost N. (eds.). Handbook of Natural Computing. Berlin, Heidelberg: Springer. pp. 987–1033. doi:10.1007/978-3-540-92910-9_31. ISBN 978-3-540-92909-3.
  5. Eiben, A.E.; Smith, J.E. (2015). "Coevolutionary Systems". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. pp. 223–230. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
  6. Bäck, Thomas; Fogel, David; Michalewicz, Zbigniew, eds. (2000). Evolutionary Computation 2: Advanced Algorithms and Operators. Taylor & Francis. doi:10.1201/9781420034349. ISBN 978-0-7503-0665-2.
  7. Jin, Y. (2005). "A comprehensive survey of fitness approximation in evolutionary computation". Soft Computing. 9 (1): 3–12. doi:10.1007/s00500-003-0328-5.
  8. Jin, Yaochu; Wang, Handing; Chugh, Tinkle; Miettinen, Kaisa (2019). "Data-Driven Evolutionary Optimization: An Overview and Case Studies". IEEE Transactions on Evolutionary Computation. 23 (3): 442–458. doi:10.1109/TEVC.2018.2869001.
  9. Eiben, A.E.; Smith, J.E. (2015). "Nonstationary and Noisy Function Optimisation". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. pp. 185–194. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
  10. Sudholt, Dirk (2015). "Parallel Evolutionary Algorithms". In Kacprzyk, Janusz; Pedrycz, Witold (eds.). Springer Handbook of Computational Intelligence. Berlin, Heidelberg: Springer. pp. 929–959. doi:10.1007/978-3-662-43505-2_46. ISBN 978-3-662-43504-5.
  11. Khalloof, Hatem; Mohammad, Mohammad; Shahoud, Shadi; Duepmeier, Clemens; Hagenmeyer, Veit (2020). "A Generic Flexible and Scalable Framework for Hierarchical Parallelization of Population-Based Metaheuristics". Proceedings of the 12th International Conference on Management of Digital EcoSystems. ACM. pp. 124–131. doi:10.1145/3415958.3433041. ISBN 978-1-4503-8115-4.
  12. Jähne, Paul (2016). "Overview of the current state of research on parallelisation of evolutionary algorithms on graphic cards" (PDF). In Mayr, Heinrich Christian; Pinzger, Martin (eds.). Bonn: Gesellschaft für Informatik. ISBN 978-3-88579-653-4. OCLC 962381748.
  13. Miettinen, Kaisa (2008). "Introduction to Multiobjective Optimization: Noninteractive Approaches". In Branke, Jürgen; Deb, Kalyanmoy; Miettinen, Kaisa; Słowiński, Roman (eds.). Multiobjective Optimization: Interactive and Evolutionary Approaches. Lecture Notes in Computer Science. Vol. 5252. Berlin, Heidelberg: Springer. pp. 1–26. doi:10.1007/978-3-540-88908-3. ISBN 978-3-540-88907-6.
  14. Jakob, Wilfried; Blume, Christian (2014). "Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts". Algorithms. 7 (1): 166–185. arXiv:2203.02697. doi:10.3390/a7010166.
  15. Deb, Kalyanmoy (2008). "Introduction to Evolutionary Multiobjective Optimization". In Branke, Jürgen; Deb, Kalyanmoy; Miettinen, Kaisa; Słowiński, Roman (eds.). Multiobjective Optimization: Interactive and Evolutionary Approaches. Lecture Notes in Computer Science. Vol. 5252. Berlin, Heidelberg: Springer. pp. 58–96. doi:10.1007/978-3-540-88908-3. ISBN 978-3-540-88907-6.
  16. Fonseca, Carlos M.; Fleming, Peter J. (1995). "An Overview of Evolutionary Algorithms in Multiobjective Optimization". Evolutionary Computation. 3 (1): 1–16. doi:10.1162/evco.1995.3.1.1.
  17. Zitzler, Eckart; Laumanns, Marco; Thiele, Lothar (2001). "SPEA2: Improving the Strength Pareto Evolutionary Algorithm". Technical Report 103. Computer Engineering and Networks Laboratory (TIK), ETH Zürich. doi:10.3929/ethz-a-004284029.
  18. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. (2002). "A fast and elitist multiobjective genetic algorithm: NSGA-II". IEEE Transactions on Evolutionary Computation. 6 (2): 182–197. doi:10.1109/4235.996017.
  19. Deb, Kalyanmoy; Jain, Himanshu (2014). "An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints". IEEE Transactions on Evolutionary Computation. 18 (4): 577–601. doi:10.1109/TEVC.2013.2281535.
  20. Jain, Himanshu; Deb, Kalyanmoy (2014). "An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach". IEEE Transactions on Evolutionary Computation. 18 (4): 602–622. doi:10.1109/TEVC.2013.2281534.
  21. Miettinen, Kaisa (1998). Nonlinear Multiobjective Optimization. International Series in Operations Research & Management Science. Vol. 12. Boston, MA: Springer. doi:10.1007/978-1-4615-5563-6. ISBN 978-1-4613-7544-9.
  22. Jakob, Wilfried (2021). Applying Evolutionary Algorithms Successfully: A Guide Gained from Real-world Applications. KIT Scientific Working Papers, Vol. 170. Karlsruhe: Karlsruhe Institute of Technology (KIT). arXiv:2107.11300. doi:10.5445/ir/1000135763.