Coevolutionary algorithms


Introduction

Evolutionary algorithms (EAs) are heuristic methods for solving computationally difficult problems using biologically inspired notions of Darwinian evolution. EAs frequently have an advantage over many traditional local search heuristic methods when search spaces are highly multimodal, discontinuous, or highly constrained. Unfortunately, there are problems on which EAs tend to perform poorly, or for which no simple method for applying them is known. One such situation occurs when problems have very large search domains defined by the Cartesian product of two or more interacting subspaces. For example, this is often the case when one would like to evolve some functional element in combination with its input data. In extreme settings, the space can be infinite, and the EA needs some method of focusing on relevant areas. Another situation in which it is difficult to apply an EA occurs when no intrinsic objective measure exists with which to measure the fitness of individuals. This can be the case when evolving game-playing strategies, for instance. Finally, when searching spaces of complex structures, EAs often have difficulties when no domain-specific modifications are made to help direct the search. For these kinds of problems, researchers have turned to a natural extension of the evolutionary algorithm: coevolution. Coevolutionary algorithms show considerable promise for addressing the types of problems just mentioned. As such, they have become an important area of research in the field of evolutionary computation.

A coevolutionary algorithm is an evolutionary algorithm (or collection of evolutionary algorithms) in which the fitness of an individual is subjective; that is, the individuals are evaluated based on their interactions with other individuals. According to the nature of these interactions, coevolutionary algorithms fall into two main groups: Competitive Coevolutionary Algorithms and Cooperative Coevolutionary Algorithms. In the case of cooperative algorithms, individuals are rewarded when they work well with other individuals and punished when they perform poorly together. For example, consider an algorithm where each population represents a piece of a larger problem, and it is the task of those populations to evolve increasingly fit pieces for the larger holistic problem. In the case of competitive algorithms, however, individuals are rewarded at the expense of those with which they interact. For example, consider a predator-prey model in which individuals in one population represent some kind of device (e.g., a sorting network) and individuals in another population represent some kind of input for the device (e.g., a data set), where the object of the first population is to evolve increasingly better devices to handle the input, while the object of the second population is to evolve increasingly more difficult inputs for the devices.
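The competitive case above can be sketched in code. The following is a minimal, hypothetical illustration of subjective fitness, not any published algorithm: each "device" is scored against the opposing population of inputs, and each input is rewarded exactly when devices fail on it. The function names and the toy `score` measure are assumptions for illustration.

```python
def competitive_fitness(devices, inputs, score):
    """Subjective fitness in a predator-prey setting: a device's fitness is
    its average score over opposing inputs; an input's fitness is the
    average failure it induces in the devices."""
    device_fitness = []
    input_fitness = [0.0] * len(inputs)
    for d in devices:
        results = [score(d, x) for x in inputs]
        device_fitness.append(sum(results) / len(results))
        for i, r in enumerate(results):
            input_fitness[i] += 1.0 - r  # input rewarded when the device fails
    input_fitness = [f / len(devices) for f in input_fitness]
    return device_fitness, input_fitness
```

For example, with a toy domain where a device `d` handles an input `x` whenever `d >= x` (i.e., `score = lambda d, x: 1.0 if d >= x else 0.0`), a device of 0.5 facing inputs 0.2 and 0.8 earns fitness 0.5, while the input 0.8 earns fitness 1.0 for defeating it.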

Background

Historically, competitive CEAs led the way with the seminal Hillis (1991) paper on coevolving sorting networks and data sets in a predator-prey type relationship. Hillis evolves sorting networks by using an opposing population of coevolving data sets. In this case, an individual in one population, representing a potential sorting network, is awarded a fitness score based on how well it sorts an opponent data set from the other population, while individuals in the second population represent potential data sets whose fitness is based on how well they confuse opponent sorting networks.
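This predator-prey scoring can be sketched as follows. This is a simplified illustration in the spirit of Hillis's setup, not his actual implementation: a network is a list of compare-exchange index pairs, a data set is a plain list, and fitness is the fraction of opponents beaten.

```python
def apply_network(network, data):
    """Apply a fixed sequence of compare-exchange operations to a list."""
    data = list(data)
    for i, j in network:
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

def evaluate(networks, datasets):
    """Predator-prey scoring: a network's fitness is the fraction of
    opponent data sets it sorts correctly; a data set's fitness is the
    fraction of networks it defeats."""
    net_fit = [sum(apply_network(n, d) == sorted(d) for d in datasets) / len(datasets)
               for n in networks]
    set_fit = [sum(apply_network(n, d) != sorted(d) for n in networks) / len(networks)
               for d in datasets]
    return net_fit, set_fit
```

For instance, the three-input network `[(0, 1), (1, 2), (0, 1)]` sorts every permutation and so scores 1.0 against any data sets, while the incomplete network `[(0, 1)]` fails on inputs such as `[2, 3, 1]`, which rewards that data set.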

In fact, most work in coevolutionary algorithms has been in the area of competitive coevolution. Most popularly, competitive coevolution has been applied to game-playing strategies (Rosin and Belew 1995; Rosin and Belew 1996; Rosin 1997; Pollack and Blair 1998). Additionally, Angeline and Pollack (1993) demonstrate the effectiveness of competition for evolving better solutions by developing a concept of competitive fitness to provide a more robust training environment than independent fitness functions. Competition was also successfully harnessed by Schlierkamp-Voosen and Mühlenbein (1994) to facilitate strategy adaptation in their so-called breeder genetic algorithms. Competition has played a vital part in attempts to coevolve complex agent behaviors (Sims 1994; Luke, Hohn, Farris, Jackson, and Hendler 1998). Finally, competitive approaches have been applied to a variety of machine learning problems (Paredis 1994; Juillé and Pollack 1996; Mayer 1998).

Potter and De Jong (1994) opened the door to research on cooperative CEAs by developing a relatively general framework for such models and applying it, first, to static function optimization and later to neural network learning (Potter and De Jong 2000). In Potter's model, each population contains individuals representing a component of a larger solution, and evolution of these populations occurs almost independently, in tandem with one another, interacting only to obtain fitness. Such a process can be static, in the sense that the divisions for the separate components are decided a priori and never altered, or dynamic, in the sense that populations of components may be added or removed as the run progresses. Moriarty and Miikkulainen (1997) take a different, somewhat more adaptive approach to cooperative coevolution of neural networks. In this case a parent population represents potential network plans, while an offspring population is used to acquire node information. Plans are evaluated based on how well they solve a problem with their collaborating nodes, and the nodes receive a share of this fitness. A node is thus rewarded for participating in successful plans, and so receives fitness only indirectly. Potter's methods have also been used or extended by other researchers. Eriksson and Olsson (1997) use a cooperative coevolutionary algorithm for inventory control optimization. Wiegand (1998) attempts to make the algorithm allocate resources more adaptively by allowing migrations of individuals from one population to another in a method similar to the Schlierkamp-Voosen and Mühlenbein (1994) competitive mechanisms.

Cooperative Coevolutionary Algorithm: A Tool for Decomposing Complex Problems

While traditional evolution may be fully applicable to static single-objective optimization problems of arbitrary complexity, the decompositional nature of coevolution (whether implicit or explicit) may afford CEAs some advantages for dealing with problems that are complex, but highly structured. Assuming that the algorithm either endogenously or exogenously decomposes the problem in an appropriate way, it seems natural that a Coevolutionary Algorithm (in particular a Cooperative Coevolutionary Algorithm) could coevolve the various components independently more efficiently than a traditional EA could evolve the entire structure. Indeed, this has been the primary motivating factor for cooperative coevolutionary approaches from the beginning, starting with early attempts to perform optimization tasks with explicit, static decompositions (Potter and De Jong 1994). Soon researchers began to explore controlled, but dynamic decompositions (Potter and De Jong 2000), and even more recently CCEA approaches have exploited this notion further, employing a kind of shaping technique called complexification (Stanley and Miikkulainen 2002). Here the structure that is coevolved starts out simple and is gradually expanded, directing the search hierarchically from simple representation spaces (where the search space is much smaller) to larger representation spaces (where the search is constrained by the earlier parts of the search).

A General Cooperative Coevolution Framework

While there is a wide variety of cooperative coevolutionary algorithms, here we consider only the simplest case, which uses explicit fitness assignment methods and static problem decompositions (Figure 1). When applying a CCEA to a particular problem, typically one decomposes the problem into components and assigns each component to a population. Save for evaluation, the populations are evolved more or less independently of one another. Since any given individual from a particular population represents only a component of a potential solution to the problem, collaborators are selected from the other populations to represent the remaining components. The individual is combined with its collaborators to form a complete solution and the objective function is evaluated. If there is more than one set of collaborators for a given individual (depending on the collaborator sample size), the objective function may be evaluated many times, but a single score is somehow attributed to the individual for fitness (depending on the collaborator credit assignment method). It should be noted that the populations of a CCEA may or may not be homogeneous with respect to the representation used for encoding individuals, or the underlying EAs being used to evolve a particular component.


Figure 1. A Round of a Three Population Cooperative Coevolutionary Algorithm
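A round of this kind can be sketched in code. The following is a minimal sketch under assumed details that the text leaves open (best-of-sample collaborator credit assignment, random collaborator selection, truncation selection with mutation); the `objective` and `mutate` functions are user-supplied placeholders, not part of any published framework.

```python
import random

def ccea_round(populations, objective, mutate, samples=2):
    """One round of a simple CCEA with a static decomposition: each
    population evolves one component, and an individual is evaluated by
    joining it with collaborators drawn at random from the other
    populations, keeping the best score over several collaborator sets."""
    new_pops = []
    for k, pop in enumerate(populations):
        scored = []
        for ind in pop:
            best = float("-inf")
            for _ in range(samples):
                # Assemble a complete solution: this individual plus one
                # random collaborator from every other population.
                solution = [random.choice(p) if j != k else ind
                            for j, p in enumerate(populations)]
                best = max(best, objective(solution))
            scored.append((best, ind))
        # Truncation selection plus mutation forms the next population.
        scored.sort(key=lambda t: t[0], reverse=True)
        survivors = [ind for _, ind in scored[: len(pop) // 2]]
        new_pops.append(survivors + [mutate(s) for s in survivors])
    return new_pops
```

For example, with two populations of real numbers, `objective = lambda sol: -(sol[0] ** 2 + sol[1] ** 2)`, and a small Gaussian mutation, repeated rounds tend to drive both components toward zero, each population evolving its coordinate against collaborators from the other.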

Pathological Behaviours of Coevolutionary Algorithms

1. Loss of gradient – The coevolutionary behavior that occurs when one population or group reaches a state such that other groups and populations lose the relative fitness diversity necessary to continue meaningful progress. A classic example of a lack of gradient between competitive coevolutionary opponents is the situation where a chess Grand Master plays a small child. If the child receives no information other than the outcome of the game, she has almost no means of learning how to improve her game. A "loss" of gradient can occur in this competitive setting when one population suddenly achieves a level so superior to the other that nothing can be learned by either population through competition. Loss of gradient is not strictly a competitive CEA problem, though. In a more general setting including cooperative CEAs, the same behavior can be seen as simply a loss of fitness diversity in one or both populations with respect to the other. That is: one or more populations suddenly loses diversity, and the remaining population(s) are reduced to searching the projected space created by this diversity loss.
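The Grand Master example can be made concrete with a toy win/loss domain of our own invention (the numeric "skill" model below is purely illustrative): once every learner loses every game against a far superior opponent population, all subjective fitnesses are identical and selection has no gradient to act on.

```python
def outcome(player, opponent):
    # Toy domain: a player wins iff its skill exceeds the opponent's.
    return 1.0 if player > opponent else 0.0

def subjective_fitness(pop, opponents):
    """Each individual's fitness is its win rate against the opponents."""
    return [sum(outcome(p, o) for o in opponents) / len(opponents)
            for p in pop]

learners = [1.0, 2.0, 3.0]   # modestly varied skills
masters = [100.0, 101.0]     # vastly superior opponents
fits = subjective_fitness(learners, masters)
# Every learner loses every game, so all fitnesses are 0.0: the better
# learner (skill 3.0) is indistinguishable from the worst (skill 1.0),
# and selection has lost its gradient.
```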

2. Relative overgeneralization – The coevolutionary behavior that occurs when populations in the system are attracted towards areas of the space in which there are many strategies that perform well relative to the interacting partner. Again, this behavior can be observed in competitive and cooperative algorithms alike; however, stated as it is above, it is more an issue for cooperative algorithms than for competitive ones. Historically, papers discussing competitive algorithms have more often referred to such behaviors more generally as "relativism". This alternate term reflects behaviors that result from situations in which the adaptive changes in a coevolutionary system become disconnected from some absolute measure and may be driven to parts of the space that are undesired (Watson and Pollack 2001).

3. Mediocre objective stasis – The coevolutionary behavior that occurs when there is no apparent progress according to some reasonable objective measure, despite continued adaptive subjective steps in the interacting individuals and population(s). The term "mediocre objective stasis" corresponds, in part, with the more common term "mediocre stability", as well as with some behaviors alternatively classified as "relativistic". The term refers specifically to a stalling in the objective measure, not necessarily in domain space (Watson and Pollack 2001; Ficici and Pollack 1998).


References

[1] Potter, M. and K. De Jong, Evolving Complex Structures via Cooperative Coevolution, Fourth Annual Conference on Evolutionary Programming, San Diego, CA, 1995.

[2] Potter, M., The Design and Computational Model of Cooperative Coevolution, PhD thesis, George Mason University, Fairfax, Virginia, 1997.

[3] Potter, M. and K. De Jong, Cooperative Coevolution: An Architecture for Evolving Coadapted Subcomponents, Evolutionary Computation, 8(1): 1-29, 2000.

[4] Wiegand, P., W. Liles, and K. De Jong, An Empirical Analysis of Collaboration Methods in Cooperative Coevolutionary Algorithms, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2001.

[5] Wiegand, P., An Analysis of Cooperative Coevolutionary Algorithms, PhD thesis, George Mason University, Fairfax, Virginia, 2003.
