Evolution with Drifting Targets
We consider the question of the stability of evolutionary algorithms to gradual changes, or drift, in the target concept. We define an algorithm to be resistant to drift if, for some inverse-polynomial drift rate in the target function, it converges to accuracy 1 − ε with polynomial resources, and then stays within that accuracy indefinitely, except with probability ε at any one time. We show that every evolution algorithm, in the sense of Valiant (2007; 2009), can be converted, using the Correlational Query technique of Feldman (2008), into such a drift-resistant algorithm. For certain evolutionary algorithms, such as for Boolean conjunctions, we give bounds on the rates of drift that they can resist. We develop some new evolution algorithms that are resistant to significant drift. In particular, we give an algorithm for evolving linear separators over the spherically symmetric distribution that is resistant to a drift rate of O(ε/n), and another algorithm over the more general product normal distributions that resists a smaller drift rate. The above translation result can also be interpreted as one on the robustness of the notion of evolvability itself under changes of definition. As a second result in that direction, we show that every evolution algorithm can be converted to a quasi-monotonic one that can evolve from any starting point without the performance ever dipping significantly below that of the starting point. This permits the somewhat unnatural feature of arbitrary performance degradations to be removed from several known robustness translations.
💡 Research Summary
The paper tackles a fundamental limitation of Valiant’s model of evolutionary algorithms: the assumption that the target concept is static. In many realistic scenarios—online learning, changing environments, or biological evolution—the optimal hypothesis drifts gradually over time. To address this, the authors introduce the notion of drift resistance. An algorithm is drift-resistant if, for a drift rate that is at most inverse-polynomial in the relevant parameters (e.g., O(ε/n)), it can (i) converge to an ε-accurate hypothesis using only polynomial time and sample resources, and (ii) thereafter maintain that accuracy indefinitely, with the probability of a single-step failure bounded by ε.
The central technical contribution is a generic transformation that converts any existing evolutionary algorithm into a drift‑resistant one. This transformation leverages the Correlational Query (CQ) model introduced by Feldman (2008). In the CQ setting, the learner can query the correlation between a candidate hypothesis and the (possibly moving) target distribution. By embedding CQ queries into the mutation‑selection loop, the transformed algorithm can estimate the expected fitness gain of each mutation even when the target has shifted slightly. Mutations whose expected correlation is non‑negative are accepted; otherwise they are rejected. The authors prove that, as long as the drift per generation is bounded by an inverse‑polynomial, the CQ‑augmented process behaves as if the target were stationary, thereby guaranteeing convergence and perpetual ε‑accuracy.
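The mutation-selection loop described above can be sketched as follows. This is an illustrative Python skeleton, not the paper's construction: the names `cq_oracle`, `mutations`, and the tolerance-based beneficial/neutral split are assumptions standing in for the formal model's components, and the oracle is treated as a black box that returns correlation estimates against the current (possibly drifted) target.

```python
import random

def evolve_with_cq(init_hyp, mutations, cq_oracle, tolerance, generations):
    """Generic mutation-selection loop driven by correlational queries.

    cq_oracle(h) returns an estimate of the correlation of hypothesis h
    with the *current* (possibly drifted) target. All names here are
    illustrative, not taken from the paper.
    """
    current = init_hyp
    for _ in range(generations):
        base = cq_oracle(current)
        # Score every candidate mutation against the current target.
        scored = [(cq_oracle(m), m) for m in mutations(current)]
        beneficial = [m for s, m in scored if s >= base + tolerance]
        neutral = [m for s, m in scored if abs(s - base) < tolerance]
        if beneficial:
            current = random.choice(beneficial)  # prefer clear gains
        elif neutral:
            current = random.choice(neutral)     # otherwise move neutrally
        # mutations with clearly lower correlation are rejected
    return current
```

As a usage sketch, with integer hypotheses, mutations h ± 1, and an oracle scoring closeness to a stationary target 10, the loop climbs to the target and then stays there, since every further mutation scores strictly worse than the current hypothesis.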
Beyond the general theorem, the paper provides concrete analyses for specific hypothesis classes:
- **Boolean conjunctions** – By augmenting the standard mutation set with literal-addition and literal-removal operations and using CQ to monitor each literal’s contribution, the authors show that the algorithm tolerates a drift rate of Θ(ε / log n). This is substantially larger than previously known bounds and demonstrates that even simple combinatorial concepts can be evolved robustly under moderate change.
- **Linear separators** – Two distributional settings are examined.
  - For the spherically symmetric distribution (e.g., uniform over the unit sphere), the authors design a mutation operator that performs random rotations of the weight vector. Coupled with CQ estimates of the inner product between the current weight vector and the drifting target, the algorithm can tolerate drift up to O(ε / n). The analysis hinges on concentration of measure on the sphere and shows that each rotation yields a predictable improvement in correlation as long as the drift is sufficiently small.
  - For product normal distributions (independent Gaussian coordinates), a more delicate construction is required. The algorithm uses coordinate-wise scaling mutations and CQ to track the contribution of each dimension. The resulting drift tolerance is O(ε / √n), which, while smaller than in the spherical case, still allows non-trivial drift for high-dimensional data.
These case studies illustrate that the generic CQ‑based conversion is not merely existential; it yields explicit, efficiently implementable algorithms with provable drift bounds that improve upon prior work.
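As a toy illustration of tracking a drifting target, the sketch below evolves a monotone Boolean conjunction by best-improvement hill climbing with literal-addition and literal-removal mutations, while the target conjunction swaps one literal every few generations. It uses exact agreement probabilities under the uniform distribution in place of correlational-query estimates, so it is a simplified stand-in under those assumptions, not a reproduction of the paper's algorithm.

```python
import random

def accuracy(H, F):
    # Exact Pr_x[AND(H) = AND(F)] for monotone conjunctions over literal
    # sets H and F, with x uniform on {0,1}^n: both are 1 with probability
    # 2^-|H ∪ F|, and both are 0 with the complementary overlap.
    return 1 - 2.0 ** -len(H) - 2.0 ** -len(F) + 2.0 ** (1 - len(H | F))

def best_mutation(H, F, n):
    # Consider keeping H, or toggling any single literal; keep the best.
    candidates = [H]
    for i in range(n):
        candidates.append(H | {i} if i not in H else H - {i})
    return max(candidates, key=lambda c: accuracy(c, F))

def evolve_conjunction(n, generations, drift_every, rng):
    F = set(rng.sample(range(n), 3))   # drifting target conjunction
    H = set()                          # start from the empty conjunction
    for g in range(1, generations + 1):
        if g % drift_every == 0:       # drift: swap one target literal
            F.remove(rng.choice(sorted(F)))
            F.add(rng.choice([i for i in range(n) if i not in F]))
        H = best_mutation(H, F, n)
    return H, F
```

Because each drift step changes the target by at most one literal, a short calculation on the accuracy formula shows the hill climber re-acquires the target within two generations, mirroring (in miniature) the idea that slow enough drift leaves the process effectively stationary.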
A further practical concern addressed is the potential for performance degradation in standard evolutionary processes. Classical analyses permit the fitness of the current hypothesis to dip arbitrarily low before eventually rising again, a phenomenon that is undesirable in many applications. To eliminate this, the authors introduce a quasi‑monotonic transformation. The transformed algorithm monitors the empirical performance of each candidate; any mutation that would cause a drop larger than a prescribed threshold is rejected outright. The authors prove that this monotonicity constraint does not interfere with the drift‑resistance guarantees, thereby delivering algorithms that both adapt to changing targets and maintain a non‑decreasing performance trajectory from any starting point.
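A minimal sketch of such a thresholded selection step, under assumed names: a candidate is rejected outright if its measured performance falls more than a fixed `slack` below a performance `floor` (e.g., the performance of the starting hypothesis). The exact acceptance rule in the paper differs; this only illustrates the rejection-below-threshold idea.

```python
def quasi_monotone_step(current, candidates, perf, floor, slack):
    """One selection step that never accepts a hypothesis whose measured
    performance dips more than `slack` below `floor`. Illustrative only."""
    best = max(candidates, key=perf, default=current)
    if perf(best) < floor - slack:
        return current  # reject: accepting would breach the floor
    return best if perf(best) >= perf(current) else current
```

For example, with performance measured as closeness to a target value, an improving candidate is accepted while a batch of sharply worse candidates leaves the current hypothesis unchanged.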
In summary, the paper makes four major contributions: (1) a rigorous definition of drift resistance for evolutionary algorithms; (2) a universal CQ‑based method to convert any evolvable algorithm into a drift‑resistant one; (3) concrete drift‑tolerant algorithms for Boolean conjunctions and linear separators under realistic distributions, complete with quantitative drift bounds; and (4) a quasi‑monotonic refinement that removes the unrealistic performance dips present in earlier models. Collectively, these results broaden the theoretical foundation of evolvability, showing that the concept is robust not only to changes in the hypothesis class or representation but also to gradual shifts in the underlying target itself. This opens the door to applying evolutionary computation in dynamic environments such as adaptive control, continual learning, and modeling of biological evolution where the fitness landscape evolves over time.