Abstract
The research reported in this paper assesses the usefulness of reinforcement learning (RL) for on-line calibration
of parameters in evolutionary algorithms (EAs). We run an RL procedure and the EA simultaneously, with RL changing
the EA parameters on-the-fly. We evaluate this approach experimentally on a range of fitness landscapes with varying degrees
of ruggedness. The results show that an EA calibrated by the RL-based approach outperforms a benchmark EA.
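As an illustration of the general idea of on-line parameter control, the sketch below couples a simple (1+λ)-style EA on a OneMax landscape with an epsilon-greedy bandit that picks the mutation rate each generation. This is a minimal stand-in, not the paper's actual algorithm: the candidate rates, the reward signal (fitness improvement), and the OneMax landscape are all illustrative assumptions.

```python
import random

def onemax(bits):
    # Illustrative fitness landscape: count of 1-bits (not from the paper)
    return sum(bits)

def run_ea_with_rl(n_bits=30, pop_size=20, generations=100, seed=0):
    """(1+lambda)-style EA whose mutation rate is chosen each generation
    by an epsilon-greedy bandit -- a toy stand-in for an RL controller."""
    rng = random.Random(seed)
    rates = [0.01, 0.05, 0.2]        # candidate mutation rates (bandit arms)
    value = [0.0] * len(rates)       # running reward estimate per arm
    count = [0] * len(rates)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    best = onemax(parent)
    for _ in range(generations):
        # Epsilon-greedy action selection over the candidate rates
        if rng.random() < 0.1:
            arm = rng.randrange(len(rates))
        else:
            arm = max(range(len(rates)), key=lambda i: value[i])
        rate = rates[arm]
        # Generate offspring by bitwise mutation at the chosen rate
        children = [[b ^ (rng.random() < rate) for b in parent]
                    for _ in range(pop_size)]
        fittest = max(children, key=onemax)
        reward = onemax(fittest) - best   # reward = fitness improvement
        if onemax(fittest) >= best:
            best = onemax(fittest)
            parent = fittest
        # Incremental update of the chosen arm's value estimate
        count[arm] += 1
        value[arm] += (reward - value[arm]) / count[arm]
    return best

print(run_ea_with_rl())
```

The key point the sketch captures is the coupling: the EA supplies the reward (fitness improvement per generation), and the controller supplies the next parameter setting, so calibration happens during the run rather than in a separate off-line tuning phase.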
Original language | English
---|---
Title of host publication | Engineering Self-Organising Systems
Pages | 151-160
Publication status | Published - 2007