Increasing the Diversity of Benchmark Function Sets Through Affine Recombination
Publication: Contribution to book/conference proceedings/anthology/report › Contribution to conference proceedings › Contributed › Peer-reviewed
Abstract
The Black Box Optimization Benchmarking (BBOB) set provides a diverse problem set for continuous optimization benchmarking. At its core lie 24 functions, which are randomly transformed to generate an infinite set of instances. We think this has two benefits: it discourages over-adaptation to the benchmark by introducing some diversity, and it encourages algorithm designs that are invariant to transformations. Using Exploratory Landscape Analysis (ELA) features, one can show that the BBOB functions are not representative of all possible functions. Muñoz and Smith-Miles [15] show that one can generate space-filling test functions using genetic programming. Here we propose a different approach that, while not generating a space-filling function set, is much cheaper. We take affine recombinations of pairs of BBOB functions and use these as additional benchmark functions. This has the advantage that it is trivial to implement, and many of the properties of the resulting functions can easily be derived. Using dimensionality reduction techniques, we show that these new functions “fill the gaps” between the original benchmark functions in the ELA feature space. We therefore believe this is a useful tool, since it allows one to span the desired ELA region from a few well-chosen prototype functions.
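As a rough illustration of the idea summarized above, the sketch below builds an affine combination f_alpha(x) = alpha·f1(x) + (1 − alpha)·f2(x) of two test functions. The stand-in functions (sphere, Rastrigin), the helper name `affine_recombination`, and the weighting convention are illustrative assumptions; the paper itself recombines BBOB functions with their instance transformations, which are not reproduced here.

```python
import numpy as np

# Stand-in functions; the paper uses BBOB functions with their
# random instance transformations (omitted in this sketch).

def sphere(x):
    """Simple unimodal stand-in function."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Highly multimodal stand-in function."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def affine_recombination(f1, f2, alpha):
    """Return the affine combination alpha*f1 + (1 - alpha)*f2 as a new callable."""
    def f_alpha(x):
        return alpha * f1(x) + (1.0 - alpha) * f2(x)
    return f_alpha

# Sweeping alpha yields a family of "in-between" benchmark functions
# whose ELA features interpolate between those of the two parents.
x = np.random.default_rng(0).uniform(-5.0, 5.0, size=10)
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    f = affine_recombination(sphere, rastrigin, alpha)
    print(f"alpha={alpha:.2f}  f(x)={f(x):.2f}")
```

Because each new function is just a weighted sum of two parents, properties such as smoothness or known optima of the parents carry over in a way that is easy to reason about, which is the practical appeal of this construction over more expensive generators.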
Details
Original language | English |
---|---|
Title | 17th International Conference on Parallel Problem Solving from Nature |
Number of pages | 13 |
Publication status | Published - 2022 |
Peer-review status | Yes |
External IDs
Scopus | 85137000246 |
---|---|
Keywords
- Benchmarking, Black box continuous optimization, Exploratory landscape analysis, Instance generator