Increasing the Diversity of Benchmark Function Sets Through Affine Recombination
Research output: Conference contribution (peer-reviewed)
Abstract
The Black Box Optimization Benchmarking (BBOB) set provides a diverse problem set for continuous optimization benchmarking. At its core lie 24 functions, which are randomly transformed to generate an infinite set of instances. We think this has two benefits: it discourages over-adaptation to the benchmark by introducing some diversity, and it encourages algorithm designs that are invariant to transformations. Using Exploratory Landscape Analysis (ELA) features, one can show that the BBOB functions are not representative of all possible functions. Muñoz and Smith-Miles [15] show that one can generate space-filling test functions using genetic programming. Here we propose a different approach that, while not generating a space-filling function set, is much cheaper. We take affine recombinations of pairs of BBOB functions and use these as additional benchmark functions. This has the advantage that it is trivial to implement, and many of the properties of the resulting functions can easily be derived. Using dimensionality reduction techniques, we show that these new functions “fill the gaps” between the original benchmark functions in the ELA feature space. We therefore believe this is a useful tool, since it allows one to span the desired ELA region from a few well-chosen prototype functions.
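To illustrate the core idea, here is a minimal sketch of an affine (convex) combination of two benchmark functions. The `sphere` and `rastrigin` functions below are simple stand-ins for BBOB functions, and the plain weighted sum of function values is an assumption for illustration; the paper's exact construction (e.g. any normalization or shifting relative to each function's optimum) may differ.

```python
import numpy as np

def sphere(x):
    # Unimodal sphere function (in the spirit of BBOB f1).
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def rastrigin(x):
    # Multimodal Rastrigin-style function (in the spirit of BBOB f3).
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def affine_combination(f1, f2, alpha):
    """Return g with g(x) = alpha * f1(x) + (1 - alpha) * f2(x)."""
    def g(x):
        return alpha * f1(x) + (1 - alpha) * f2(x)
    return g

# A new benchmark function "between" the two prototypes:
g = affine_combination(sphere, rastrigin, 0.25)
print(g(np.zeros(3)))  # both components are 0 at the origin, so this prints 0.0
```

Sweeping `alpha` over (0, 1) yields a family of functions whose landscape characteristics interpolate between the two prototypes, which is what lets such recombinations fill gaps in the ELA feature space.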
Details
| Original language | English |
|---|---|
| Title of host publication | 17th International Conference on Parallel Problem Solving from Nature |
| Number of pages | 13 |
| Publication status | Published - 2022 |
| Peer-reviewed | Yes |
External IDs
| Scopus | 85137000246 |
|---|---|
Keywords
- Benchmarking
- Black box continuous optimization
- Exploratory landscape analysis
- Instance generator