Impact of Training Instance Selection on Automated Algorithm Selection Models for Numerical Black-box Optimization

Research output: Contribution to book/conference proceedings › Conference contribution › Contributed › peer-reviewed

Abstract

The recently proposed MA-BBOB function generator provides a way to create numerical black-box benchmark problems based on the well-established BBOB suite. Initial studies on this generator highlighted its ability to smoothly transition between the component functions, both from a low-level landscape-feature perspective and in terms of algorithm performance. This suggests that MA-BBOB-generated functions can be an ideal testbed for automated machine learning methods, such as automated algorithm selection (AAS). In this paper, we generate 11,800 functions in each of the dimensions $d=2$ and $d=5$, and analyze the potential gains from AAS by studying performance complementarity within a set of eight algorithms. We combine this performance data with exploratory landscape features to create an AAS pipeline that we use to investigate how to efficiently select training sets within this space. We show that simply using the BBOB component functions for training yields poor test performance, while the ranking between uniformly chosen and diversity-based training sets strongly depends on the distribution of the test set.
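To make the pipeline described above concrete, the following is a minimal sketch of one way such an AAS setup can be assembled. It is not the paper's implementation: all data is synthetic, the features stand in for exploratory landscape analysis (ELA) features computed on MA-BBOB instances, the greedy max-min sampler is only one plausible reading of "diversity-based" training-set selection, and the scikit-learn random-forest classifier is an assumed model choice.

```python
# Hypothetical AAS pipeline sketch; synthetic stand-in data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_instances, n_features, n_algorithms = 1000, 16, 8

# Stand-in for ELA features of each generated MA-BBOB instance
# (in practice computed from samples of the objective function).
ela_features = rng.normal(size=(n_instances, n_features))

# Stand-in for measured performance of each of the eight algorithms
# on each instance; lower is better (e.g., expected running time).
performance = rng.gamma(shape=2.0, size=(n_instances, n_algorithms))

# Label each instance with the single best-performing algorithm.
best_algorithm = performance.argmin(axis=1)

def max_min_subset(features, k, seed=0):
    """Greedy max-min (farthest-point) sampling in normalized feature
    space -- one possible 'diversity-based' training-set selector."""
    x = StandardScaler().fit_transform(features)
    chosen = [int(np.random.default_rng(seed).integers(len(x)))]
    dist = np.linalg.norm(x - x[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(dist.argmax())  # point farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(x - x[nxt], axis=1))
    return np.array(chosen)

train_idx = max_min_subset(ela_features, k=200)
test_idx = np.setdiff1d(np.arange(n_instances), train_idx)

# Casting AAS as per-instance classification is one common option;
# regression on raw performance values is another.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(ela_features[train_idx], best_algorithm[train_idx])
pred = model.predict(ela_features[test_idx])

# Evaluate by the performance the selected algorithm actually achieves,
# not by accuracy: picking a near-best algorithm costs little.
achieved = performance[test_idx, pred].mean()
oracle = performance[test_idx].min(axis=1).mean()
print(f"mean selected perf: {achieved:.3f} vs oracle: {oracle:.3f}")
```

Swapping `max_min_subset` for a uniform random choice of `train_idx` gives the uniformly chosen baseline the abstract compares against; the abstract's finding is that which of the two wins depends on how the test set is distributed.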

Details

Original language: English
Title of host publication: Proceedings of the Genetic and Evolutionary Computation Conference
Publisher: Association for Computing Machinery (ACM)
Pages: 1007–1016
Number of pages: 10
ISBN (electronic): 9798400704949
Publication status: Published – 14 Jul 2024
Peer-reviewed: Yes

External IDs

ORCID: 0000-0003-2862-1418/work/163766077
Mendeley: 643691fb-feb1-3709-8c4c-f5c50e6c3a7d
Scopus: 85206933108