Multi-objective Ranking using Bootstrap Resampling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

1 Citation (Scopus)
70 Downloads (Pure)

Abstract

Benchmarking and related competitions are widely used for assessing and comparing solver performance. However, the underlying uncertainty due to the composition of the instance set, and the bias induced by choosing specific performance criteria, are often not sufficiently addressed. Moreover, performance assessment is almost always multi-objective in nature, and no objective, totally neutral approach to it exists. We build on recent work on robust ranking for single-objective solver performance assessment based on bootstrap resampling and introduce a multi-objective robust ranking extension, shown to provide new and promising perspectives on existing competition rankings.
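The core idea described in the abstract can be illustrated with a small sketch: resample the benchmark instance set with replacement, recompute each solver's mean score per objective on every resample, and count how often each solver is Pareto-non-dominated. The solver names, performance values, objective names, and the non-domination-frequency aggregation below are all illustrative assumptions, not taken from the paper itself.

```python
import random

# Hypothetical performance data (lower is better):
# perf[objective][solver] = per-instance scores on a shared instance set.
perf = {
    "runtime": {
        "A": [1.0, 2.0, 9.0, 1.5],
        "B": [2.0, 2.5, 3.0, 2.2],
        "C": [5.0, 5.5, 6.0, 5.2],
    },
    "quality_gap": {
        "A": [0.3, 0.1, 0.4, 0.2],
        "B": [0.1, 0.1, 0.2, 0.1],
        "C": [0.0, 0.1, 0.0, 0.05],
    },
}

def dominates(a, b):
    """Pareto dominance: a is no worse than b on every objective
    and strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def bootstrap_rank(perf, n_boot=1000, seed=0):
    """Rank solvers by how often they are Pareto-non-dominated
    across bootstrap resamples of the instance set."""
    rng = random.Random(seed)
    objectives = list(perf)
    solvers = list(perf[objectives[0]])
    n_inst = len(perf[objectives[0]][solvers[0]])
    nondom = {s: 0 for s in solvers}
    for _ in range(n_boot):
        # Resample instance indices with replacement (same draw for all solvers).
        idx = [rng.randrange(n_inst) for _ in range(n_inst)]
        means = {
            s: tuple(sum(perf[o][s][i] for i in idx) / n_inst for o in objectives)
            for s in solvers
        }
        for s in solvers:
            if not any(dominates(means[t], means[s]) for t in solvers if t != s):
                nondom[s] += 1
    ranking = sorted(solvers, key=lambda s: -nondom[s])
    return ranking, nondom
```

Counting non-domination frequency is only one possible aggregation; the resampling step is the part that quantifies how sensitive a competition ranking is to the particular instance set that was chosen.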

Original language: English
Title of host publication: Proceedings of the Genetic and Evolutionary Computation Conference Companion
Publisher: ACM Publishing
Pages: 155-158
Number of pages: 4
ISBN (Electronic): 9798400704956
ISBN (Print): 979-8-4007-0495-6
DOIs
Publication status: Published - 1 Aug 2024
Event: Genetic and Evolutionary Computation Conference, GECCO 2024 - Melbourne, Australia
Duration: 14 Jul 2024 - 18 Jul 2024

Conference

Conference: Genetic and Evolutionary Computation Conference, GECCO 2024
Abbreviated title: GECCO 2024
Country/Territory: Australia
City: Melbourne
Period: 14/07/24 - 18/07/24

Keywords

  • 2024 OA procedure
