Abstract
Benchmarking and related competitions are widely used for assessing and comparing solver performance. However, the uncertainty arising from the composition of the instance set and the bias induced by the choice of specific performance criteria are often not sufficiently addressed. Moreover, performance assessment is almost always multi-objective in nature, and no fully objective, neutral approach to it exists. We build on recent work on robust ranking for single-objective solver performance assessment based on bootstrap resampling and introduce a multi-objective robust-ranking extension, which is shown to provide new and promising perspectives on existing competition rankings.
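The abstract's core idea, bootstrap resampling over the benchmark instance set to quantify how stable a solver ranking is, can be illustrated with a minimal sketch. This is not the paper's actual method; the solver names, runtimes, and the mean-runtime criterion below are illustrative assumptions.

```python
import random

# Hypothetical toy data: runtimes (seconds) of three solvers on five instances.
# All names and numbers are illustrative, not taken from the paper.
runtimes = {
    "solver_a": [1.2, 3.4, 2.1, 5.0, 0.9],
    "solver_b": [1.5, 2.9, 2.0, 4.2, 1.1],
    "solver_c": [2.0, 3.1, 1.8, 6.5, 0.8],
}

def bootstrap_rank_counts(data, n_boot=1000, seed=0):
    """Resample the instance set with replacement and count how often
    each solver attains each rank (rank 0 = best mean runtime)."""
    rng = random.Random(seed)
    solvers = list(data)
    n = len(next(iter(data.values())))
    counts = {s: [0] * len(solvers) for s in solvers}
    for _ in range(n_boot):
        # Draw a bootstrap sample of instance indices.
        idx = [rng.randrange(n) for _ in range(n)]
        means = {s: sum(data[s][i] for i in idx) / n for s in solvers}
        for rank, s in enumerate(sorted(solvers, key=means.get)):
            counts[s][rank] += 1
    return counts

counts = bootstrap_rank_counts(runtimes)
for solver, c in counts.items():
    # Empirical probability of each rank across bootstrap resamples.
    print(solver, [round(x / 1000, 2) for x in c])
```

A ranking is robust when one solver dominates rank 0 across resamples; spread-out rank distributions indicate that the ranking depends heavily on which instances happen to be in the benchmark. The paper's multi-objective extension additionally varies the performance criterion, which this single-criterion sketch does not cover.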
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Genetic and Evolutionary Computation Conference Companion |
| Publisher | ACM Publishing |
| Pages | 155-158 |
| Number of pages | 4 |
| ISBN (Electronic) | 9798400704956 |
| ISBN (Print) | 979-8-4007-0495-6 |
| DOIs | |
| Publication status | Published - 1 Aug 2024 |
| Event | Genetic and Evolutionary Computation Conference, GECCO 2024 - Melbourne, Australia |
| Duration | 14 Jul 2024 → 18 Jul 2024 |
Conference
| Conference | Genetic and Evolutionary Computation Conference, GECCO 2024 |
|---|---|
| Abbreviated title | GECCO 2024 |
| Country/Territory | Australia |
| City | Melbourne |
| Period | 14/07/24 → 18/07/24 |
Keywords
- 2024 OA procedure