Abstract
Benchmarking is one of the key ways in which we can gain insight into the strengths and weaknesses of optimization algorithms. In sampling-based optimization, considering the anytime behavior of an algorithm can provide valuable insights for further developments. In the context of multi-objective optimization, this anytime perspective is not yet as widely adopted as in the single-objective case. In this paper, we propose a new software tool that uses principles from unbounded archiving as its logging structure. This leads to a clearer separation between experimental design and subsequent analysis decisions. We integrate this approach as a new Python module into the IOHprofiler framework and demonstrate its benefits by showcasing the ability to change indicators, aggregations, and ranking procedures during the analysis pipeline.
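To make the core idea concrete, the following is a minimal sketch, not the MO-IOHinspector API, of what unbounded-archive logging enables: during the run, every evaluated objective vector is recorded together with its evaluation count, and all indicator decisions (which quality indicator, which reference point, which budgets) are deferred to the analysis stage. The names `UnboundedArchive`, `indicator_profile`, and `hypervolume_2d` are hypothetical illustrations, assuming a bi-objective minimization setting.

```python
from dataclasses import dataclass, field


@dataclass
class UnboundedArchive:
    """Run-time logger: keep every evaluated objective vector, discard nothing."""
    log: list = field(default_factory=list)  # entries: (evaluation_count, objectives)

    def record(self, evals, objectives):
        self.log.append((evals, tuple(objectives)))

    def indicator_profile(self, indicator, budgets):
        """Analysis-time: re-evaluate any chosen indicator at any chosen budgets."""
        return {b: indicator([y for t, y in self.log if t <= b]) for b in budgets}


def hypervolume_2d(points, ref=(1.0, 1.0)):
    """Hypervolume dominated w.r.t. `ref` for bi-objective minimization (illustrative)."""
    # Keep only points that strictly dominate the reference point, sorted by f1.
    pts = sorted({p for p in points if p[0] < ref[0] and p[1] < ref[1]})
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # non-dominated: f2 strictly improves along ascending f1
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv


# Usage: log three evaluations, then derive an anytime hypervolume profile.
archive = UnboundedArchive()
for evals, y in [(1, (0.9, 0.2)), (2, (0.4, 0.6)), (3, (0.2, 0.9))]:
    archive.record(evals, y)
print(archive.indicator_profile(hypervolume_2d, budgets=[1, 2, 3]))
# -> {1: 0.08, 2: 0.28, 3: 0.30}
```

Because the archive is unbounded, the same log can later be re-analyzed with a different indicator, reference point, or budget aggregation without rerunning the experiment, which is the separation between experimental design and analysis decisions described in the abstract.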
| Original language | English |
|---|---|
| Publication status | Published - 10 Dec 2024 |
Keywords
- cs.NE
Datasets
- MO-IOH: reproducibility files and additional examples
Vermetten, D. (Creator), Rook, J. (Creator), Preuß, O. L. (Creator), de Nobel, J. (Creator), Doerr, C. (Creator), López-Ibáñez, M. (Creator), Trautmann, H. (Creator) & Bäck, T. (Creator), Zenodo, 25 Sept 2024
DOI: 10.5281/zenodo.13843073, https://zenodo.org/records/13843073; DOI: 10.5281/zenodo.13843074, https://zenodo.org/records/13843074