Section: New Results
Benchmarking Numerical Optimizers
Participants: D. Brockhoff, B. Derbel, A. Liefooghe, T.-D. Tran, D. Tušar, T. Tušar (DOLPHIN), O. Ait Elhara, A. Atamna, A. Auger, N. Hansen (TAO team, Inria Saclay), P. Preux (Univ. Lille 3), O. Mersmann, T. Wagner (TU Dortmund University, Germany), B. Bischl (LMU Munich, Germany), Y. Akimoto (Shinshu University, Japan)
In terms of benchmarking numerical optimization algorithms, our research efforts went in two directions. On the one hand, we continued our work on benchmarking single-objective optimization algorithms with the Coco platform, where we began to focus on algorithms for expensive optimization (problems for which only a few function evaluations are affordable). In particular, we benchmarked algorithm variants from the MATSuMoTo library [52], [50] and the bandits-based global optimizer SOO (Simultaneous Optimistic Optimization) [33], and organized two workshops at CEC 2015 and GECCO 2015 (see also http://coco.gforge.inria.fr/). On the other hand, we started to develop an extension of the Coco platform towards multi-objective optimization and worked to carry over the state of the art of single-objective benchmarking (target-based runtimes, data profiles, etc.) to the multi-objective case [30]. At the same time, we proposed a new bi-objective test suite consisting of 300 well-understood, scalable test problems.
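To illustrate the target-based runtime measure mentioned above, the sketch below computes the expected running time (ERT) to reach a given target value from a set of independent runs, i.e. the total number of function evaluations spent divided by the number of runs that reached the target. This is a minimal illustration of the general idea only; the function name and data layout are hypothetical and not taken from the Coco code base.

```python
def expected_running_time(runs, target):
    """Expected running time (ERT) to reach `target`.

    `runs` is a list of (evaluations, best_f_value) pairs, one per
    independent run: for successful runs, `evaluations` is the number
    of function evaluations until the target was first reached; for
    unsuccessful runs, it is the full budget spent.
    """
    successes = sum(1 for evals, best_f in runs if best_f <= target)
    if successes == 0:
        return float("inf")  # target never reached by any run
    total_evals = sum(evals for evals, _ in runs)
    return total_evals / successes


# Hypothetical data: four runs of an optimizer on one test problem,
# recorded as (function evaluations, best objective value found).
runs = [(1200, 1e-9), (800, 1e-7), (2000, 1e-9), (1500, 3e-2)]
print(expected_running_time(runs, target=1e-8))  # (1200+800+2000+1500)/2 = 2750
```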