Dear colleagues,
Please find below the announcement for the next
GECCO workshop on Real-Parameter Black-Box Optimization Benchmarking (BBOB 2018).
Black-box optimization problems occur in many application areas and
several types of optimization algorithms have been proposed for this
class of problems. One of the main questions when solving a
black-box problem in practice is which algorithm (type) to use.
Most of the time, we fall back on numerical benchmarking of such
algorithms to understand and recommend them, but this is a
non-trivial and tedious task. In recent years, the Comparing
Continuous Optimizers platform (COCO, github.com/numbbo/coco) has
been developed to free algorithm designers and users of optimization
software alike from the tedious tasks of setting up experiments
and analyzing performance data by automating the benchmarking
process. Three black-box optimization benchmarking (BBOB) suites
have been made available so far, jointly offering more than 200
algorithm data sets to the optimization community.
Though the basis of the BBOB workshop series is the COCO platform,
we welcome any submission related to black-box
optimization benchmarking of continuous optimizers in the widest sense,
for example papers that:
- discuss, compare, and improve upon any benchmarking methodology
for continuous optimizers such as design of experiments,
performance measures, presentation methods, benchmarking frameworks,
test functions...
- describe and benchmark new or not-so-new algorithms on one of the
following testbeds,
- bbob testbed with 24 noiseless single-objective functions
- bbob-noisy with 30 noisy single-objective functions
- bbob-biobj with 55 noiseless bi-objective functions
- compare new or existing algorithms from the COCO/BBOB database,
- analyze the data obtained in previous editions of BBOB.
The latest COCO release (version 2.2) offers several new features
that make COCO more efficient to use and that, in particular, make it
easier to focus on using and investigating the already existing data
sets:
- automated download of algorithm data sets via cocopp.data_archive
(see the sketch after this list),
- updated convergence plots,
- reorganized LaTeX tables that allow better control within LaTeX,
- the noisy LaTeX template, moved from the svn repository to GitHub
and updated,
- Python 3 compatibility of the postprocessing module cocopp,
- updated and beautified figures (and their inclusion in the LaTeX
templates) to comply with the newest matplotlib version,
- support for zip files as input files.
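As an illustration of the data archive, here is a minimal Python
sketch; the result folder 'exdata/my-algorithm' and the archive query
'BFGS' are hypothetical placeholders, and the exact calls may differ
slightly in your installation (see help(cocopp)):

    import cocopp

    # list archived data sets whose name contains a given substring
    print(cocopp.data_archive.find('BFGS'))

    # postprocess a local result folder and compare it with an archived
    # data set: 'BFGS!' picks the first archive entry matching 'BFGS',
    # and the matching archived data are downloaded automatically
    cocopp.main('exdata/my-algorithm BFGS!')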
To be notified about further releases of the COCO code and information
related to the workshop, please register at
http://numbbo.github.io/register.
Please feel free to forward this message to anyone who might be
interested, and our apologies if you receive it multiple times.
Thanks,
The BBOBies
WORKSHOP ON REAL-PARAMETER BLACK-BOX OPTIMIZATION BENCHMARKING
with a continued focus on multi-objective problems
to be held as part of the
2018 Genetic and Evolutionary Computation Conference (GECCO-2018)
July 15-19, Kyoto, Japan
Organized by ACM SIGEVO
http://gecco-2018.sigevo.org
GECCO 2018 WORKSHOP
8th Real-Parameter Black-Box Optimization Benchmarking (BBOB-2018)
Submission Deadline: Tuesday, March 27, 2018
Webpage:
http://numbbo.github.io/workshops/BBOB-2018/
Register for news via
http://numbbo.github.io/register
Source code:
http://github.com/numbbo/coco (scroll down for a quick start)
The Black-Box Optimization Benchmarking (BBOB) methodology associated
with the BBOB GECCO workshops has become a well-established standard for
benchmarking stochastic and deterministic continuous optimization
algorithms in recent years (https://github.com/numbbo/coco). So far,
the BBOB GECCO workshops have covered benchmarking of black-box
optimization algorithms for single- and bi-objective, unconstrained
problems in exact and noisy, as well as expensive and non-expensive
scenarios. A substantial portion of this success can be attributed
to the Comparing Continuous Optimizers platform (COCO), which forms
the basis of all BBOB GECCO workshops and which automates the
benchmarking of algorithms and the visualization of their
performance data.
As for the previous editions of the workshop, we will provide source
code in various languages (C/C++, Matlab/Octave, Java, and Python) to
benchmark algorithms on three different test suites (single-objective
suites with and without noise as well as a noiseless bi-objective
suite). Postprocessing data and comparing algorithm performance are
equally automated with COCO (up to ready-made LaTeX templates for
writing papers). As a new feature of the 2018 edition, we provide
significantly easier access to the already benchmarked data sets, so
that the analysis of already available COCO data becomes simple(r).
Analyzing the vast amount of available benchmarking data (from 200+
experiments collected throughout the years) will therefore be a
special focus of BBOB-2018. Given that the field of (multiobjective)
Bayesian optimization has received renewed interest in the recent
past, we would also like to re-focus our efforts towards benchmarking
algorithms for expensive problems (aka surrogate-assisted algorithms
developed for limited budgets). Moreover, several classical
multiobjective optimization algorithms have not yet been benchmarked
on the bbob-biobj test suite, available since 2016. We therefore
encourage contributions on the following three topics in particular:
- expensive/Bayesian/surrogate-assisted optimization,
- multiobjective optimization,
- analysis of existing benchmarking data.
Interested participants of the workshop are invited to submit a paper
(not limited to the above topics), which may, but need not, use the
provided LaTeX templates to visualize the performance of unconstrained
single- or multiobjective black-box optimization algorithms of their
choice on any of the provided testbeds. We particularly encourage
submissions about algorithms from outside the evolutionary computation
community, as well as papers on any topic related to optimization
algorithm benchmarking.
For details, please see the separate BBOB-2018 web page at:
http://numbbo.github.io/workshops/BBOB-2018/
Submissions:
We encourage any submission that is concerned with black-box
optimization benchmarking of continuous optimizers, for example papers
that:
- describe and benchmark new or not-so-new algorithms on one of the
above testbeds,
- compare new or existing algorithms from the COCO/BBOB database,
- analyze the data obtained in previous editions of BBOB, or
- discuss, compare, and improve upon any benchmarking methodology
for continuous optimizers such as design of experiments,
performance measures, presentation methods, benchmarking
frameworks, test functions, ...
Paper submissions are expected to be done through the official GECCO
submission system at
https://ssl.linklings.net/conferences/gecco/
until the (hard) deadline on March 27, 2018.
In order to finalize your submission, we kindly ask you to
additionally fill in the form at
http://numbbo.github.io/submit
where you should also provide a link to your data, if applicable. To
upload your data, you might want to use
https://zenodo.org/
which offers uploads of data sets up to 50GB in size, or any other
provider of online data storage. If you do not provide any data, for
example because your paper is unrelated to the above BBOB test
suites, please briefly explain why in the mandatory Data field.
Source Code:
The basis of the workshop is the Comparing Continuous Optimizers
platform (COCO, https://github.com/numbbo/coco), now rewritten fully
in ANSI C with other languages calling the C code. Languages currently
available are C, Java, MATLAB/Octave, and Python. Please use the
latest version of the code, but at least COCO version 2.2.
Please note that the `bbob-noisy` test suite is not yet supported
by the new code. In case you want to submit a paper related to this
test suite, please use the previous code base at
http://coco.gforge.inria.fr/doku.php?id=downloads
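For reference, here is a minimal Python sketch of an experiment on
the bbob suite, along the lines of the example experiment shipped
with COCO; the solver (scipy's Nelder-Mead fmin) and the evaluation
budget are arbitrary placeholder choices:

    import cocoex
    import scipy.optimize

    suite = cocoex.Suite("bbob", "", "")  # the 24 noiseless functions
    observer = cocoex.Observer("bbob", "result_folder: my-algorithm")

    for problem in suite:  # loop over all problem instances
        problem.observe_with(observer)  # log data for postprocessing
        # run any solver within a budget; the problem is callable as f(x)
        scipy.optimize.fmin(problem, problem.initial_solution,
                            maxfun=100 * problem.dimension, disp=False)
        problem.free()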
Important Dates:
- 2018-02-27 paper submission system opened
- 2018-03-01 release 2.2 of the COCO platform: https://github.com/numbbo/coco/releases/
- 2018-03-27 paper and data submission deadline (not extendable!)
- 2018-04-10 decision notification
- 2018-04-24 deadline camera-ready papers
- 2018-07-15 or 2018-07-16 workshop
Organizers:
- Anne Auger, Inria Saclay - Ile-de-France, France
- Julien Bect, CentraleSupélec, France
- Dimo Brockhoff, Inria Saclay - Ile-de-France, France
- Nikolaus Hansen, Inria Saclay - Ile-de-France, France
- Rodolphe Le Riche, École Nationale Supérieure des Mines de Saint-Étienne, France
- Victor Picheny, INRA Occitanie-Toulouse, France
- Tea Tušar, Jožef Stefan Institute, Ljubljana, Slovenia
GECCO is sponsored by the Association for Computing Machinery Special
Interest Group on Genetic and Evolutionary Computation (SIGEVO). SIG
Services: 2 Penn Plaza, Suite 701, New York, NY, 10121, USA,
1-800-342-6626 (USA and Canada) or +1-212-626-0500 (Global).