Wednesday, July 11, 2018

UQLab - The Uncertainty Quantification Framework - v1.1 now available


The new version of UQLab (v1.1) was released on July 5, 2018.

UQLab is a general-purpose uncertainty quantification software framework developed at ETH Zurich (Switzerland). It is made of interacting modules that address all types of uncertainty quantification problems in engineering, economics, and the applied sciences: uncertainty propagation through Monte Carlo simulation (e.g., using surrogate models such as Gaussian processes and polynomial chaos expansions), global sensitivity analysis, rare-event estimation, etc.
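As a generic illustration of plain Monte Carlo uncertainty propagation (a sketch in Python, not UQLab's MATLAB-based API; the two-input model below is made up for the example):

```python
import numpy as np

# Toy Monte Carlo uncertainty propagation: push input uncertainty
# through a model and summarize the output distribution.
rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for a computational model
    return x[:, 0] ** 2 + np.sin(x[:, 1])

# 100,000 samples of two independent Gaussian inputs
X = rng.normal(loc=[1.0, 0.0], scale=[0.1, 0.5], size=(100_000, 2))
Y = model(X)

print(f"mean = {Y.mean():.4f}, std = {Y.std():.4f}")
print(f"P(Y > 2) = {(Y > 2).mean():.2e}")  # crude rare-event estimate
```

In practice the model is an expensive simulator, which is why surrogate models (Gaussian processes, polynomial chaos expansions) are substituted for it before sampling.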

Three brand-new modules are released in v1.1:

  • UQLink allows for a seamless connection of third-party software to UQLab through a universal "wrapping" of external codes based on templates and a markup system.
  • Support vector machines for regression (SVR) and classification (SVC) allow the analyst to build predictive models from data. In the context of uncertainty quantification, SVR can be used to build surrogate models of complex simulators from designs of computer experiments, while SVC can be used for rare-event estimation (a generic sketch follows this list).
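Since UQLab itself is MATLAB-based, the following is only a generic sketch of the SVR-as-surrogate idea, written with scikit-learn; the simulator and all settings are illustrative:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

def simulator(x):
    # Stand-in for an expensive computational model
    return np.sin(3 * x) + 0.5 * x

# Small design of computer experiments
X_design = rng.uniform(-2, 2, size=(40, 1))
y_design = simulator(X_design).ravel()

# Fit an SVR surrogate, then use it in place of the simulator
surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_design, y_design)

X_new = np.linspace(-2, 2, 5).reshape(-1, 1)
print(surrogate.predict(X_new))  # fast surrogate predictions
```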


Visit our website www.uqlab.com. Register, download, enjoy!

---
The UQLab Development Team
ETH Zürich | Chair of Risk, Safety & Uncertainty Quantification
Prof. Bruno Sudret

Friday, June 22, 2018

Rémi Stroh's thesis defense (Tuesday, June 26)

Rémi Stroh will defend his PhD thesis next Tuesday, June 26, at 2 pm, at CentraleSupélec.

The defense will (of course) be followed by a reception...

For more details, see below. Everyone is welcome!



        

The STIC doctoral school
(Sciences et Technologies de l'Information
et de la Communication)

and the L2S research laboratory
(Laboratoire des Signaux et Systèmes)


present

the DEFENSE ANNOUNCEMENT of Mr Rémi STROH


Authorized to present his work for the degree of Doctor of Université Paris-Saclay, prepared at CentraleSupélec, in:

Signal and image processing


« Planification d'expériences numériques en multi-fidélité. Application à un simulateur d'incendies. »
(Design of multi-fidelity computer experiments. Application to a fire simulator.)


on TUESDAY, JUNE 26, 2018, at 2:00 pm

at

Amphithéâtre Janet
CentraleSupélec, Breguet building,
3 rue Joliot-Curie, 91190 Gif-sur-Yvette.


Jury members:

Mr David GINSBOURGER - Reviewer
Senior researcher, Idiap Research Institute (UQOD), Switzerland
Titular professor, University of Bern (IMSV), Switzerland

Ms Amandine MARREL - Reviewer
Research engineer, CEA (DER/SESI), France

Mr Hervé MONOD - Examiner
Research director, INRA (MaIAGE), France

Mr Bernard PORTERIE - Examiner
Professor, Aix-Marseille University (IUSTI), France

Mr Julien BECT - Thesis advisor
Associate professor, CentraleSupélec (L2S), France

Ms Séverine DEMEYER - Co-supervisor
PhD engineer, LNE (DMS), France

Mr Bertrand IOOSS - Invited member
Senior researcher, EDF R&D (PRISME), France

Mr Nicolas FISCHER - Co-supervisor, invited member
PhD engineer, LNE (DMS), France

Mr Damien MARQUIS - Co-supervisor, invited member
PhD engineer, LNE (DCFSI), France

Mr Emmanuel VAZQUEZ - Co-supervisor, invited member
Associate professor, CentraleSupélec (L2S), France






Abstract:

This work focuses on multi-fidelity numerical models, deterministic or stochastic. More precisely, the models under consideration have a parameter that controls the quality of the simulation, such as the mesh size in a finite-difference model or the number of samples in a Monte Carlo model. Such a simulator can thus run low-fidelity simulations, fast but coarse, or high-fidelity simulations, accurate but expensive. A multi-fidelity approach combines results from different levels of fidelity in order to save computational time. The method considered here is based on a Bayesian approach: the simulator is described by a state-of-the-art multilevel Gaussian process model, which we adapt to stochastic cases in a fully Bayesian framework. This meta-model of the simulator yields estimates of any quantity of interest, together with a measure of the associated uncertainty. The goal is then to choose new experiments to run in order to improve these estimates. In particular, the design must select the level of fidelity that achieves the best trade-off between observation cost and information gain. To this end, we propose a sequential strategy suited to variable observation costs, called Maximal Rate of Uncertainty Reduction (MRUR), which consists in choosing the observation point that maximizes the ratio of uncertainty reduction to cost. The methodology is illustrated in fire safety, where we estimate the probability of failure of a smoke-extraction system.
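To make the MRUR criterion concrete, here is a deliberately simplified toy sketch (our own illustration, not the thesis code): a single Gaussian process in which fidelity enters only through the observation noise and the cost, whereas the thesis uses a genuine multilevel GP model. The candidate (point, level) pair maximizing the ratio of integrated-variance reduction to observation cost is selected:

```python
import numpy as np

def k(a, b, ell=0.3):
    """Squared-exponential covariance with unit variance."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

noise = {0: 0.2, 1: 0.01}  # level 0: cheap but noisy; level 1: accurate
cost = {0: 1.0, 1: 25.0}   # ... but 25 times more expensive

x = np.array([0.1, 0.5, 0.9])      # current design points
levels = [0, 0, 1]                 # fidelity level of each observation
grid = np.linspace(0.0, 1.0, 101)  # where predictive uncertainty is tracked

def imse(x, levels):
    """Integrated (mean) posterior variance of the GP over the grid."""
    K = k(x, x) + np.diag([noise[l] for l in levels])
    Kg = k(grid, x)
    return np.mean(1.0 - np.einsum('ij,ij->i', Kg, np.linalg.solve(K, Kg.T).T))

base = imse(x, levels)
candidates = [(xc, l) for xc in np.linspace(0.0, 1.0, 21) for l in (0, 1)]

# MRUR: pick the candidate with the highest rate of uncertainty
# reduction per unit of observation cost.
rate = [(base - imse(np.append(x, xc), levels + [l])) / cost[l]
        for xc, l in candidates]
x_next, l_next = candidates[int(np.argmax(rate))]
print(f"next run: x = {x_next:.2f} at fidelity level {l_next}")
```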

Monday, June 11, 2018

AMIES 2018 thesis prize

Applications for the AMIES 2018 thesis prize are open until July 3.




Excerpt from the AMIES website (translated from French): "This thesis prize was created in 2013 to promote mathematics theses carried out with companies. It rewards thesis work conducted in part in collaboration with a company and having direct benefits for that company. The work must also be relevant from the point of view of mathematical research. The thesis must have been defended in France during the calendar year preceding the award."



Friday, June 8, 2018

Workshop on optimization and machine learning, September 2018, Toulouse

I'd like to advertise the following workshop:
http://www.cimi.univ-toulouse.fr/optimisation/en/workshop-optimization-and-machine-learning
Optimization and Machine Learning
September 10-13, 2018
Institut de Mathématiques de Toulouse

Thursday, May 17, 2018

ENBIS-18 (September 2-6)

The ENBIS-18 conference will take place in Nancy from September 2 to 6, 2018.


A non-exhaustive list of conference topics:
  • Design of Experiments
  • Measurement Uncertainty
  • Process Modeling and Control
  • Reliability and Safety
  • Statistics in the Pharmaceutical Industry
  • Statistical Computing
  • Statistical Methods for Industrial Hygiene
  • Operational Risk Management
  • Statistics in Practice
  • Stochastic Modelling
  • Quality Improvement and Six Sigma
  • Data Mining and Warehousing
  • Machine Learning
  • Predictive Analytics

The deadline for submitting abstracts is tomorrow (May 18), so there is still time 😀



Wednesday, March 14, 2018

MASCOT-NUM 2018: the program is out!




The full program of the GdR workshop is available on the conference website.


See you next week in Nantes!



lundi 12 mars 2018

8th GECCO workshop on Black-Box Optimization Benchmarking (BBOB 2018)

Dear colleagues,

Please find below the announcement for the next GECCO workshop on Real-Parameter Black-Box Optimization Benchmarking (BBOB 2018).

Black-box optimization problems occur in many application areas, and several types of optimization algorithms have been proposed for this class of problems. One of the main questions when solving a black-box problem in practice is deciding which (type of) algorithm to use.

Most of the time, we fall back on numerical benchmarking of such algorithms to understand and recommend them, but this is a non-trivial and tedious task. In recent years, the Comparing Continuous Optimizers platform (COCO, github.com/numbbo/coco) has been developed to free algorithm designers and users of optimization software alike from this tedium by automating the benchmarking process: it handles both setting up experiments and analyzing the resulting performance data. Three black-box optimization benchmarking (BBOB) suites have been made available so far, with jointly more than 200 algorithm data sets available to the optimization community.

Though the basis of the BBOB workshop series is the COCO platform, we look forward to any submission related to black-box optimization benchmarking of continuous optimizers in the widest sense, for example papers that:
  • discuss, compare, and improve upon any benchmarking methodology for continuous optimizers such as design of experiments, performance measures, presentation methods, benchmarking frameworks, test functions...
  • describe and benchmark new or not-so-new algorithms on one of the following testbeds,
    • bbob testbed with 24 noiseless single-objective functions
    • bbob-noisy with 30 noisy single-objective functions
    • bbob-biobj with 55 noiseless bi-objective functions
  • compare new or existing algorithms from the COCO/BBOB database,
  • analyze the data obtained in previous editions of BBOB.

The latest COCO release (version 2.2) offers a number of new features that make the platform more efficient to use and, in particular, make it easier to focus on using and investigating the already existing data sets (see the sketch after this list):
  • automated download of algorithm data sets via cocopp.data_archive,
  • updated convergence plots,
  • reorganized LaTeX tables, for better control within LaTeX,
  • the noisy LaTeX template has been moved from the svn repository to GitHub and updated,
  • the postprocessing module cocopp is now Python 3 compatible,
  • all figures (and their inclusion in the LaTeX templates) have been updated and beautified to comply with the newest matplotlib version,
  • zip files are supported as input files.
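For instance, post-processing a local experiment folder and comparing it against archived data can be done in a couple of lines. The snippet below is a hedged sketch: the archived algorithm name is illustrative (check the cocopp.data_archive listings for actual entries) and 'exdata/my-algorithm' is a placeholder for your own results folder:

```python
import cocopp  # COCO post-processing module (version >= 2.2)

# Post-process a local experiment folder and compare it against an
# archived data set; archived names are resolved and downloaded
# automatically via cocopp.data_archive. 'BIPOP-CMA-ES' is an
# illustrative entry name, 'exdata/my-algorithm' a placeholder.
cocopp.main('exdata/my-algorithm BIPOP-CMA-ES')
```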

To be notified about further releases of the COCO code and information related to the workshop, please register at http://numbbo.github.io/register

Please don't hesitate to forward this message to anyone who might be interested, and our apologies if you have received it multiple times.

Thanks,
The BBOBies



WORKSHOP ON REAL-PARAMETER BLACK-BOX OPTIMIZATION BENCHMARKING
with a continued focus on multi-objective problems

to be held as part of the

2018 Genetic and Evolutionary Computation Conference (GECCO-2018)
July 15-19, Kyoto, Japan

Organized by ACM SIGEVO
http://gecco-2018.sigevo.org


GECCO 2018 WORKSHOP
8th Real-Parameter Black-Box Optimization Benchmarking (BBOB-2018)
Submission Deadline: Tuesday, March 27, 2018
Webpage: http://numbbo.github.io/workshops/BBOB-2018/
Register for news via http://numbbo.github.io/register
Source code: http://github.com/numbbo/coco (scroll down for a quick start)


The Black-Box Optimization Benchmarking (BBOB) methodology associated with the BBOB GECCO workshops has become a well-established standard for benchmarking stochastic and deterministic continuous optimization algorithms in recent years (https://github.com/numbbo/coco). So far, the BBOB GECCO workshops have covered benchmarking of black-box optimization algorithms for single- and bi-objective, unconstrained problems in exact and noisy, as well as expensive and non-expensive scenarios. A substantial portion of this success can be attributed to the Comparing Continuous Optimizers platform (COCO), which builds the basis for all BBOB GECCO workshops and allows algorithms to be benchmarked and performance data to be visualized with minimal effort.

As for previous editions of the workshop, we will provide source code in various languages (C/C++, Matlab/Octave, Java, and Python) to benchmark algorithms on three different test suites (single-objective with and without noise, as well as a noiseless bi-objective suite). Postprocessing the data and comparing algorithm performance are equally automated with COCO (up to already-prepared LaTeX templates for writing papers). As a new feature of the 2018 edition, we provide significantly easier access to the already benchmarked data sets, so that analyzing the available COCO data becomes simpler.

Analyzing the vast amount of available benchmarking data (from 200+ experiments collected throughout the years) will therefore be a special focus of BBOB-2018. Given that the field of (multiobjective) Bayesian optimization has received renewed interest in the recent past, we would also like to re-focus our efforts on benchmarking algorithms for expensive problems (a.k.a. surrogate-assisted algorithms developed for limited budgets). Moreover, several classical multiobjective optimization algorithms have not yet been benchmarked on the bbob-biobj test suite, available since 2016. We therefore encourage contributions on the following three topics in particular:
  • expensive/Bayesian/surrogate-assisted optimization,
  • multiobjective optimization,
  • analysis of existing benchmarking data.

Interested participants are invited to submit a paper (not limited to the above topics), which may or may not use the provided LaTeX templates to visualize the performance of unconstrained single- or multiobjective black-box optimization algorithms of their choice on any of the provided testbeds. We particularly encourage submissions about algorithms from outside the evolutionary computation community, as well as any papers related to optimization algorithm benchmarking.

For details, please see the separate BBOB-2018 web page at:
http://numbbo.github.io/workshops/BBOB-2018/

Submissions:
We encourage any submission that is concerned with black-box optimization benchmarking of continuous optimizers, for example papers that:
  •  describe and benchmark new or not-so-new algorithms on one of the above testbeds,
  • compare new or existing algorithms from the COCO/BBOB database,
  • analyze the data obtained in previous editions of BBOB, or
  • discuss, compare, and improve upon any benchmarking methodology for continuous optimizers such as design of experiments, performance measures, presentation methods, benchmarking frameworks, test functions, ...

Paper submissions must be made through the official GECCO submission system at https://ssl.linklings.net/conferences/gecco/ before the (hard) deadline on March 27, 2018.

To finalize your submission, we kindly ask you to additionally fill in the form at http://numbbo.github.io/submit, where you should also provide a link to your data, if applicable. To upload your data, you might want to use https://zenodo.org/, which offers uploads of data sets up to 50GB in size, or any other provider of online data storage. If you do not provide any data, for example because your paper is unrelated to the above BBOB test suites, please briefly explain why in the mandatory Data field.

Source Code:
The basis of the workshop is the Comparing Continuous Optimizers platform (COCO, https://github.com/numbbo/coco), now rewritten fully in ANSI C, with other languages calling the C code. Languages currently available are C, Java, MATLAB/Octave, and Python; a minimal Python sketch is given below. Please use the latest version of the code, and at least COCO version 2.2.
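As a rough illustration of what an experiment looks like with the Python bindings, here is a minimal sketch along the lines of the example script shipped with COCO (the optimizer and the result-folder name are arbitrary choices):

```python
import cocoex          # experimentation module of the COCO platform
import scipy.optimize  # the optimizer under test (here: Nelder-Mead)

# Run a baseline optimizer on the 24 noiseless single-objective
# bbob functions and log performance data for post-processing.
suite = cocoex.Suite("bbob", "", "")
observer = cocoex.Observer("bbob", "result_folder: scipy-fmin")
for problem in suite:
    problem.observe_with(observer)  # attach the data logger
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
    problem.free()
# The resulting folder can then be post-processed with cocopp.
```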

Please note that the `bbob-noisy` test suite is not yet supported by the new code. In case you want to submit a paper related to this test suite, please use the previous code base at http://coco.gforge.inria.fr/doku.php?id=downloads

Important Dates:
  • 2018-02-27 paper submission system opened
  • 2018-03-01 release 2.2 of the COCO platform: https://github.com/numbbo/coco/releases/ 
  • 2018-03-27 paper and data submission deadline (not extendable!)
  • 2018-04-10 decision notification
  • 2018-04-24 deadline camera-ready papers
  • 2018-07-15 or 2018-07-16 workshop

Organizers:
  • Anne Auger, Inria Saclay - Ile-de-France, France
  • Julien Bect, CentraleSupélec, France
  • Dimo Brockhoff, Inria Saclay - Ile-de-France, France
  • Nikolaus Hansen, Inria Saclay - Ile-de-France, France
  • Rodolphe Le Riche, Ecole Nationale Supérieure des Mines de Saint-Etienne, France
  • Victor Picheny, INRA Occitanie-Toulouse, France
  • Tea Tušar, Jožef Stefan Institute, Ljubljana, Slovenia

GECCO is sponsored by the Association for Computing Machinery Special Interest Group on Genetic and Evolutionary Computation (SIGEVO). SIG Services: 2 Penn Plaza, Suite 701, New York, NY, 10121, USA, 1-800-342-6626 (USA and Canada) or +1-212-626-0500 (Global).