Iterative Optimization Heuristics Profiler

Development Team

  • Hao Wang, Leiden Institute of Advanced Computer Science,
  • Carola Doerr, CNRS and Sorbonne University,
  • Furong Ye, Leiden Institute of Advanced Computer Science,
  • Sander van Rijn, Leiden Institute of Advanced Computer Science,
  • Thomas Bäck, Leiden Institute of Advanced Computer Science.

Reference

When using IOHprofiler and parts thereof, please kindly cite this work as

Carola Doerr, Hao Wang, Furong Ye, Sander van Rijn, Thomas Bäck: IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics, arXiv e-prints:1810.05281, 2018.

@ARTICLE{IOHprofiler,
  author = {Carola Doerr and Hao Wang and Furong Ye and Sander van Rijn and Thomas B{\"a}ck},
  title = {{IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics}},
  journal = {arXiv e-prints:1810.05281},
  archivePrefix = "arXiv",
  eprint = {1810.05281},
  year = 2018,
  month = oct,
  keywords = {Computer Science - Neural and Evolutionary Computing},
  url = {https://arxiv.org/abs/1810.05281}
}

Acknowledgements

Carola Doerr thanks Anne Auger and Dimo Brockhoff from INRIA Saclay, France, for very helpful discussions about the performance criteria used by the COCO (COmparing Continuous Optimisers) platform (https://github.com/numbbo/coco). Furong Ye acknowledges financial support from the China Scholarship Council, CSC No. 201706310143.

License

This application is governed by the BSD 3-Clause license.

BSD 3-Clause License

Copyright © 2018, All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

IOHprofiler: Post-Processing

This is the post-processing tool of the project Iterative Optimization Heuristics Profiler (IOHprofiler). It provides a web-based interface to analyze and visualize the benchmark data collected in previous experiments. Importantly, we also support the widely used COCO data format (aka Black-Box Optimization Benchmarking).

This tool is built mainly on the R packages Shiny, plotly, and Rcpp. To use this tool, two options are available:

  1. local installation and execution (see installation instructions) and
  2. a web-based service that you can use right away.

Documentation

The details on the experimentation and post-processing tool can be found on arXiv.org.

Installation

This software is written mainly in R. To run it directly from the source code, please install the R environment first. The binaries and installation manual for R can be found at https://cran.r-project.org/.

After the R environment is correctly installed on your machine, several R packages are needed to run the software. Please start the R console, which can be done (in case you're not familiar with R) by either executing the command R in your system terminal or opening the R application. Then, copy, paste, and execute the following command in the R console to install all dependencies:

install.packages(c('shiny', 'shinyjs', 'shinydashboard', 'magrittr', 'dplyr', 'reshape2', 'data.table', 'markdown', 'Rcpp', 'plotly'))

Note that it is important to check whether the aforementioned packages are correctly installed. The easiest method is to test whether they can be loaded:

library(shiny)
library(shinyjs)
library(shinydashboard)
library(magrittr)
library(dplyr)
library(reshape2)
library(data.table)
library(markdown)
library(Rcpp)
library(plotly)

Error messages will be shown in your R console if there is any installation issue.
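To check the installation in one step, the following base-R sketch (a convenience idiom, not part of the official instructions) lists which of the required packages are still missing:

```r
# Check which of the required packages are still missing on this machine.
pkgs <- c('shiny', 'shinyjs', 'shinydashboard', 'magrittr', 'dplyr',
          'reshape2', 'data.table', 'markdown', 'Rcpp', 'plotly')
missing <- setdiff(pkgs, rownames(installed.packages()))
if (length(missing) > 0) {
  message('Still missing: ', paste(missing, collapse = ', '))
} else {
  message('All dependencies are installed.')
}
```

Any packages it reports can then be installed with install.packages as shown above.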

Then, please clone (or download) this repository onto your own system. To clone the repository, please execute either of the following commands in your system terminal (the first uses SSH, the second HTTPS):

> git clone git@github.com:IOHprofiler/Post-Processing.git
> git clone https://github.com/IOHprofiler/Post-Processing.git

To download, please click the green download button on this page.

To start the post-processing tool, please execute the following command in the R console:

> shiny::runApp('/path/to/the/clone/folder')

Online Service

Alternatively, we host this tool as an online service, currently maintained at the Leiden Institute of Advanced Computer Science, Leiden University. The service can be accessed via http://iohprofiler.liacs.nl.

Data Preparation

Data preparation is fairly easy for this tool: just compress the data folder obtained from the experiment into a zip file and upload it. Currently, we support two data formats: the IOHprofiler format and the COCO (Black-Box Optimization Benchmarking) format.

Programming Interface

In addition to the graphical user interface, it is possible to directly call several procedures to analyze the data.

  • To read and align all the data sets in a folder:
> ds <- read_dir('/path/to/data/folder')
> ds
DataSetList:
1: DataSet((1+1)-Cholesky-CMA on f1 2D)
2: DataSet((1+1)-Cholesky-CMA on f1 5D)
3: DataSet((1+1)-Cholesky-CMA on f1 10D)
4: DataSet((1+1)-Cholesky-CMA on f1 20D)
5: DataSet((1+1)-Cholesky-CMA on f10 2D)
6: DataSet((1+1)-Cholesky-CMA on f10 5D)
7: DataSet((1+1)-Cholesky-CMA on f10 10D)
8: DataSet((1+1)-Cholesky-CMA on f10 20D)
9: DataSet((1+1)-Cholesky-CMA on f11 2D)
10: DataSet((1+1)-Cholesky-CMA on f11 5D)

The return value is a list of DataSets. Each data set consists of:

  1. runtime samples (aligned by target values),
  2. function value samples (aligned by runtime), and
  3. endogenous parameter samples of your optimization algorithm (aligned by target values).
  • To get a general summary of one data set, you can use the function summary:
> summary(ds[[1]])
DataSet Object: ((1+1)-Cholesky-CMA, f1, 2D)
80 instance are contained: 1,2,3,4,5,6,7,...,73,74,75,76,77,78,79,80

               target runtime.mean runtime.median runtime.sd succ_rate
   1:     70.10819126       1.0000            1.0  0.0000000    1.0000
   2:     66.42131777       1.0125            1.0  0.1118034    1.0000
   3:     62.98712083       1.1125            1.0  0.8999824    1.0000
   4:     62.54395893       1.1375            1.0  0.9242684    1.0000
   5:     61.73051944       1.2000            1.0  1.1295793    1.0000
  ---                                                                 
1478: 9.473524187e-10     182.6000          182.0 24.0894168    0.0625
1479: 2.759534823e-10     192.0000          188.5 13.5892114    0.0500
1480: 2.463309556e-10     195.6667          195.0 14.0118997    0.0375
1481: 5.223910193e-11     196.0000          196.0 19.7989899    0.0250
1482: 1.638511549e-11     210.0000          210.0         NA    0.0125

    budget  Fvalue.mean Fvalue.median    Fvalue.sd
 1:      1 1.672518e+01  1.171157e+01 1.626487e+01
 2:      2 1.341813e+01  7.960940e+00 1.466877e+01
 3:      3 1.100825e+01  6.439678e+00 1.261937e+01
 4:      4 9.326633e+00  5.492333e+00 1.213908e+01
 5:      5 7.501883e+00  2.946388e+00 1.204200e+01
---                                               
90:    229 4.902827e-09  4.506106e-09 2.863671e-09
91:    231 4.902827e-09  4.506106e-09 2.863671e-09
92:    238 4.902827e-09  4.506106e-09 2.863671e-09
93:    251 4.737548e-09  4.461953e-09 2.526087e-09
94:    257 4.737548e-09  4.461953e-09 2.526087e-09

Attributes: names, class, funcId, DIM, Precision, algId, comment, datafile, instance, maxEvals, finalFunvals
  • To get a summary of one data set at given target values or budget values (e.g., the runtime distribution), you can use the functions get_RT_summary and get_FV_summary:
> get_RT_summary(ds[[1]], ftarget = 1e-1, maximization = FALSE)
             algId       f(x) runs  mean median       sd 2% 5% 10% 25% 50% 75% 90% 95% 98%
(1+1)-Cholesky-CMA 0.09986529   80 36.55   37.5 17.11236  4  5  14  22  37  49  57  67  68
> get_FV_summary(ds[[1]], runtimes = 100, maximization = FALSE)
             algId runtime runs           mean       median           sd           2%           5%          10%          25%          50%          75%          90%         95%         98%
(1+1)-Cholesky-CMA     100   80   0.0002333208 3.797025e-05 0.0004581431 9.843261e-08 4.168509e-07 8.343177e-07 6.090179e-06 3.797025e-05 0.0001831323 0.0006597004 0.001072814 0.001900295
> get_RT_sample(ds[[1]], ftarget = 1e-1, maximization = F, format = 'long')
                algId          f(x) run RT
1  (1+1)-Cholesky-CMA 0.09986528573   1 69
2  (1+1)-Cholesky-CMA 0.09986528573   2 39
3  (1+1)-Cholesky-CMA 0.09986528573   3 38
4  (1+1)-Cholesky-CMA 0.09986528573   4 34
5  (1+1)-Cholesky-CMA 0.09986528573   5 67
6  (1+1)-Cholesky-CMA 0.09986528573   6  3
7  (1+1)-Cholesky-CMA 0.09986528573   7 36
8  (1+1)-Cholesky-CMA 0.09986528573   8 41
9  (1+1)-Cholesky-CMA 0.09986528573   9 14
10 (1+1)-Cholesky-CMA 0.09986528573  10 30
  • It is also possible to generate some diagnostic plots (using ggplot2):
> ds <- read_dir('~/Dropbox/data/LO_adap_lambda/')
> plot(ds[[1]])
show data aligned by runtime?

:construction: TODO

The technical tasks to do are listed as follows:

  • [ ] convert the data processing code into a package
  • [ ] add more statistical tests
  • [ ] implement the standard R summary method for the DataSet and DataSetList classes
  • [ ] add ggplot2-based static plotting procedures for the programming interface
  • [ ] make the data analysis part a separate R package
  • [ ] determine the data source used when aligning data sets by runtime

Contact

If you have any questions, comments, suggestions or pull requests, please don't hesitate to contact us at IOHprofiler@liacs.leidenuniv.nl!


Upload Data

When the data set is huge, the alignment can take a very long time. In this case, you can toggle the efficient mode to subsample the data set; however, this reduces the precision of the data.

Remove all data you uploaded

Data Processing Prompt


                    

List of Processed Data

Runtime Statistics at Chosen Target Values

Set the range and the granularity of the results. The table will show fixed-target runtimes for evenly spaced target values.

Save this table as csv

This table summarizes for each algorithm and each target value chosen on the left:

  • runs: the number of runs that have found at least one solution of the required target quality \(f(x)\),
  • mean: the average number of function evaluations needed to find a solution of function value at least \(f(x)\),
  • median, \(2\%, 5\%,\ldots,98\%\): the quantiles of these first-hitting times.

When not all runs managed to find the target value, the statistics hold only for those runs that did. That is, the mean value is the mean of the successful runs. Same for the quantiles. An alternative version with simulated restarts is currently in preparation.
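For example (with made-up first-hitting times, where NA marks a run that never reached the target), the convention described above amounts to the following base-R computation:

```r
# Hypothetical first-hitting times of five runs; NA = target never reached.
hitting_times <- c(69, 39, 38, NA, 34)

succ_rate <- mean(!is.na(hitting_times))          # fraction of successful runs
mean_rt   <- mean(hitting_times, na.rm = TRUE)    # mean over successful runs only
median_rt <- median(hitting_times, na.rm = TRUE)  # quantiles use the same convention
```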

Original Runtime Samples

Set the range and the granularity of the results. The table will show fixed-target runtimes for evenly spaced target values.

Save the aligned runtime samples as csv

This table shows for each selected algorithm \(A\), each selected target value \(f(x)\), and each run \(r\) the number \(T(A,f(x),r)\) of evaluations performed by the algorithm until it evaluated for the first time a solution of quality at least \(f(x)\).

Expected Runtime (per function)

Range of the displayed target values

The mean, median, and standard deviation of the runtime samples are depicted against the target values. The displayed elements (mean, median, standard deviation) can be switched on and off by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.

Histogram of Fixed-Target Runtimes

Choose whether the histograms are overlaid in one plot or separated in several subplots:

This histogram counts how many runs needed between \(t\) and \(t+1\) function evaluations. The bins \([t,t+1)\) are chosen automatically; the bin size is determined by the so-called Freedman–Diaconis rule: \(\text{Bin size}= 2\frac{Q_3 - Q_1}{\sqrt[3]{n}}\), where \(Q_1, Q_3\) are the \(25\%\) and \(75\%\) percentiles of the runtimes and \(n\) is the sample size. The displayed algorithms can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.
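The Freedman–Diaconis bin size can be sketched in base R as follows (a direct transcription of the formula above, not the tool's internal code):

```r
# Freedman–Diaconis bin size: 2 * IQR / n^(1/3).
fd_bin_size <- function(x) {
  q <- quantile(x, probs = c(0.25, 0.75), names = FALSE)
  2 * (q[2] - q[1]) / length(x)^(1 / 3)
}
```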

Empirical Probability Mass Function of the Runtime

Select the target value for which the runtime distribution is shown

Warning! The probability mass function of the runtime is approximated by treating the runtime as a continuous random variable and applying kernel density estimation (KDE).
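Such an estimate can be obtained in base R with the density function (a sketch with made-up runtime samples; the tool's exact bandwidth choice may differ):

```r
# Treat the (discrete) runtimes as continuous and fit a Gaussian KDE.
runtimes <- c(69, 39, 38, 34, 67, 3, 36, 41, 14, 30)  # hypothetical samples
kde <- density(runtimes)  # Gaussian kernel with default bandwidth
# plot(kde) would show the estimated density of the runtime
```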

The plot shows the distribution of the first-hitting times of the individual runs (dots), together with the estimated probability mass function. The displayed algorithms can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure; this also includes the option to download the plot as a png file. A csv file with the runtime data can be downloaded from the Data Summary tab.

Empirical Cumulative Distribution of the runtime: Aggregation

Set the range and the granularity of the quality targets taken into account in the ECDF curve. The plot will show the ECDF curves for evenly spaced target values.

The evenly spaced target values are:


                          

The fraction of (run, target value) pairs \((i,v)\) satisfying that the best solution the algorithm has found in the \(i\)-th run within the given time budget \(t\) has quality at least \(v\) is plotted against the available budget \(t\). The displayed elements can be switched on and off by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.
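A minimal sketch of this aggregated ECDF value at a single budget (assuming maximization and made-up best-so-far values; not the tool's internal code):

```r
# Fraction of (run, target) pairs hit within the budget, assuming maximization.
ecdf_at_budget <- function(best_so_far, targets) {
  # best_so_far: best function value of each run within the chosen budget
  mean(outer(best_so_far, targets, `>=`))
}

# Two hypothetical runs with best values 3 and 1, two targets 2 and 4:
ecdf_at_budget(best_so_far = c(3, 1), targets = c(2, 4))
```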


Area Under the ECDF

Set the range and the granularity of the evenly spaced quality targets taken into account in the plot.

The area under the ECDF is calculated for the sequence of target values specified on the left. The displayed values are normalized against the maximal number of function evaluations for each algorithm. Intuitively, the larger the area, the better the algorithm. The displayed algorithms can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure. This also includes the option to download the plot as a png file.
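The area under an ECDF curve given as a set of points can be approximated with the trapezoidal rule, for example as follows (a generic sketch; the tool's exact normalization may differ):

```r
# Trapezoidal approximation of the area under a curve given as (x, y) points.
auc_trapezoid <- function(x, y) {
  sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
}

# Hypothetical ECDF rising from 0 to 1 over [0, 1], then staying flat:
auc_trapezoid(c(0, 1, 2), c(0, 1, 1))
```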

Empirical Cumulative Distribution of the Runtime: Single Target

Select the target values for which ECDF curves are displayed

Each ECDF curve shows the proportion of the runs that have found a solution of at least the required target value within the budget given by the \(x\)-axis. The displayed curves can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure. This also includes the option to download the plot as a png file.

Target Statistics at Chosen Budget Values

Set the range and the granularity of the results. The table will show function values that have been reached within evenly spaced evaluation budgets.

Save this table as csv

This table summarizes for each algorithm and each budget \(B\) chosen on the left:

  • runs: the number of runs that have performed at least \(B\) evaluations,
  • mean: the average best-so-far function value obtained within a budget of \(B\) evaluations,
  • median, \(2\%, 5\%,\ldots,98\%\): the quantiles of the best function values found within the first \(B\) evaluations.

When not all runs evaluated at least \(B\) search points, the statistics hold for the subset of runs that did. Alternative statistics using simulated restarted algorithms are in preparation.
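Analogously to the fixed-target case, this convention can be illustrated with made-up best-so-far values, where NA marks a run that stopped before \(B\) evaluations:

```r
# Hypothetical best-so-far function values after B evaluations;
# NA = the run stopped before reaching B evaluations.
best_at_B <- c(4.9e-9, 2.3e-7, NA, 1.1e-8)

runs_at_B <- sum(!is.na(best_at_B))         # runs with at least B evaluations
mean_fv   <- mean(best_at_B, na.rm = TRUE)  # mean over those runs only
```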

Original Target Samples

Set the range and the granularity of the results. The table will show function values that have been reached within evenly spaced evaluation budgets.

Save the aligned target samples as csv

This table shows for each selected algorithm \(A\), each selected budget \(B\), and each run \(r\) the best function value that the algorithm has found within its first \(B\) evaluations.

Histogram of Fixed-Budget Targets

Choose whether the histograms are overlaid in one plot or separated in several subplots:

This histogram counts the number of runs whose best-so-far function value within the first \(B\) evaluations lies between \(v_i\) and \(v_{i+1}\). The buckets \([v_i,v_{i+1})\) are chosen automatically according to the so-called Freedman–Diaconis rule: \(\text{Bin size}= 2\frac{Q_3 - Q_1}{\sqrt[3]{n}}\), where \(Q_1, Q_3\) are the \(25\%\) and \(75\%\) percentiles of the function values and \(n\) is the sample size. The displayed algorithms can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.

Empirical Probability Density Function of Fixed-Budget Function Values

Select the budget for which the distribution of best-so-far function values is shown

The plot shows, for the budget selected on the left, the distribution of the best-so-far function values of the individual runs (dots), together with an estimated probability density function. The displayed algorithms can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure. A csv file with the function value data can be downloaded from the Data Summary tab.

Expected Target Value (per function)

Range of the displayed budget values

The mean, median, and standard deviation of the best function values found within a fixed budget of evaluations are depicted against the budget. The displayed elements can be switched on and off by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.

Empirical Cumulative Distribution of the Fixed-Budget Values: Aggregation

Set the range and the granularity of the budgets taken into account in the ECDF curve. The plot will show the ECDF curves for evenly spaced budgets.

The evenly spaced budget values are:


                          

The fraction of (run, budget) pairs \((i,B)\) satisfying that the best solution the algorithm has found in the \(i\)-th run within the first \(B\) evaluations has quality at most \(v\) is plotted against the target value \(v\). The displayed elements can be switched on and off by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.


Area Under the ECDF

Set the range and the granularity of the evenly spaced budgets.

The area under the ECDF is calculated for the sequence of budget values specified on the left. The displayed values are normalized against the maximal target value recorded for each algorithm. Intuitively, the smaller the area, the better the algorithm. The displayed algorithms can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.

Empirical Cumulative Distribution of the Fixed-Budget Values: Single Budgets

Select the budgets for which ECDF curves are displayed

Each ECDF curve shows the proportion of the runs that have found, within the given budget \(B\), a solution of at least the required target value given by the \(x\)-axis. The displayed curves can be selected by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.

Expected Parameter Value (per function)

Range of the function values (\(x\) axis)

The mean or median of the internal parameters of the algorithm is depicted against the best-so-far function values (\(x\) axis). The displayed elements can be switched on and off by clicking on the legend on the right. A tooltip and toolbar appear when hovering over the figure.

Parameter Statistics at Chosen Target Values

Set the range and the granularity of the results. The table will show fixed-target parameter values for evenly spaced target values.

Save this table as csv

This table summarizes for each algorithm and each target value chosen on the left:

  • runs: the number of runs that have found at least one solution of the required target quality \(f(x)\),
  • mean: the average value of the recorded parameter at the moment a solution of function value at least \(f(x)\) is found for the first time,
  • median, \(2\%, 5\%,\ldots,98\%\): the quantiles of these parameter values.

When not all runs managed to find the target value, the statistics hold only for those runs that did. That is, the mean value is the mean of the successful runs. Same for the quantiles. An alternative version with simulated restarts is currently in preparation.