
Performance assessment against existing implementations #82

Closed · beniz opened this issue Oct 27, 2014 · 1 comment

beniz (Collaborator) commented Oct 27, 2014

ES optimizers have a reputation for being slow, so libcmaes aims for the best performance both in terms of the computational cost of each optimization step and the number of function evaluations.

This ticket is about benchmarking libcmaes against other existing implementations. My current list contains:

  • the Shark library's CMA-ES implementations
  • the reference C implementation

It will be possible to add others as needed.

beniz (Collaborator) commented Oct 30, 2014

Results and details are below. The main takeaway is that libcmaes beats all other tested implementations above 10-D and matches them in lower dimensions. Shark comes last, and the C version sits in between.

====Settings====

  • Tested Shark, the C version, and libcmaes
  • Tested CMA-ES, active CMA-ES (libcmaes only), and VD-CMA (libcmaes and Shark)
  • All tests are on Rosenbrock, in dimensions 2, 3, 5, 10, 20, 40, 50, 100, and sometimes 200 (VD-CMA)
  • All algorithms were run 10 times in every dimension; results are the mean values, and runs that failed to find the optimum were removed from the set
  • All software was compiled with g++ with the -O4 optimization flag for best performance
  • Shark didn't offer useful stopping criteria, so I added a stagnation criterion in order to detect failures on Rosenbrock (a generic sketch of such a criterion appears after this list). The lack of stopping criteria may make Shark's performance look slightly better in comparison to the C version and libcmaes. Shark's VD-CMA was buggy, and I reported it so they could provide a fix before the comparison
  • The C version was modified to remove all file reads/writes from within the loop, and eigenvalue decomposition was forced at every iteration in order to compare true computing performance (@nikohansen I wonder how generic your 'return on time percentage' trick is, with a view to possibly integrating a similar one into libcmaes; a sketch of my reading of it also appears below)
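For reference, a generic stagnation criterion can be as simple as the sketch below. This is not the exact code I used against Shark; the class name, the horizon, and the tolerance are all illustrative:

```cpp
#include <cmath>
#include <cstddef>
#include <deque>

// Stop when the best fitness has not improved by more than 'tol'
// across the last 'horizon' iterations (illustrative parameters).
class StagnationCriterion
{
public:
  StagnationCriterion(std::size_t horizon, double tol)
    : _horizon(horizon), _tol(tol) {}

  // Feed the best fitness of the current iteration; returns true
  // once the run is considered stagnated and should be aborted.
  bool update(double best_fitness)
  {
    _history.push_back(best_fitness);
    if (_history.size() <= _horizon)
      return false;
    _history.pop_front();
    // improvement across the whole window
    return std::fabs(_history.front() - _history.back()) < _tol;
  }

private:
  std::size_t _horizon;
  double _tol;
  std::deque<double> _history;
};
```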
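And here is a minimal sketch of how I read the 'return on time percentage' trick in the C version: the eigendecomposition is recomputed only while its accumulated cost stays below a fixed fraction of total runtime. The struct, field names, and the 10% budget below are my own assumptions, not Hansen's code:

```cpp
// Gate eigendecomposition updates on their share of total runtime
// (all values in seconds; threshold is illustrative).
struct LazyEigenGate
{
  double eigen_time = 0.0;    // time spent in eigendecompositions so far
  double total_time = 0.0;    // total optimization wall time so far
  double max_fraction = 0.10; // eigen updates may use up to ~10% of runtime

  // Call before each iteration; skip the decomposition when it returns false.
  bool should_update() const
  {
    return eigen_time <= max_fraction * total_time;
  }
};
```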

The code for running the benchmark can be reviewed here: https://gist.github.com/beniz/f5512f4be4526a7ef24a
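For readers who don't want to dig through the gist, a minimal single-run libcmaes harness on Rosenbrock looks roughly like the sketch below, following the API documented in the libcmaes README; the starting point and step size are illustrative, not the benchmark's exact settings:

```cpp
#include "cmaes.h"
#include <cmath>
#include <iostream>
#include <vector>

using namespace libcmaes;

// Rosenbrock, the objective used throughout this benchmark:
// f(x) = sum_{i=1}^{N-1} 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2
FitFunc rosenbrock = [](const double *x, const int N)
{
  double val = 0.0;
  for (int i = 0; i < N - 1; i++)
    val += 100.0 * pow(x[i+1] - x[i]*x[i], 2) + pow(1.0 - x[i], 2);
  return val;
};

int main()
{
  const int dim = 10;                 // one of the benchmarked dimensions
  std::vector<double> x0(dim, -1.0);  // starting point (illustrative)
  const double sigma = 0.1;           // initial step size (illustrative)
  CMAParameters<> cmaparams(x0, sigma);
  //cmaparams.set_algo(aCMAES);       // uncomment for active CMA-ES
  CMASolutions cmasols = cmaes<>(rosenbrock, cmaparams);
  std::cout << "best solution: " << cmasols << std::endl;
  std::cout << "optimization took " << cmasols.elapsed_time() / 1000.0
            << " seconds" << std::endl;
  return cmasols.run_status();
}
```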

====Results====
These are the mean values over each algorithm's successful runs only (out of 10).

Note: the reported times use the system clock and can be slightly affected by operating-system activity. In other words, the variance is higher on very short (~20 millisecond) measurements.
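As an aside, one way to reduce this variance is to time each run with a monotonic clock and average over repetitions; a hedged sketch, where run_optimizer is a placeholder for one full optimization run:

```cpp
#include <chrono>
#include <numeric>
#include <vector>

// Time 'run_optimizer' with a monotonic clock and return the mean
// runtime in milliseconds over a number of repetitions.
template <typename F>
double mean_runtime_ms(F run_optimizer, int repetitions = 10)
{
  std::vector<double> times;
  for (int i = 0; i < repetitions; i++)
  {
    auto start = std::chrono::steady_clock::now();
    run_optimizer();
    auto end = std::chrono::steady_clock::now();
    times.push_back(
        std::chrono::duration<double, std::milli>(end - start).count());
  }
  return std::accumulate(times.begin(), times.end(), 0.0) / times.size();
}
```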

[Figure: bench_cmaes2 — CMA-ES timings per dimension, y-logscale]

The graph above is in y-logscale: libcmaes is most of the time twice as fast as Shark for standard CMA-ES, and about a third faster than the C version. The differences vary slightly with dimension.

[Figure: bench_cmaes_fevals — CMA-ES function evaluations per dimension]

[Figure: bench_vdcma — VD-CMA timings per dimension]

EDIT: table of results for CMA-ES in 100-D (mean over 10 runs), all timings in milliseconds; step_time is the average time per iteration, i.e. total_time / niter:

Soft       dim   niter     total_time   step_time   fevals
libcmaes   100   29967.8   115584       3.85736     509452
Shark      100   29910.4   141571       4.73233     508476
C          100   30596     121976       3.98683     520132

beniz closed this as completed Nov 3, 2014