[SCIP] Strategies for Tuning Parameters in MILP problems & Pyomo compatibility

James Cussens james.cussens at bristol.ac.uk
Fri Jun 10 10:06:03 CEST 2022


Dear Aiman,

Automatic parameter tuning methods are available in the sense that there is parameter tuning software that is agnostic as to what the target algorithm is. One such example is SMAC3 https://github.com/automl/SMAC3 . One just writes an appropriate wrapper, chooses some SCIP parameters to tune and a set of training instances, and sets the tuning objective to runtime. Perhaps someone has already done this - I would like to do it for my own SCIP application, but I never find the time!
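To give an idea of the wrapper, here is an untested sketch of the SCIP side using PySCIPOpt (the parameter names, the 600-second cutoff and the instance file names are only illustrative, and connecting the target function to SMAC3 itself is covered by the SMAC3 documentation, since that interface varies between versions):

from pyscipopt import Model

CUTOFF = 600.0  # per-instance time limit in seconds (an assumed budget)

def run_scip(instance_path, params):
    """Solve one instance with the given SCIP parameter settings and
    return the solving time (capped by the time limit)."""
    model = Model()
    model.hideOutput()
    model.readProblem(instance_path)
    model.setParam("limits/time", CUTOFF)
    for name, value in params.items():
        # e.g. "separating/maxroundsroot", "heuristics/rins/freq"
        model.setParam(name, value)
    model.optimize()
    return model.getSolvingTime()

def target(params, instances):
    """Objective handed to the tuner: mean runtime over the training instances."""
    return sum(run_scip(path, params) for path in instances) / len(instances)

# Hypothetical call with illustrative parameter values and instance files:
# target({"separating/maxroundsroot": 10, "heuristics/rins/freq": 20},
#        ["train01.lp", "train02.lp"])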

James


James Cussens
Room MVB 3.26
Dept of Computer Science, University of Bristol
https://jcussens.github.io/
Funded PhDs available in Bristol in the following areas: Data Science<http://www.bristol.ac.uk/cdt/compass/>, Interactive AI<http://www.bristol.ac.uk/cdt/interactive-ai/>, Cyber Security<http://www.bristol.ac.uk/cdt/cyber-security/> or Digital Health<http://www.bristol.ac.uk/cdt/digital-health/>.
________________________________
From: Scip <scip-bounces at zib.de> on behalf of aiman social <aimansocialacc at gmail.com>
Sent: 10 June 2022 04:25
To: scip at zib.de <scip at zib.de>
Subject: [SCIP] Strategies for Tuning Parameters in MILP problems & Pyomo compatibility

Dear SCIP team,

I am currently working on an MILP scheduling problem using SCIP (through the Python Pyomo framework).
I'm hoping to get some clarity on the following questions (or, if there is documentation I should refer to, please let me know).

Using the Pyomo framework, SCIP 6.0.0 does not seem to report back the final “dualbound” and “primalbound”. Is this expected?
I'm currently reading the gap results from the SCIP terminal printout.
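For reference, my call looks roughly like the sketch below (a toy model standing in for the real one; the results.problem fields are where I expected the bounds to appear):

from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           NonNegativeIntegers, SolverFactory)

# Toy MILP standing in for the scheduling model.
m = ConcreteModel()
m.x = Var(within=NonNegativeIntegers, bounds=(0, 10))
m.y = Var(within=NonNegativeIntegers, bounds=(0, 10))
m.c = Constraint(expr=2 * m.x + m.y <= 15)
m.obj = Objective(expr=-(3 * m.x + 2 * m.y))   # Pyomo minimises by default

solver = SolverFactory("scip")         # assumes the SCIP executable is on the PATH
results = solver.solve(m, tee=True)    # tee=True is how I see the terminal gap output

# These are the fields where I expected the final bounds to be reported:
print(results.problem.lower_bound)     # dual bound
print(results.problem.upper_bound)     # primal bound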

I'm also looking to improve the current performance (time to gap) of the model.
I’ve tried the presets that SCIP suggests (set emphasis feasibility currently gives the best time to gap), but I would like to make more granular changes.
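To illustrate what I mean by preset versus granular changes, here is a rough sketch using PySCIPOpt directly, just to show the SCIP side (the instance file name and parameter values are placeholders):

from pyscipopt import Model, SCIP_PARAMEMPHASIS

model = Model()
model.readProblem("schedule.lp")                 # placeholder instance file

# Preset: the whole-emphasis switch I am currently using.
model.setEmphasis(SCIP_PARAMEMPHASIS.FEASIBILITY)

# Granular: individual parameters layered on top of the preset.
model.setParam("heuristics/rins/freq", 10)       # illustrative values only
model.setParam("presolving/maxrounds", 5)

model.optimize()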
For this we have a few questions:

  1.  Are there any suggested strategies to employ in changing individual params?
  2.  Should I tackle granular settings in a specific order? (e.g. presolving first, then heuristics, then node selection, etc.)
  3.  Are automated parameter tuning methods available? (similar to Bayesian parameter tuning in ML)
  4.  Is there other low-hanging fruit I should try to improve performance before making granular changes?

Additionally, I've observed that running on different hardware leads to different performance outcomes (the hardware had ample headroom in each test).
Is this behavior expected?

Any help would be much appreciated.

Regards,
Aiman Nazmi