[SCIP] Some insights about the Lagrangian relaxation

Abbas Omidi abb.omidi at gmail.com
Sat Feb 19 06:20:24 CET 2022


Dear Prof. Lübbecke,

Thank you so much for your attention. I am currently modelling the problem
in GAMS and solving it with the SCIP solver. I can send the source code
and would appreciate it if anyone could check the model, specifically
regarding the second question.

Regarding your second recommendation (B&P): I know the SCIP suite comes
with a useful example implementing B&P for the BPP, and I have tried it,
but extending it to the multi-dimensional BPP is hard work at the moment,
which is why I am trying to implement the method via Lagrangian relaxation
instead. Since Lagrangian relaxation is frequently studied in the
literature, especially for real-world optimization problems, I am very
interested in understanding how this method can be applied to solve such a
problem if its only benefit is producing a lower/upper bound. Any
implementation (source code) or detailed description would be very helpful.
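To make the question concrete, here is a minimal, self-contained sketch of the scheme I have in mind for the standard BPP (all names are illustrative, not from my GAMS model). It assumes identical bins, so the Lagrangian subproblem for the dualized assignment constraints collapses to one 0/1 knapsack; it uses the usual step rule t = theta * (UB - L(lambda)) / ||g||^2 with theta halved after a stretch of non-improving iterations, and takes a first-fit-decreasing packing as the upper bound UB:

```python
def knapsack(profits, weights, cap):
    """0/1 knapsack by DP over integer capacities; returns (value, chosen items)."""
    n = len(weights)
    # dp[i][c] = best profit using items 0..i-1 within capacity c
    dp = [[0.0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for c in range(cap + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c and dp[i - 1][c - w] + p > dp[i][c]:
                dp[i][c] = dp[i - 1][c - w] + p
    chosen, c = [], cap  # backtrack to recover the chosen item set
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][cap], chosen

def first_fit(weights, cap):
    """First-fit decreasing: a feasible packing, i.e. a primal upper bound."""
    loads = []
    for w in sorted(weights, reverse=True):
        for j, load in enumerate(loads):
            if load + w <= cap:
                loads[j] += w
                break
        else:
            loads.append(w)
    return len(loads)

def lagrangian_bound(weights, cap, n_bins, ub, iters=200, theta=2.0):
    """Subgradient ascent on the LR dual of BPP, dualizing sum_j x_ij = 1."""
    n = len(weights)
    lam = [1.0 / n_bins] * n          # arbitrary starting multipliers
    best, stall = float('-inf'), 0
    for _ in range(iters):
        # L(lam) = sum_i lam_i + n_bins * min(0, 1 - z), where z is the best
        # knapsack profit: a bin opens (cost 1 - z) only if z > 1.
        z, chosen = knapsack(lam, weights, cap)
        open_bins = n_bins if z > 1.0 else 0
        L = sum(lam) + open_bins * (1.0 - z)
        if L > best + 1e-9:
            best, stall = L, 0
        else:
            stall += 1
            if stall >= 20:           # halve theta when not improving
                theta, stall = theta / 2.0, 0
        if ub - L < 1e-9:             # dual bound met the primal bound
            break
        g = [1.0] * n                 # subgradient: g_i = 1 - sum_j x_ij
        if open_bins:
            for i in chosen:
                g[i] -= open_bins
        norm = sum(gi * gi for gi in g)
        if norm < 1e-12:
            break
        step = theta * (ub - L) / norm
        lam = [li + step * gi for li, gi in zip(lam, g)]
    return best
```

Inside the same loop one could also try to repair the knapsack solution into a feasible packing (e.g. first-fit on the items left unassigned) to tighten UB during the iterations, which is what I am asking about in question 3 below.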

A while ago I tried to run GCG on a Windows-based platform, but at that
time Windows was not well supported. Could you please tell me whether
there is a way to use the latest version on Windows? If so, I would
appreciate your advice.


Best Regards
Abbas

On Mon, Feb 14, 2022 at 3:17 AM Marco Lübbecke <
marco.luebbecke at rwth-aachen.de> wrote:

> Hi Abbas,
>
> these are very general questions (1 and 3) about using Lagrangian
> relaxation, or specific to your implementation (2) which we cannot
> see.
>
> In question 3 you already realize that LR does not "solve" a problem,
> it only produces a dual bound. If you want primal solutions, maybe
> alternatively use Dantzig-Wolfe and branch-and-price? Since the
> relaxation is "the same", but B&P can additionally produce primal
> solutions, this may help with the "repair" step. Incidentally, the
> SCIP opt suite comes with a B&P solver, which is GCG. You can directly
> feed your original model into it. It will also automatically try to
> answer your question 1.
>
> All the best, Marco
>
>
> Am So., 13. Feb. 2022 um 15:14 Uhr schrieb Abbas Omidi <
> abb.omidi at gmail.com>:
> >
> > Dear support team,
> >
> > I hope you are all doing well.
> > I am working on a Lagrangian relaxation algorithm for a scheduling
> > problem, and I need some insight into the subgradient optimization used
> > to update the multipliers. To understand the mechanics of the
> > algorithm, I first implemented it for instances of the generalized
> > assignment problem, as that problem is frequently studied in the
> > literature. In short, it works very well on all of the instances I have
> > tested. I am now struggling to apply the method to a variant of the
> > multi-dimensional bin-packing problem, but first I want to get it
> > working on the standard BPP:
> > {min sum_j y_j | sum_j x_ij = 1, x_ij <= y_j, sum_i a_i x_ij <= b; x, y in {0,1}}.
> >
> > Without changing the SCIP parameters, the original problem can be
> > solved in around 130 seconds. After dualizing the assignment
> > constraints (sum(bin, x(item,bin)) = 1, forall item) and applying the
> > subgradient method, the problem is solved in less than 20 seconds,
> > stopping at the pre-defined tolerance. As a first step this is very
> > good, and the reported lower bound is around 12 (the optimal value of
> > the original MIP is 17). What I am trying to understand is:
> >
> > 1) Is there any way to know which of the constraints should be
> > dualized in order to speed up the convergence of the subgradient
> > method?
> >
> > 2) I have also worked on dualizing the second constraint. I
> > initialized the multipliers with the marginal values of the capacity
> > constraints in the original problem, but since those correspond to the
> > right-hand side of the constraint (the capacity of each bin), this did
> > not produce a reasonable lower bound. I also initialized the
> > multipliers with random numbers, but the results are very strange.
> > ==================================================
> > * Output of the algorithm when dualizing the second constraint:
> >
> >        dual_object   theta   norm          step           gap
> > iter1   337.000      2.000   1733184.000   -3.69263E-4    0.470
> > iter2  -303.000      2.000   1733184.000    3.692626E-4   0.470
> > iter3   337.000      1.000   1733184.000   -1.84631E-4    0.235
> > iter4    17.000      1.000   1733184.000    2.62377E-19   4.44089E-16
> > ==================================================
> > Could you please confirm whether the above results are correct?
> >
> > 3) Since the Lagrangian method in many cases cannot produce a primal
> > (feasible) solution, we have to take the final solution (the one
> > attaining the maximized LR bound, as the original problem is a
> > minimization) and try to construct a feasible solution from it.
> > Unfortunately, in the case of the BPP, the solution obtained by
> > dualizing the first constraint is quite far from the optimal solution.
> > Is there any way to repair the infeasible solution inside the
> > subgradient iterations so as to obtain a feasible solution?
> >
> > Best regards
> > Abbas
> > _______________________________________________
> > Scip mailing list
> > Scip at zib.de
> > https://listserv.zib.de/mailman/listinfo/scip
>