[Scip] Design choices

Stefan Lörwald stefan.loerwald at gmail.com
Mon Jul 1 12:36:15 MEST 2013


Hi Marc,

>> Well, again, let's just try and compare to values. This time for the
>> _whole_ SCIP suite.
>>
>> Time compiling SCIP as C: 3min42s
>> Time compiling SCIP as C++: 3min43s
>
> This is a misunderstanding: Writing C code instead of complicated C++
> code with templates will compile faster. Of course, the price you have
> to pay is that you do not have the benefits.

A statement that still lacks proof. You can write C++ code in exactly the
same way as C code, though this would be ugly (well, just as C is). C++ may
be harder to master, but C++ code in general becomes less complicated
thanks to data encapsulation and abstraction. This also enables further
optimizations, e.g. by declaring some methods const, which means that the
method does not modify the object (bitwise const, pre-C++11) or is const
in the sense of thread safety (inherently synchronized, C++11 and later).
Templates reduce code duplication (with a free maintainability bonus: you
don't have to change identical bits in several methods) and are only
compiled if instantiated, so no slow-down through that.

It's funny, though: on the one hand, the compilation time of C code vs C++
code is relevant; on the other hand, compilation times don't matter (see
your first email in this conversation).

>> There are _many_ nodes and just _one_ currently best global solution.
>
> There is only one current node and one current LP solution for it.

Yes, only one current node. Currently. See parallelization.

>> This immediately suggests using the one value NULL as global, and
>> specific ones, namely connecting to a node, for the LP.
>
> The point is that no solution might exist. SCIPgetBestSol() allows you
> to test this.

Then just don't use NULL at all. Use specific values for the nodes and
SCIPgetBestSol for the global state.

>> It might be handy in the SCIP core _now_, but I doubt this will be good
>> for the future.
>
> O.k. - if the need arises, we can think about changing it.

Good luck with that. (No, seriously, I actually mean it!)

>> In our area, practically every problem is so difficult
>> that sequential algorithms are doomed to fail terribly without hope
>> since the next generation of hardware may be vastly improved, but it
>> still only suffices for a tiny improvement in terms of practically
>> solvable problem instances. The need for parallelization is outrageously
>> evident although still this is limited.

> Instead of brute force computation, you can also develop better
> mathematical methods to solve the problems.

I wasn't suggesting brute force. Branch-and-cut isn't brute force, though
it's still not an efficient algorithm. The main point is: it's not yet
efficiently parallelized, but it's evident that it should be (or be
replaced completely). The time when serial algorithms were top notch is
over.

>> But what if we can develop an efficient way of parallelizing
>> branch-and-cut? I know that this is extremely difficult, but let's just
>> imagine. So you probably solve two cleverly picked nodes at the same
>> time. Which solution does the currently working method refer to? Sure,
>> NULL as in "the current LP solution". Failure. There are two current LP
>> solutions.
>
> You should use a separate function then, since this involves
> communication.

A separate function does not suffice; the underlying data structure must
reflect the presence of multiple active nodes. Hence NULL is bad.

Yours,
Stefan