cates a solution and acts as a separator; the line of equals signs indicates that the last solution is an optimal one.
We now describe the aims and practical issues of
running the MiniZinc Challenge.
Aims of the MiniZinc Challenge
The principal aim of the MiniZinc Challenge is, unsurprisingly, to compare the state of the art among
constraint-programming (CP) solvers. Over the years
this has broadened. Since MiniZinc is solver agnostic, we can use it to compare any combinatorial optimization technology, not just constraint programming. Beyond this principal aim there are other
important motivations in running the challenge: to
collect a wide variety of combinatorial optimization
benchmarks and instances; to encourage constraint-programming solvers to include a broad set of efficient global propagators within their systems; and to
create a de facto standard for expressing combinatorial optimization problems. MiniZinc and all its processing tools are developed as open source and distributed using a BSD-style license.
n = 2;
m = 3;
o = 2;
span = 100;

d = [| 3, 2, 4 | 4, 1, 3 |];
mc = [| 1, 2, 1 | 2, 1, 1 |];

Figure 2. Data (jsd.dzn) for the Job Shop Scheduling Model.
minizinc js.mzn jsd.dzn --all-solutions

which might print

[0, 3, 5, 5, 9, 10]
----------
[0, 4, 8, 0, 4, 5]
----------
==========

Figure 3. Executing the Model with Data from the Command Line.
Selecting Benchmark Problems
Every MiniZinc Challenge has required the solvers to
solve 100 problem instances each within a 15-minute
time limit. One key driver throughout the life of the
MiniZinc Challenge has been to ensure that each
year’s challenge uses new problems. There is no way
to test all the features that exist in constraint-programming solvers, let alone other solving technologies, using 100 instances. When selecting which
models to use we try to cover a number of spectra:
Global Constraints: We try to include models that make
use of a reasonable set of global constraints, say 5–10.
Problem Nature: We try to include some problems that
are from the real world, some problems that are purely combinatorial in nature, and some reasonably artificial problems, for example, puzzles.
Optimization/Satisfaction: We include some satisfaction
problems each year, but the focus is on optimization
problems since they are more interesting.
Technology Bias: We try to have some models that look
good for other technologies, that is, that seem well
suited to MIP or SAT solvers.
We then select instances for each model, trying to
avoid the case where the instances are all too hard
or all too easy and hence do not differentiate the solvers.
Different Categories in the Challenge
The challenge has run a number of different categories, which have slowly grown in number over the years: fixed, free, parallel, and open. In the fixed category each solver must follow a given search strategy (see the sketch after this paragraph). This category is typically only supported by constraint-programming solvers, since they are designed to support user-specified
search. In the free category, a solver is free to use any
search strategy it desires, although it is still passed the fixed search strategy to use if it wishes.
This is the broadest category, supported by almost all
entries. In the parallel category, the solver is free to
use any search, and is run on a multicore processor
with a 15-minute wall-clock time limit. The first
three categories are restricted to single-engine solvers,
while the new open category allows multiple
distinct solving engines.
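As an illustration of such a fixed strategy, the search is given as an annotation on the solve item. A minimal sketch (the array s of start times and the objective end are assumed from the earlier job shop model):

% Label the start times in input order, trying the smallest
% domain value first, with complete (backtracking) search.
solve :: int_search(s, input_order, indomain_min, complete)
   minimize end;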
Scoring the Challenge
Most of the other solver competitions tackle satisfiability problems, and the scoring is principally how many problem instances can be shown to be satisfiable or unsatisfiable in a given time limit. This is inadequate for comparing solvers on optimization problems, where the quality of the best solution found also matters.
Our scoring assigns one point for each pair of solvers and each instance. If one solver is better than the other on the instance (it proves optimality faster, or finds a better solution), it gets the point; if the solvers are indistinguishable (both find the same quality solution), they each get half a point. In the last iteration of the competition we updated the scoring scheme slightly: if both solvers prove optimality, they split the point in the reverse ratio of the time taken, as sketched below.
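To make the rule concrete, the sketch below renders it as a MiniZinc function (purely illustrative: the challenge computes scores with external scripts, and the parameter names here are invented; minimization is assumed). It returns the first solver's share of the point on one instance:

function float: score_a(bool: proved_a, bool: proved_b,
                        int: obj_a, int: obj_b,
                        float: time_a, float: time_b) =
  if proved_a /\ proved_b then
    % both prove optimality: split the point in the reverse
    % ratio of time taken, so the faster solver gets more
    time_b / (time_a + time_b)
  elseif proved_a then 1.0        % only A proves optimality
  elseif proved_b then 0.0        % only B proves optimality
  elseif obj_a < obj_b then 1.0   % A finds the better solution
  elseif obj_a > obj_b then 0.0   % B finds the better solution
  else 0.5 endif;                 % indistinguishable: half each

For example, if solver A proves optimality in 60 seconds and solver B in 240 seconds, A receives 240/300 = 0.8 of the point and B the remaining 0.2.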
This helps to differentiate cases in which one solver