is only slightly faster than another from cases in
which it is orders of magnitude faster.
Lessons of the Challenge
Entrants over the years to the MiniZinc Challenge include constraint-programming solvers, mixed-integer programming solvers, SAT and SAT modulo theory solvers, and hybrid solvers.
Constraint Programming Solvers:
Gecode,2 ECLiPSe (Apt and Wallace 2007), SICStus Prolog,3 JaCoP,4 BProlog,5 Choco (Laburthe 2000), Mistral,6 OR-tools,7 gecoxicals, Picat,8 and Opturion CPX,9 as well as CP solvers developed by the NICTA Optimization Research Group: g12-fd, g12-lazyfd (Feydy and Stuckey 2009), and Chuffed.
Mixed Integer Programming Solvers:
SCIP,10 as well as interfaces to MIP solvers developed by the Optimization Research Group: CPLEX,11 Gurobi,12 and CBC.13
SAT and SAT Modulo Theory Solvers:
fzntini (Huang 2008), BEE (Metodi, Codish, and Stuckey 2013), fzn2smt (Bofill et al. 2012), and minisatid.14
A list of all the medal winners from 2008–2013 is
shown in table 1.
After running the competition for six years we have certainly learned many things. Some were simply lessons about how to run the competition. At the start of the challenge we included system stress benchmarks designed to stress a particular part of the solver, such as search or propagation. These are interesting for CP solver developers, but it is not clear what they measure for other solving technologies, and they can end up simply testing whether some feature, for example learning, is implemented. We removed these kinds of benchmarks after the second competition.
In 2010, the first time we ran the parallel category, we found that comparing the results of the free and parallel categories demonstrated the lack of robustness of the purse-based scoring system15 we used: the best solver in the free category did even better in the parallel results, but ended up being equalled by the second solver in the free category (which did not run in parallel). This was the impetus to move to a simpler scoring approach.
In 2013, we added the new open category to cater for our first portfolio solver entrant. Unfortunately, bugs in the entry that were not picked up in the initial testing meant that it did not perform well at all, so the open category results were effectively identical to those of the parallel category. We hope to have meaningful results in this category (and at least two portfolio entrants) in 2014.
We certainly learned more about solver-agnostic combinatorial modeling and helped refine MiniZinc by running the challenge. The number of global constraints supported by MiniZinc has grown steadily over the life of the challenge, from an initial 10 or so to around 100 today. Many of these have never appeared in a challenge benchmark, and the benchmarks are dominated by just a few, such as alldifferent and cumulative. In addition, the MiniZinc language has been quite stable over its lifetime; the major change has been better handling of output, which was partly driven by the challenge.
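To make the role of a global constraint such as alldifferent concrete, here is a minimal sketch of a MiniZinc model run through the MiniZinc Python bindings. It assumes the minizinc Python package and the Gecode solver are installed; the model itself is invented for illustration and is not a challenge benchmark.

```python
# Illustrative only: a tiny MiniZinc model using the alldifferent global
# constraint, solved via the MiniZinc Python bindings (assumes the
# `minizinc` package and the Gecode solver are available).
import minizinc

model = minizinc.Model()
model.add_string("""
include "alldifferent.mzn";
array[1..4] of var 1..4: x;
constraint alldifferent(x);
constraint x[1] + x[2] = 3;
solve satisfy;
""")

gecode = minizinc.Solver.lookup("gecode")
instance = minizinc.Instance(gecode, model)
result = instance.solve()
print(result["x"])  # a permutation of 1..4 whose first two elements sum to 3
```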
The language FlatZinc, which is the actual input language read by the solvers, has also been highly stable. This is good because it means solver writers do not have to constantly change their input handling. The tools for converting from MiniZinc to FlatZinc have become more robust and flexible over the life of the challenge.
As a side effect of the challenge we have collected more than 70 benchmark problems, usually with at least 20 instances each, and some with hundreds of instances.
It is pleasing that these benchmark problems have
been used in a number of publications, including by
solvers that do not support MiniZinc.
The more interesting lessons are perhaps what we learned about the solving technology. We discuss the lessons we can draw from the competition results in the following paragraphs.
First, constraint-programming solvers typically maintain state by trailing, that is, by recording changes to the state so that they can be undone. Another approach is copying, which copies the state every time a new state is required. Many in the CP community judged copying to be uncompetitive, but Gecode, a copying-based solver, won every gold medal in every challenge from 2008 to 2012. Clearly copying is competitive.
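As a rough illustration of the difference, the following sketch contrasts the two state-restoration schemes. The TrailingStore and CopyingStore classes are hypothetical stand-ins, not the internals of any particular solver.

```python
# Minimal, illustrative sketch of trailing versus copying (hypothetical
# classes, not the API of any real solver).
import copy

class TrailingStore:
    """Record each domain change on a trail so it can be undone on backtracking."""
    def __init__(self, domains):
        self.domains = domains          # variable name -> set of values
        self.trail = []                 # (variable, removed values)

    def remove(self, var, values):
        removed = self.domains[var] & values
        self.domains[var] -= removed
        self.trail.append((var, removed))

    def mark(self):
        return len(self.trail)

    def undo_to(self, mark):
        while len(self.trail) > mark:
            var, removed = self.trail.pop()
            self.domains[var] |= removed

class CopyingStore:
    """Copy the whole state whenever a new search node is created."""
    def __init__(self, domains):
        self.domains = domains

    def child(self):
        return CopyingStore(copy.deepcopy(self.domains))

# Trailing: one store, explicit undo on backtracking.
t = TrailingStore({"x": {1, 2, 3}})
m = t.mark()
t.remove("x", {2, 3})
t.undo_to(m)                    # x is {1, 2, 3} again

# Copying: explore a child copy; the parent state stays intact.
c = CopyingStore({"x": {1, 2, 3}})
child = c.child()
child.domains["x"] -= {2, 3}
```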
Second, the combination of constraint-programming propagation with learning, so-called lazy clause generation (Ohrimenko, Stuckey, and Codish 2009), leads to solvers that are remarkably effective. Such solvers consistently performed well over the challenge, although they were excluded from medals (since they were constructed by our group) until the external solver Opturion/CPX was entered in 2013.
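The following sketch gives a minimal flavor of the lazy clause generation idea under simplified assumptions: a bounds propagator records, as a clause, the reason for each domain change, so that a SAT-style engine could later learn from conflicts. The FdVar class and the explanation format are invented for illustration and do not correspond to any particular solver.

```python
# Illustrative sketch of lazy clause generation: a propagator that explains
# its prunings as clauses (all names here are hypothetical).

class FdVar:
    """Integer variable represented by its bounds."""
    def __init__(self, name, lo, hi):
        self.name, self.lo, self.hi = name, lo, hi

def propagate_leq(x, y, clauses):
    """Propagate x <= y; whenever a bound is tightened, record an
    explanation clause that a SAT engine could reuse for learning."""
    changed = False
    if x.hi > y.hi:                      # x <= y forces ub(x) <= ub(y)
        clauses.append(f"[{y.name} <= {y.hi}] -> [{x.name} <= {y.hi}]")
        x.hi = y.hi
        changed = True
    if y.lo < x.lo:                      # and lb(y) >= lb(x)
        clauses.append(f"[{x.name} >= {x.lo}] -> [{y.name} >= {x.lo}]")
        y.lo = x.lo
        changed = True
    return changed

clauses = []
x, y = FdVar("x", 0, 10), FdVar("y", 3, 7)
propagate_leq(x, y, clauses)
print(x.lo, x.hi, y.lo, y.hi)   # 0 7 3 7: the upper bound of x was tightened
print(clauses)                  # the explanation recorded for that pruning
```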
Third, one of the pleasing lessons from the challenge is that a technology-agnostic competition is possible. The SAT modulo theory–based solver fzn2smt (Nieuwenhuis, Oliveras, and Tinelli 2006) has performed very well in the challenge. Similarly, the MIP solvers CPLEX and Gurobi dominate on various problems but are also reasonably competitive overall.
Our final lesson should be highly encouraging for
the community. The solver izplus was created by an
outsider to the community, using a hybrid of complete and local search, and won bronze medals in
2012 and 2013. This shows there is plenty still to
learn in building solver technology, and the barrier
to entry for new ideas is not prohibitively high.