The language core is propositional logic, enhanced
with a number of syntactic extensions for ease of
modeling. The accompanying tool set currently com-
prises a number of diagnostic engines and a simula-
tor tool (Feldman, Provan, and van Gemund 2009).
NGDE is an Allegro Common Lisp implementa-
tion of the classic general diagnostic engine (GDE).
NGDE (de Kleer 2009) uses a minimum-cardinality
candidate generator to construct diagnoses from con-
flicts. For ADAPT-Lite it uses interval constraints.
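Minimum-cardinality candidate generation from conflicts can be sketched as a search for the smallest hitting sets. The following is a minimal illustration of the idea, not NGDE's actual implementation; the component names are made up.

```python
from itertools import combinations

def min_cardinality_diagnoses(conflicts, components):
    """Return all minimum-cardinality hitting sets of the conflicts.

    Each conflict is a set of components that cannot all be healthy,
    so a diagnosis must contain at least one component per conflict.
    """
    conflicts = [set(c) for c in conflicts]
    for size in range(1, len(components) + 1):
        hits = [set(combo) for combo in combinations(components, size)
                if all(conflict & set(combo) for conflict in conflicts)]
        if hits:
            return hits  # every candidate of the smallest hitting size
    return []

# Two conflicts from a hypothetical circuit (component names made up).
conflicts = [{"A1", "M1", "M2"}, {"A1", "M1", "M3"}]
diagnoses = min_cardinality_diagnoses(conflicts, ["A1", "M1", "M2", "M3"])
# Single faults {A1} and {M1} each hit both conflicts.
```

Because candidates are enumerated by increasing size, the first nonempty layer is guaranteed to contain only minimum-cardinality diagnoses.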
ProADAPT (Mengshoel 2007) processes all sensor
data and then acts as a gateway to a probabilistic in-
ference engine. The inference engine evaluates arithmetic circuits that are compiled from
Bayesian network models. The primary advantage of
using arithmetic circuits is speed, which is key in real-time diagnosis.
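Evaluating a compiled arithmetic circuit amounts to a bottom-up pass over sum and product nodes whose leaves are network parameters and evidence indicators. The following toy circuit, for a single binary variable with made-up parameters, illustrates the evaluation step only, not ProADAPT's compiler or model.

```python
import math

def evaluate(node, indicators):
    """Evaluate an arithmetic circuit bottom-up.  `node` is
    ("theta", value), ("lambda", name), ("+", children), or
    ("*", children); `indicators` sets evidence indicators to 0/1."""
    kind, payload = node
    if kind == "theta":              # network parameter leaf
        return payload
    if kind == "lambda":             # evidence indicator leaf
        return indicators[payload]
    values = [evaluate(child, indicators) for child in payload]
    return sum(values) if kind == "+" else math.prod(values)

# Circuit for P(X) with P(X=x1)=0.8, P(X=x2)=0.2 (toy parameters):
#   f = lambda_x1 * 0.8 + lambda_x2 * 0.2
circuit = ("+", [("*", [("lambda", "x1"), ("theta", 0.8)]),
                 ("*", [("lambda", "x2"), ("theta", 0.2)])])
evidence_x1 = evaluate(circuit, {"x1": 1, "x2": 0})  # P(x1) = 0.8
marginal = evaluate(circuit, {"x1": 1, "x2": 1})     # sums to 1.0
```

The speed advantage comes from this pass being a fixed sequence of additions and multiplications with no search, so its cost is linear in the circuit size.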
RacerX is a detection-only algorithm that detects a
percentage change in individual filtered sensor values to raise a fault detection flag.
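A detector of this kind can be sketched as a low-pass filter per sensor plus a percentage-change test against a baseline. The filter constant and threshold below are illustrative values, not RacerX's actual parameters.

```python
def detect_fault(samples, alpha=0.2, threshold=0.15):
    """Raise a fault-detection flag when the exponentially filtered
    sensor value drifts more than `threshold` (as a fraction) away
    from its initial baseline.  `alpha` and `threshold` are
    illustrative, not the competition entry's actual settings."""
    filtered = samples[0]
    baseline = samples[0]
    for x in samples[1:]:
        filtered = alpha * x + (1 - alpha) * filtered  # low-pass filter
        if baseline != 0 and abs(filtered - baseline) / abs(baseline) > threshold:
            return True  # fault detected
    return False

nominal = [10.0] * 20
faulty = [10.0] * 10 + [14.0] * 10   # a 40 percent step change
# detect_fault(nominal) -> False; detect_fault(faulty) -> True
```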
RODON (Lunde, Lunde, and Münker 2006) is based
on the principles of the GDE as described by de Kleer
and Williams (1987) and the G+DE (Heller and Struss
2001). RODON uses contradictions (conflicts) between the simulated and the observed behavior to
generate hypotheses about possible causes for the observed behavior. If the model contains failure modes
in addition to the nominal behavior, these can be
used to verify the hypotheses, which speeds up the
diagnostic process and improves the results.
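The use of explicit failure modes can be illustrated by enumerating mode assignments and keeping those whose simulated behavior matches the observations. The toy model below (two buffers in series, each either nominal or stuck at zero) is invented for illustration and is far simpler than RODON's modeling language.

```python
from itertools import product

# Toy model: two buffers in series; each is either "ok" (passes its
# input through) or "stuck0" (outputs 0).  Names are illustrative.
MODES = {"B1": ["ok", "stuck0"], "B2": ["ok", "stuck0"]}

def simulate(assignment, system_input):
    value = system_input
    for component in ["B1", "B2"]:
        value = value if assignment[component] == "ok" else 0
    return value

def consistent_assignments(system_input, observed_output):
    """Keep the mode assignments whose simulation agrees with the
    observation; the rest are refuted by a conflict."""
    names = list(MODES)
    return [dict(zip(names, modes))
            for modes in product(*(MODES[n] for n in names))
            if simulate(dict(zip(names, modes)), system_input) == observed_output]

# Input 5, output 0 observed: the all-nominal assignment is refuted,
# leaving the assignments in which B1 or B2 (or both) is stuck at 0.
hypotheses = consistent_assignments(5, 0)
```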
RulesRule is a rule-based isolation-only algorithm.
The rule base was developed by analyzing the sample
data and determining characteristic features of faults.
There is no explicit fault detection, though isolation
implicitly means that a fault has been detected.
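A rule-based isolator of this style can be sketched as a table mapping feature patterns to fault candidates. The features, rules, and fault names below are hypothetical, not RulesRule's actual rule base.

```python
# Hypothetical rule base: each rule maps a pattern of boolean features
# (extracted from the sensor data) to a fault candidate.
RULES = [
    ({"voltage_low": True, "current_high": False}, "battery_fault"),
    ({"voltage_low": True, "current_high": True}, "short_circuit"),
]

def isolate(features):
    """Return the fault candidates whose rule conditions all hold.

    There is no separate detection step: a nonempty result implicitly
    means that a fault has been detected.
    """
    return [fault for condition, fault in RULES
            if all(features.get(key) == value
                   for key, value in condition.items())]

# isolate({"voltage_low": True, "current_high": False}) -> ["battery_fault"]
```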
StanfordDA is an optimization-based approach to
estimating fault states in direct-current power systems. The model includes faults that change the system
topology along with sensor faults. The approach can
be considered as a relaxation of the mixed estimation
problem. The authors have developed a linear model of the circuit and use convex optimization to estimate the faults and other hidden states. A sparse fault
vector solution is computed by using L1 regularization (Zymnis, Boyd, and Gorinevsky 2009).
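The L1-regularized estimation step can be sketched with iterative soft thresholding (ISTA), a simple proximal-gradient method for minimizing ||Ax - y||²/2 + λ||x||₁; the linear model and data below are synthetic, and this is a generic stand-in rather than the solver of Zymnis, Boyd, and Gorinevsky (2009).

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_fault_estimate(A, y, lam=0.1, iters=500):
    """Minimize ||A x - y||^2 / 2 + lam * ||x||_1 with ISTA;
    x is the (sparse) fault vector."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        gradient = A.T @ (A @ x - y)
        x = soft_threshold(x - step * gradient, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))    # made-up linear circuit model
x_true = np.zeros(10)
x_true[3] = 2.0                      # a single injected fault
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = sparse_fault_estimate(A, y)
# The largest entry of x_hat sits at the injected fault index,
# and the L1 penalty drives the remaining entries toward zero.
```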
Wizards of Oz (Grastien and Kan-John 2009) is a
consistency-based algorithm. The model of the system completely defines the stable (static) output of
the system in case of normal and faulty behavior.
Given a new command or new observations, the algorithm waits for a stable state and computes the
minimum diagnoses consistent with the observations and the previous diagnoses.
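The incremental step, carrying previous diagnoses forward through a new stable state, can be sketched as filtering candidates by consistency and by compatibility with an earlier diagnosis, then keeping the smallest survivors. The representation below (frozensets of faulty components, a consistency callback) is an assumption for illustration.

```python
def update_diagnoses(previous, candidates, consistent):
    """Keep the minimum-cardinality candidates that are consistent
    with the new stable observations and extend some previous
    diagnosis.  `previous` and `candidates` are lists of frozensets
    of faulty components; `consistent(d)` checks a candidate against
    the newly observed stable state."""
    viable = [d for d in candidates
              if consistent(d) and any(p <= d for p in previous)]
    if not viable:
        return []
    smallest = min(len(d) for d in viable)
    return [d for d in viable if len(d) == smallest]

previous = [frozenset()]    # nothing known to be faulty yet
candidates = [frozenset(), frozenset({"M1"}), frozenset({"M1", "M2"})]
new = update_diagnoses(previous, candidates,
                       consistent=lambda d: "M1" in d)
# new == [frozenset({"M1"})]: the empty diagnosis is now inconsistent,
# and {M1} is preferred over the larger {M1, M2}.
```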
In 2013 we started to prepare a thermal track. The
creation of this track has been motivated by a survey
of the U.S. Department of Energy, which states that
54 percent of the total energy consumption of the
United States in 1986 was for space heating, ventilation, and air conditioning. Modern air-handling
units share design properties such as compensating
control. Early detection of faults in heating units will
lead to timely repair and subsequently decrease the
total energy consumption. The sampling frequency
of the thermal track scenarios is one minute, and a typical scenario spans 24 hours. The thermal scenarios also
depend on environmental factors such as outside
temperature and humidity, which are supplied as inputs to the diagnostic algorithms.
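A thermal-track scenario of the shape described above can be pictured as a 24-hour series of 1440 one-minute samples, each carrying the environmental inputs alongside the monitored sensors. The field names and values below are illustrative, not the competition's actual file format.

```python
from datetime import datetime, timedelta

# A 24-hour scenario sampled once per minute (1440 samples); field
# names and constant values are made up for illustration.
start = datetime(2013, 6, 1)
scenario = [
    {
        "time": start + timedelta(minutes=i),
        "outside_temp_c": 15.0,      # environmental input to the DA
        "humidity_pct": 60.0,        # environmental input to the DA
        "supply_air_temp_c": 18.0,   # monitored sensor
    }
    for i in range(24 * 60)
]
# len(scenario) == 1440; consecutive samples are one minute apart.
```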
We are planning to introduce a robotic track. The
subject of this track is going to be a rover. Its position
is supplied by a localization system (for example an
overhead camera). The goal of this diagnostic track
would be to infer motor failure from changes of trajectory.
There is more planned work on control systems, such as those in chemical plants, and on software.
Communication between the diagnostic algorithm
and diagnostic framework is complex. DAs receive
sensor streams, report tentative results as a stream,
possibly propose information-gathering actions to
take on the simulated faulty system, possibly taking
repair actions, and so on. This raised two serious challenges. The first was designing a scoring metric that could not be gamed and that was close to the real costs incurred in diagnosing actual systems. For example, how should one score a DA that first reports a correct result at t1 and then an incorrect result at t2? Or consider a null DA that always reports that nothing is wrong. Because components fail rarely, this null DA might achieve a very high score. The second was that most participants underestimated how much effort it would take
to modify their algorithm to interact with the world
as represented by the diagnostic framework. To ame-
liorate this we found it important to distribute the
full diagnostic framework to participants well before
the competition deadline. Nevertheless, the number
of participants declined over the years, which we be-
lieve is due to the inherent complexity of the frame-
work and the fact that the best algorithms turned out
be very hard to beat, even with more years of re-
search. We believe we need to publicize this compe-
tition more widely. This article is one way to achieve that goal. Details for participating in the fifth competition
are described at dxc-2014.org.
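The null-DA concern raised earlier can be made concrete with a deliberately naive metric: on a scenario where faults are rare, simple agreement with the ground truth rewards an algorithm that never reports anything. The numbers below are a toy illustration, not the competition's scoring formula.

```python
def accuracy(reports, truth):
    """Fraction of time steps at which the reported fault state
    matches the true one (a deliberately naive metric)."""
    return sum(r == t for r, t in zip(reports, truth)) / len(truth)

# Faults are rare: 990 nominal steps followed by a 10-step fault.
truth = ["nominal"] * 990 + ["fault"] * 10
null_da = ["nominal"] * 1000          # always reports "nothing wrong"
print(accuracy(null_da, truth))       # 0.99 -- near perfect, yet useless
```

This is why the competition's metric had to weigh detection delay, false negatives, and the cost of wrong isolations rather than raw agreement over time.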
de Kleer, J. 2009. Minimum Cardinality Candidate Genera-
tion. Paper presented at the 20th International Workshop
on Principles of Diagnosis (DX-09), Stockholm, Sweden, 14–17 June.
de Kleer, J., and Williams, B. 1987. Diagnosing Multiple