United Nations Secretary General Ban Ki-moon to revoke
Ahmadinejad’s invitation to the assembly and warned
Washington should reconsider support for the world
body if he did not [e] (Graff and Cieri 2003).
Both contexts are complex enough to offer multiple
candidate antecedents for the ellipsis: in example 5
the elided verb phrase could be headed by celebrate,
live, want or be; and in example 6 it could be headed
by call on, revoke, warn or reconsider. However,
whereas example 5 can be successfully simplified
using automatic syntactic tree trimming procedures,
example 6 cannot. Specifically, when processing
example 5, the system can leverage a pruning function that crosses out the material prior to the first
comma, leaving a much simpler context from which
to select the antecedent. Since such generic text simplification procedures are not applicable to example
6, the context remains complex and ambiguous.
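
To make the trimming step concrete, here is a minimal sketch of a comma-based pruning heuristic like the one described above; the function name, token representation, and sample sentence are invented for illustration and are not the actual LEIA implementation.

    # Illustrative sketch of a comma-based pruning heuristic (not LEIA code).
    def prune_before_first_comma(tokens):
        """Drop everything up to and including the first comma,
        leaving a simpler context for antecedent selection."""
        if "," in tokens:
            return tokens[tokens.index(",") + 1:]
        return tokens  # no comma: nothing to prune

    # Hypothetical input loosely modeled on the structure of example 5.
    tokens = ("Although nobody expected it , "
              "the mayor wanted to celebrate and so did we .").split()
    print(prune_before_first_comma(tokens))
    # ['the', 'mayor', 'wanted', 'to', 'celebrate', 'and', 'so', 'did', 'we', '.']
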
Once the string-level antecedent in example 5 has
been identified, its semantic analysis is incorporated
into the overall text meaning representation. This
involves not only concept selection but also the
determination of whether there is a type-coreference
or instance-coreference relationship between the
antecedent and the elided category.
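
Schematically, and using invented frame names rather than the actual OntoAgent text meaning representation notation, the two options could be recorded as follows.

    # Invented frames; illustrative only, not the actual TMR format.
    tmr = {
        "CELEBRATE-1": {"is-a": "CELEBRATE", "agent": "MAYOR-1"},  # antecedent event
    }

    # Instance-coreference: the elided material refers to the same event instance.
    tmr["VP-ELLIPSIS-1"] = {"coref": "CELEBRATE-1", "coref-type": "instance"}

    # Type-coreference: the elided material denotes a new instance of the same concept.
    tmr["CELEBRATE-2"] = {
        "is-a": "CELEBRATE",
        "agent": "SUPPORTERS-1",
        "coref": "CELEBRATE-1",
        "coref-type": "type",
    }
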
Another type of ellipsis is characterized by frag-
ments occurring in typical dialogue strategies, such as
question-answer pairs. As described in McShane,
Nirenburg, and Beale (2005), the text meaning repre-
sentation of a question (How much ice cream do you eat
every week?) includes the expectation that its answer
will follow (A half gallon). So, when the question and its answer occur in sequence, which is far from always the case in real language use, incorporating the meaning of the fragment into the meaning of the overall context is straightforward.
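
As a rough sketch, not the formalism of McShane, Nirenburg, and Beale (2005), the pending question can be modeled as leaving an open slot that the fragmentary answer fills; all frame and slot names below are invented for illustration.

    # Invented frame and slot names; illustrative only.
    question_tmr = {
        "REQUEST-INFO-1": {"theme": "INGEST-1.amount"},  # what the asker wants to know
        "INGEST-1": {"is-a": "INGEST", "theme": "ICE-CREAM-1",
                     "iteration": "weekly", "amount": None},
    }

    def incorporate_fragment(tmr, fragment_meaning):
        # Fill the slot left open by the pending question with the fragment's meaning.
        frame, prop = tmr["REQUEST-INFO-1"]["theme"].split(".")
        tmr[frame][prop] = fragment_meaning
        return tmr

    incorporate_fragment(question_tmr, "0.5 gallon")  # "A half gallon."
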
Although the aforementioned strategies are largely
domain-independent, they are best supplemented by
domain-sensitive ones, when available. For example,
imagine that a surgeon, assisted by a LEIA robot, yells
to the robot, “Scalpel!” We know that the surgeon
wants to be handed a scalpel, but how do we prepare
the robot to understand that? On the one hand, we
could write a rule saying that “Scalpel!” always means
“Hand me a scalpel”; but unless the robot has an
extremely narrow repertoire of capabilities, this type
of listing will be inefficient and ultimately unsatis-
factory. Zooming out one level of abstraction, we
could generalize that “PHYSICAL-OBJECT!” always means
“Hand me a PHYSICAL-OBJECT.” This will often work —
except when it doesn’t: Nuts! Lawyers! My foot! Con-
straining the applicable objects still further to IMPLE-
MENTs would help — and, in fact, it might be entirely
sufficient for our robot. However, a more fundamen-
Figure 5. Example Maryland Virtual Patient Clinician Training Application System Run.
MD: I suggest having an EGD, which is a diagnostic procedure.
VP: I do not know the word “EGD”.
I record the concept EGD as a child of DIAGNOSTIC-PROCEDURE in my ontology.
I record the word “EGD” as a noun in my lexicon, mapped to the concept EGD.
I don’t know enough about the properties of EGD to agree.
I need to ask questions about RISK and PAIN.
How risky is it?
MD: It’s not risky at all.
VP: I add the property-value pair “RISK: 0” to the ontological specification of EGD.
Is it painful?
MD: It’s only a little uncomfortable.
VP: I add the property-value pair “PAIN: 0.1” to the ontological specification of EGD.
I can tolerate that amount of risk and pain.
OK, I’ll agree to that.