Event

Dustin Smith Thesis Defense

Wednesday
August 28, 2013

Location

MIT Media Lab, E14-525

Description

Referring expressions with vague and ambiguous modifiers, such as “a quick visit” and “the big meeting,” are difficult for computers to interpret because their meanings are defined in part by context. To arrive at the speaker's intended meaning, the hearer must consider the alternative choices the speaker faced in that context.

To address these challenges, Dustin Smith proposes a new approach to both generating and interpreting referring expressions, based on belief-state planning and plan recognition. Planning in belief space offers a way to capture the uncertainty of interpretations. Both the generation and interpretation procedures are incremental, because each belief state represents a complete interpretation. The contributions of his thesis are:

(1) A computational model of reference generation and interpretation that is fast, incremental, and non-deterministic. The model includes a lexical semantics for a fragment of English noun phrases, specifying the encoded meanings of determiners (quantifiers and articles) and of gradable and ambiguous modifiers. It performs in real time even when the hypothesis space grows very large, and because it is incremental, it avoids considering possibilities that will later turn out to be irrelevant (a minimal sketch of this word-by-word narrowing appears after this list).
(2) The integration of generation and interpretation into a single process.
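The announcement leaves the mechanics of this incremental, belief-state interpretation implicit. Below is a minimal sketch of the general idea: a belief state of candidate referents that is narrowed one word at a time, with a gradable modifier such as “big” read relative to the candidates still in play. The domain, the toy lexicon, and the averaging rule for “big” are assumptions made purely for illustration; this is not the AIGRE implementation.

```python
# Illustrative sketch only: an incremental interpreter that keeps a "belief
# state" (a set of candidate referents) and narrows it one word at a time.
# The domain, lexicon, and context-relative rule for "big" are invented for
# this example; they are not the semantics developed in the thesis.

from dataclasses import dataclass


@dataclass(frozen=True)
class Referent:
    name: str
    kind: str      # e.g. "meeting", "visit"
    size: float    # a gradable attribute, e.g. duration in hours


def big(candidates):
    """Context-relative reading of 'big': larger than the average of the
    objects still under consideration (one simple stand-in for a
    comparison-class threshold)."""
    if not candidates:
        return set()
    threshold = sum(r.size for r in candidates) / len(candidates)
    return {r for r in candidates if r.size > threshold}


LEXICON = {
    "meeting": lambda cands: {r for r in cands if r.kind == "meeting"},
    "visit":   lambda cands: {r for r in cands if r.kind == "visit"},
    "big":     big,
    "the":     lambda cands: cands,   # definite article: pass-through here
}


def interpret(words, domain):
    """Incrementally apply each word's constraint to the belief state.
    Because every intermediate state is a complete (if underspecified)
    interpretation, we can stop early once a single referent remains."""
    belief_state = set(domain)
    for word in words:
        belief_state = LEXICON[word](belief_state)
        print(f"after {word!r}: {sorted(r.name for r in belief_state)}")
        if len(belief_state) == 1:
            break
    return belief_state


if __name__ == "__main__":
    domain = [
        Referent("standup", "meeting", 0.5),
        Referent("review", "meeting", 2.0),
        Referent("site-visit", "visit", 3.0),
    ]
    # "the big meeting": "big" keeps the candidates that are large relative
    # to the current belief state, and "meeting" then picks out the review.
    print(interpret(["the", "big", "meeting"], domain))
```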

Interpretation is guided by comparison to the alternatives produced by the generation module. When faced with an underspecified description, the system considers what it could have said and compares that to what the user actually said. Reasoning about hypothetical generation decisions licenses interpretive inferences of this sort: “She ate some of the tuna” implies that she did not eat all of it, because otherwise the speaker would have said “She ate the tuna.”
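The announcement gives only this high-level description of how hypothetical generation decisions guide interpretation. One common way to make the comparison concrete is sketched below, purely as an illustrative assumption: a simplified rational-speaker model in which each candidate interpretation is scored by how likely the observed utterance would have been chosen for it, given the alternatives the speaker could have produced. The toy “some”/“all” scale and the uniform choice rule are invented for this example and are not the thesis's generation module.

```python
# Illustrative sketch only: interpreting an utterance by comparing it to the
# alternatives a speaker could have produced. The toy "some"/"all" scale and
# the uniform speaker model are assumptions made for this example.

# Candidate world states and the literal meaning of each utterance: an
# utterance is compatible with a state if that state is in its extension.
MEANINGS = {
    "she ate some of the tuna": {"ate-some", "ate-all"},  # 'some' is literally true even if she ate all
    "she ate the tuna":         {"ate-all"},
}


def speaker_choices(state):
    """Hypothetical generation step: all utterances literally true of the
    state. The speaker is assumed to pick among these uniformly."""
    return [u for u, states in MEANINGS.items() if state in states]


def interpret(utterance):
    """Score each state by how probable the observed utterance is among the
    speaker's alternatives for that state, then normalize."""
    scores = {}
    for state in {s for states in MEANINGS.values() for s in states}:
        choices = speaker_choices(state)
        scores[state] = (1.0 / len(choices)) if utterance in choices else 0.0
    total = sum(scores.values())
    return {state: score / total for state, score in scores.items()}


if __name__ == "__main__":
    # For "she ate some of the tuna": in the 'ate-all' state the speaker
    # could also have said "she ate the tuna", so 'some' is a less likely
    # choice there; the listener therefore favors 'ate-some' (the implicature).
    print(interpret("she ate some of the tuna"))
```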

This approach has been implemented and evaluated in a computational model, AIGRE. Smith also created a testbed for comparing human judgments of referring expressions against those produced by his algorithm (or others). In an online user experiment on Mechanical Turk, the model covered 97% of human responses in a simple geometrical domain where correct responses were clear, and achieved lower, but still encouraging, coverage in a more complex, real-world domain where many responses fell outside the language model.

The model, AIGRE, demonstrates that vagueness and ambiguity in natural language, while still challenging, are manageable. The day when we will routinely talk to our computers in unconstrained natural language is not far off.

Additional Featured Research By

Software Agents

Host/Chair: Henry A. Lieberman

Participant(s)/Committee

Marvin L. Minsky, Agustín Rayo