Language Technologies Thesis Defense

  • PRADEEP DASIGI
  • Ph.D. Student
  • Language Technologies Institute
  • Carnegie Mellon University

Knowledge-Aware Natural Language Understanding

Natural Language Understanding (NLU) systems need to encode human-generated text (or speech) and reason over it at a deep semantic level. An NLU system typically involves two main components: the first is an encoder, which composes the words (or other basic linguistic units) in the input utterances to compute encoded representations; these representations are then used as features by the second component, a reasoner, which reasons over the encoded inputs to produce the desired output. We argue that the utterances themselves do not contain all the information needed for understanding them, and we identify two kinds of additional knowledge needed to fill the gaps: background knowledge and contextual knowledge.
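The two-component view can be made concrete with a small sketch. The following is a minimal illustration only, not the thesis implementation; the architecture, dimensions, and label set are hypothetical.

```python
# A minimal sketch of the encoder/reasoner decomposition described above.
# The architecture, dimensions, and label set are illustrative only.
import torch
import torch.nn as nn

class EncoderReasoner(nn.Module):
    def __init__(self, vocab_size, dim=64, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Encoder: composes token embeddings into contextual representations.
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        # Reasoner: maps the encoded input to the desired output (here, a label).
        self.reasoner = nn.Linear(dim, num_labels)

    def forward(self, token_ids):
        encoded, _ = self.encoder(self.embed(token_ids))
        return self.reasoner(encoded[:, -1])  # predict from the final encoder state

model = EncoderReasoner(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 7)))  # a batch of two 7-token utterances
print(logits.shape)  # torch.Size([2, 3])
```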

The first part of the thesis deals with encoding background knowledge. Distributional methods represent meaning only as a function of context, so aspects of semantics that context does not provide are out of their reach. These aspects include commonsense or real-world information that is part of shared human knowledge but is not explicitly present in the inputs. We address this limitation by having the encoders also encode background knowledge, and present two approaches for doing so: 1) leveraging explicit symbolic knowledge from WordNet to learn ontology-grounded token-level representations of words, and 2) modeling the selectional restrictions verbs place on their semantic role fillers to encode implicit background knowledge.
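As a rough illustration of the first approach, the sketch below grounds a token in its WordNet senses and hypernyms and composes its representation as an attention-weighted mixture of their embeddings. This is only a schematic sketch under my own assumptions (random, untrained symbol embeddings and a stand-in context vector), not the thesis model; it assumes the NLTK WordNet corpus is installed.

```python
# Sketch of ontology-grounded token representations: a word's embedding is an
# attention-weighted mixture of embeddings of its WordNet senses and hypernyms.
# Assumes the NLTK WordNet corpus is available (nltk.download('wordnet')).
import numpy as np
from nltk.corpus import wordnet as wn

DIM = 50
rng = np.random.default_rng(0)
synset_vecs = {}  # synset name -> vector; random here, learned in practice

def synset_vec(name):
    if name not in synset_vecs:
        synset_vecs[name] = rng.normal(size=DIM)
    return synset_vecs[name]

def grounded_embedding(word, context_vec):
    """Compose a token representation from the word's WordNet senses,
    weighting each sense (and its hypernyms) by similarity to the context."""
    candidates = []
    for synset in wn.synsets(word):
        # Use the sense itself and its direct hypernyms as grounding symbols.
        for s in [synset] + synset.hypernyms():
            candidates.append(synset_vec(s.name()))
    if not candidates:
        return rng.normal(size=DIM)  # back off to a plain word vector
    cand = np.stack(candidates)
    scores = cand @ context_vec              # attention logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over candidate symbols
    return weights @ cand                    # attention-weighted mixture

context = rng.normal(size=DIM)               # stand-in for an encoder state
print(grounded_embedding("pool", context).shape)  # (50,)
```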

The second part focuses on reasoning with contextual knowledge. We consider Question-Answering (QA) tasks where the reasoning can be expressed as a sequence of discrete operations (i.e., semantic parsing problems), and the answer can be obtained by executing that sequence of operations (or logical form) grounded in some context. We do not assume the availability of logical forms, and instead build weakly supervised semantic parsers. This training setup comes with significant challenges, since it involves searching over an exponentially large space of logical forms. To deal with these challenges, we propose 1) using a grammar to constrain the output space of the semantic parser; 2) leveraging a lexical coverage measure to ensure the relevance of produced logical forms to the input utterances; and 3) a novel iterative training scheme that alternates between searching for logical forms and maximizing the likelihood of the retrieved ones.
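The search problem underlying weak supervision can be illustrated with a toy arithmetic domain: logical forms are enumerated under a small grammar, executed, and kept only if their denotation matches the labelled answer. The domain, grammar, and names below are invented for illustration; a real parser would score logical forms with a learned model and alternate this search step with maximizing the likelihood of the retrieved forms.

```python
# A toy sketch of the search step in weakly supervised semantic parsing.
# The arithmetic domain, grammar, and names are invented for illustration.
from itertools import product

NUMBERS = [2, 3, 4]
OPS = {"add": lambda x, y: x + y,
       "sub": lambda x, y: x - y,
       "mul": lambda x, y: x * y}

def enumerate_logical_forms(max_depth=3):
    """Grammar: expr -> NUMBER | (op expr expr), expanded up to max_depth."""
    exprs = [(n,) for n in NUMBERS]
    for _ in range(max_depth - 1):
        exprs = exprs + [("call", op, a, b)
                         for op, a, b in product(OPS, exprs, exprs)]
    return exprs

def execute(expr):
    """Execute a logical form to obtain its denotation."""
    if expr[0] == "call":
        _, op, a, b = expr
        return OPS[op](execute(a), execute(b))
    return expr[0]

def search_consistent_forms(answer, max_depth=3):
    """Search step: keep only logical forms whose denotation matches the answer."""
    return [e for e in enumerate_logical_forms(max_depth) if execute(e) == answer]

# Hypothetical utterance "what is two times three plus four?" with labelled answer 10.
consistent = search_consistent_forms(10)
print(len(consistent), consistent[0])
```

Even in this toy setting the search retrieves spuriously consistent logical forms that reach the right answer for the wrong reasons, which is exactly the problem the lexical coverage measure above is meant to mitigate.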

Overall, this thesis presents a general framework for NLU with encoding and reasoning as the two core components, and shows how additional knowledge can augment both.

Thesis Committee:
Eduard Hovy (Chair)
Chris Dyer (CMU/Google)
William Cohen
Luke Zettlemoyer (University of Washington)

