Language Technologies Institute Colloquium

  • CHRISTOPHER MANNING
  • Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
  • Director, Stanford Artificial Intelligence Laboratory (SAIL)
  • Associate Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Multi-step reasoning for answering complex questions

Current neural network systems have had enormous success on tasks that can be solved by pattern matching, but they still struggle to support multi-step inference. In this talk, I will examine two recent lines of work that address this gap, done with Drew Hudson and Peng Qi. In the first line of work, we develop neural networks with explicit structure to support memory, attention, composition, and reasoning, built around an explicitly iterative inference architecture. Our Neural State Machine design also emphasizes a more symbolic form of internal computation, represented as attention over symbols, which have distributed representations. Such designs encourage modularity and generalization from limited data, and we show the models' effectiveness on visual question answering datasets. The second line of work aims at multi-step question answering over a large open-domain text collection. Most previous work on open-domain question answering employs a retrieve-and-read strategy, which fails when the question requires complex reasoning, because retrieving with the original question alone seldom yields all the necessary supporting facts. I present a model for explainable multi-hop reasoning in open-domain QA that iterates between finding supporting facts and reading the retrieved context. This GoldEn (Gold Entity) Retriever model is not only explainable but also shows strong performance on HotpotQA, a recent dataset for multi-step reasoning (codeveloped with people at CMU!).
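To give a concrete flavor of the "attention over symbols" idea, here is a minimal, hypothetical sketch, not the authors' released implementation: each reasoning step scores a fixed vocabulary of concept embeddings against the current state, softly selects a symbol, and takes the attention-weighted mixture of concept embeddings as the next state. All names here (attend_over_symbols, concept_embeddings) are illustrative.

    import torch
    import torch.nn.functional as F

    def attend_over_symbols(state, concept_embeddings):
        """One reasoning step as attention over a discrete symbol vocabulary.

        state:              (d,)   current reasoning state
        concept_embeddings: (K, d) distributed representations of K concepts

        Returns a distribution over the K symbols and the new soft state,
        which is constrained to lie in the span of the concept embeddings.
        """
        scores = concept_embeddings @ state       # (K,) similarity to each symbol
        probs = F.softmax(scores, dim=0)          # soft, differentiable symbol choice
        new_state = probs @ concept_embeddings    # (d,) mixture of symbol embeddings
        return probs, new_state

    # Toy run: 5 concepts in an 8-dim space, iterated for 3 reasoning steps.
    concepts = torch.randn(5, 8)
    state = torch.randn(8)
    for _ in range(3):
        probs, state = attend_over_symbols(state, concepts)

The point of such a design is that intermediate computation remains interpretable as a soft choice among named concepts, which is what encourages modularity and generalization from limited data.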
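Similarly, the iterate-between-retrieving-and-reading loop can be summarized schematically. This is a sketch of the control flow only, assuming placeholder components (generate_query, retrieve, read) rather than the actual GoldEn Retriever model:

    def multi_hop_qa(question, generate_query, retrieve, read, num_hops=2):
        """Schematic multi-hop open-domain QA: alternate retrieval and reading.

        At each hop, a query generator rewrites the question plus the context
        gathered so far into a new search query (rather than always searching
        with the original question), and newly retrieved documents are added
        to the context. The final context is read to produce the answer.
        """
        context = []
        for _ in range(num_hops):
            query = generate_query(question, context)  # hop-specific search query
            context.extend(retrieve(query))            # e.g. an off-the-shelf IR system
        return read(question, context)                 # answer + supporting facts

    # Toy usage with trivial stand-ins for the three components:
    answer = multi_hop_qa(
        "Which city is the birthplace of the director of Inception?",
        generate_query=lambda q, ctx: q if not ctx else ctx[-1],
        retrieve=lambda query: ["doc retrieved for: " + query],
        read=lambda q, ctx: "(answer derived from %d retrieved docs)" % len(ctx),
    )

The key contrast with plain retrieve-and-read is that later hops search with queries derived from already-retrieved evidence, so supporting facts that share no words with the original question can still be found.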

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). His research goal is computers that can intelligently process, understand, and generate human language material. Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, the GloVe model of word vectors, sentiment analysis, neural network dependency parsing, neural machine translation, question answering, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference, and multilingual language processing, and he is a principal developer of Stanford Dependencies and Universal Dependencies.

Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP; Manning and Schütze, 1999) and on information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and served as President of the ACL in 2015. His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He holds a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford (1994), and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP Group and manages development of the Stanford CoreNLP software.
