Language Technologies Institute Colloquium
- Posner Hall A35 and Zoom
- In Person and Virtual Presentation - ET
- ROGER LEVY
- Professor, Department of Brain and Cognitive Sciences
- Director, Computational Psycholinguistics Laboratory
- Massachusetts Institute of Technology
How Language Understanding Unfolds in Minds and Machines
Language allows us to package our thoughts into symbolic forms and transmit some approximation of them into each other's minds. We do this hundreds of times a day as listeners, speakers, readers, and writers. How we're able to achieve this is one of the great scientific questions in the study of mind and brain. In this talk I describe two of our research group's recent theoretical and empirical advances in our work on this question. First, we evaluate and calibrate contemporary deep-learning models for human-like processing using numerous controlled experimental benchmarks and human behavioral datasets. Our results shed light on classic questions of the learnability of syntactic structures from linguistic input, and also highlight the continued importance of model architecture for human-like linguistic generalization. Second, we offer new results from a theory of how memory constrains human understanding: namely, that context representations are "lossy" in an information-theoretic sense. This theory provides a novel link between memory representations for grammatical structures and the statistics of the natural language environment, explaining recent results showing how the same grammatical configuration can differ in difficulty for native speakers of different languages. The theory also predicts new generalizations about word order that we empirically confirm in multiple languages.
Roger Levy is a Professor in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, where he directs MIT's Computational Psycholinguistics Laboratory. Before coming to MIT, he was a faculty member in the Department of Linguistics at UC San Diego. His research focuses on theoretical and applied questions in the processing and acquisition of natural language. Linguistic communication involves the resolution of uncertainty over a potentially unbounded set of possible signals and meanings. How can a fixed set of knowledge and resources be deployed to manage this uncertainty? And how is this knowledge acquired? To address these questions he combines computational modeling, psycholinguistic experimentation, and analysis of large naturalistic language datasets. This work furthers our understanding of the cognitive underpinnings of language processing and acquisition, and helps us design models and algorithms that will allow machines to process human language.
The LTI Colloquium is generously sponsored by Abridge.
Zoom Participation. See announcement.