Towards Understanding Deep Learning for Natural Language Processing
Deep learning is revolutionizing natural language processing (NLP), with innovations such as word embeddings and long short-term memory (LSTM) networks playing a key role in virtually every state-of-the-art NLP system today. However, what these neural components learn in practice remains somewhat of a mystery. This talk dives into the inner workings of word embeddings and LSTMs, in an attempt to gain a better mathematical and linguistic understanding of what they do, how they do it, and why it works.
Omer Levy is a research scientist at Facebook AI Research in Seattle. Previously, he was a post-doc at the University of Washington, working with Prof. Luke Zettlemoyer. He completed his PhD at Bar-Ilan University under the guidance of Prof. Ido Dagan and Dr. Yoav Goldberg. He is interested in designing algorithms that mimic the basic language abilities of humans, and in using them to realize semantic applications, such as question answering and summarization, that help people cope with information overload.
He is also interested in deepening our qualitative understanding of how machine learning is applied to language and why it succeeds (or fails), in the hope that better understanding will foster better methods.
Light Refreshments at 4:00 pm, LTI 5th floor kitchen.