Standardized tests are often used to assess students as they progress through the formal education system. These tests are widely available and come with clear evaluation procedures and metrics, so they can serve as good benchmarks for AI. We propose approaches for solving some of these tests. We broadly divide them into two categories: open domain question answering tests, such as reading comprehension and elementary school science tests, and closed domain question answering tests, such as intermediate or advanced math and science tests.
For the former, we present an alignment-based approach with multi-task learning. For closed domain tests, we propose a parsing-to-programs approach, which can be seen as a natural language interface to expert systems. We also describe approaches for question generation based on instructional material in both the open domain and closed domain settings. Finally, we show that we can improve both the question answering and question generation models by learning them jointly. This mechanism also allows us to leverage cheap unlabelled data for learning the two models. Our work can potentially be applied for social good in the education domain. In studies with human subjects, participants found our approaches useful as assistive tools in education.
Thesis Committee: Eric Xing (Chair), Jaime Carbonell, Tom Mitchell, Dan Roth (University of Pennsylvania)