Machine Learning Thesis Proposal

  • Gates Hillman Centers
  • Traffic21 Classroom 6501
  • ZHITING HU
  • Ph.D. Student
  • Machine Learning Department
  • Carnegie Mellon University

Towards Functional Equivalence of Learning Paradigms and Its Applications

Machine learning is broadly defined as computational methods that enable machines to improve performance from experience. Decades of research have developed a heterogeneity of learning paradigms, each establishing a different formalism of learning and ingesting a distinct type of experience, e.g., supervised learning with data examples, posterior regularization with structured knowledge, reinforcement learning with environment rewards, and adversarial learning with auxiliary models. Two broad questions arise extensively in ML practice: 1) Given a type of experience, how can we make the best use of it to improve machine performance? 2) Can we devise a learning mechanism capable of ingesting any sort of experience, and thus applicable to vastly different problems? Abundant research has centered around these still-elusive challenges, producing a growing body of algorithms and remarkable applications. However, due to the heterogeneity of the learning paradigms, these studies are largely isolated: innovating on different fronts requires disparate craftsmanship, and an advance in one paradigm is usually treated as unrelated to the others rather than exploited in a broader context.

In this thesis, we aim to systematize the diverse learning paradigms by finding the commonalities and reducing the variabilities between them, in order to form a more holistic view of learning, make rich innovations originally crafted for one paradigm repeatable in others, and spawn new algorithms that ingest multiple forms of experience.

Specifically, we propose a “functional equivalence” between the above paradigms, showing that the diverse forms of experience, despite their variability in form and structure, are equivalent in the way they function as supervision, as described by a single unifying formulation. We then present two general means of using this framework as a tool to methodically create new algorithms and improve applications, as a step towards tackling the above two questions. 1) Drawing on the equivalence, we can extrapolate an algorithm from one paradigm to another to enable new learning capabilities. As an example, we show how off-the-shelf reward-learning algorithms from the reinforcement learning literature can be applied to learning structured knowledge in posterior regularization, and to learning data augmentation/weighting in the supervised setting. 2) We learn models with multiple experience types in a modularized way: designing a solution to a problem boils down to choosing “what” experience to use, without worrying too much about “how” to use it. We show applications in controllable text generation.
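To make the idea of a single unifying formulation concrete, one illustrative sketch (the notation here is an assumption for exposition, not quoted from the proposal) is a variational objective over a model p_theta and an auxiliary distribution q over the target variable t:

    min over q, theta of:  - alpha * H(q)  +  beta * D(q(t), p_theta(t))  -  E_{q(t)}[ f(t) ]

where H is the entropy, D is a divergence (e.g., cross entropy or KL), and f is an “experience function” that scores t against the available experience. Under this reading, choosing f to be a log-likelihood over data examples, an environment reward, a structured-knowledge constraint, or a discriminator score would recover, respectively, supervised learning, reinforcement learning, posterior regularization, and adversarial learning, which is the sense in which the diverse experience types function equivalently as supervision.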

Thesis Committee: 
Eric Xing (Chair)
Tom Mitchell
Ruslan Salakhutdinov
Dan Roth (University of Pennsylvania)
