Machine Learning Thesis Proposal

  • Gates Hillman Centers
  • Reddy Conference Room 4405
  • YICHONG XU
  • Ph.D. Student
  • Machine Learning Department
  • Carnegie Mellon University

Learning and Decision Making from Diverse Forms of Information

Classical machine learning assumes that data are independently and identically distributed and come in a single format, usually the same as the test data. In modern applications, however, additional information in other formats may be available for free or at a lower cost. For example, in data crowdsourcing we can collect preferences (comparisons) between data points at a lower cost than asking directly for the label of a single data point. In natural language understanding problems, we might have only a limited amount of data in the target domain, but can use a large amount of general-domain data for free.

The main topic of this thesis is how to efficiently incorporate these diverse forms of information into learning and decision making. We study two representative paradigms. First, we study learning and decision making with both direct labels and comparisons; our algorithms efficiently combine comparisons with direct labels so that the total learning cost can be greatly reduced. Second, we study multi-task learning from data in multiple domains, and design algorithms that transfer data from a general, abundant domain to the target domain. We establish theoretical guarantees for our algorithms as well as their statistical minimax optimality via information-theoretic limits. On the practical side, we demonstrate promising experimental results on price estimation and natural language understanding tasks.
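To make the first paradigm concrete, the sketch below shows one simple way comparisons can stand in for most label queries: rank the training points with (cheap) pairwise comparisons, then locate the decision boundary using only logarithmically many (expensive) direct labels. This is a minimal illustration under strong assumptions, not the proposal's actual algorithm: it assumes noiseless oracles and a single threshold on a one-dimensional latent score, and the callables `compare` and `label` are hypothetical placeholders for crowdsourced queries.

```python
# Minimal sketch: combining comparison queries with a few direct label queries.
# Assumptions (not from the proposal): noiseless oracles and a 1-D latent score
# with a single threshold separating the two classes.
from functools import cmp_to_key


def learn_threshold(points, compare, label):
    """Rank `points` with pairwise comparisons, then binary-search the ranking
    with direct label queries to locate the decision boundary.

    compare(a, b) -> -1, 0, or +1   (hypothetical comparison oracle)
    label(a)      -> 0 or 1         (hypothetical label oracle, monotone in rank)
    Returns a predict(x) function that classifies new points using comparisons only.
    """
    # Step 1: ~n log n cheap comparison queries to sort the points.
    ranked = sorted(points, key=cmp_to_key(compare))

    # Step 2: ~log n expensive label queries to find the first positive item.
    lo, hi = 0, len(ranked)
    while lo < hi:
        mid = (lo + hi) // 2
        if label(ranked[mid]) == 1:
            hi = mid
        else:
            lo = mid + 1
    boundary = lo  # items ranked[boundary:] are labeled positive

    def predict(x):
        # Insert x into the ranking using comparisons only, then compare its
        # position to the learned boundary.
        l, r = 0, len(ranked)
        while l < r:
            m = (l + r) // 2
            if compare(x, ranked[m]) < 0:
                r = m
            else:
                l = m + 1
        return 1 if l >= boundary else 0

    return predict


# Example usage with a hypothetical latent score f and threshold 0.5:
#   compare = lambda a, b: (f(a) > f(b)) - (f(a) < f(b))
#   label   = lambda a: int(f(a) > 0.5)
#   predict = learn_threshold(train_points, compare, label)
```

The point of the sketch is the query budget: only the binary search consumes direct labels, so label cost drops from linear to logarithmic in the number of training points when comparisons are cheap and reliable.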

Thesis Committee:
Artur Dubrawski (Co-Chair)
Aarti Singh (Co-Chair)
Sivaraman Balakrishnan
John Langford (Microsoft Research)
