Robotics Thesis Defense
- Remote Access - Zoom
- Virtual Presentation - ET
- JINGYAN WANG
- Ph.D. Student
- Robotics Institute
- Carnegie Mellon University
Understanding and Mitigating Biases in Evaluation
Many real-life problems involve collecting and aggregating evaluations from people, such as hiring, peer grading, and conference peer review. In this thesis, we focus on three sources of bias that arise in such problems, and propose methods to mitigate them. First, we study human bias, that is, the bias in the evaluations reported by the evaluators. We consider miscalibration, where different people have different calibration scales. We propose randomized algorithms that provably extract useful information under arbitrary miscalibration, and subsequently propose a heuristic to correct the scores computationally. We also consider the bias induced by the outcomes experienced by people, and propose an adaptive algorithm that debiases people's ratings under mild assumptions on the biases. Second, we study estimation bias, where algorithms yield different performance on different subgroups of the population. We analyze the statistical bias (defined as the expected value of the estimate minus the true value) when using the maximum-likelihood estimator on pairwise comparison data, and then propose a simple modification of the estimator to reduce the bias. Third, we study policy bias, where the design of the evaluation procedure may induce undesirable outcomes. We compare two schemes for distributing a large-scale, multi-faceted evaluation task among many evaluators, in terms of accuracy and fairness. Finally, we briefly describe our outreach efforts to reduce the bias caused by alphabetical ordering of authors in scientific publications, and to analyze the gender distribution of conference paper awards.
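The abstract defines the statistical bias of an estimator as the expected value of the estimate minus the true value. As a minimal illustration of this definition (a standard textbook example, not the pairwise-comparison setting studied in the thesis), the sketch below estimates by Monte Carlo the bias of the maximum-likelihood variance estimator, which divides by n rather than n - 1 and therefore underestimates the true variance by sigma^2 / n on average:

```python
import numpy as np

# Illustrative sketch, not from the thesis: statistical bias is
# E[estimate] - true value. The MLE of the variance of a Gaussian
# divides by n (ddof=0), so its bias is -sigma^2 / n.
rng = np.random.default_rng(0)
n, sigma2, trials = 5, 1.0, 200_000

# trials independent samples of size n from N(0, sigma2)
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
mle_var = samples.var(axis=1, ddof=0)   # MLE: divide by n
bias = mle_var.mean() - sigma2          # Monte Carlo estimate of the bias

print(f"empirical bias: {bias:.4f}  (theory: {-sigma2 / n:.4f})")
```

With these parameters the theoretical bias is -0.2; the empirical average over 200,000 trials concentrates tightly around that value. The thesis addresses the analogous phenomenon for maximum-likelihood estimation on pairwise comparison data, where the bias can differ across subgroups.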
Nihar Shah (Chair)
Ariel Procaccia (Harvard University)
Avrim Blum (Toyota Technological Institute at Chicago)
Zoom Participation. See announcement.