Human-Computer Interaction Thesis Defense

  • Remote Access Enabled - Zoom
  • Virtual Presentation
  • Ph.D. Student
  • Human-Computer Interaction Institute
  • Carnegie Mellon University

Human-AI Systems for Visual Information Access

In my work, I create intelligent interactive systems powered by a hybrid of human and AI computation to provide access to visual information in the real world. By combining the complementary strengths of humans and AI, these systems can be nearly as robust and flexible as humans, and nearly as quick and low-cost as fully automated AI, enabling us to solve problems that are currently impossible with either alone.

I develop and deploy human-AI systems for two application domains: accessibility and environmental sensing. To make physical interfaces accessible to blind people, I develop systems that interpret static and dynamic interfaces, enabling blind people to access them independently through audio feedback or tactile overlays. For environmental sensing, I develop and deploy a camera sensing system that collects human labels to bootstrap automatic processes for answering real-world visual questions, allowing end users to put AI to use in their everyday lives.

AI systems often require large amounts of up-front training data to get started, but targeted human intelligence can bootstrap such systems with relatively little data. Although human-powered approaches may be slower initially, quickly transitioning to automated approaches strikes a good balance, making human-AI systems both scalable and rapidly deployable.

Thesis Committee:
Jeffrey Bigham (Chair)
Chris Harrison
Jodi Forlizzi
Meredith Ringel Morris (Microsoft Research)

Additional Thesis Information


For More Information, Please Contact: