One major challenge in deploying autonomous systems today is the inability to quickly debug their decision-making or understand why they chose their actions. In this talk, I first propose a crowd-sourcing approach to scale the collection of the domain-specific vocabulary and patterns of language needed to implement explanations for new applications and tasks. Then, I will focus on two ways in which new explanation algorithms can influence how users interact with intelligent systems in the future. I will present one of my recent studies demonstrating that explanations can counteract novelty effects and improve trust in robots, and discuss ongoing work toward explaining learned expert strategies to improve student learning.
Stephanie Rosenthal is an Assistant Professor of Applied Data Analytics at Chatham University. Her research at the intersection of AI and HCI aims to improve the decision-making, performance, and usability of intelligent systems. Prior to joining Chatham, she was a Research Scientist at Carnegie Mellon University’s Software Engineering Institute. Dr. Rosenthal received her PhD in Computer Science from Carnegie Mellon in 2012. She is the recipient of the Computing Research Association’s Outstanding Undergraduate Award, and of National Science Foundation, National Physical Science Consortium, Siebel, and Google Anita Borg Fellowships. This year, Dr. Rosenthal was awarded both the EAAI New and Future AI Educator Award and Pittsburgh’s “Who’s Next” in Technology and Education Award for her innovative approaches to integrating AI concepts into introductory programming courses.
Computer Science Department