Vision and Autonomous Systems Seminar
- Gates Hillman Centers
- Traffic21 Classroom 6501
- ZHIDING YU
- Research Scientist
- NVIDIA Research
Towards Weakly-Supervised Visual Understanding
Learning with weak and self-supervision has recently emerged as a compelling approach for leveraging vast amounts of unlabeled or partially labeled data. In this talk, I will present some of the latest advances in weakly-supervised visual scene understanding from NVIDIA. Specifically, I will summarize and discuss challenges and potential solutions in weakly-supervised learning, and introduce our work along three main directions: (1) learning with inaccurate supervision, (2) learning with incomplete supervision, and (3) learning with inexact supervision. I will cover specific works and applications aligned with these directions, including semantic boundary detection under noisy annotations, unsupervised domain adaptation, person re-identification, and weakly-supervised object detection, along with their applications in autonomous driving, robotics, medical imaging, and AI city.
Zhiding Yu joined NVIDIA Research as a Research Scientist in 2018. Before that, he obtained his Ph.D. in ECE from Carnegie Mellon University in 2017. His research interests focus on deep representation learning, weakly/semi-supervised learning, transfer learning, and deep structured prediction, with applications to: (1) general scene/video understanding, (2) bottom-up perceptual grouping and mid-level vision, (3) robust representations for cross-domain/cross-task/open-set generalization and adaptation, and (4) learning with interactive weak supervision. He is a winner of the Domain Adaptation for Semantic Segmentation Challenge at the Workshop on Autonomous Driving (WAD), CVPR18. He is a co-author of the best student paper at ISCSLP14 and a winner of the best paper award at WACV15. He twice received the HKTIIT Post-Graduate Excellence Scholarship, in 2010 and 2012. His intern work on deep facial expression recognition at Microsoft Research won first runner-up at the EmotiW-SFEW Challenge 2015 and was integrated into the Microsoft Emotion Recognition API under Microsoft Azure Cognitive Services.
Sponsored in part by Facebook Reality Labs Pittsburgh