The proliferation of synthetic media enables malicious uses such as disinformation campaigns, posing a threat to media integrity and democracy. One way to combat this is to develop forensic algorithms that identify manipulated media.
In the first part of the talk, I will discuss how to train a model to detect photos manipulated by image editing tools such as Adobe Photoshop. In particular, I will present a method for detecting one popular Photoshop manipulation: image warping of human faces. Our model outperforms humans at recognizing manipulated images, can predict the specific location of edits, and in some cases can be used to "undo" a manipulation and reconstruct the original, unedited image.
I will continue with our findings on the generalizability of detecting CNN-synthesized images. We investigate systematic artifacts in images generated by convolutional neural networks (CNNs), and demonstrate that with careful processing, a standard image classifier trained on only one specific CNN generator is able to generalize surprisingly well to unseen architectures, datasets, and training methods. This suggests that today’s CNN generators share some common systematic flaws.
Sheng-Yu Wang is a Robotics PhD student at CMU advised by Jun-Yan Zhu. Previously, he obtained his BA in Computer Science and Applied Mathematics at UC Berkeley, where he worked on studying and detecting artifacts produced by image manipulation and synthesis under the supervision of Alexei A. Efros.
The VASC Seminar is sponsored in part by Facebook Reality Labs Pittsburgh.
Zoom Participation. See announcement.