Graphics Lab Meeting / Talk

  • SAMANEH AZADI
  • Ph.D. Candidate in Computer Science
  • Department of Electrical Engineering and Computer Sciences
  • University of California, Berkeley

Towards Content-Creative AI

In the last few years, Generative Adversarial Networks (GANs) have made remarkable advances in learning complex data manifolds and generating novel data points. The content created by such models has been well received and broadly applied in art, design, and technology. In this talk, I will present our efforts towards creating new content in structured image domains, from hand-designed fonts to complex natural scenes. I will discuss how we take advantage of the existing structure of the English alphabet to generate unobserved glyphs of a stylized typeface. In the domain of natural images, we learn scene structure and object arrangement to compose objects in a plausible fashion or to generate photorealistic complex scenes. Finally, since GANs are rarely fully optimized due to their training difficulties, I will introduce an approach that corrects the GAN generator distribution by discarding defective synthesized examples.
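The last point refers to filtering a trained GAN's outputs using the discriminator's own judgment of realism. Below is a minimal illustrative sketch of that general idea, not the specific correction method presented in the talk; `generate_batch` and `score_real` are hypothetical stand-ins for a trained generator and discriminator.

```python
import numpy as np

# Illustrative sketch: keep only generated samples the discriminator scores
# as sufficiently realistic, biasing the kept set toward better outputs.
rng = np.random.default_rng(0)

def generate_batch(n, dim=64):
    # Stand-in generator: returns n random "images" as flat vectors.
    return rng.normal(size=(n, dim))

def score_real(samples):
    # Stand-in discriminator: returns a realism score in [0, 1] per sample.
    return 1.0 / (1.0 + np.exp(-samples.mean(axis=1)))

def filtered_samples(n_keep, threshold=0.5, batch=256):
    # Draw batches from the generator and discard samples scored below the
    # threshold until enough accepted samples have been collected.
    kept = []
    while sum(len(k) for k in kept) < n_keep:
        x = generate_batch(batch)
        kept.append(x[score_real(x) >= threshold])
    return np.concatenate(kept)[:n_keep]

print(filtered_samples(10).shape)  # (10, 64)
```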

Samaneh Azadi is a Ph.D. candidate in Computer Science at UC Berkeley, advised by Prof. Trevor Darrell. Her research focuses on machine learning and computer vision, particularly creative image generation through structural generative adversarial modeling. She has been awarded the Facebook Graduate Fellowship and the UC Berkeley Graduate Fellowship, and was named a Rising Star in EECS in 2019 and 2020. She has spent time as a research intern at Google Brain and Adobe Research. Samaneh is co-organizing the NeurIPS 2020 Workshop on Machine Learning for Creativity and Design and co-organized the Women in Computer Vision Workshop (WiCV) at CVPR 2016 and CVPR 2017.

Zoom Participation. See announcement.
