Learning to Synthesize Image and Video Content

Speaker: Prof. Ming-Hsuan Yang


Abstract

In this talk, I will first review our recent work on synthesizing image and video content. The underlying theme is to exploit different priors to synthesize diverse content with robust formulations. I will then present our recent work on image synthesis, video synthesis, frame interpolation, and learning to synthesize images from limited training data. Time permitting, I will also discuss some recent findings on other vision tasks.

Bio

Ming-Hsuan Yang is a professor at UC Merced and a research scientist at Google. He received a Google Faculty Award in 2009 and a Faculty Early Career Development (CAREER) Award from the National Science Foundation in 2012. Yang received paper awards at UIST 2017, CVPR 2018, and ACCV 2018. He served as a program co-chair for ACCV 2016 and ICCV 2019. Yang is a Fellow of the IEEE and the ACM.