GANs, differential equations, kernels, and Fourier Series


Generative Adversarial Networks (GANs) comprise a generator network, which transforms noise into images, and a discriminator network, which distinguishes real images from generated (fake) ones. In this series of works, we analyze the underlying functional optimization of the discriminator in gradient-regularized GANs and show that the optimal discriminator is the solution to a Poisson partial differential equation (PDE). We derive closed-form, kernel-based implementations of the discriminator, drawing links to the Coulomb potential from electrostatics. We also derive a Fourier-series approximation to solve the PDE. We demonstrate applications to latent-space prior matching in Wasserstein autoencoders on benchmark image datasets, where the proposed approaches achieve comparable reconstruction error and Fréchet inception distance, with faster convergence and up to a two-fold improvement in image sharpness.
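To make the Fourier-series idea concrete, here is a minimal illustrative sketch (not the papers' implementation): solving a 1D Poisson equation u''(x) = -rho(x) on a periodic domain by a spectral (Fourier-series) method. In the GAN setting above, rho would correspond to the difference between the data and generator densities and u to the optimal discriminator; here rho is a toy right-hand side with a known solution.

```python
import numpy as np

def poisson_fourier(rho, length=2 * np.pi):
    """Solve u'' = -rho on [0, length) with periodic boundary conditions.

    In Fourier space, u'' = -rho becomes -k^2 u_hat = -rho_hat, so each
    nonzero mode is recovered as u_hat = rho_hat / k^2. The zero mode is
    set to 0, which fixes the solution up to its (arbitrary) mean.
    """
    n = rho.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # wavenumbers
    rho_hat = np.fft.fft(rho)
    u_hat = np.zeros_like(rho_hat)
    nonzero = k != 0
    u_hat[nonzero] = rho_hat[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(u_hat).real  # zero-mean real solution

# Toy check: rho(x) = cos(x) has the exact zero-mean solution u(x) = cos(x),
# since u'' = -cos(x) = -rho.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = poisson_fourier(np.cos(x))
print(np.max(np.abs(u - np.cos(x))))  # agrees to near machine precision
```

The same diagonalization extends directly to higher dimensions (divide by |k|^2 over a multidimensional FFT), which is what makes Fourier-series discriminator approximations computationally attractive.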

REFERENCES:

S. Asokan and C. S. Seelamantula, “Euler-Lagrange Analysis of Generative Adversarial Networks,” Journal of Machine Learning Research (JMLR), vol. 24, pp. 1–100, 2023.

S. Asokan and C. S. Seelamantula, “Bridging the Gap Between Coulomb GAN and Gradient-regularized WGAN,” in Proceedings of “The Symbiosis of Deep Learning and Differential Equations (DLDE) – II” Workshop at NeurIPS 2022, New Orleans, USA (Spotlight Presentation).

ONLINE RESOURCES:

Paper Website:

https://www.jmlr.org/papers/v24/20-1390.html

https://openreview.net/forum?id=CVbDSCQN4P

PDF Links:

https://www.jmlr.org/papers/volume24/20-1390/20-1390.pdf

https://openreview.net/pdf?id=CVbDSCQN4P

Video Links:

https://slideslive.com/38994053

GitHub Links:

https://github.com/DarthSid95/ELF_GANs

https://github.com/DarthSid95/RBFCoulombGANs

Other Links:

https://www.siddarthasokan.com/ELAnalysisGANs/

Faculty: Prof. Chandra Sekhar Seelamantula
