GANs, differential equations, kernels, and Fourier Series


Generative Adversarial Networks (GANs) comprise a generator network, which transforms noise into images, and a discriminator network, which distinguishes real samples from generated (fake) ones. In this series of works, we analyze the functional optimization underlying the discriminator in gradient-regularized GANs and show that the optimal discriminator is the solution to a Poisson partial differential equation (PDE). We derive closed-form kernel-based implementations of the discriminator, drawing links to the Coulomb potential in electrostatics, and also derive a Fourier-series approximation to solve the PDE. We demonstrate applications to latent-space prior matching in Wasserstein autoencoders on benchmark image datasets, where the proposed approaches achieve comparable reconstruction error and Fréchet inception distance with faster convergence and up to a two-fold improvement in image sharpness.
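The Poisson-PDE viewpoint can be illustrated with a small spectral solver. The sketch below assumes the optimal discriminator satisfies a Poisson equation of the form lap(D) = p_g − p_d (the sign convention, the 1-D periodic domain, and the toy Gaussian-bump densities are illustrative choices, not the setup of the papers), and solves it by a Fourier-series expansion, since the Laplacian is diagonal in the Fourier basis:

```python
import numpy as np

def poisson_solve_fourier(rho, L=1.0):
    """Solve lap(D) = rho on a periodic 1-D grid of length L via FFT.

    In the Fourier domain the PDE reads -k^2 * D_hat(k) = rho_hat(k),
    so D_hat(k) = -rho_hat(k) / k^2 for k != 0; the k = 0 mode is set
    to zero, which picks the zero-mean solution.
    """
    n = rho.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    rho_hat = np.fft.fft(rho)
    D_hat = np.zeros_like(rho_hat)
    nz = k != 0
    D_hat[nz] = -rho_hat[nz] / (k[nz] ** 2)
    return np.fft.ifft(D_hat).real

# Toy source term: difference of two smooth "densities" on [0, 1)
n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
p_d = np.exp(-((x - 0.3) ** 2) / 0.005)            # stand-in real density
p_g = np.exp(-((x - 0.7) ** 2) / 0.005)            # stand-in fake density
rho = p_g - p_d
rho -= rho.mean()                                  # solvability on the torus
D = poisson_solve_fourier(rho)

# Check the residual of the PDE with a spectral Laplacian
k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
lap_D = np.fft.ifft(-(k ** 2) * np.fft.fft(D)).real
print(np.max(np.abs(lap_D - rho)))                 # ~ machine precision
```

Truncating the sum over wavenumbers to a finite band gives a Fourier-series approximation of the discriminator, in the spirit of the approximation derived in the papers.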


S. Asokan and C. S. Seelamantula, “Euler-Lagrange Analysis of Generative Adversarial Networks,” Journal of Machine Learning Research (JMLR), pp. 1–100, 2023.

S. Asokan and C. S. Seelamantula, “Bridging the Gap Between Coulomb GAN and Gradient-regularized WGAN,” in Proceedings of the NeurIPS 2022 Workshop “The Symbiosis of Deep Learning and Differential Equations (DLDE) – II,” New Orleans, USA, 2022 (Spotlight Presentation).


Faculty: Prof. Chandra Sekhar Seelamantula