
Generative Adversarial Networks and Applications

Authors: Rafael Januzi, Luiz Buris, Prof. Dr. Raoni Texeira, Prof. Dr. Gustavo Carneiro, and Prof. Dr. Fabio A. Faria
Abstract

This project focuses on one of the most fascinating, successful, yet challenging generative models in the literature: the Generative Adversarial Network (GAN). GANs have recently attracted much attention from the scientific community and the entertainment industry due to their effectiveness in generating complex, high-dimensional data, which makes them superior to other generative models for producing new samples. The traditional GAN (referred to as the Vanilla GAN) is composed of two neural networks, a generator and a discriminator, trained through a minimax optimization. The generator creates samples to fool the discriminator, which in turn tries to distinguish real samples from generated ones. This optimization aims to train a model that can generate samples from the distribution of the training set. In addition to defining and explaining the Vanilla GAN and its main variations (e.g., DCGAN, WGAN, StyleGAN, and SAGAN), this project aims to apply GANs to entertainment tasks (e.g., style transfer, face aging, and image-to-image translation) as well as to improve the CNN training process.
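The minimax game described above optimizes the value function V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))]: the discriminator D maximizes it while the generator G minimizes it. The sketch below is a plain-NumPy illustration of these loss terms (the function names are ours, for illustration only, not from any GAN library):

```python
import numpy as np

def value_function(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
    d_real: discriminator outputs on real samples;
    d_fake: discriminator outputs on generated samples."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def discriminator_loss(d_real, d_fake):
    # D maximizes V, so its loss (to minimize) is -V.
    return -value_function(d_real, d_fake)

def generator_loss_minimax(d_fake):
    # G minimizes E[log(1 - D(G(z)))] -- the original minimax objective.
    return np.mean(np.log(1.0 - d_fake))

def generator_loss_nonsaturating(d_fake):
    # Widely used heuristic: maximize log D(G(z)) instead,
    # which gives stronger gradients early in training.
    return -np.mean(np.log(d_fake))

# At the game's global optimum the discriminator cannot tell real from
# fake, D(.) = 1/2 everywhere, and V reaches its equilibrium -log(4).
d_opt = np.full(8, 0.5)
print(value_function(d_opt, d_opt))  # ~ -1.3863 == -log(4)
```

The non-saturating generator loss shown last is the variant most implementations use in practice, since the original minimax term saturates when the discriminator confidently rejects generated samples.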

Video: Face Morphing through GANs

   Published Papers

  • TEXEIRA, R. F. da S.; JANUZI, R. B.; FARIA, F. A. Os Dados dos Brasileiros sob Risco na Era da Inteligência Artificial? Encontro Nacional de Inteligência Artificial e Computacional (ENIAC), 2022. [link]
  • FARIA, F. A.; CARNEIRO, G. Why are Generative Adversarial Networks so Fascinating and Annoying? 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), 2020, pp. 1-8. [link]

Code and Datasets

Coming soon.

Acknowledgments

The authors would like to thank the research funding agencies CAPES (scholarship), CNPq through the Universal Project (grant #408919/2016-7), and FAPESP (grant #2018/23908-1).

News
