Inducing optimal attribute representations for conditional GANs

Binod Bhattarai, Tae-Kyun Kim

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution


Conditional GANs (cGANs) are widely used to translate an image from one category to another. Meaningful conditions on GANs provide greater flexibility and control over the nature of the target-domain synthetic data. Existing conditional GANs commonly encode target-domain label information as hard-coded categorical vectors of 0s and 1s. The major drawbacks of such representations are the inability to encode the high-order semantic information of target categories and their relative dependencies. We propose a novel end-to-end learning framework based on Graph Convolutional Networks to learn the attribute representations with which to condition the generator. The GAN losses, i.e. the discriminator and attribute classification losses, are fed back to the graph, resulting in synthetic images that are more natural and clearer with respect to attribute generation. Moreover, prior arts mostly apply conditions on the generator side of GANs, not on the discriminator side. We apply the conditions on the discriminator side as well, via multi-task learning. We enhance four state-of-the-art cGAN architectures: Stargan, Stargan-JNT, AttGAN and STGAN. Our extensive qualitative and quantitative evaluations on challenging face attribute manipulation data sets, CelebA, LFWA, and RaFD, show that the cGANs enhanced by our methods outperform their counterparts and other conditioning methods by a large margin, in terms of both target attribute recognition rates and quality measures such as PSNR and SSIM.
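The core idea, replacing a hard-coded 0/1 attribute vector with a conditioning code derived from graph-learned attribute embeddings, can be sketched as below. This is a minimal illustration under assumed details, not the paper's implementation: it uses a single Kipf-Welling-style graph-convolution layer over a hypothetical 3-attribute graph, with a made-up adjacency matrix and random weights standing in for the learned parameters.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A+I) D^{-1/2} X W).

    Nodes are target attributes; edges encode their dependencies, so the
    resulting embeddings carry relational information that a one-hot
    categorical vector cannot.
    """
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def attribute_condition(A, X, W, target):
    """Build a soft conditioning code for a binary target-attribute vector.

    Instead of feeding `target` (0s and 1s) to the generator directly,
    project it through the graph-learned attribute embeddings.
    """
    H = gcn_layer(A, X, W)                            # (num_attrs, d) embeddings
    return target @ H                                 # (d,) conditioning code

# Toy example: 3 correlated face attributes, e.g. "male", "beard", "young"
# (attribute names and graph are illustrative assumptions).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)                # "male" <-> "beard" linked
X = np.eye(3)                                         # one-hot initial node features
W = rng.standard_normal((3, 4))                       # learnable projection
code = attribute_condition(A, X, W, np.array([1.0, 0.0, 1.0]))
print(code.shape)  # (4,)
```

In the full framework this code vector would condition the generator (and, via multi-task learning, the discriminator), with the GAN and attribute-classification losses back-propagated into the graph weights `W`.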
Original language: English
Title of host publication: European Conference on Computer Vision (ECCV 2020)
Number of pages: 17
ISBN (Electronic): 978-3-030-58571-6
ISBN (Print): 978-3-030-58570-9
Publication status: Published - 9 Nov 2020

Publication series

Name: Lecture Notes in Computer Science

