
This is the official project page of our paper “Spatially Constrained GAN for Face and Fashion Synthesis”, which was accepted to FG 2021 as an oral presentation and received the NVIDIA CCS Best Student Paper Award!

by Songyao Jiang, Hongfu Liu, Yue Wu and Yun Fu.

Smile Lab @ Northeastern University

Problem Definition

Goal

SCGAN decouples the image synthesis task into three dimensions (i.e., spatial, attribute, and latent dimensions), controls the spatial and attribute-level contents, and randomizes the remaining unregulated contents.

Our goal can be described as finding the mapping

$$G(z, s, a) = \hat{x},$$

where $G$ is the generating function, $z$ is the latent vector of size $n_z$, and $\hat{x}$ is the conditionally generated image which complies with the target spatial condition $s$ and attribute condition $a$.

Motivations

Key Contributions

Method

SCGAN Framework

Our proposed SCGAN consists of three networks, shown below: a generator network G, a discriminator network D, and a segmentor network S.
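As a rough illustration only (not the authors' released code), the sketch below shows how three such networks could be wired together in PyTorch. The layer sizes, the attribute dimension `n_attr`, the number of segmentation classes `n_seg`, and the auxiliary attribute head on D are assumptions for the sake of a runnable example.

```python
# Minimal sketch of the three SCGAN components (hypothetical shapes, not the official code).
import torch
import torch.nn as nn

n_z, n_attr, n_seg = 128, 40, 8   # latent size, attribute dim, segmentation classes (assumed)

class Generator(nn.Module):
    """G(z, s, a) -> generated image x_hat."""
    def __init__(self):
        super().__init__()
        # Segmentation map s and spatially broadcast attribute/latent codes are
        # concatenated channel-wise before the convolutional stack.
        self.net = nn.Sequential(
            nn.Conv2d(n_seg + n_attr + n_z, 64, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, s, a):
        b, _, h, w = s.shape
        z_map = z.view(b, n_z, 1, 1).expand(b, n_z, h, w)
        a_map = a.view(b, n_attr, 1, 1).expand(b, n_attr, h, w)
        return self.net(torch.cat([s, a_map, z_map], dim=1))

class Discriminator(nn.Module):
    """D(x) -> real/fake score and an (assumed) auxiliary attribute prediction."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True))
        self.adv = nn.Conv2d(64, 1, 3, padding=1)       # real/fake score map
        self.cls = nn.Conv2d(64, n_attr, 3, padding=1)  # attribute logits

    def forward(self, x):
        h = self.features(x)
        return self.adv(h), self.cls(h).mean(dim=[2, 3])

class Segmentor(nn.Module):
    """S(x) -> predicted segmentation map, used to enforce the spatial constraint."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(True),
                                 nn.Conv2d(64, n_seg, 3, padding=1))

    def forward(self, x):
        return self.net(x)
```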

Objective Functions

Training Algorithm

Pseudo-code to train the proposed SCGAN can be found here; an illustrative sketch of the training loop is also given below.
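The loop below is only an illustrative outline of adversarial training with an added segmentation-consistency term, assuming the hypothetical components sketched above, standard cross-entropy GAN losses, and placeholder loss weights; the actual losses, weights, and update schedule are those given in the paper.

```python
# Illustrative training step (assumed losses and weights, not the official algorithm).
# The segmentor S is assumed pretrained on real (image, segmentation) pairs and kept fixed here.
import torch
import torch.nn.functional as F

def train_step(G, D, S, opt_g, opt_d, x_real, s_real, a_real,
               lambda_seg=1.0, lambda_cls=1.0):
    b = x_real.size(0)
    z = torch.randn(b, 128)  # latent size matches the n_z assumed above

    # --- Discriminator update: real vs. fake, plus attribute classification on real images.
    adv_real, cls_real = D(x_real)
    adv_fake, _ = D(G(z, s_real, a_real).detach())
    d_loss = (F.binary_cross_entropy_with_logits(adv_real, torch.ones_like(adv_real))
              + F.binary_cross_entropy_with_logits(adv_fake, torch.zeros_like(adv_fake))
              + lambda_cls * F.binary_cross_entropy_with_logits(cls_real, a_real))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generator update: fool D, match the target attributes, and satisfy the spatial
    #     constraint by asking S to recover the input segmentation from the fake image.
    x_fake = G(z, s_real, a_real)
    adv_fake, cls_fake = D(x_fake)
    seg_pred = S(x_fake)
    g_loss = (F.binary_cross_entropy_with_logits(adv_fake, torch.ones_like(adv_fake))
              + lambda_cls * F.binary_cross_entropy_with_logits(cls_fake, a_real)
              + lambda_seg * F.cross_entropy(seg_pred, s_real.argmax(dim=1)))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return d_loss.item(), g_loss.item()
```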

Network Architecture

Experiment

We verify the effectiveness of SCGAN on the face dataset CelebA and the fashion dataset DeepFashion, and show both visual and quantitative results compared with four representative methods.

Datasets

CelebA is a large-scale face attribute dataset containing 202,599 celebrity images, each annotated with 40 binary attributes.

DeepFashion is a large-scale clothes database with 50 categories and 1,000 descriptive attributes. We use its Fashion Synthesis subset.
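For reference, CelebA images and their 40 binary attribute labels can be loaded with torchvision's built-in dataset class; this is a common convenience for re-implementations and not necessarily the preprocessing used in the paper (the crop size and training resolution below are assumptions).

```python
# Loading CelebA with attribute labels via torchvision (illustrative preprocessing).
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.CenterCrop(178),              # typical CelebA center crop (assumed)
    transforms.Resize(128),                  # assumed training resolution
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

celeba = datasets.CelebA(root="data", split="train", target_type="attr",
                         transform=transform, download=True)
loader = DataLoader(celeba, batch_size=16, shuffle=True)

images, attrs = next(iter(loader))           # attrs: (16, 40) tensor of 0/1 attribute labels
```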

Qualitative Results

Results on CelebA dataset:


Results on DeepFashion dataset:

NoSmile2Smile Interpolation:

Left2Right Interpolation:
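Such interpolation rows are typically produced by linearly blending two condition vectors (or two latent codes) while holding everything else fixed. The snippet below is a generic illustration using the hypothetical generator sketched earlier, not the exact procedure from the paper.

```python
# Generic attribute interpolation (e.g., no-smile -> smile), assuming the Generator above.
import torch

@torch.no_grad()
def interpolate_attribute(G, z, s, a_start, a_end, steps=8):
    """Generate a row of images by linearly blending attribute vectors a_start -> a_end."""
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        a_t = (1.0 - t) * a_start + t * a_end   # blended attribute condition
        frames.append(G(z, s, a_t))
    return torch.cat(frames, dim=0)             # (steps * batch, 3, H, W)
```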

Quantitative Evaluation

Evaluation:

Metrics:

Ablation Study of Generator Architecture

Our proposed architecture:

Alternative architecture:

Compared with the alternative generator, our proposed step-by-step generator has:

Ablation Study of Model Convergence

Settings:

Benefits of Segmentor S:

Citation

If you find this repo useful in your research, please consider citing:

@inproceedings{jiang2021spatially,
  title={Spatially Constrained GAN for Face and Fashion Synthesis},
  author={Jiang, Songyao and Liu, Hongfu and Wu, Yue and Fu, Yun},
  booktitle={2021 16th IEEE International Conference on Automatic Face \& Gesture Recognition (FG 2021)},
  year={2021},
  organization={IEEE}
}

@inproceedings{jiang2019segmentation,
  title={Segmentation guided image-to-image translation with adversarial networks},
  author={Jiang, Songyao and Tao, Zhiqiang and Fu, Yun},
  booktitle={2019 14th IEEE International Conference on Automatic Face \& Gesture Recognition (FG 2019)},
  pages={1--7},
  year={2019},
  organization={IEEE}
}

Contacts