A layout-to-image (L2I) generation model aims to generate a complex image containing multiple objects (things) against a natural background (stuff), conditioned on a given layout. Built upon recent advances in generative adversarial networks (GANs), existing L2I models have made great progress. However, a close inspection of their generated images reveals two major limitations: (1) the object-to-object as well as object-to-stuff relations are often broken, and (2) each object's appearance is typically distorted, lacking the key defining characteristics associated with its object class. We argue that these limitations are caused by the lack of context-aware object and stuff feature encoding in their generators, and of location-sensitive appearance representation in their discriminators. To address these limitations, two new modules are proposed in this work. First, a context-aware feature transformation module is introduced in the generator to ensure that the generated feature encoding of either object or stuff is aware of other coexisting objects/stuff in the scene. Second, instead of feeding location-insensitive image features to the discriminator, we use the Gram matrix computed from the feature maps of the generated object images to preserve location-sensitive information, resulting in much enhanced object appearance. Extensive experiments show that the proposed method achieves state-of-the-art performance on the COCO-Thing-Stuff and Visual Genome benchmarks. Code is available at: https://github.com/wtliao/layout2img.
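The Gram-matrix feature mentioned in the abstract can be sketched as follows. This is a minimal illustration of how a Gram matrix is computed from a convolutional feature map, assuming a C x H x W layout; the function name, shapes, and normalization are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def gram_matrix(feature_map: np.ndarray) -> np.ndarray:
    """Compute the Gram matrix of a C x H x W feature map.

    Each entry G[i, j] is the inner product between channels i and j
    over all spatial locations, capturing second-order appearance
    statistics of the generated object crop. The normalization by
    C*H*W is an assumed choice for scale invariance.
    """
    c, h, w = feature_map.shape
    f = feature_map.reshape(c, h * w)   # flatten spatial dimensions
    return (f @ f.T) / (c * h * w)     # normalized C x C Gram matrix

# Example: a hypothetical feature map for one generated object crop
features = np.random.rand(64, 16, 16).astype(np.float32)
g = gram_matrix(features)
print(g.shape)  # (64, 64)
```

Such a C x C matrix summarizes channel co-activation patterns, which is what allows the discriminator to penalize distorted object appearance.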
Title of host publication: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Number of pages: 10
Publication status: Published - 13 Nov 2021
Event: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) - Nashville, TN, USA
Duration: 20 Jun 2021 → 25 Jun 2021