Flow-based GAN for 3D Point Cloud Generation from a Single Image

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



Generating a 3D point cloud from a single 2D image is of great importance for 3D scene understanding applications. To reconstruct the full 3D shape of the object shown in the image, existing deep learning based approaches use either explicit or implicit generative modeling of point clouds, both of which suffer from limited quality. In this work, we aim to alleviate this issue by introducing a hybrid explicit-implicit generative modeling scheme, which inherits the flow-based explicit generative models for sampling point clouds at arbitrary resolutions while improving the detailed 3D structures of point clouds by leveraging implicit generative adversarial networks (GANs). We evaluate our method on the large-scale synthetic dataset ShapeNet, and the experimental results demonstrate its superior performance. In addition, we demonstrate the generalization ability of our method on cross-category synthetic images as well as on real images from the PASCAL3D+ dataset.
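The flow-based sampling idea in the abstract — drawing an arbitrary number of Gaussian samples and transforming each through an invertible, image-conditioned flow into a 3D point — can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the affine-coupling layer, the conditioning feature, and all dimensions here are illustrative assumptions, and the GAN discriminator that refines the output is omitted.

```python
import numpy as np

def affine_coupling(z, cond, scale_w, shift_w):
    # One invertible affine-coupling layer (illustrative, not the paper's design):
    # keep the x coordinate, and scale/shift (y, z) conditioned on x and the
    # image feature vector `cond`.
    x, rest = z[:, :1], z[:, 1:]
    h = np.concatenate(
        [x, np.broadcast_to(cond, (z.shape[0], cond.shape[0]))], axis=1
    )
    s = np.tanh(h @ scale_w)   # bounded log-scale for numerical stability
    t = h @ shift_w            # shift
    return np.concatenate([x, rest * np.exp(s) + t], axis=1)

def sample_point_cloud(n_points, cond, layers, rng):
    # Flow-based explicit sampling: any resolution is possible simply by
    # drawing more Gaussian samples and pushing each through the flow.
    z = rng.standard_normal((n_points, 3))
    for scale_w, shift_w in layers:
        z = affine_coupling(z, cond, scale_w, shift_w)
    return z
```

In the hybrid scheme the abstract describes, a point cloud sampled this way would additionally be scored by a GAN discriminator, whose adversarial gradient sharpens fine 3D structure that the flow alone tends to smooth over.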
Original language: English
Title of host publication: 33rd British Machine Vision Conference 2022
Subtitle of host publication: London, UK, November 21-24, 2022
Publisher: BMVA Press
Publication status: Published - Nov 2022
Event: 33rd British Machine Vision Conference, BMVC 2022 - London, United Kingdom
Duration: 21 Nov 2022 - 24 Nov 2022
Conference number: 33


Conference: 33rd British Machine Vision Conference, BMVC 2022
Abbreviated title: BMVC 2022
Country/Territory: United Kingdom


