Generative Adversarial Networks: A Primer for Radiologists

Jelmer M. Wolterink*, Anirban Mukhopadhyay, Tim Leiner, Thomas J. Vogl, Andreas M. Bucher, Ivana Išgum

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review



Artificial intelligence techniques involving the use of artificial neural networks—that is, deep learning techniques—are expected to have a major effect on radiology. Some of the most exciting applications of deep learning in radiology make use of generative adversarial networks (GANs). GANs consist of two artificial neural networks that are jointly optimized but with opposing goals. One neural network, the generator, aims to synthesize images that cannot be distinguished from real images. The second neural network, the discriminator, aims to distinguish these synthetic images from real images. These deep learning models allow, among other applications, the synthesis of new images, acceleration of image acquisitions, reduction of imaging artifacts, efficient and accurate conversion between medical images acquired with different modalities, and identification of abnormalities depicted on images. The authors provide an introduction to GANs and adversarial deep learning methods. In addition, the different ways in which GANs can be used for image synthesis and image-to-image translation tasks, as well as the principles underlying conditional GANs and cycle-consistent GANs, are described. Illustrated examples of GAN applications in radiologic image analysis for different imaging modalities and different tasks are provided. The clinical potential of GANs, future clinical GAN applications, and potential pitfalls and caveats that radiologists should be aware of are also discussed in this review.
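To make the generator/discriminator interplay described above concrete, the following minimal NumPy sketch trains a toy 1-D GAN: the generator is an affine map of noise that tries to mimic a target Gaussian (standing in for "real images"), and the discriminator is logistic regression trained to tell the two apart. All variable names, hyperparameters, and the 1-D setup are illustrative assumptions for exposition, not details from the article.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative assumptions throughout):
#   generator  G(z) = gw * z + gb          (affine map of Gaussian noise)
#   discriminator D(x) = sigmoid(dw * x + db)  (logistic regression)
# The two are updated in alternation with opposing goals, as in the
# adversarial setup the abstract describes.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

gw, gb = 1.0, 0.0   # generator parameters
dw, db = 0.1, 0.0   # discriminator parameters
lr = 0.05
target_mu, target_sigma = 3.0, 0.5  # "real" data distribution stand-in

for step in range(2000):
    z = rng.standard_normal(64)
    real = rng.normal(target_mu, target_sigma, 64)
    fake = gw * z + gb

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    pr = sigmoid(dw * real + db)
    pf = sigmoid(dw * fake + db)
    g_logit_real = pr - 1.0   # gradient of -log D(real) w.r.t. its logit
    g_logit_fake = pf         # gradient of -log(1 - D(fake)) w.r.t. its logit
    dw -= lr * np.mean(g_logit_real * real + g_logit_fake * fake)
    db -= lr * np.mean(g_logit_real + g_logit_fake)

    # Generator update (non-saturating loss): push D(fake) toward 1.
    fake = gw * z + gb
    pf = sigmoid(dw * fake + db)
    g_logit = pf - 1.0        # gradient of -log D(fake) w.r.t. the logit
    g_fake = g_logit * dw     # chain rule through the discriminator
    gw -= lr * np.mean(g_fake * z)
    gb -= lr * np.mean(g_fake)

# After training, generated samples should have drifted toward target_mu.
samples = gw * rng.standard_normal(1000) + gb
```

The alternating updates are the essence of the adversarial game: the discriminator's gradient sharpens its real-versus-synthetic decision boundary, while the generator's gradient (taken through the discriminator) moves its output distribution toward the real one.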

Original language: English
Pages (from-to): 840-857
Issue number: 3
Publication status: Published - 23 Apr 2021

