Data-consistent neural networks for solving nonlinear inverse problems

Yoeri E. Boink, Markus Haltmeier, Sean Holman, Johannes Schwab*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

2 Citations (Scopus)

Abstract

Data-assisted reconstruction algorithms, incorporating trained neural networks, are a novel paradigm for solving inverse problems. One approach is to first apply a classical reconstruction method and then apply a neural network to improve its solution. Empirical evidence shows that such plain two-step methods provide high-quality reconstructions, but they lack the convergence analysis available for classical regularization methods. In this paper we formalize the use of such two-step approaches within classical regularization theory. We propose data-consistent neural networks that can be combined with classical regularization methods. This yields a data-driven regularization method for which we provide a convergence analysis with respect to noise. Numerical simulations show that, compared to standard two-step deep learning methods, our approach provides better stability with respect to out-of-distribution examples in the test set, while performing similarly on test data drawn from the distribution of the training set. Our method provides a stable solution approach to inverse problems that beneficially combines the known nonlinear forward model with information on the desired solution manifold available in training data.
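The abstract describes a two-step scheme: a classical regularized reconstruction followed by a learned refinement, with a data-consistency mechanism tying the output back to the measured data. The following is a minimal illustrative sketch of such a pipeline on a toy linear operator (the paper itself treats nonlinear forward models); the function names, the identity stand-in for the trained network, and the particular regularized residual correction used for data consistency are assumptions for illustration, not the authors' construction.

```python
import numpy as np

def classical_reconstruction(A, y, alpha=1e-2):
    """Step 1 (illustrative): Tikhonov-regularized least squares,
    x = argmin ||Ax - y||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def network_refinement(x):
    """Step 2 (placeholder): a trained neural network mapping the classical
    reconstruction toward the solution manifold seen in training.
    Here the identity stands in for, e.g., a trained U-Net."""
    return x

def data_consistency_correction(A, y, x, beta=1.0):
    """Illustrative data-consistency step: pull the refined estimate back
    toward solutions explaining the data y via one regularized
    least-squares correction of the residual y - Ax."""
    n = A.shape[1]
    correction = np.linalg.solve(A.T @ A + beta * np.eye(n), A.T @ (y - A @ x))
    return x + correction

# Toy usage with a random linear forward operator and noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(40)

x0 = classical_reconstruction(A, y)          # classical regularized solution
x1 = network_refinement(x0)                  # learned post-processing
x2 = data_consistency_correction(A, y, x1)   # enforce consistency with y
print("residual after data-consistency step:", np.linalg.norm(A @ x2 - y))
```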

Original language: English
Pages (from-to): 203-209
Number of pages: 7
Journal: Inverse Problems and Imaging
Volume: 17
Issue number: 1
Early online date: Jul 2022
DOIs
Publication status: Published - Feb 2023

Keywords

  • NLA
  • convergence rates
  • data-consistency
  • neural networks
  • nonlinear inverse problems
  • regularization
  • deep learning
