Abstract
Data-assisted reconstruction algorithms, incorporating trained neural networks, are a novel paradigm for solving inverse problems. One approach is to first apply a classical reconstruction method and then apply a neural network to improve its solution. Empirical evidence shows that such plain two-step methods provide high-quality reconstructions, but they lack the convergence analysis known for classical regularization methods. In this paper we formalize the use of such two-step approaches within classical regularization theory. We propose data-consistent neural networks that can be combined with classical regularization methods. This yields a data-driven regularization method for which we provide a convergence analysis with respect to noise. Numerical simulations show that, compared to standard two-step deep learning methods, our approach provides better stability with respect to out-of-distribution examples in the test set, while performing similarly on test data drawn from the distribution of the training set. Our method provides a stable solution approach to inverse problems that beneficially combines the known nonlinear forward model with information about the desired solution manifold available in training data.
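The two-step paradigm described above can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes a linear forward operator, uses Tikhonov regularization for the classical first step, and substitutes a simple smoothing filter for the trained network in the second step.

```python
import numpy as np

def tikhonov_reconstruct(A, y, alpha):
    """Step 1: classical Tikhonov-regularized solution of A x = y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def learned_correction(x):
    """Step 2: placeholder for a trained network; here a moving average."""
    kernel = np.ones(3) / 3.0
    return np.convolve(x, kernel, mode="same")

def two_step_reconstruct(A, y, alpha=1e-2):
    """Plain two-step reconstruction: classical inverse, then correction."""
    return learned_correction(tikhonov_reconstruct(A, y, alpha))

# Toy usage: a simple blurring forward operator with noisy data.
rng = np.random.default_rng(0)
n = 32
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_rec = two_step_reconstruct(A, y)
```

The data-consistency idea of the paper would constrain the second step so that the corrected solution still (approximately) reproduces the measured data; the plain post-processing above does not enforce this.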
Original language | English |
---|---|
Pages (from-to) | 203-209 |
Number of pages | 7 |
Journal | Inverse Problems and Imaging |
Volume | 17 |
Issue number | 1 |
Early online date | Jul 2022 |
DOIs | |
Publication status | Published - Feb 2023 |
Keywords
- NLA
- convergence rates
- data-consistency
- neural networks
- nonlinear inverse problems
- regularization
- deep learning