Data-driven reconstruction methods for photoacoustic tomography: Learning structures by structured learning

Yoeri Ewald Boink

Research output: Thesis › PhD Thesis - Research UT, graduation UT


Abstract

Photoacoustic tomography (PAT) is an imaging technique with potential applications in various fields of biomedicine. By visualising vascular structures, PAT could help in the detection and diagnosis of diseases related to their dysregulation. In PAT, tissue is illuminated by light. After entering the tissue, the light undergoes scattering and absorption. The absorbed energy is converted into an initial pressure distribution by the photoacoustic effect; the resulting pressure wave travels to ultrasound detectors outside the tissue.

This thesis is concerned with the inverse problem of the described physical process: what was the initial pressure in the tissue that gave rise to the pressure detected outside? The answer to this question is difficult to obtain when light penetration in tissue is insufficient, the measurements are corrupted, or only a small number of detectors can be used in a limited geometry. For decades, the field of variational methods has produced new approaches to these kinds of problems: the combination of new theory and clever algorithms has led to improved numerical results in many image reconstruction problems. In the past five years, previously state-of-the-art results were greatly surpassed by combining variational methods with artificial neural networks, a form of artificial intelligence.
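For orientation, a generic variational formulation of such a reconstruction problem reads as follows; the notation (A the forward operator, y the detected data, R a hand-crafted regulariser with weight α > 0) is chosen here for illustration and is not taken from the thesis:

```latex
\hat{p}_0 \in \operatorname*{arg\,min}_{p_0} \; \tfrac{1}{2}\,\lVert A p_0 - y \rVert_2^2 + \alpha\, R(p_0)
```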

In this thesis, we investigate several ways of combining data-driven artificial neural networks with model-driven variational methods, bringing together the topics of photoacoustic tomography, inverse problems and artificial neural networks.

Chapter 3 treats the variational problem in PAT and provides a framework in which hand-crafted regularisers can easily be compared. Both directional and higher-order total variation methods show improved results over direct methods for PAT with structures resembling vasculature.
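As a small, self-contained illustration of total-variation regularisation (a random matrix stands in for the PAT forward operator, and plain gradient descent replaces the more sophisticated solvers used in the chapter; all sizes and step sizes are arbitrary choices for this sketch):

```python
# Minimal sketch of total-variation-regularised reconstruction, assuming a
# generic linear forward operator (a random matrix stands in for the PAT
# operator) and a smoothed isotropic TV penalty; all sizes are arbitrary.
import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed isotropic total-variation penalty on a 2-D image."""
    dx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences, rows
    dy = np.diff(x, axis=1, append=x[:, -1:])   # forward differences, columns
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # negative divergence of the normalised gradient field
    return -(np.diff(px, axis=0, prepend=px[:1, :])
             + np.diff(py, axis=1, prepend=py[:, :1]))

rng = np.random.default_rng(0)
n = 16
A = rng.normal(size=(200, n * n))                    # stand-in forward operator
x_true = np.zeros((n, n)); x_true[4:12, 7:9] = 1.0   # a thin "vessel"
y = A @ x_true.ravel() + 0.01 * rng.normal(size=200)

alpha, step = 0.1, 1e-4                              # regularisation weight, step size
x = np.zeros((n, n))
for _ in range(500):                                 # plain gradient descent
    data_grad = (A.T @ (A @ x.ravel() - y)).reshape(n, n)
    x -= step * (data_grad + alpha * tv_grad(x))
```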

Chapter 4 provides a method to jointly solve the PAT reconstruction and segmentation problem for absorbing structures resembling vasculature. Artificial neural networks are embedded in the algorithmic structure of primal-dual methods, which are a popular way to solve variational problems. It is shown that a diverse training set is of utmost importance to solve multiple problems with one learned algorithm.
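As a rough illustration of how a network can be embedded in a primal-dual iteration, here is a minimal PyTorch sketch in the spirit of learned primal-dual schemes; the channel counts, layer widths, and the assumption that the hypothetical operators A and At map single-channel tensors between image and measurement grids of matching shape are illustrative choices, not the thesis's exact architecture:

```python
# Sketch of one block of a learned primal-dual scheme; several such blocks
# would be unrolled and trained end-to-end on reconstruction (and, for the
# joint task, segmentation) losses. Sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class PrimalDualBlock(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        def small_cnn(c_in):
            return nn.Sequential(
                nn.Conv2d(c_in, 32, 3, padding=1), nn.PReLU(),
                nn.Conv2d(32, channels, 3, padding=1))
        self.dual_net = small_cnn(channels + 2)    # dual memory, A(x), data y
        self.primal_net = small_cnn(channels + 1)  # primal memory, At(h)

    def forward(self, x, h, y, A, At):
        # learned dual update, replacing the hand-crafted proximal step
        h = h + self.dual_net(torch.cat([h, A(x[:, :1]), y], dim=1))
        # learned primal update, replacing the hand-crafted gradient step
        x = x + self.primal_net(torch.cat([x, At(h[:, :1])], dim=1))
        return x, h
```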

Chapter 5 provides a convergence analysis for data-consistent networks, which combine classical regularisation methods with artificial neural networks. Numerical results are shown for an inverse problem that couples the Radon transform with a saturation problem for biomedical images.
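As a toy illustration of the data-consistency idea, the sketch below removes the part of a network output that contradicts the measurements; the pseudo-inverse is a stand-in for the regularised classical inversion analysed in the chapter, and the function name is hypothetical:

```python
# Toy data-consistency correction: the component of the network output that
# is inconsistent with the measurements y is projected out. A pseudo-inverse
# stands in for the regularised classical method used in the chapter.
import numpy as np

def data_consistent(x_net, A, y):
    """Correct a network output so that it (approximately) explains the data y."""
    residual = A @ x_net - y                       # mismatch with the measurements
    return x_net - np.linalg.pinv(A) @ residual    # least-squares correction
```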

Chapter 6 explores the idea of fully-learned reconstruction by connecting two nonlinear autoencoders. By enforcing a dimensionality reduction in the artificial neural network, a joint manifold for measurements and images is learned. The method, coined learned SVD, provides advantages over other fully-learned methods in terms of interpretability and generalisation. Numerical results show high-quality reconstructions, even in the case where no information on the forward process is used.
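A hedged sketch of the coupled-autoencoder construction behind this idea: one autoencoder on measurements and one on images share a low-dimensional latent code, so that encoding a measurement and decoding with the image decoder yields a reconstruction without an explicit forward model. All dimensions and loss terms below are illustrative assumptions, not the thesis's exact setup:

```python
# Two autoencoders tied through a shared low-dimensional latent space;
# sizes (400-dim measurements, 256-dim images, 32-dim latent) are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent = 32
enc_y = nn.Sequential(nn.Linear(400, 128), nn.ReLU(), nn.Linear(128, latent))
dec_y = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, 400))
enc_x = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, latent))
dec_x = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, 256))

def loss(y, x):
    zy, zx = enc_y(y), enc_x(x)
    return (F.mse_loss(dec_y(zy), y)     # measurement autoencoder
            + F.mse_loss(dec_x(zx), x)   # image autoencoder
            + F.mse_loss(zy, zx)         # shared (joint) latent manifold
            + F.mse_loss(dec_x(zy), x))  # reconstruction: measurement -> image

# at inference, a reconstruction needs no forward model:
reconstruct = lambda y: dec_x(enc_y(y))
```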

In this thesis, several ways of combining model-based methods with data-driven artificial neural networks were investigated. The resulting hybrid methods showed improved tomography reconstructions. By allowing data to improve a structured method, deeper vascular structures could be imaged with photoacoustic tomography.
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution:
  • University of Twente
Supervisors/Advisors:
  • Brune, Christoph, Supervisor
  • Manohar, Srirang, Supervisor
Award date: 5 Feb 2021
Place of Publication: Enschede
Print ISBNs: 978-90-365-5087-1
Publication status: Published - 5 Feb 2021
