TY - UNPB
T1 - Dynamic Depth-Supervised NeRF for Multi-View RGB-D Operating Room Images
AU - Gerats, Beerend G. A.
AU - Wolterink, Jelmer M.
AU - Broeders, Ivo A. M. J.
N1 - Accepted to the Workshop on Ambient Intelligence for Healthcare 2023
PY - 2022/11/22
AB - The operating room (OR) is an environment of interest for the development of sensing systems, enabling the detection of people, objects, and their semantic relations. Due to frequent occlusions in the OR, these systems often rely on input from multiple cameras. While increasing the number of cameras generally improves algorithm performance, there are hard limits on the number and placement of cameras in the OR. Neural Radiance Fields (NeRF) can be used to render synthetic views from arbitrary camera positions, virtually enlarging the number of cameras in the dataset. In this work, we explore the use of NeRF for view synthesis of dynamic scenes in the OR, and we show that regularisation with depth supervision from RGB-D sensor data results in higher image quality. We optimise a dynamic depth-supervised NeRF with up to six synchronised cameras that capture the surgical field in five distinct phases before and during a knee replacement surgery. We qualitatively inspect views rendered by a virtual camera that moves 180 degrees around the surgical field at varying points in time. Quantitatively, we evaluate view synthesis from an unseen camera position in terms of PSNR, SSIM and LPIPS for the colour channels, and in terms of MAE and error percentage for the estimated depth. We find that NeRFs can be used to generate geometrically consistent views, including from interpolated camera positions and at interpolated points in time. Views are generated from an unseen camera pose with an average PSNR of 18.2 and a depth estimation error of 2.0%. Our results show the potential of a dynamic NeRF for view synthesis in the OR and stress the relevance of depth supervision in a clinical setting.
KW - cs.CV
KW - I.4.5; I.4.9; I.4.10
DO - 10.48550/arXiv.2211.12436
M3 - Preprint
PB - arXiv.org
ER -