TY - JOUR
T1 - NeRF-OR: neural radiance fields for operating room scene reconstruction from sparse-view RGB-D videos
T2 - International Journal of Computer Assisted Radiology and Surgery
AU - Gerats, Beerend G.A.
AU - Wolterink, Jelmer M.
AU - Broeders, Ivo A.M.J.
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/9/13
Y1 - 2024/9/13
AB - Purpose: RGB-D cameras in the operating room (OR) provide synchronized views of complex surgical scenes. Assimilation of this multi-view data into a unified representation allows for downstream tasks such as object detection and tracking, pose estimation, and action recognition. Neural radiance fields (NeRFs) can provide continuous representations of complex scenes with a limited memory footprint. However, existing NeRF methods perform poorly in real-world OR settings, where a small set of cameras captures the room from entirely different vantage points. In this work, we propose NeRF-OR, a method for 3D reconstruction of dynamic surgical scenes in the OR. Methods: Where other methods for sparse-view datasets use either time-of-flight sensor depth or dense depth estimated from color images, NeRF-OR uses a combination of both. The depth estimations mitigate the missing values that occur in sensor depth images due to reflective materials and object boundaries. We propose to supervise with surface normals calculated from the estimated depths, because these are largely scale invariant. Results: We fit NeRF-OR to static surgical scenes in the 4D-OR dataset and show that its representations are geometrically accurate, where the state of the art collapses to sub-optimal solutions. Compared to earlier work, NeRF-OR captures fine scene details while training 30× faster. Additionally, NeRF-OR can capture whole-surgery videos while synthesizing views at intermediate time values with an average PSNR of 24.86 dB. Lastly, we find that our approach has merit in sparse-view settings beyond those in the OR, by benchmarking on the NVS-RGBD dataset, which contains as few as three training views. NeRF-OR synthesizes images with a PSNR of 26.72 dB, a 1.7% improvement over the state of the art. Conclusion: Our results show that NeRF-OR allows for novel view synthesis with videos captured by a small number of cameras with entirely different vantage points, which is the typical camera setting in the OR. Code is available at github.com/Beerend/NeRF-OR.
KW - UT-Hybrid-D
KW - Dense depth estimation
KW - Neural radiance fields
KW - Operating room videos
KW - RGB-D imaging
KW - 3D scene reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85204153847&partnerID=8YFLogxK
U2 - 10.1007/s11548-024-03261-5
DO - 10.1007/s11548-024-03261-5
M3 - Article
AN - SCOPUS:85204153847
SN - 1861-6410
JO - International Journal of Computer Assisted Radiology and Surgery
JF - International Journal of Computer Assisted Radiology and Surgery
ER -
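
A minimal sketch may make the surface-normal supervision described in the abstract concrete: normals are derived from a dense depth map by back-projecting pixels with pinhole intrinsics and taking the cross product of the image-axis tangents. This is an illustration under assumed conventions only, not the NeRF-OR implementation; the function name depth_to_normals and the intrinsics fx, fy, cx, cy are hypothetical.

import numpy as np

def depth_to_normals(depth, fx, fy, cx, cy):
    # Illustrative sketch, not the authors' code.
    # depth: (H, W) array of depth values; fx, fy, cx, cy: pinhole intrinsics.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project every pixel to a 3D point in camera coordinates.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)      # (H, W, 3)
    # Tangent vectors along the image axes via finite differences.
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)
    # Surface normal is the normalised cross product of the tangents.
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
    return n

Because uniformly scaling the depth map scales every back-projected point, and hence both tangents, by the same factor, the normalised cross product is unchanged. This is the scale-invariance property that motivates supervising with normals computed from monocular depth estimates, whose absolute scale is unreliable.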