Monocular Endoscopic Depth Estimation and 3D Reconstruction Fusing Anatomical Priors and NeRF
DOI: https://doi.org/10.71465/fht687
Keywords: Neural Radiance Fields, Monocular Depth Estimation, Endoscopic Reconstruction, Anatomical Priors
Abstract
Minimally invasive surgery has fundamentally altered the landscape of modern medicine, yet the reliance on monocular endoscopic feeds presents a persistent challenge: the loss of depth perception. This limitation forces surgeons to infer three-dimensional geometric structures from two-dimensional projections, increasing cognitive load and the risk of procedural error. While recent advancements in computer vision have introduced deep learning techniques for depth estimation, the specific domain of endoscopy poses unique difficulties, including texture scarcity, specular reflections, and complex deformable topology. This paper introduces a novel framework that integrates Neural Radiance Fields (NeRF) with domain-specific anatomical priors to achieve robust dense depth estimation and high-fidelity 3D reconstruction from monocular endoscopic video sequences. By leveraging the implicit continuous representation capabilities of NeRF, we overcome the discretization errors inherent in traditional voxel-based methods. Furthermore, we constrain the optimization process using geometric priors derived from the tubular and cavity-like structures typical of the gastrointestinal tract, thereby regularizing the solution space in ill-posed regions. We present a comprehensive evaluation of our method against state-of-the-art self-supervised learning approaches. Our results demonstrate that fusing anatomical priors with neural implicit representations significantly improves depth consistency and reconstruction accuracy, offering a promising pathway toward real-time intraoperative surgical navigation.
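To make the two ingredients named in the abstract concrete, the sketch below illustrates (a) how an expected ray-termination depth is obtained from a radiance field via standard NeRF volume-rendering weights, and (b) how a tubular anatomical prior can be expressed as a regularization term that penalizes reconstructed points whose radial distance from an assumed lumen centreline departs from a nominal radius. This is a minimal illustration under those assumptions, not the authors' implementation; the function names (render_depth, tubular_prior_loss), the cylindrical form of the prior, and all constants are hypothetical.

```python
import numpy as np

def render_depth(sigmas, t_vals):
    """Expected depth along one ray, using standard NeRF volume-rendering weights.

    sigmas: (N,) predicted volume densities at the sampled points.
    t_vals: (N,) sample distances along the ray, monotonically increasing.
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)          # spacing between consecutive samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                     # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))  # accumulated transmittance
    weights = alphas * trans                                    # rendering weights along the ray
    return np.sum(weights * t_vals)                             # expected ray-termination depth

def tubular_prior_loss(points, axis_origin, axis_dir, radius):
    """Illustrative anatomical-prior term (hypothetical): penalize reconstructed
    points whose distance from an assumed luminal centreline deviates from a
    nominal lumen radius, mimicking a tubular gastrointestinal segment.

    points: (M, 3) reconstructed 3D surface points.
    axis_origin, axis_dir: a point on and direction of the assumed centreline.
    radius: expected lumen radius for this segment.
    """
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_origin
    along = rel @ axis_dir                                      # projection onto the centreline axis
    radial = rel - np.outer(along, axis_dir)                    # component perpendicular to the axis
    dist = np.linalg.norm(radial, axis=1)                       # radial distance of each point
    return np.mean((dist - radius) ** 2)                        # squared deviation from the prior radius

# Toy usage with made-up numbers: one ray and a cloud of points near a unit-radius tube.
sigmas = np.array([0.1, 0.5, 2.0, 5.0, 0.2])
t_vals = np.linspace(0.5, 2.5, 5)
print("expected depth:", render_depth(sigmas, t_vals))

pts = np.random.randn(200, 3) * 0.05 + np.array([0.0, 1.0, 0.0])
print("prior loss:", tubular_prior_loss(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0))
```

In a full pipeline such a prior term would be added, with some weight, to the photometric rendering loss so that poorly constrained (textureless or specular) regions are pulled toward anatomically plausible geometry; the specific weighting and prior parameterization used by the paper are not given in the abstract.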
License
Copyright (c) 2026 Thomas J. Reynolds, Christopher D. Evans, Martha L. Hughes (Author)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.