27.02.2024 | Original Article
Neural radiance fields-based multi-view endoscopic scene reconstruction for surgical simulation
Authors:
Zhibao Qin, Kai Qian, Shaojun Liang, Qinhong Zheng, Jun Peng, Yonghang Tai
Published in:
International Journal of Computer Assisted Radiology and Surgery, Issue 5/2024
Abstract
Purpose
In virtual surgery, the appearance of 3D models constructed from CT images lacks realism, which can mislead residents. It is therefore crucial to reconstruct realistic endoscopic scenes from multi-view images captured by an endoscope.
Methods
We propose an Endoscope-NeRF network for implicit radiance field reconstruction of endoscopic scenes under a non-fixed light source, and synthesize novel views using volume rendering. Endoscope-NeRF, built from multiple MLP networks and a ray transformer network, represents an endoscopic scene as an implicit field function that maps a continuous 5D vector (3D position and 2D viewing direction) to color and volume density. The final synthesized image is obtained by aggregating all sampling points along each ray of the target camera using volume rendering. Our method additionally accounts for the effect of the distance from the light source to each sampling point on the scene radiance.
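The per-ray aggregation described here follows the standard NeRF volume-rendering rule. The sketch below is our illustration of that compositing step, not the authors' code; the inverse-square headlight attenuation in `attenuate` is our assumption about how source-to-point distance could modulate radiance under a light source mounted on the endoscope.

```python
import numpy as np

def volume_render(colors, sigmas, t_vals):
    """Composite per-sample colors along one camera ray
    (standard NeRF alpha compositing; illustrative sketch).

    colors: (N, 3) RGB predicted at each sample point
    sigmas: (N,)   volume density at each sample point
    t_vals: (N,)   sample depths along the ray, increasing
    """
    # spacing between consecutive samples (last interval treated as open-ended)
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    # per-sample opacity from density and interval length
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

def attenuate(colors, points, light_pos):
    """Scale sample radiance by inverse-square distance to the light
    source (our assumed form of the distance-dependent term)."""
    d2 = np.sum((points - light_pos) ** 2, axis=-1, keepdims=True)
    return colors / np.maximum(d2, 1e-6)
```

A denser medium (larger `sigmas`) drives the per-sample alphas toward 1, so the composited color is dominated by the first samples the ray hits, which is what lets the field represent opaque tissue surfaces.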
Results
Our network is validated on lung, liver, kidney, and heart specimens from a pig, collected with our device. The results show that the novel views of endoscopic scenes synthesized by our method outperform existing methods (NeRF and IBRNet) in terms of PSNR, SSIM, and LPIPS.
Conclusion
Our network can effectively learn a radiance field function with generalization ability. Fine-tuning the pre-trained model on a new endoscopic scene further optimizes that scene's neural radiance field, providing more realistic, high-resolution rendered images for surgical simulation.