Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS.
New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures.
Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as effectively as efficiency metrics did. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises.
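To make the three camera metrics concrete, the following is an illustrative sketch, not the authors' actual implementation: it assumes each camera adjustment is logged as a (start, end) timestamp pair, and defines frequency as movements per minute, duration as the mean length of a movement, and interval as the mean gap between consecutive movements. The function name and event format are hypothetical.

```python
def camera_metrics(events, task_duration):
    """Compute illustrative camera metrics from hypothetical event logs.

    events: list of (start, end) times of camera movements, in seconds.
    task_duration: total exercise time in seconds.
    Returns (frequency in movements/min, mean duration in s, mean interval in s).
    """
    if not events:
        return 0.0, 0.0, 0.0
    # Frequency: number of camera movements per minute of task time
    frequency = len(events) / (task_duration / 60.0)
    # Duration: average length of a single camera movement
    mean_duration = sum(end - start for start, end in events) / len(events)
    # Interval: average gap between the end of one movement and the start of the next
    gaps = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    mean_interval = sum(gaps) / len(gaps) if gaps else 0.0
    return frequency, mean_duration, mean_interval

# Example: three camera movements during a 120 s exercise
print(camera_metrics([(5, 7), (40, 43), (90, 91)], 120.0))  # → (1.5, 2.0, 40.0)
```

Under these assumed definitions, more proficient surgeons would be expected to show higher frequency and shorter intervals, consistent with the correlations reported above.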
We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
Viewpoint matters: objective performance metrics for surgeon endoscope control during robot-assisted surgery
Anthony M. Jarc, Myriam J. Curet
Springer US