Published in: International Journal of Computer Assisted Radiology and Surgery | Issue 12/2023

30.04.2023 | Original Article

Evaluation of single-stage vision models for pose estimation of surgical instruments

Authors: William Burton, Casey Myers, Matthew Rutherford, Paul Rullkoetter


Abstract

Purpose

Multiple applications in open surgical environments may benefit from the adoption of markerless computer vision, depending on the associated speed and accuracy requirements. The current work evaluates vision models for 6-degree-of-freedom (6-DoF) pose estimation of surgical instruments in RGB scenes. Potential use cases are discussed based on observed performance.

Methods

Convolutional neural networks (CNNs) were trained with simulated data for 6-DoF pose estimation of a representative surgical instrument in RGB scenes. Trained models were evaluated on both simulated and real-world scenes. Real-world scenes were produced by using a robotic manipulator to procedurally generate a wide range of object poses.
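
The abstract does not specify the network architectures or orientation formats that were evaluated. As a rough illustration of the single-stage approach, the following sketch assumes a ResNet-18 backbone that directly regresses a camera-frame translation and a unit-quaternion orientation from an RGB frame; all architectural choices here are assumptions for illustration, not the authors' models.

# Minimal single-stage 6-DoF pose regressor (PyTorch); a sketch assuming
# a ResNet-18 backbone and a quaternion orientation head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()           # keep the 512-d global feature
        self.backbone = backbone
        self.translation = nn.Linear(512, 3)  # (x, y, z) in the camera frame
        self.orientation = nn.Linear(512, 4)  # quaternion (w, x, y, z)

    def forward(self, rgb):
        features = self.backbone(rgb)
        t = self.translation(features)
        q = self.orientation(features)
        q = q / q.norm(dim=-1, keepdim=True)  # normalize to a valid rotation
        return t, q

model = PoseNet().eval()
with torch.no_grad():
    t, q = model(torch.randn(1, 3, 224, 224))  # one dummy RGB frame
print(t.shape, q.shape)  # torch.Size([1, 3]) torch.Size([1, 4])

The orientation output format is one of the design choices the results below report sensitivity to; common alternatives to quaternions include axis-angle and continuous 6D rotation representations.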

Results

CNNs trained in simulation transferred to real-world evaluation scenes with a mild decrease in pose accuracy. Model performance was sensitive to input image resolution and orientation prediction format. The model with highest accuracy demonstrated mean in-plane translation error of 13 mm and mean long axis orientation error of 5° in simulated evaluation scenes. Corresponding errors of 29 mm and 8° were observed in real-world scenes.
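
The abstract does not define these metrics precisely. A plausible reading, sketched below, takes in-plane translation error as the Euclidean distance between the x-y components of the predicted and ground-truth translations, and long-axis orientation error as the angle between the instrument's long axis (assumed here to be its local z-axis) under the predicted and ground-truth rotations.

# Sketch of the two reported error metrics under assumed definitions.
import numpy as np

def in_plane_translation_error(t_pred, t_true):
    # Distance in the camera's x-y plane, in the units of t (e.g., mm).
    return np.linalg.norm(t_pred[:2] - t_true[:2])

def long_axis_orientation_error(R_pred, R_true, axis=np.array([0.0, 0.0, 1.0])):
    # Angle in degrees between the long axis under the two rotations.
    cos_angle = np.clip((R_pred @ axis) @ (R_true @ axis), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Example: a 10 mm in-plane offset and a 5-degree rotation about x.
t_pred, t_true = np.array([10.0, 0.0, 500.0]), np.array([0.0, 0.0, 500.0])
th = np.radians(5.0)
R_pred = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(th), -np.sin(th)],
                   [0.0, np.sin(th),  np.cos(th)]])
print(in_plane_translation_error(t_pred, t_true))      # 10.0
print(long_axis_orientation_error(R_pred, np.eye(3)))  # ~5.0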

Conclusion

6-DoF pose estimators can predict object pose in RGB scenes with real-time inference speed. Observed pose accuracy suggests that applications such as coarse-grained guidance, surgical skill evaluation, or instrument tracking for tray optimization may benefit from markerless pose estimation.
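
For context on the real-time claim, per-frame latency can be checked by timing repeated forward passes after a warm-up; the snippet below uses a ResNet-18 backbone as a stand-in model and a 224×224 input, both assumptions rather than the paper's configuration.

# Rough per-frame latency measurement for a stand-in backbone.
import time
import torch
from torchvision.models import resnet18

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet18(weights=None).to(device).eval()
x = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    for _ in range(10):              # warm-up
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # wait for queued GPU work
    start = time.perf_counter()
    n = 100
    for _ in range(n):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / n * 1000:.1f} ms per frame")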
Metadata
Title
Evaluation of single-stage vision models for pose estimation of surgical instruments
Authors
William Burton
Casey Myers
Matthew Rutherford
Paul Rullkoetter
Publication date
30.04.2023
Publisher
Springer International Publishing
Published in
International Journal of Computer Assisted Radiology and Surgery / Issue 12/2023
Print ISSN: 1861-6410
Electronic ISSN: 1861-6429
DOI
https://doi.org/10.1007/s11548-023-02890-6
