Published in: International Journal of Computer Assisted Radiology and Surgery 6/2019

09.04.2019 | Original Article

Prediction of laparoscopic procedure duration using unlabeled, multimodal sensor data

Authors: Sebastian Bodenstedt, Martin Wagner, Lars Mündermann, Hannes Kenngott, Beat Müller-Stich, Michael Breucha, Sören Torge Mees, Jürgen Weitz, Stefanie Speidel



Abstract

Purpose

The course of surgical procedures is often unpredictable, making it difficult to estimate their duration beforehand. This uncertainty makes scheduling surgical procedures a difficult task. A context-aware method that analyzes the workflow of an intervention online and automatically predicts the remaining duration would alleviate these problems. As a basis for such an estimate, information about the current state of the intervention is required.

Methods

Today, the operating room contains a diverse range of sensors. During laparoscopic interventions, the endoscopic video stream is an ideal source of such information. Extracting quantitative information from the video is challenging, however, due to its high dimensionality. Other surgical devices (e.g., insufflator, lights) provide data streams that are, in contrast to the video stream, more compact and easier to quantify, though whether such streams offer sufficient information for estimating the duration of surgery is uncertain. In this paper, we propose and compare methods, based on convolutional neural networks, for continuously predicting the duration of laparoscopic interventions from unlabeled data, such as endoscopic video and surgical device streams.
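The full architecture is described in the paper itself; as a minimal illustration of the online prediction task (not the authors' specific method), one common formulation has a model predict the procedure's progress p ∈ (0, 1] at each time step and converts that into a remaining-duration estimate. The function below is a hypothetical sketch under that assumption:

```python
def remaining_duration(elapsed_s: float, progress: float) -> float:
    """Convert a predicted progress value into an estimate of the
    remaining procedure duration, in seconds.

    If a model estimates that a fraction `progress` of the procedure
    has elapsed after `elapsed_s` seconds, the implied total duration
    is elapsed_s / progress; the remainder is the difference.
    """
    if not 0.0 < progress <= 1.0:
        raise ValueError("progress must be in (0, 1]")
    return elapsed_s / progress - elapsed_s
```

For example, 30 minutes into a procedure with a predicted progress of 0.4, the implied total duration is 75 minutes, so roughly 45 minutes remain. The model's job is then to estimate progress online from the sensor streams.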

Results

The methods are evaluated on 80 recorded laparoscopic interventions of various types, for which surgical device data and the endoscopic video streams are available. The combined method performs best, with an overall average error of 37% and an average halftime error of approximately 28%.
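The precise error definitions are given in the full text; a plausible reading (an assumption here, not taken from the abstract) is that the average error is the mean absolute prediction error over all time points, normalized by the procedure's true total duration, and that the halftime error is the same quantity evaluated at the procedure's midpoint. Under that assumption, the metrics could be computed as:

```python
def relative_errors(predicted_remaining, true_remaining, total_duration):
    """Absolute prediction error at each time step, expressed as a
    fraction of the procedure's true total duration."""
    return [abs(p - t) / total_duration
            for p, t in zip(predicted_remaining, true_remaining)]

def average_error(predicted_remaining, true_remaining, total_duration):
    """Mean relative error over the whole procedure."""
    errs = relative_errors(predicted_remaining, true_remaining, total_duration)
    return sum(errs) / len(errs)

def halftime_error(predicted_remaining, true_remaining, total_duration):
    """Relative error at the procedure's halfway point."""
    mid = len(true_remaining) // 2
    return abs(predicted_remaining[mid] - true_remaining[mid]) / total_duration
```

A lower halftime error than overall average error, as reported above, would be consistent with predictions becoming more reliable once the first half of the procedure has been observed.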

Conclusion

In this paper, we present, to our knowledge, the first approach for online procedure duration prediction using unlabeled endoscopic video data and surgical device data in a laparoscopic setting. Furthermore, we show that a method incorporating both vision and device data performs better than methods based only on vision, while methods only based on tool usage and surgical device data perform poorly, showing the importance of the visual channel.
Metadata
Title
Prediction of laparoscopic procedure duration using unlabeled, multimodal sensor data
Authors
Sebastian Bodenstedt
Martin Wagner
Lars Mündermann
Hannes Kenngott
Beat Müller-Stich
Michael Breucha
Sören Torge Mees
Jürgen Weitz
Stefanie Speidel
Publication date
09.04.2019
Publisher
Springer International Publishing
Published in
International Journal of Computer Assisted Radiology and Surgery / Issue 6/2019
Print ISSN: 1861-6410
Electronic ISSN: 1861-6429
DOI
https://doi.org/10.1007/s11548-019-01966-6
