Published in: International Journal of Computer Assisted Radiology and Surgery 7/2019

04.06.2019 | Original Article

EasyLabels: weak labels for scene segmentation in laparoscopic videos

Authors: Félix Fuentes-Hurtado, Abdolrahim Kadkhodamohammadi, Evangello Flouty, Santiago Barbarisi, Imanol Luengo, Danail Stoyanov



Abstract

Purpose

We present an alternative approach for weakly annotating laparoscopic images for segmentation, and experimentally show that, when trained with partial cross-entropy, its accuracy is close to that obtained with fully supervised approaches.

Methods

We propose an approach that relies on weak annotations, provided as stripes drawn over the different objects in the image, combined with partial cross-entropy as the loss function of a fully convolutional neural network, to obtain a dense pixel-level prediction map.
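The idea behind partial cross-entropy can be illustrated with a short sketch. This is not the authors' implementation; the function name, the NumPy tensor layout, and the ignore-index convention for unlabeled pixels are assumptions. The key property is that only pixels covered by a weak annotation (stripe) contribute to the loss; all other pixels are ignored.

```python
import numpy as np

def partial_cross_entropy(probs, labels, ignore_index=-1):
    """Cross-entropy averaged only over annotated pixels.

    probs:  (H, W, C) per-pixel softmax probabilities.
    labels: (H, W) integer class ids; ignore_index marks unlabeled
            pixels, which contribute nothing to the loss.
    """
    mask = labels != ignore_index
    if not mask.any():
        return 0.0
    annotated = probs[mask]                  # (N, C) annotated pixels only
    true_class = labels[mask]                # (N,) their class ids
    picked = annotated[np.arange(len(true_class)), true_class]
    return float(-np.log(np.clip(picked, 1e-12, None)).mean())
```

When every pixel is labeled, this reduces to the standard per-pixel cross-entropy, which is why training with stripe annotations can approach the fully supervised result as annotation density grows.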

Results

We validate our method on three different datasets, providing qualitative results for all of them and quantitative results for two. The experiments show that our approach obtains at least 90% of the accuracy of fully supervised methods on all tested datasets, while requiring ~13× less time to create the annotations.

Conclusions

With this work, we demonstrate that laparoscopic data can be segmented using very little annotated data while maintaining levels of accuracy comparable to those obtained with full supervision.
Footnotes
1
For simplicity, although existing approaches use points, bounding boxes, or scribbles, we will refer to all of them simply as "scribbles."
 
2
Please note that the skeleton is computed as the ridge of the distance transform.
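The construction in footnote 2 (the skeleton as the ridge of the distance transform) can be sketched as follows. This is a hypothetical, brute-force illustration, not the authors' tooling: the distance transform is computed exhaustively, and the ridge is approximated by axis-aligned local maxima.

```python
import numpy as np

def distance_transform(mask):
    """Euclidean distance from each foreground pixel to the nearest
    background pixel (brute force; adequate for small masks)."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    dist = np.zeros(mask.shape, dtype=float)
    for r, c in fg:
        dist[r, c] = np.sqrt(((bg - [r, c]) ** 2).sum(axis=1)).min()
    return dist

def ridge_skeleton(mask):
    """Approximate the skeleton as pixels that are local maxima of the
    distance transform along rows or columns."""
    dist = distance_transform(mask)
    h, w = dist.shape
    skel = np.zeros(mask.shape, dtype=bool)
    for r in range(h):
        for c in range(w):
            if dist[r, c] == 0:
                continue
            row_max = (dist[r, c] >= dist[r, max(c - 1, 0)]
                       and dist[r, c] >= dist[r, min(c + 1, w - 1)])
            col_max = (dist[r, c] >= dist[max(r - 1, 0), c]
                       and dist[r, c] >= dist[min(r + 1, h - 1), c])
            skel[r, c] = row_max or col_max
    return skel
```

Because the comparison uses >=, plateau pixels are kept as well, so the result is thicker than a one-pixel skeleton; a practical implementation would follow with a morphological thinning step.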
 
Metadata
Title
EasyLabels: weak labels for scene segmentation in laparoscopic videos
Authors
Félix Fuentes-Hurtado
Abdolrahim Kadkhodamohammadi
Evangello Flouty
Santiago Barbarisi
Imanol Luengo
Danail Stoyanov
Publication date
04.06.2019
Publisher
Springer International Publishing
Published in
International Journal of Computer Assisted Radiology and Surgery / Issue 7/2019
Print ISSN: 1861-6410
Electronic ISSN: 1861-6429
DOI
https://doi.org/10.1007/s11548-019-02003-2
