Published in: International Journal of Computer Assisted Radiology and Surgery 7/2019

02.05.2019 | Original Article

Extending pretrained segmentation networks with additional anatomical structures

Authors: Firat Ozdemir, Orcun Goksel


Abstract

Purpose

For comprehensive surgical planning with sophisticated patient-specific models, all relevant anatomical structures need to be segmented. This could be achieved using deep neural networks given sufficiently many annotated samples; however, datasets of multiple annotated structures are often unavailable in practice and costly to procure. Therefore, being able to build segmentation models with datasets from different studies and centers in an incremental fashion is highly desirable.

Methods

We propose a class-incremental framework for extending a deep segmentation network to new anatomical structures using a minimal incremental annotation set. By distilling knowledge from the current network state, we avoid the need for full retraining.
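For illustration, below is a minimal sketch of such a distillation term, assuming a PyTorch-style 2D segmentation network with one logit channel per class; the loss weight and temperature are placeholder values, not the authors' published settings.

```python
# A minimal sketch of distillation-based incremental training, not the authors'
# exact implementation. Assumptions: 2D logits of shape (B, C, H, W); `lam` and
# `T` are illustrative hyperparameters.
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, new_labels, n_old, lam=1.0, T=2.0):
    """new_logits: (B, n_old + n_new, H, W) from the extended network.
       old_logits: (B, n_old, H, W) from the frozen pretrained network.
       new_labels: (B, H, W) integer labels for the newly annotated structure(s)."""
    # Supervised term on the incremental annotations.
    ce = F.cross_entropy(new_logits, new_labels)

    # Distillation term: keep the old-class responses close to the frozen
    # network's softened predictions, so previously learned structures
    # are not forgotten.
    old_log_probs = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    old_targets = F.softmax(old_logits / T, dim=1)
    kd = F.kl_div(old_log_probs, old_targets, reduction="batchmean") * (T * T)

    return ce + lam * kd
```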

Results

We evaluate our methods on 100 MR volumes from the SKI10 challenge with varying incremental annotation ratios. For 50% incremental annotations, our proposed method suffers less than 1% Dice score loss in retaining old-class performance, as opposed to a 25% loss with conventional fine-tuning. Our framework inherently exploits transferable knowledge from previously trained structures for incremental tasks, as demonstrated by results superior even to non-incremental training: in a single-volume one-shot incremental learning setting, our method outperforms a vanilla network by >11% in Dice.
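For reference, the Dice overlap reported above is the standard per-structure measure 2|A∩B|/(|A|+|B|); a minimal sketch follows, with the exact evaluation protocol (e.g., per-volume averaging) left as an assumption.

```python
# A minimal sketch of the Dice score, for context only; not the authors' code.
import numpy as np

def dice_score(pred, gt):
    """pred, gt: binary masks (numpy arrays of the same shape) for one structure."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```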

Conclusions

With the presented method, new anatomical structures can be learned while retaining performance on older structures, without a major increase in complexity or memory footprint, making it suitable for lifelong class-incremental learning. By leveraging information from older examples, a fraction of the annotations can suffice for incrementally building comprehensive segmentation models. Since our meta-method extends a deep segmentation network with only a minor addition per structure, it is also applicable to future network architectures.
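As an illustration of what such a "minor addition per structure" could look like, the sketch below extends the final 1×1 output convolution of a pretrained network with extra channels for new classes while copying the old weights; the backbone and layer choice are assumptions for illustration, not the authors' exact architecture.

```python
# A minimal sketch of extending a segmentation head with new output channels.
# Assumption: the pretrained network ends in a 1x1 convolution mapping features
# to per-class logits.
import torch
import torch.nn as nn

def extend_output_layer(old_head: nn.Conv2d, n_new: int) -> nn.Conv2d:
    """old_head: final 1x1 convolution of the pretrained network."""
    new_head = nn.Conv2d(old_head.in_channels,
                         old_head.out_channels + n_new,
                         kernel_size=1)
    with torch.no_grad():
        # Copy pretrained weights for the existing classes; the n_new extra
        # channels keep their random initialization and are trained incrementally.
        new_head.weight[:old_head.out_channels] = old_head.weight
        new_head.bias[:old_head.out_channels] = old_head.bias
    return new_head
```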
Metadata
Title
Extending pretrained segmentation networks with additional anatomical structures
Authors
Firat Ozdemir
Orcun Goksel
Publication date
02.05.2019
Publisher
Springer International Publishing
Published in
International Journal of Computer Assisted Radiology and Surgery / Issue 7/2019
Print ISSN: 1861-6410
Electronic ISSN: 1861-6429
DOI
https://doi.org/10.1007/s11548-019-01984-4
