30.05.2021 | Original Article

LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images

Authors: Anirudh Ashok Aatresh, Kumar Alabhya, Shyam Lal, Jyoti Kini, PU Prakash Saxena

Published in: International Journal of Computer Assisted Radiology and Surgery | Issue 9/2021


Abstract

Purpose

Liver cancer is one of the most common cancers in Asia and has a high mortality rate. A common method of liver cancer diagnosis is the manual examination of histopathology images. Because manual examination is laborious, we focus on deep learning methods for automatic diagnosis, which offer significant advantages over manual approaches. In this paper, we propose a novel deep learning framework for multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images that improves on other competitive methods in both inference speed and classification quality.

Method

The BreastNet architecture proposed by Togacar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we study the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes: non-cancerous tissue and the low, medium, and high sub-types of liver HCC tumor. To demonstrate the robustness and efficacy of our models, we report results on two liver histopathology datasets: a novel KMC dataset and the TCGA dataset. A minimal sketch of the two named building blocks is given below.
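To make the two building blocks named above concrete, the following is a minimal PyTorch sketch of a CBAM attention module and an ASPP block. It is illustrative only: the channel counts, reduction ratio, and dilation rates are placeholder assumptions, and the class names are ours, not the exact LiverNet configuration.

```python
# Illustrative sketch of CBAM and ASPP blocks; hyperparameters are placeholders.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional block attention: channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        spatial = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * torch.sigmoid(self.spatial_conv(spatial))


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions fused by a 1x1 conv."""

    def __init__(self, in_channels: int, out_channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=r, dilation=r)
             for r in rates]
        )
        self.fuse = nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multi-scale features from each dilation rate, concatenated and fused.
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```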

Results

Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, both in classification quality and in computational efficiency, on the novel proposed KMC liver dataset and the publicly available TCGA-LIHC dataset. We consider precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as metrics for comparison. Our experiments show improved classification performance along with added efficiency: LiverNet outperforms all other frameworks on every metric under comparison, with an approximate improvement of \(2\%\) in accuracy and F1-score on the KMC and TCGA-LIHC datasets.
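As an illustration of how the reported classification metrics relate to one another, the sketch below computes macro-averaged precision, recall, F1-score, IoU, and overall accuracy from a four-class confusion matrix in NumPy. The confusion-matrix values and the helper name classification_metrics are hypothetical; this is not the authors' evaluation code.

```python
# Illustrative metric computation from a 4-class confusion matrix.
import numpy as np


def classification_metrics(conf: np.ndarray) -> dict:
    """conf[i, j] = number of samples with true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp            # predicted as class k but wrong
    fn = conf.sum(axis=1) - tp            # true class k but missed
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    iou = tp / (tp + fp + fn + 1e-12)     # per-class intersection over union
    return {
        "precision": precision.mean(),    # macro-averaged over the four classes
        "recall": recall.mean(),
        "f1": f1.mean(),
        "iou": iou.mean(),
        "accuracy": tp.sum() / conf.sum(),
    }


if __name__ == "__main__":
    # Hypothetical confusion matrix for the four classes
    # (non-cancerous, low, medium, high sub-type HCC).
    conf = np.array([[50, 2, 1, 0],
                     [3, 45, 4, 1],
                     [1, 5, 40, 3],
                     [0, 1, 4, 48]])
    print(classification_metrics(conf))
```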

Conclusion

To the best of our knowledge, our work is among the first to demonstrate, with concrete results, a successful deep learning architecture for multi-class classification of HCC histopathology images across sub-types of liver HCC tumor. Our method achieves a high accuracy of \(90.93\%\) on the proposed KMC liver dataset while requiring only 0.5739 million parameters and 1.1934 million floating-point operations (FLOPs).
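For readers checking efficiency figures on their own models, the short PyTorch snippet below shows one common way a trainable-parameter count in millions can be obtained. The stand-in toy model is purely illustrative and does not correspond to LiverNet.

```python
# Illustrative parameter counting for any PyTorch model (not the LiverNet definition).
import torch.nn as nn


def count_parameters(model: nn.Module) -> float:
    """Return the number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6


# Example on a stand-in model; the real LiverNet would be passed instead.
toy = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 4, 1))
print(f"{count_parameters(toy):.4f} M parameters")
```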
Metadata
Title: LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images
Authors: Anirudh Ashok Aatresh, Kumar Alabhya, Shyam Lal, Jyoti Kini, PU Prakash Saxena
Publication date: 30.05.2021
Publisher: Springer International Publishing
Published in: International Journal of Computer Assisted Radiology and Surgery / Issue 9/2021
Print ISSN: 1861-6410
Electronic ISSN: 1861-6429
DOI: https://doi.org/10.1007/s11548-021-02410-4
