Open Access 01.12.2023 | Research

Nuclei instance segmentation from histopathology images using Bayesian dropout based deep learning

Authors: Naga Raju Gudhe, Veli-Matti Kosma, Hamid Behravan, Arto Mannermaa

Published in: BMC Medical Imaging | Issue 1/2023

Abstract

Background

Deterministic deep learning models have achieved state-of-the-art performance in various medical image analysis tasks, including nuclei segmentation from histopathology images. However, these models focus on improving prediction accuracy without assessing the confidence of their predictions.

Methods

We propose a semantic segmentation model using Bayesian representation to segment nuclei from the histopathology images and to further quantify the epistemic uncertainty. We employ Bayesian approximation with Monte-Carlo (MC) dropout during the inference time to estimate the model’s prediction uncertainty.

Results

We evaluate the performance of the proposed approach on the PanNuke dataset, which consists of 312 visual fields from 19 organ types. We compare the nuclei segmentation accuracy of our approach with that of a fully convolutional neural network, U-Net, SegNet, and the state-of-the-art Hover-net. We use F1-score and intersection over union (IoU) as the evaluation metrics. The proposed approach achieves a mean F1-score of 0.893 ± 0.008 and an IoU value of 0.868 ± 0.003 on the test set of the PanNuke dataset. These results outperform the Hover-net, which has a mean F1-score of 0.871 ± 0.010 and an IoU value of 0.840 ± 0.032.

Conclusions

The proposed approach, which incorporates Bayesian representation and Monte-Carlo dropout, demonstrates superior performance in segmenting nuclei from histopathology images compared to existing models such as U-Net, SegNet, and Hover-net. By considering the epistemic uncertainty, our model provides a more reliable estimation of the prediction confidence. These findings highlight the potential of Bayesian deep learning for improving medical image analysis tasks and can contribute to the development of more accurate and reliable computer-aided diagnostic systems.
Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1186/s12880-023-01121-3.
Hamid Behravan and Arto Mannermaa share the last author position.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Abbreviations
MC: Monte-Carlo
IoU: Intersection over union
WSI: Whole slide image
H&E: Hematoxylin and eosin
DL: Deep learning
FCN: Fully convolutional neural network
Encoder: Contracting path
Decoder: Expanding path
SELU: Scaled exponential linear unit
σ (sigma): Standard deviation
TCGA: The Cancer Genome Atlas
UA: Uncertainty accuracy

Introduction

A whole slide image (WSI) is the digital version of a patient-derived histology glass slide and provides ample opportunities for quantitative and qualitative profiling of spatial patterns in cancer tissue. A WSI contains hundreds of thousands of nuclei of various cell types, which are challenging to segment automatically by cell type. Manual assessment of cell types from hematoxylin and eosin (H&E)-stained slides is prone to inter- and intra-observer variability [1]. Automating the nuclei segmentation workflow can accelerate the pathologist's workflow in analyzing nuclear morphology, classifying cancer cell types, and grading [2]. Automatic nuclei segmentation also allows computing nuclear features, which can be used for predicting tissue phenotype [3], tumor grading [4], estimating cancer recurrence rate [5, 6], and survival analysis [7]. Accurate segmentation of nuclei from H&E-stained histopathology images is challenging due to variations in organ type and tissue site, and variability between the sites that produced the H&E-stained images [8].
Traditional computer vision algorithms, such as morphological image operations and watershed algorithms, are widely used for nuclei segmentation [8]. However, these algorithms are developed on a limited set of images and often fail to generalize to new images. Recently, deep learning (DL) models, especially convolutional neural networks, have achieved state-of-the-art performance in various medical image analysis tasks [9, 10]. Long et al. [11] proposed an encoder-decoder-based fully convolutional neural network (FCN) for the semantic segmentation task. The FCN consists of a contracting path (encoder) with a set of convolutional layers to extract imaging features and an expanding path (decoder) with transpose or up-convolutions to reconstruct the extracted features and segment the regions of interest in the input image. Inspired by [11], the authors in [12] introduced skip connections to restore the spatial information lost during the contraction and expansion of the network and named the network U-Net. The classical U-Net model has been successfully applied to various medical image segmentation tasks. Despite being a state-of-the-art model, U-Net often fails to segment overlapping and touching nuclei and requires post-processing techniques, such as the watershed algorithm [13], to separate such nuclei. Several variants of the U-Net architecture have been proposed to improve image segmentation accuracy [14–17]. However, these studies focused on improving accuracy and ignored uncertainty in the predictions. Graham et al. [2] introduced Hover-net, which simultaneously performs nuclei segmentation and classification. Hover-net incorporates horizontal and vertical distance maps to separate touching nuclei and demonstrated state-of-the-art performance. However, Hover-net lacks the ability to quantify the uncertainty of its nuclei segmentation and classification.
Uncertainty quantification at the pixel level is as crucial as model accuracy, especially for pathologists to trust and incorporate DL algorithms in their medical diagnoses [18]. Uncertainty quantification explains a DL model's overall confidence in its predictions and improves reliability in the decision-making process [18]. Typically, a DL model involves two kinds of uncertainty, epistemic and aleatoric [19]. Epistemic or model uncertainty often arises due to a lack of training data; increasing the training data size often reduces it. Aleatoric or data uncertainty usually arises due to the presence of noise in the data and cannot be reduced by collecting more data [19]. Bayesian methods provide a probabilistic representation of uncertainty and are widely used for estimating predictive uncertainties [18–23]. In addition to Bayesian methods, several other approximation methods, such as MC dropout [24], variational inference [25, 26], dropout variational inference [27], and ensemble learning [28], have been proposed for estimating uncertainty.
In this study, we present an encoder-decoder-based Bayesian DL model for nuclei instance segmentation from H&E-stained histopathology images and estimate epistemic uncertainty using the MC dropout approximation at inference time. We demonstrate the efficiency of the proposed approach using publicly available data from 19 different organs.

Methodology

Nuclei instance segmentation architecture

We modified the network proposed by [29] into a Bayesian representation to simultaneously segment the nuclei and quantify the model uncertainty. The proposed model, named BayesNuSeg, consists of an encoder and two independent decoders (Fig. 1). The encoder is a five-layered network, each layer containing a residual learning-based convolution followed by batch normalization [30] and a scaled exponential linear unit (SELU) [31], as shown in Fig. 2. The decoder replaces the convolution operation with a transpose convolution to reconstruct the extracted features. The seed branch decoder outputs the class-specific seed maps, and the instance branch decoder generates the pixel embeddings. A sketch of one such encoder layer is given below.
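To make the encoder design concrete, the following PyTorch sketch shows one hypothetical encoder layer combining a residual convolution, batch normalization, and SELU activation. The layer sizes, kernel sizes, and the 1×1 shortcut projection are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ResidualEncoderBlock(nn.Module):
    """Illustrative encoder layer: residual convolution -> batch norm -> SELU.

    A hypothetical sketch of one of the five encoder layers described in the
    text, not the authors' exact architecture.
    """
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SELU()
        # 1x1 projection so the residual shortcut matches the output channels
        self.shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)) + self.shortcut(x))

# Example: a 3-channel 256x256 H&E patch passed through one encoder layer
block = ResidualEncoderBlock(3, 32)
features = block(torch.randn(1, 3, 256, 256))  # -> shape (1, 32, 256, 256)
```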
The objective of instance segmentation is to cluster a set of input image pixels \({\textbf {x}} = \{ x_1, x_2, x_3, \ldots , x_N \mid x \in \mathcal {R}^2\}\) into a set of instances \({\textbf {s}} = \{ s_1, s_2, \ldots , s_k\}\). A discriminative learning function \(\mathcal {F}({\textbf {x}},{\textbf {W}})\), parameterized by the weight matrix W, is employed to localize the different nuclei in the H&E-stained histology image x and thus accomplish the instance segmentation task. The instance branch of the decoder network maps each pixel \(x_i\) of the given input image x to an offset vector \({\textbf {o}}_{\textbf {i}} \in \mathcal {R}^2\), from which the pixel embeddings \({\textbf {e}}_{\textbf {i}}= x_i + {\textbf {o}}_{\textbf {i}}\) are generated, pointing to their corresponding instance centroid \({\textbf {C}}_k = \frac{1}{N} \sum _{x \in S_k} x\). The size and shape of nuclei vary within each cell type; therefore, to ensure that the pixels of one instance lie close to their centroid, an instance-specific margin loss function is used [29]. A Gaussian function \(\phi _k\) for each instance \(s_k\) converts the distance between \({\textbf {e}}_i\) and \({\textbf {C}}_k\) into a probability of belonging to instance \(s_k\):
$$\begin{aligned} \phi _k({\textbf {e}}_i) = \exp \left( -\frac{({\textbf {e}}_{ix} - {\textbf {C}}_{kx})^2}{2 \times \sigma _{kx}^2} -\frac{({\textbf {e}}_{iy} - {\textbf {C}}_{ky})^2}{2 \times \sigma _{ky}^2} \right) . \end{aligned}$$
(1)
In addition to the offset vectors, the instance decoder branch computes the standard deviation (sigma), \(\sigma _k \in \mathcal {R}^2\), for each instance \(s_k\). The value of \(\sigma _k\) indicates the proximity of the pixel embedding \(\textbf{e}_i\) to the instance centroid \(\textbf{C}_k\): a higher \(\sigma _k\) suggests the pixel \({\textbf {e}}_i\) is likely part of instance \(s_k\), whereas a lower value suggests it belongs to the background.
To classify a pixel \({\textbf {e}}_i\), a threshold of \(\phi _k\left( {\textbf {e}}_i\right) \ge 0.5\) is applied. This threshold represents the decision boundary at which the probability of a pixel belonging to an instance \(s_k\) or the background is equal. Specifically, a pixel \({\textbf {e}}_i\) is assigned to instance \(s_k\) if \(\phi _k\left( {\textbf {e}}_i\right) \ge 0.5\), indicating a higher probability of belonging to instance \(s_k\) than to the background, and vice versa.
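As a concrete illustration of Eq. (1) and the 0.5 threshold, the minimal sketch below computes \(\phi_k(\textbf{e}_i)\) for a set of pixel embeddings and assigns pixels to an instance. The function name and the toy values are hypothetical, introduced only for illustration.

```python
import torch

def assign_pixels_to_instance(embeddings, centroid, sigma, threshold=0.5):
    """Compute phi_k(e_i) from Eq. (1) and assign pixels to instance k.

    embeddings: (N, 2) pixel embeddings e_i = x_i + o_i
    centroid:   (2,)   instance centroid C_k
    sigma:      (2,)   per-axis standard deviations (sigma_kx, sigma_ky)
    Returns a boolean mask of pixels with phi_k(e_i) >= threshold.
    """
    diff_sq = (embeddings - centroid) ** 2                  # (N, 2)
    phi = torch.exp(-(diff_sq / (2.0 * sigma ** 2)).sum(dim=1))
    return phi >= threshold

# Toy example with 4 pixel embeddings around a centroid at (10, 10)
emb = torch.tensor([[10.1, 9.9], [10.5, 10.2], [14.0, 3.0], [9.8, 10.0]])
mask = assign_pixels_to_instance(emb, torch.tensor([10.0, 10.0]),
                                 torch.tensor([1.0, 1.0]))
# mask -> tensor([True, True, False, True])
```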
The seed decoder branch computes the seediness score, i.e., the likelihood that pixel \(x_i\) belongs to instance \(s_k\). Sequential clustering is employed on the aggregated offset vectors, sigma values, and seediness scores to group the pixels that belong to the same instance, finally providing the segmented nuclei mask. To train the model end-to-end, the combined loss function contains three terms, as follows [29]:
$$\begin{aligned} {\textbf {L}} = \lambda _{\text {IoU}} \times {\textbf {L}}_{\text {IoU}} + \lambda _{\text {seed}} \times {\textbf {L}}_{\text {seed}} + \lambda _{\text {smooth}}\times {\textbf {L}}_{\text {smooth}}, \end{aligned}$$
(2)
where \(\lambda _{\text {IoU}}\), \(\lambda _{\text {seed}}\), and \(\lambda _{\text {smooth}}\) are the hyper-parameters of the combined loss function. We used \(\lambda _{\text {IoU}}=1\), \(\lambda _{\text {seed}}=1\), and \(\lambda _{\text {smooth}}=10\), as suggested by [29]. Additional details on the combined loss function L are provided in Appendix A.
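Equation (2) is a simple weighted sum; the sketch below shows this combination with the weights stated above, treating the three loss terms as precomputed tensors. Their exact definitions follow [29] and Appendix A and are not reproduced here.

```python
import torch

# Loss weights as stated in the text (following [29])
LAMBDA_IOU, LAMBDA_SEED, LAMBDA_SMOOTH = 1.0, 1.0, 10.0

def combined_loss(l_iou: torch.Tensor, l_seed: torch.Tensor,
                  l_smooth: torch.Tensor) -> torch.Tensor:
    """Weighted sum of the IoU, seed, and smoothness terms of Eq. (2)."""
    return LAMBDA_IOU * l_iou + LAMBDA_SEED * l_seed + LAMBDA_SMOOTH * l_smooth

# Example with placeholder scalar loss values
loss = combined_loss(torch.tensor(0.3), torch.tensor(0.2), torch.tensor(0.01))  # -> 0.6
```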

Bayesian uncertainty representation

We follow the uncertainty estimation approach of [27], applying the dropout technique as the variational approximation (see Appendix B for an overview of Bayesian representation learning). To quantify the model uncertainty, we use MC dropout to approximate the predictive variance at inference time, as follows [19]:
$$\begin{aligned} \hat{\Sigma }^2 = \frac{1}{T} \sum _{t=1}^{T}\left( \hat{\text {y}}_t - \frac{1}{T}\sum _{t'=1}^{T}\hat{\text {y}}_{t'} \right) ^2 + \frac{1}{T} \sum _{t=1}^{T}\hat{\sigma }_t^2, \end{aligned}$$
(3)
where \(\hat{\Sigma }^2\) is the measure of the model uncertainty, \(\hat{\sigma }_t\) is the standard deviation of the predicted segmentation mask \(\hat{\text {y}}_t\), and T represents the number of stochastic forward passes of MC dropout sampling. The mathematical derivation of Eq. 3 is given in Appendix C. We used \(T = 50\) in our experiments as the optimal number of MC dropout samples. The effect of changing T on the model performance in terms of F1-score is presented in Appendix D.
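The sketch below illustrates how Eq. (3) can be approximated with T stochastic forward passes in PyTorch, keeping dropout active at inference time. It assumes a hypothetical model that returns a prediction and a per-pixel standard deviation per pass, and is not the authors' exact inference code.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, T: int = 50):
    """Approximate the predictive mean and variance of Eq. (3) with T
    stochastic forward passes (assumed model output: (y_hat, sigma_hat))."""
    model.train()  # keep dropout layers active during inference
    preds, sigmas = [], []
    with torch.no_grad():
        for _ in range(T):
            y_hat, sigma_hat = model(x)
            preds.append(y_hat)
            sigmas.append(sigma_hat)
    preds = torch.stack(preds)    # (T, ...)
    sigmas = torch.stack(sigmas)  # (T, ...)
    mean = preds.mean(dim=0)
    # epistemic term: variance over the T predictions; aleatoric term: mean sigma^2
    variance = preds.var(dim=0, unbiased=False) + (sigmas ** 2).mean(dim=0)
    return mean, variance
```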

System set-ups

Dataset

We trained and validated BayesNuSeg and the other baseline approaches using the PanNuke dataset [32]. The dataset has variability in the image staining protocol and has been collected from different sites. It contains 312 visual fields from 19 different organs, randomly sampled at different resolutions from more than twenty thousand WSIs of The Cancer Genome Atlas (TCGA) [33]. The dataset organizers provided the data in three folds, with a total of 7901 images and their corresponding ground-truth masks at a resolution of \(256\times 256\) pixels. A visual example of the images and ground-truth masks is illustrated in Fig. 3.

Implementation details

Data variability due to differing H&E-staining protocols across organs was mitigated by applying the Vahadane stain normalization technique [34], using a reference image from the stomach. We implemented a nested cross-validation strategy to train, tune, and evaluate the proposed and baseline models. The dataset was first divided into two distinct subsets: 70% as the model development set (encompassing training, hyper-parameter tuning, and validation; 5530 images) and the remaining 30% reserved as an external test set (2371 images), as depicted in Fig. 4. The model development set was subjected to nested cross-validation, involving an outer k-fold loop for model training and evaluation and an inner loop for hyper-parameter tuning using Optuna [35]. Within each fold of the outer loop, the model was trained on a fraction of \(\frac{(k-1)}{k}\) of the data, while hyper-parameters were optimized using an inner cross-validation loop on this subset. The model was then evaluated on the remaining \(\frac{1}{k}\) of the data. This design ensures optimal hyper-parameter tuning for each outer-loop fold, yielding a robust and unbiased estimate of model performance. After selecting the optimal hyper-parameters and training the model with \(k=5\) folds, performance was evaluated on the external test set to obtain an unbiased assessment of the model's ability to generalize to unseen data. A sketch of this nested cross-validation procedure is given below.
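The following minimal sketch illustrates the nested cross-validation scheme with Optuna-based inner tuning. The hyper-parameter names, the number of trials, and the `train_and_score` helper (stubbed here) are assumptions introduced for illustration, not the actual training pipeline.

```python
import numpy as np
import optuna
from sklearn.model_selection import KFold

def train_and_score(params: dict, train_idx: np.ndarray, val_idx: np.ndarray) -> float:
    """Hypothetical stub: train BayesNuSeg with `params` on train_idx and
    return the validation F1-score; a random value stands in for illustration."""
    return float(np.random.rand())

def nested_cross_validation(n_samples: int, k: int = 5, n_trials: int = 20):
    indices = np.arange(n_samples)
    outer = KFold(n_splits=k, shuffle=True, random_state=42)
    outer_scores = []
    for train_idx, test_idx in outer.split(indices):
        def objective(trial: optuna.Trial) -> float:
            # Inner loop: tune hyper-parameters on the outer training fold only
            params = {
                "lr": trial.suggest_float("lr", 1e-5, 1e-2, log=True),
                "dropout": trial.suggest_float("dropout", 0.1, 0.5),
            }
            inner = KFold(n_splits=k, shuffle=True, random_state=0)
            scores = [train_and_score(params, train_idx[tr], train_idx[va])
                      for tr, va in inner.split(train_idx)]
            return float(np.mean(scores))

        study = optuna.create_study(direction="maximize")
        study.optimize(objective, n_trials=n_trials)
        # Retrain with the best parameters, evaluate on the held-out outer fold
        outer_scores.append(train_and_score(study.best_params, train_idx, test_idx))
    return outer_scores

scores = nested_cross_validation(n_samples=5530)  # model development set size
```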
To minimize overfitting, several augmentation techniques [37] were employed during training, including horizontal and vertical flips, random rotations, and random color jitter. Training was executed on an NVIDIA Tesla V100 GPU provided by the CSC-IT Center for Science, Finland [38]. During the inference phase, the MC dropout technique was employed to estimate model uncertainty, resulting in the BayesNuSeg model (with uncertainty). An illustrative augmentation pipeline is sketched below.
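As an example of the augmentations listed above, the sketch below builds a joint image/mask pipeline. The use of the albumentations library and the specific probabilities and jitter strengths are assumptions, since the paper does not state the exact library or parameter values.

```python
import albumentations as A
import numpy as np

# Illustrative pipeline: flips, 90-degree rotations, and color jitter
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05, p=0.5),
])

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # H&E patch stand-in
mask = np.random.randint(0, 2, (256, 256), dtype=np.uint8)        # nuclei mask stand-in
augmented = augment(image=image, mask=mask)  # same spatial transform applied to both
```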

Evaluation metrics

We evaluated the performance of the proposed BayesNuSeg and the baseline models (FCN, U-Net, SegNet, and Hover-net) using the F1-score and IoU as evaluation metrics. The F1-score is the harmonic mean of precision and recall; a higher F1-score indicates better agreement between the ground truth and the predicted segmentation mask. The IoU, also referred to as the Jaccard index, quantifies the percentage of overlap between the ground truth and the predicted segmentation mask [39].
Further, during inference, we applied MC dropout to BayesNuSeg and all the baseline models and computed the uncertainty accuracy (UA) defined by [40]. A higher UA value indicates a higher level of confidence in the model's predictions. To report our results, we used a 95% confidence interval as a measure of dispersion [41] for all the metrics. A sketch of the pixel-level F1 and IoU computation is given below.
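For clarity, the minimal sketch below computes the pixel-level F1-score and IoU for a pair of binary masks. It assumes binary 0/1 arrays of equal shape and is not the authors' evaluation code, which may also operate instance-wise.

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Pixel-level F1 (Dice) and IoU (Jaccard) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return f1, iou
```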

Results

BayesNuSeg outperforms the baseline models in nuclei segmentation with enhanced accuracy and reliability

Table 1 reports the nuclei segmentation results on the test set of the PanNuke dataset. BayesNuSeg with uncertainty outperforms all the baseline models. In particular, the proposed model achieves an F1-score of 0.893 ± 0.008, which outperforms the state-of-the-art Hover-net (F1-score of 0.871 ± 0.010), a relative improvement of 2.53%. Additionally, BayesNuSeg demonstrates its ability to estimate uncertainty accurately, as reflected by its UA score of 0.796 ± 0.001. The relative improvements of 6.31% and 5.43% over U-Net and FCN8, respectively, in terms of F1-score further emphasize the superiority of BayesNuSeg. These findings show the potential of BayesNuSeg, with its uncertainty estimation capability, to enhance nuclei segmentation accuracy and reliability.
Table 1 The nuclei segmentation results of BayesNuSeg and the baseline models. The BayesNuSeg model with uncertainty estimation outperforms all the baseline systems. Bold font indicates the best values. N.A.: not available

| Method | F1-score | IoU | UA |
|---|---|---|---|
| FCN8 | 0.842 ± 0.008 | 0.732 ± 0.049 | N.A. |
| U-Net | 0.824 ± 0.009 | 0.791 ± 0.048 | N.A. |
| SegNet | 0.845 ± 0.018 | 0.803 ± 0.055 | N.A. |
| Hover-net | 0.851 ± 0.010 | 0.829 ± 0.032 | N.A. |
| BayesNuSeg | 0.848 ± 0.013 | 0.835 ± 0.003 | N.A. |
| FCN8 + MC dropout | 0.848 ± 0.009 | 0.764 ± 0.004 | 0.699 ± 0.050 |
| U-Net + MC dropout | 0.840 ± 0.009 | 0.804 ± 0.037 | 0.738 ± 0.034 |
| SegNet + MC dropout | 0.847 ± 0.006 | 0.828 ± 0.045 | 0.763 ± 0.046 |
| Hover-net + MC dropout | 0.871 ± 0.010 | 0.840 ± 0.031 | 0.789 ± 0.032 |
| BayesNuSeg + MC dropout | **0.893 ± 0.008** | **0.868 ± 0.003** | **0.796 ± 0.004** |
Additionally, we conducted two-sample t-tests with Welch's correction to assess the statistical significance of the BayesNuSeg model compared to the baseline models. The results, presented in Fig. 5, indicate that the proposed approach is statistically significant when compared to most baseline models (p-value \(\le 0.05\)). The only exception is the Hover-net model without MC dropout at inference time (p-value = 0.6001).
The qualitative results of BayesNuSeg with uncertainty and the other baseline models are illustrated in Fig. 6. BayesNuSeg delineates the nuclei more precisely than the baseline approaches. The FCN and U-Net models identify all the nuclei in the image but fail to estimate the nuclei boundaries. As the number of nuclei increases, the FCN and U-Net models do not properly localize nuclei with obscure boundaries, as shown with the green circles in Fig. 6. SegNet predictions contain noise in the segmentation, as shown with the orange circles in Fig. 6. Notice that the noise is distributed across the entire image in the first and second rows of the fourth column; for simplicity, we have highlighted only a few areas. We additionally trained the SegNet model for 500 epochs, which reduced the noise in the segmentation; however, the accuracy was not significantly improved. We also noticed that Hover-net sometimes fails to separate touching nuclei and thus over-estimates the nuclei, as shown with the cyan circles in Fig. 6. The BayesNuSeg model separates touching nuclei more efficiently than Hover-net. We suggest that the efficient estimation of the pixel embeddings of each nucleus by the instance branch and the localization of the nuclei by the coordinate maps likely contribute to this success. Qualitative visualizations of a few more examples are provided in Appendix E.

Applying MC dropout during inference reveals the robust uncertainty quantification abilities of the BayesNuSeg model

Here, we demonstrate the uncertainty quantification using the MC dropout approximation for \(T=50\) samplings of the posterior distribution of the BayesNuSeg predictions. In Fig. 7, each row presents the uncertainty visualization for a given image. The model uncertainty is measured in the range [0, 1], where 0 represents a low-uncertainty and 1 a highly uncertain prediction. As shown, higher uncertainty is observed where the nuclei pixel intensities are close to the background pixels and where BayesNuSeg failed to identify or misdetected the nuclei. See Appendix F for a few more examples.

Discussion

The proposed BayesNuSeg model holds immense potential for advancing the field of digital pathology and has profound implications for both clinical practice and biological research. Pathologists and experienced researchers in the medical and AI domains can greatly benefit from the capabilities offered by this model.
One of the critical aspects of digital pathology is the quantification of nuclear features, such as size, shape, and texture, which play vital roles in understanding cellular morphology and tissue composition. The BayesNuSeg model provides precise segmentation results with an F1-score of 0.893 ± 0.008, enabling the extraction of nuclear features. Accurate nuclei segmentation can be utilized for quantitative analysis and characterization of tissue structures, ultimately aiding in the diagnosis, grading, and prognosis of disease. Precise nuclei segmentation also serves as a foundational step for subsequent analyses, including cell counting, spatial arrangement analysis, and nuclei clustering. These analyses provide insights into cellular interactions, tissue organization, and pathological alterations at the cellular level. By unraveling disease mechanisms, identifying novel biomarkers, and advancing our understanding of complex biological processes, this model contributes to the forefront of biomedical research. Although Hover-net segments nuclei precisely, it demands additional post-processing, such as horizontal and vertical maps, to separate touching nuclei. The sequential clustering adopted in BayesNuSeg assists in identifying the pixels belonging to the same instance and thus avoids additional post-processing steps.
BayesNuSeg's robust uncertainty quantification capability enables the identification of challenging regions where the model may struggle to accurately segment nuclei. This insight helps clinicians focus their attention on areas requiring additional scrutiny, thereby enhancing diagnostic accuracy and reducing the potential for misinterpretation.
It is important to acknowledge the limitations of our study. We solely evaluated the BayesNuSeg model using the PanNuke dataset, which consists of whole-slide images (WSIs) from various organs. While this dataset encompasses variability in image staining protocols representative of challenges encountered in other datasets, a more comprehensive evaluation on a broader range of datasets would be beneficial. This would allow for a more robust assessment of the model’s performance. Nonetheless, our results clearly demonstrate that BayesNuSeg outperformed several established and state-of-the-art models in terms of WSI segmentation on the PanNuke dataset.

Conclusion

We presented a Bayesian dropout based deep learning representation for nuclei segmentation from H&E-stained medical images. We showed the performance of our proposed BayesNuSeg model for nuclei segmentation and uncertainty quantification on the PanNuke dataset, containing 312 visual fields from 19 different organs. We selected FCN, U-Net, SegNet, and Hover-net as the baseline models for comparison. The proposed model with uncertainty achieves an F1-score of 0.893 ± 0.008, which outperforms the state-of-the-art Hover-net (F1-score of 0.871 ± 0.010), a relative improvement of 2.53%. Additionally, we validated the efficacy of our proposed model by leveraging MC dropout sampling as an approximation of the posterior distribution for uncertainty quantification. In our next study, we will use the output of the BayesNuSeg model to study the tumor microenvironment and to identify breast cancer tumor biomarkers from H&E-stained pathology images.

Acknowledgements

We thank the pathologist Otto Jokelainen from the Kuopio University Hospital for assisting us in analyzing the achieved results.

Declarations

The PanNuke imaging dataset used in this study has been obtained from publicly available sources. The PanNuke dataset is based on de-identified and anonymized histopathology images. Since the PanNuke dataset is derived from publicly available sources and consists of de-identified data, obtaining explicit informed consent from individual patients for this specific study is not applicable.
Not applicable.

Competing interests

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

References
1. Elmore JG, Longton GM, Carney PA, Geller BM, Onega T, Tosteson AN, et al. Diagnostic concordance among pathologists interpreting breast biopsy specimens. J Am Med Assoc. 2015;313(11):1122–32.
2. Graham S, Vu QD, Raza SEA, Azam A, Tsang YW, Kwak JT, et al. Hover-net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med Image Anal. 2019;58:101563.
3. Javed S, Mahmood A, Fraz MM, Koohbanani NA, Benes K, Tsang YW, et al. Cellular community detection for tissue phenotyping in colorectal cancer histology images. Med Image Anal. 2020;63:101696.
4. Cosatto E, Miller M, Graf HP, Meyer JS. Grading nuclear pleomorphism on histological micrographs. In: 19th International Conference on Pattern Recognition. IEEE; 2018. p. 1–4.
5. Lee G, Veltri RW, Zhu G, Ali S, Epstein JI, Madabhushi A. Nuclear shape and architecture in benign fields predict biochemical recurrence in prostate cancer patients following radical prostatectomy: preliminary findings. Eur Urol Focus. 2017;3(4–5):457–66.
6. Wang X, Janowczyk A, Zhou Y, Thawani R, Fu P, Schalper K, et al. Prediction of recurrence in early stage non-small cell lung cancer using computer extracted nuclear features from digital H&E images. Sci Rep. 2017;7(1):1–10.
7. Lu C, Romo-Bucheli D, Wang X, Janowczyk A, Ganesan S, Gilmore H, et al. Nuclear shape and orientation features from H&E images predict survival in early-stage estrogen receptor-positive breast cancers. Lab Investig. 2018;98(11):1438–48.
8. Yi F, Huang J, Yang L, Xie Y, Xiao G. Automatic extraction of cell nuclei from H&E-stained histopathological images. J Med Imaging. 2017;4(2):027502.
9. Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Ann Rev Biomed Eng. 2017;19:221–48.
10. Ker J, Wang L, Rao J, Lim T. Deep learning applications in medical image analysis. IEEE Access. 2017;6:9375–89.
11. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2015. p. 3431–40.
12. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Proceedings, Part III. Munich: Springer International Publishing; 2015. p. 234–41.
13. Jung CR, Scharcanski J. Robust watershed segmentation using wavelets. Image Vis Comput. 2005;23(7):661–9.
14. Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans Med Imaging. 2019;39(6):1856–67.
15. Alom MZ, Yakopcic C, Hasan M, Taha TM, Asari VK. Recurrent residual U-Net for medical image segmentation. J Med Imaging. 2019;6(1):014006.
16. Gudhe NR, Behravan H, Sudah M, Okuma H, Vanninen R, Kosma VM, et al. Multi-level dilated residual network for biomedical image segmentation. Sci Rep. 2021;11(1):1–18.
17. Ibtehaz N, Rahman MS. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 2020;121:74–87.
18. Ghoshal B, Tucker A, Sanghera B, Lup Wong W. Estimating uncertainty in deep learning for reporting confidence to clinicians in medical image segmentation and diseases detection. Comput Intell. 2021;37(2):701–34.
19. Kendall A, Gal Y. What uncertainties do we need in Bayesian deep learning for computer vision? arXiv preprint arXiv:1703.04977. 2017.
20. Kwon Y, Won JH, Kim BJ, Paik MC. Uncertainty quantification using Bayesian neural networks in classification: Application to biomedical image segmentation. Comput Stat Data Anal. 2020;142:106816.
21. Abdar M, Samami M, Mahmoodabad SD, Doan T, Mazoure B, Hashemifesharaki R, et al. Uncertainty quantification in skin cancer classification using three-way decision-based Bayesian deep learning. Comput Biol Med. 2021;135:104418.
22. Sankaran S, Kim HJ, Choi G, Taylor CA. Uncertainty quantification in coronary blood flow simulations: impact of geometry, boundary conditions and blood viscosity. J Biomech. 2016;49(12):2540–7.
23. Zhu Y, Zabaras N. Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification. J Comput Phys. 2018;366:415–47.
24. Neal RM. Bayesian learning for neural networks. vol. 118. Springer Science & Business Media; 2012.
25. Wu A, Nowozin S, Meeds E, Turner RE, Hernández-Lobato JM, Gaunt AL. Fixing variational Bayes: Deterministic variational inference for Bayesian neural networks. arXiv preprint arXiv:1810.03958. 2018.
26. Blundell C, Cornebise J, Kavukcuoglu K, Wierstra D. Weight uncertainty in neural networks. In: International Conference on Machine Learning. PMLR; 2015. p. 1613–22.
27. Gal Y, Ghahramani Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In: International Conference on Machine Learning. PMLR; 2016. p. 1050–59.
28. Tan Y, Jin B, Yue X, Chen Y, Vincentelli AS. Exploiting uncertainties from ensemble learners to improve decision-making in healthcare AI. arXiv preprint arXiv:2007.06063. 2020.
29. Neven D, Brabandere BD, Proesmans M, Gool LV. Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2019. p. 8837–45.
30. Santurkar S, Tsipras D, Ilyas A, Mądry A. How does batch normalization help optimization? Advances in Neural Information Processing Systems. 2018;31:2488–98.
31. Klambauer G, Unterthiner T, Mayr A, Hochreiter S. Self-normalizing neural networks. Advances in Neural Information Processing Systems. 2017;30:972–81.
32. Gamper J, Koohbanani NA, Benet K, Khuram A, Rajpoot N. PanNuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification. In: Digital Pathology: 15th European Congress, ECDP 2019, Proceedings. Warwick: Springer International Publishing; 2019. p. 11–19.
33. Liu J, Lichtenberg T, Hoadley KA, Poisson LM, Lazar AJ, Cherniack AD, et al. An integrated TCGA pan-cancer clinical data resource to drive high-quality survival outcome analytics. Cell. 2018;173(2):400–16.
34. Vahadane A, Peng T, Sethi A, Albarqouni S, Wang L, Baust M, et al. Structure-preserving color normalization and sparse stain separation for histological images. IEEE Trans Med Imaging. 2016;35(8):1962–71.
35. Akiba T, Sano S, Yanase T, Ohta T, Koyama M. Optuna: A next-generation hyperparameter optimization framework. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; 2019. p. 2623–31.
36. Raschka S, Liu YH, Mirjalili V. Machine Learning with PyTorch and Scikit-Learn. Birmingham: Packt Publishing; 2022.
39. Rezatofighi H, Tsoi N, Gwak J, Sadeghian A, Reid I, Savarese S. Generalized intersection over union: A metric and a loss for bounding box regression. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2019. p. 658–66.
40. Mobiny A, Yuan P, Moulik SK, Garg N, Wu CC, Van Nguyen H. DropConnect is effective in modeling uncertainty of Bayesian deep networks. Sci Rep. 2021;11(1):1–14.
Metadata
Title: Nuclei instance segmentation from histopathology images using Bayesian dropout based deep learning
Authors: Naga Raju Gudhe, Veli-Matti Kosma, Hamid Behravan, Arto Mannermaa
Publication date: 01.12.2023
Publisher: BioMed Central
Published in: BMC Medical Imaging / Issue 1/2023
Electronic ISSN: 1471-2342
DOI: https://doi.org/10.1186/s12880-023-01121-3
