
Open Access 01.12.2020 | Research article

CNN-based survival model for pancreatic ductal adenocarcinoma in medical imaging

Published in: BMC Medical Imaging | Issue 1/2020

Abstract

Background

The Cox proportional hazard model (CPH) is commonly used in clinical research for survival analysis. In quantitative medical imaging (radiomics) studies, CPH plays an important role in feature reduction and modeling. However, the underlying linear assumption of the CPH model limits its prognostic performance. In this work, using transfer learning, a convolutional neural network (CNN)-based survival model was built and tested on preoperative CT images of resectable Pancreatic Ductal Adenocarcinoma (PDAC) patients.

Results

The proposed CNN-based survival model outperformed the traditional CPH-based radiomics approach in terms of concordance index and index of prediction accuracy, providing a better fit for patients’ survival patterns.

Conclusions

The proposed CNN-based survival model outperforms CPH-based radiomics pipeline in PDAC prognosis. This approach offers a better fit for survival patterns based on CT images and overcomes the limitations of conventional survival models.
Abbreviations

ANN: Artificial Neural Networks
BN: Batch Normalization
CI: Concordance Index
CNN: Convolutional Neural Network
CPH: Cox Proportional Hazard
CT: Computerized Tomography
NSCLC: Non-Small Cell Lung Cancer
PDAC: Pancreatic Ductal Adenocarcinoma
ROI: Region of Interest
SVM: Support Vector Machine

Background

In clinical practice, medical imaging plays an increasingly important role in clinicians' informed decision making for disease management. Radiomics is a systematic approach to studying the latent information in medical imaging for improved accuracy in prognosis. A typical radiomics study involves image acquisition, feature extraction, feature analysis, and predictive modeling for a clinical outcome such as patient survival [1]. Efforts have been made to standardize quantitative imaging features (radiomic features) by implementing open source libraries such as PyRadiomics [2]. These feature banks contain thousands of hand-crafted formulas designed to extract distribution or texture information from medical images. In radiomics studies, a feature reduction method (e.g., principal component analysis) is used to select representative features [3]. The prognostic features are usually determined using the Cox proportional hazard model (CPH) [4]. In the past decade, several radiomics features have shown prognostic value in different diseases, especially different types of cancer [5–9]. However, the high-dimensional nature of radiomics features makes feature selection prone to multiple testing, leading to false positives and low performance in validation cohorts.
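To make the feature-extraction step concrete, the following is a minimal sketch of how hand-crafted radiomic features can be computed with the PyRadiomics library; the file paths and settings are hypothetical and are not taken from this study.

```python
from radiomics import featureextractor  # PyRadiomics

# Minimal sketch (hypothetical paths, default settings): extract hand-crafted
# features from a CT volume and its tumor mask.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()  # first-order and texture classes (GLCM, GLSZM, GLDM, ...)

features = extractor.execute("patient01_ct.nii.gz", "patient01_tumor_mask.nii.gz")

# Drop the diagnostic metadata entries, keeping only the numeric feature values.
radiomic_values = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(radiomic_values), "radiomic features extracted")
```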
As statistical methods, survival models are commonly used in clinical research to identify potential risk factors and predict risks for a variety of clinical outcomes, including patients' overall survival in diseases such as cancer. CPH is one of the most commonly used survival analysis tools [10–12]. CPH is a semiparametric model that estimates the effects of features (independent variables) on the risk of a certain event (e.g., death) [13]. For example, CPH can measure the effect of tumor size on the risk of death.
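For illustration, a CPH model of this kind could be fit with the lifelines package as in the sketch below; the toy data and column names are hypothetical and only demonstrate how the effect of tumor size on the risk of death would be estimated.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical toy data: tumor size (cm), follow-up time (months), event indicator.
df = pd.DataFrame({
    "tumor_size": [2.1, 3.5, 1.8, 4.2, 2.9, 5.0],
    "time":       [30,  14,  36,   9,  22,   7],
    "event":      [0,   1,   0,    1,   1,   1],   # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratio for tumor_size quantifies its effect on the risk of death
```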
CPH-based survival models can help clinicians make more personalized treatment decisions for individual patients. Traditional CPH models assume that the independent variables make a linear contribution to the model with respect to time [13]. In many situations, this assumption oversimplifies the relationships between biomarkers (e.g., radiomic features) and outcomes, especially in cancers with poor prognosis, including Pancreatic Ductal Adenocarcinoma (PDAC) [11]. With a limited sample size, the violation of the linear assumption may not be obvious. However, as data sizes increase, the violation of the linear assumption in CPH models becomes increasingly obvious and problematic, diminishing the performance and reliability of such models [10–12]. In modern survival modeling approaches, restricted cubic splines have been applied to address this weakness of CPH models [14, 15]. However, most radiomics studies have failed to address this shortcoming and have instead either adopted binary classification methods that discard duration (time-to-event) information altogether or continued to use conventional CPH methods [16–21].
Binary classification methods address nonlinearity by using a classifier such as Random Forest or Support Vector Machine (SVM) [22, 23]. Although these classifiers perform well in diagnosis and prognosis, they discard the time information in the modeling. For diseases with poor prognosis such as pancreatic cancer, the 5-year survival rate is very low (less than 10% for pancreatic cancer) [24–26]. Consequently, binary predictions offer only limited information for clinicians in designing personalized treatment plans, and hence, a nonlinear survival model that takes duration (time to an event such as death) into account to provide useful information on survival is desired.
Recent developments in artificial neural networks (ANNs) have provided an alternative solution for survival modeling. ANNs can learn complex and nonlinear relationships between prognostic features and an individual's risk for a given outcome [27]. Therefore, ANN-based models can provide improved personalized recommendations based on the computed risk. Nevertheless, previous studies have demonstrated mixed performance for risk-prediction models [27–29]. This may be due to small sample sizes and limited feature spaces, leading to underfitted ANN models [28]. To exploit ANN architectures and successfully apply them to complex cases, larger datasets are required. Recent work has shown that, given a sufficiently large sample size, ANNs can in fact outperform traditional CPH survival models [10–12].
The majority of previous works on deep learning-based survival analysis, including DeepSurv and NNET-survival, are ANN-based survival models with modified loss functions to capture more accurate survival patterns [10, 11]. These models take features (e.g., age, gender, height) as input and return risks for patients at different timepoints. However, feeding radiomics features into these ANNs as input is not the optimal solution due to multicollinearity.
In this research, we used medical images as input, replacing radiomics feature extractors with a Convolutional Neural Network (CNN) architecture to extract disease-specific image features that are associated with survival patterns. As the most well-known architecture in deep learning, CNNs extract imaging features by applying multiple layers of convolution operations to the images. Furthermore, the weights of the convolution filters are fine-tuned during training via backpropagation [30, 31]. Thus, given sufficient data, CNNs can be used to extract disease-specific features, which can be used for diagnosis or prognosis purposes [32–35]. Although traditional medical imaging-based CNNs use a “binary” or “multinomial” classification loss function, the loss function can be modified to also capture survival patterns [11]. By doing so, the CNN can be tuned to extract features that are associated with the risk of the outcome in a certain duration. We hypothesized that the proposed CNN-based Survival (CNN-Survival) model with a modified loss function would outperform conventional radiomics and CPH-based prognosis models.

Methods

Data

Three independent cohorts were used in this study. Cohort 1 consists of publicly available pretreatment CT scans of 422 Non-small cell lung cancer (NSCLC) patients [7]. Cohort 2 consists of 68 resectable pancreatic ductal adenocarcinoma (PDAC) patients collected from a local hospital from 2008 to 2013. Cohort 3, which is the test data, consists of 30 resectable PDAC patients enrolled at another independent hospital site from 2007 to 2012 [3]. For all patients in these three independent cohorts, CT scans, annotations (contours) of the tumor performed by radiologists, and survival data were available. For PDAC patients, the CT scans were preoperative contrast-enhanced images of resectable patients, and the survival data were collected from the date of surgery until death. CT images from all three cohorts were read from DICOM files without further processing. As the CT scans came from different institutions, the image acquisition protocol information (e.g., exact contrast bolus volume, timing, and injection rate) was not consistent over the time period. The institutions' Research Ethics Boards approved these retrospective studies and all methods were carried out in accordance with relevant guidelines and regulations.

Architecture of the proposed CNN-survival

A CNN architecture with six convolutional layers (CNN-Survival) was trained as shown in Fig. 1. Input images have dimensions of 140 × 140 × 1 (grayscale) and contain the CT image content within the manual contours of the tumors (example shown in Fig. 2). All pixels outside of the contoured region were set to 0 on the 0 to 255 grayscale. All convolutional layers have a kernel size of 3 × 3 with 32 filters, followed by Batch Normalization (BN) layers. The first Max Pool layer has a pool size of 2 × 2, and the latter two Max Pool layers have a pool size of 3 × 3. Through the Max Pool layers, the number of trainable parameters was significantly reduced. To avoid overfitting with this small sample size, dropout layers were added after every two convolutional layers with a dropout rate of 0.5. Finally, after passing through the flatten and dense layers, each image was converted into 19 features, from which survival probabilities for a given time t were calculated.
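The Keras sketch below approximates the described architecture (six 3 × 3 convolutional layers with 32 filters, BN after each convolution, Max Pool layers of size 2 × 2 then 3 × 3, dropout after every two convolutional layers, and a dense layer producing 19 outputs, here interpreted as per-interval hazards). Padding, exact layer ordering, and the resulting parameter count are assumptions and will not exactly match the authors' implementation.

```python
from tensorflow.keras import layers, models

def build_cnn_survival(n_intervals=19):
    """Approximate sketch of the six-convolution CNN-Survival architecture."""
    inp = layers.Input(shape=(140, 140, 1))            # masked grayscale tumor ROI
    x = inp
    for pool in (2, 3, 3):                              # pool sizes 2x2, 3x3, 3x3
        for _ in range(2):                              # two 3x3 conv (32 filters) + BN
            x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
            x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=(pool, pool))(x)
        x = layers.Dropout(0.5)(x)                      # dropout after every two conv layers
    x = layers.Flatten()(x)
    # one sigmoid unit per discrete time interval: the conditional hazard h_j
    out = layers.Dense(n_intervals, activation="sigmoid")(x)
    return models.Model(inp, out)

model = build_cnn_survival()
model.summary()
```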

Loss function

To better fit the distribution of survival data, a modified loss function, proposed by Gensheimer et al. [11], was applied to the CNN architecture (Eq. 1).
$$ loss = -\sum_{i=1}^{d_j} \ln\left(h_j^i\right) - \sum_{i=d_j+1}^{r_j} \ln\left(1 - h_j^i\right) $$
(1)
In Equation 1, \( h_j^i \) is the hazard probability for individual i during time interval j. \( r_j \) denotes the individuals “in view” during interval j (i.e., those who survived this period) and \( d_j \) denotes the individuals who suffered a failure (e.g., death) during this interval [11]. As can be seen from Equation 1, the first term penalizes the model for assigning a low hazard to a failure (e.g., death), while the second term penalizes it for assigning a high hazard to a surviving case. The overall loss function is the sum of the losses over all time intervals [11].
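A possible TensorFlow implementation of Equation 1 is sketched below. The target encoding (one “survived the interval” indicator and one “failed in the interval” indicator per time interval) is an assumption about how the labels could be arranged, following the general discrete-time approach of Gensheimer et al. [11], not their exact code.

```python
import tensorflow as tf

def make_survival_loss(n_intervals=19):
    """Factory returning the negative log-likelihood of Eq. 1 (sketch)."""
    def loss_fn(y_true, y_pred):
        # y_pred: (batch, n_intervals) conditional hazards h_j in (0, 1)
        # y_true: (batch, 2 * n_intervals); first half = 1 if the patient was in
        #         view and survived interval j, second half = 1 if the failure
        #         (e.g., death) occurred in interval j
        survived = y_true[:, :n_intervals]
        failed = y_true[:, n_intervals:]
        eps = 1e-7
        # first term penalizes a low hazard for failures,
        # second term penalizes a high hazard for survivors
        log_lik = failed * tf.math.log(y_pred + eps) \
                  + survived * tf.math.log(1.0 - y_pred + eps)
        return -tf.reduce_sum(log_lik, axis=-1)
    return loss_fn
```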

Training process and transfer learning of CNN-survival

Training a CNN-based survival model requires fine-tuning a large number of parameters. Given this CNN architecture, there were 73,587 trainable parameters. As such, the larger dataset, Cohort 1, was used to pretrain the network. In Cohort 1, the 422 patients had 5479 slices containing manually contoured tumor regions. However, the region of interest (ROI) on some of the slices was too small (e.g., less than 250 pixels) to be fed as input to the CNN (shown in Fig. 3). To mitigate this, we ranked the slices by their ROI size and pixel intensity and picked the top 2500 slices, which ensured a minimum ROI size of 250 pixels.
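A sketch of this slice-ranking step is shown below; the input format (a list of 140 × 140 arrays with pixels outside the tumor contour set to 0) and the exact ranking rule are assumptions for illustration.

```python
import numpy as np

def rank_and_select_slices(masked_slices, n_keep=2500):
    """Rank tumor slices by ROI size (non-zero pixel count) and mean ROI intensity,
    then keep the top n_keep slices (illustrative version of the selection step)."""
    scores = []
    for s in masked_slices:
        roi_pixels = np.count_nonzero(s)
        mean_intensity = float(s[s > 0].mean()) if roi_pixels > 0 else 0.0
        scores.append((roi_pixels, mean_intensity))
    # sort descending: larger ROIs first, ties broken by mean intensity inside the ROI
    order = sorted(range(len(masked_slices)), key=lambda i: scores[i], reverse=True)
    return [masked_slices[i] for i in order[:n_keep]]
```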
These 2500 slices were fed into the proposed CNN model without augmentation. After training the initial model at a learning rate of 0.0001 for 50 epochs, all weights in the pretrained model were frozen except for those of the final dense layer. Next, the 68 patients of Cohort 2 were used to fine-tune the dense layer (containing 627 parameters) for 20 epochs with a learning rate of 0.0001, without augmentation. The fine-tuning was necessary since Cohort 1 and Cohort 2 contain CT images from two different types of cancer (lung and pancreatic cancer, respectively) with different survival patterns. After 20 epochs of fine-tuning on Cohort 2, the final model was tested on Cohort 3. The prognosis performance was measured by two metrics: the concordance index (CI) [36] and the index of prediction accuracy (IPA) [37]. CI is calculated using Equation 2.
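Expressed in code, the pretraining and fine-tuning procedure could look roughly as follows, reusing build_cnn_survival and make_survival_loss from the sketches above; the data arrays (x_nsclc, y_nsclc, x_pdac_train, and so on) and batch sizes are placeholders.

```python
from tensorflow.keras.optimizers import Adam

# --- Pretraining on Cohort 1 (NSCLC slices); data arrays are placeholders ---
model = build_cnn_survival(n_intervals=19)
model.compile(optimizer=Adam(learning_rate=1e-4), loss=make_survival_loss(19))
model.fit(x_nsclc, y_nsclc, epochs=50, batch_size=32)

# --- Transfer learning: freeze all layers except the final dense layer ---
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer=Adam(learning_rate=1e-4), loss=make_survival_loss(19))

# --- Fine-tuning on Cohort 2 (PDAC), then prediction on Cohort 3 (test set) ---
model.fit(x_pdac_train, y_pdac_train, epochs=20, batch_size=8)
hazards_cohort3 = model.predict(x_pdac_test)   # per-interval hazards for the test cohort
```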
$$ c = \frac{1}{|\mathcal{E}|} \sum_{T_i\ \mathrm{uncensored}} \; \sum_{T_j > T_i} \mathbf{1}_{f(x_i) < f(x_j)} $$
(2)
where the indicator function \( \mathbf{1}_{a<b} = 1 \) if a < b, and 0 otherwise. \( T_i \) is the survival time for subject i, \( |\mathcal{E}| \) is the number of edges in the order graph, and \( f(x_i) \) is the survival time predicted by model f for subject i. Under this formula, the concordance index (CI) is the probability of concordance between the predicted and the observed survival [36]. IPA is a recently proposed performance measure for binary and time-to-event outcomes that accounts for both discrimination and calibration and can also identify harmful models [37]. An IPA of 100% indicates a perfect model, and harmful models have IPA < 0 [37].
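For concreteness, a direct (if inefficient) implementation of Equation 2 over comparable pairs is sketched below; in practice a library routine (e.g., lifelines' concordance_index) would typically be used instead.

```python
import numpy as np

def concordance_index(times, predicted, events):
    """Concordance index following Eq. 2 (no tie handling).

    times:     observed survival times
    predicted: model-predicted survival times (higher = predicted to live longer)
    events:    1 if the event (death) was observed, 0 if censored
    """
    concordant, comparable = 0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                       # pairs are anchored on uncensored subjects
        for j in range(n):
            if times[j] > times[i]:        # subject j outlived subject i
                comparable += 1
                if predicted[i] < predicted[j]:
                    concordant += 1
    return concordant / comparable if comparable else np.nan
```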

Radiomics and CNN-survival features

In order to systematically compare the performance of CNN-Survival with CPH models and rule out confounding variables, we built two additional models using CPH. The first model (Model 1: Radiomics features + LASSO-CPH) is a traditional radiomics-based CPH model, which used 1428 2D radiomics features extracted from the manually contoured regions using the PyRadiomics library [2] (version 2.0). A LASSO-CPH [38] feature reduction method was used to find prognostic radiomic features in the training cohort (Cohort 2), which were then tested in the test cohort (Cohort 3). The second model (Model 2: Transfer learning features + LASSO-CPH) was trained using the 19 transfer learning features extracted from the last dense layer of the CNN-Survival model. Similar to Model 1, a LASSO-CPH method was used to select prognostic features in Cohort 2 and test them in Cohort 3. Under this setting, Model 1 and Model 2 had the same type of survival function (LASSO-CPH), and hence, differences in the input data explain the differences in performance. On the other hand, Model 2 had the same input data as our proposed CNN-Survival (Model 3), as both used features from the dense layer. Given that, the performance disparity between Model 2 and Model 3 can be explained by the different survival functions: Model 2 uses LASSO-CPH, whereas Model 3 uses the modified loss function to generate survival probabilities for a given time. The performance of all three models was validated in Cohort 3 (test set) at 18 months by concordance index (CI) and index of prediction accuracy (IPA) using R software (version 3.5.3) and the Survival, Survcomp, and riskRegression libraries [39–41].
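As an illustration of the LASSO-CPH step, the sketch below uses lifelines' penalized Cox regression with a pure L1 penalty; the penalty strength, data frame layout, and coefficient threshold are assumptions, and the original analysis was carried out in R with the cited packages.

```python
from lifelines import CoxPHFitter

# df_cohort2: one row per patient, radiomic (or transfer learning) feature columns
# plus "time" and "event" columns (hypothetical layout).
lasso_cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # l1_ratio=1.0 -> LASSO penalty
lasso_cph.fit(df_cohort2, duration_col="time", event_col="event")

# Features with non-zero coefficients form the prognostic signature.
selected = lasso_cph.params_[lasso_cph.params_.abs() > 1e-6].index.tolist()
print("Selected features:", selected)
```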

Results

In the traditional radiomics with LASSO-CPH approach (Model 1), an optimal CPH model was trained on Cohort 2 using four features (“gradient_gldm_SmallDependenceEmphasis”, “gradient_glszm_SmallAreaEmphasis”, “original_glszm_LargeAreaLowGrayLevelEmphasis”, and “wavelet.HLH_glszm_HighGrayLevelZoneEmphasis”). This model was tested on Cohort 3 for validation, with CI and IPA of 0.491 and −3.80%, respectively. Similarly, another LASSO-CPH model (Model 2) was trained using transfer learning features extracted from Cohort 2. Using three features selected by LASSO-CPH, this model yielded CI and IPA of 0.603 and 4.40%, respectively, when validated on Cohort 3. In contrast, the proposed CNN-Survival model (Model 3) achieved CI and IPA of 0.651 and 11.81%, respectively, in Cohort 3, outperforming the two CPH-based methods. Table 1 lists the results (IPA and CI) for all three survival models.
Table 1
Results (IPA and CI) of three survival models for resectable PDAC

Model                                             | IPA in Cohort 3 (test set) | CI in Cohort 3 (test set)
Model 1: Radiomics features + LASSO-CPH           | −3.80%                     | 0.491
Model 2: Transfer learning features + LASSO-CPH   | 4.40%                      | 0.603
Model 3: Proposed CNN-Survival                    | 11.81%                     | 0.651
As discussed above, CNN-Survival can depict the survival probability of a patient at a given time. The survival probability curves of two patients (one who survived versus one who died) in the test cohort are shown in Fig. 4 and Fig. 5.
For the patient who died within 1 year after surgery, the survival probability dropped sharply, while for the surviving patient, the survival probability stayed above 0.5.
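Under the discrete-time formulation, the survival curve for an individual patient is obtained from the predicted per-interval hazards as the cumulative product of (1 − hazard). A short sketch is given below; the interval breaks, the trained model, and the patient array are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

def survival_curve(hazards):
    """S(t_j) = prod_{k <= j} (1 - h_k) for discrete-time hazards."""
    return np.cumprod(1.0 - np.asarray(hazards))

# hypothetical: 19 interval end-points (months) and one patient's predicted hazards
breaks = np.linspace(0, 36, 19)
hazards_patient = model.predict(x_patient[np.newaxis, ...])[0]

surv = survival_curve(hazards_patient)
plt.step(breaks, surv, where="post")
plt.xlabel("Months after surgery")
plt.ylabel("Survival probability")
plt.show()
```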

Discussion

Using the proposed CNN-Survival model, the prognosis performance was improved, elevating IPA from −3.80% to 11.81% and CI from 0.491 to 0.651 compared to a traditional radiomics-based CPH model (Model 1). Even when transfer learning features were used to build a CPH model (Model 2), the proposed CNN-Survival model was still superior (IPA: 11.81% vs. 4.40%, CI: 0.651 vs. 0.603). These comparisons illustrate that transfer learning features outperform radiomics features (Model 2 vs. Model 1) and that the proposed CNN-Survival model using a modified loss function outperforms both CPH-based models (Model 3 vs. Models 1 and 2). Deep learning networks provide flexibility in modifying the dimension of the feature space and the loss function, enabling us to extract disease-specific features and build more precise models. Using a CNN-based survival model, we showed that, with the help of transfer learning, deep learning architectures can outperform the traditional pipeline in a typical small-sample-size setting when modeling survival for resectable PDAC patients. The proposed transfer learning-based CNN-Survival model has significant potential to enable researchers to pretrain a model using images from common cancers with larger datasets and transfer this model to target rare cancers. The transfer learning-based CNN-Survival model mitigates the need for a large sample size, allowing the survival model to be applied to a wide range of cancer sites.
The proposed CNN-Survival model provides better prognostic performance compared to the traditional radiomics analytic pipeline (IPA 11.81% versus −3.80%). Although there were no prior publications reporting IPA for PDAC biomarkers, the IPA of our proposed CNN-Survival is comparable to the typical IPA for other survival models [37]. From the feature extraction perspective, the parameters of a CNN can be updated during backpropagation, allowing it to extract a large number of features that are associated with the target outcome. For feature analysis, the CNN-Survival model avoids the multiple testing problem, which is a significant issue in the conventional radiomics analytic pipeline. Finally, with the modified loss function, the CNN-Survival model does not rely on the linear assumption, making it suitable for more real-world scenarios. These advantages contributed to the improved performance of the proposed model. Compared to the transfer learning features-based CPH model (Model 2), which used the same features as the CNN-Survival model (Model 3), the proposed CNN-Survival had a higher IPA (11.81% versus 4.40%). Given that these two models had the same input data, this result indicates that the loss function of the CNN-Survival model outperforms the traditional linear CPH commonly used in radiomics studies.
In this research, due to the small sample size of the PDAC cohorts, the proposed CNN-Survival model was not optimal. We used CT images from 68 patients to fine-tune the pretrained CNN-Survival model and tested it on 30 patients of another independent cohort. Although, through transfer learning, most of the parameters were trained on the pretraining cohort, 627 parameters in the dense layer still needed to be modified through fine-tuning. Thus, if a larger dataset were available for fine-tuning, performance might be further improved. Additionally, the pretraining dataset consists of CT images from Non-Small Cell Lung Cancer (NSCLC) patients. Although it is the largest open source dataset we could find, NSCLC has different biological traits and survival patterns compared to PDAC. In future research, using a more similar pretraining domain and a larger fine-tuning cohort, further improvement may be achieved. A proper clinical validation of the proposed model is still required, which is beyond the scope of this work.
In this study, using CT images from three independent cohorts, we validated the proposed CNN-Survival model with the modified loss function proposed by Gensheimer et al. [11]. We showed that the proposed CNN-Survival model outperformed, and avoided the limitations of, the conventional radiomics-based CPH model in a real-world small-sample-size setting. Further validation of this loss function can be performed for other types of diseases through transfer learning. The proposed CNN-Survival model has the potential to become a standardized survival model in the quantitative medical imaging research field.

Conclusions

The proposed CNN-based survival model outperforms traditional CPH-based radiomics and transfer learning pipelines in PDAC prognosis. This approach offers a better fit for survival patterns based on CT images and overcomes the limitations of conventional survival models.

Acknowledgements

We sincerely appreciate all the patients who participated in this study.
Cohort 1 is publicly available and can be downloaded from: https://wiki.cancerimagingarchive.net/. For Cohort 2, the University Health Network Research Ethics Boards approved the retrospective study and informed consent was obtained. For Cohort 3, the Sunnybrook Health Sciences Centre Research Ethics Boards approved the retrospective study and waived the requirement for informed consent.

Competing interests

FK is an associate editor of BMC Medical Imaging.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Khalvati F, Zhang Y, Wong A, Haider MA. Radiomics. In: Encyclopedia of Biomedical Engineering, vol. 2; 2019. p. 597–603.
2. Van Griethuysen JJM, et al. Computational radiomics system to decode the radiographic phenotype. Cancer Res. 2017;77:e104–7.
4. George B, Seals S, Aban I. Survival analysis and regression models. J Nucl Cardiol. 2014;21:686–94.
5. Keek SA, Leijenaar RT, Jochems A, Woodruff HC. A review on radiomics and the future of theranostics for patient selection in precision medicine. Br J Radiol. 2018;91:20170926.
7. Aerts HJ, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun. 2014;5:4006.
8. Aerts HJ. The potential of radiomic-based phenotyping in precision medicine. JAMA Oncol. 2016;2:1636.
9. Haider MA, Vosough A, Khalvati F, Kiss A, Ganeshan B, Bjarnason GA. CT texture analysis: a potential tool for prediction of survival in patients with metastatic clear cell carcinoma treated with sunitinib. Cancer Imaging. 2017;17(1). https://doi.org/10.1186/s40644-017-0106-8.
11. Gensheimer MF, Narasimhan B. A scalable discrete-time survival model for neural networks. PeerJ. 2019;7:e6257.
12. Ching T, Zhu X, Garmire LX. Cox-nnet: an artificial neural network method for prognosis prediction of high-throughput omics data. PLoS Comput Biol. 2018;14:e1006076.
13. Cox DR. Regression models and life-tables. J Royal Stat Soc. 1972;34:187–220.
17. Isensee F, Kickingereder P, Wick W, Bendszus M, Maier-Hein KH. Brain tumor segmentation and radiomics survival prediction: contribution to the BRATS 2017 challenge. In: International MICCAI Brainlesion Workshop. Cham: Springer; 2017. p. 287–97.
18. Hawkins S, et al. Predicting malignant nodules from screening CT scans. J Thorac Oncol. 2016;11:2120–8.
19. Chakraborty J, et al. CT radiomics to predict high-risk intraductal papillary mucinous neoplasms of the pancreas. Med Phys. 2018;45:5019–29.
20. Cozzi L, et al. Computed tomography based radiomic signature as predictive of survival and local control after stereotactic body radiation therapy in pancreatic carcinoma. PLoS One. 2019;14:e0210758.
21. Lao J, et al. A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Sci Rep. 2017;7:10353.
22. Breiman L. Random Forests; 2001. p. 1–33.
23. Hearst MA, Dumais ST, Osman E, Platt J, Scholkopf B. Support vector machines. IEEE Intell Syst. 1998;13:18–28.
25. Foucher ED, et al. Pancreatic ductal adenocarcinoma: a strong imbalance of good and bad immunological cops in the tumor microenvironment. Front Immunol. 2018;9:1044.
27. Mariani L, et al. Prognostic factors for metachronous contralateral breast cancer: a comparison of the linear Cox regression model and its artificial neural network extension. Breast Cancer Res Treat. 1997;44:167–78.
28. Katzman JL, et al. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network. BMC Med Res Methodol. 2018;18:24.
29. Xiang A, Lapuerta P, Ryutov A, Buckley J, Azen S. Comparison of the performance of neural network methods and Cox regression for censored survival data. Comput Stat Data Anal. 2000;34:243–57.
30. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44.
31. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems; 2012. p. 1097–105.
32. Shin H-C, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016;35:1285–98.
33. Tajbakhsh N, et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2017;35:1299–312.
34. Yasaka K, Akai H, Abe O, Kiryu S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: a preliminary study. Radiology. 2018;286:887–96.
35. Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights Imaging. 2018;9:611–29.
36. Raykar VC, Steck H, Krishnapuram B, Dehing-Oberije C, Lambin P. On ranking in survival analysis: bounds on the concordance index. In: Advances in Neural Information Processing Systems; 2008. p. 1209–16.
38. Tibshirani R. Regression shrinkage and selection via the Lasso. J R Stat Soc Ser B. 1996;58:267–88.
39. Therneau TM, Grambsch PM. Modeling survival data: extending the Cox model. New York: Springer; 2000. ISBN 0-387-98784-3.
Metadata
Title: CNN-based survival model for pancreatic ductal adenocarcinoma in medical imaging
Publication date: 01.12.2020
Published in: BMC Medical Imaging / Issue 1/2020
Electronic ISSN: 1471-2342
DOI: https://doi.org/10.1186/s12880-020-0418-1
