
Developing a deep learning model for predicting ovarian cancer in Ovarian-Adnexal Reporting and Data System Ultrasound (O-RADS US) Category 4 lesions: A multicenter study

  • Open Access
  • 01.07.2024
  • Research
Published in: Journal of Cancer Research and Clinical Oncology

Abstract

Purpose

To develop a deep learning (DL) model for differentiating between benign and malignant ovarian tumors of Ovarian-Adnexal Reporting and Data System Ultrasound (O-RADS US) Category 4 lesions, and validate its diagnostic performance.

Methods

We retrospectively analyzed 1619 US images obtained from three centers between December 2014 and March 2023. DeepLabV3 and YOLOv8 were jointly used to segment, classify, and detect ovarian tumors. Precision, recall, and the area under the receiver operating characteristic curve (AUC) were employed to assess model performance.

Results

A total of 519 patients (269 benign and 250 malignant masses) were enrolled in the study. The numbers of women included in the training, validation, and test cohorts were 426, 46, and 47, respectively. The detection model exhibited an average precision of 98.68% (95% CI: 0.95–0.99) for benign masses and 96.23% (95% CI: 0.92–0.98) for malignant masses. Moreover, the AUC was 0.96 (95% CI: 0.94–0.97) in the training set, 0.93 (95% CI: 0.89–0.94) in the validation set, and 0.95 (95% CI: 0.91–0.96) in the test set. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value for the training set were 0.943, 0.957, 0.951, 0.966, and 0.936, respectively, whereas those for the validation set were 0.905, 0.935, 0.927, 0.919, and 0.931, respectively. In addition, the sensitivity, specificity, accuracy, positive predictive value, and negative predictive value for the test set were 0.925, 0.955, 0.941, 0.956, and 0.927, respectively.

Conclusion

The constructed DL model exhibited high diagnostic performance in distinguishing benign and malignant ovarian tumors in O-RADS US category 4 lesions.
Wenting Xie and Wenjie Lin contributed equally to this work and share first authorship.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Abbreviations

O-RADS: Ovarian-Adnexal Reporting and Data System
US: Ultrasonography
DL: Deep learning
AUC: Area under the receiver operating characteristic curve

Introduction

Ovarian cancer is one of the most common gynecological malignancies and the fifth leading cause of cancer-related deaths in women worldwide (Siegel et al. 2022; Zheng et al. 2020). The disease lacks typical symptoms, which makes early screening and timely diagnosis difficult. Consequently, most patients with ovarian cancer are diagnosed at an advanced stage (Terp et al. 2023). Patients with advanced ovarian cancer are often treated with debulking surgery combined with platinum and paclitaxel chemotherapy, yet these treatment modalities remain associated with poor survival and high recurrence rates (Konstantinopoulos and Matulonis 2023; Wheeler et al. 2023). Currently, the main diagnostic method for ovarian-adnexal lesions is pelvic imaging (Sadowski et al. 2023). Ultrasonography (US) is the most commonly used imaging modality for assessing ovarian-adnexal lesions owing to its wide availability, non-invasive nature, and affordability (Wang et al. 2021). However, given the pathological diversity and morphological complexity of these lesions, accurate preoperative diagnosis of ovarian-adnexal masses by conventional US alone has been suboptimal (Shi et al. 2023). To date, several US-based models have been developed to assess whether adnexal masses are benign or malignant, such as the International Ovarian Tumor Analysis (IOTA) Simple Rules, the Assessment of Different Neoplasia in the Adnexa (ADNEX) model, and the Ovarian-Adnexal Reporting and Data System (O-RADS) (Dang Thi Minh et al. 2024; Pelayo et al. 2023; Pozzati et al. 2023).
In 2020, the American College of Radiology proposed the O-RADS risk stratification and management system, which provides a detailed description of each category (Andreotti et al. 2018, 2020; Yang et al. 2023). The system comprises six risk categories: O-RADS category 0, defined as an incomplete evaluation; O-RADS category 1, the physiologic category; O-RADS category 2, the almost certainly benign category with <1% probability of malignancy; O-RADS category 3, lesions with a 1% to <10% risk of malignancy; O-RADS category 4, lesions with a 10% to <50% risk of malignancy; and O-RADS category 5, lesions with a high risk of malignancy (≥50%) (Vara et al. 2022). The accuracy of gynecologic ultrasonography depends largely on the sonologist's subjective assessment: correct classification of adnexal lesions is more frequent in expert ultrasound examinations than among less experienced doctors (Wu et al. 2023). Currently, there are no effective management strategies for O-RADS US 4 lesions, in which the risk of malignancy varies considerably and some lesions prove to be benign. If benign lesions could be accurately diagnosed, patients could avoid unnecessary or extensive surgery. A simple description of the ovarian tumor by a sonologist may therefore not suffice. This calls for appropriate and advanced approaches for sub-stratifying O-RADS US 4 lesions into benign and malignant subgroups.
Artificial intelligence (AI) has emerged as a significant tool with diverse medical applications (Wang et al. 2024). Deep learning (DL) can quantitatively analyze medical images and has been applied in the field of oncology (Taddese et al. 2024). Several studies have demonstrated that DL can improve diagnosis and predict treatment responses and progression-free survival in patients with ovarian tumors (Arezzo et al. 2022; Boehm et al. 2022; Na et al. 2024; Sadeghi et al. 2024; Yao et al. 2021). Compared with traditional imaging diagnosis by radiologists, DL methods can improve accuracy and reduce bias in diagnostic results (Chen et al. 2022). However, few studies have examined the diagnostic performance of AI for O-RADS 4 lesions.
In this multicenter study, we developed and tested a US image-based DL model for distinguishing between benign and malignant ovarian tumors among O-RADS 4 lesions and evaluated its diagnostic performance.

Materials and methods

Study population and datasets

This retrospective study was approved by the Institutional Review Board of the Second Affiliated Hospital of Fujian Medical University (No. 636, 2023). The analyzed datasets were obtained from three hospitals from December 2014 to March 2023. The three centers were coded as A, the Second Affiliated Hospital of Fujian Medical University; B, Fujian Cancer Hospital; and C, Nanping First Hospital Affiliated to Fujian Medical University. Consecutive patients who met the following criteria were included: (1) underwent diagnostic pelvic US prior to gynecological surgery; (2) were assigned O-RADS category 4 by US radiologists according to the O-RADS lexicon white paper; (3) had postoperative pathological confirmation of an ovarian tumor; and (4) received no radiotherapy or chemotherapy before the US examination. Patients with poor image quality or without histopathology were excluded. A total of 519 women met the inclusion criteria and were enrolled in this study (Fig. 1). Center C served as the validation set; 10% of the cases from Centers A and B served as the test set, and the remaining 90% as the training set.
Fig. 1
The flowchart of patient enrollment, inclusion and exclusion criteria, and partitioning of datasets. Center A, the Second Affiliated Hospital of Fujian Medical University; Center B, Fujian Cancer Hospital; Center C, Nanping First Hospital Affiliated to Fujian Medical University
For each patient, the US image showing the target mass at its largest diameter or with the most complex ultrasound morphology was selected for further analysis. The US images were acquired using ultrasound devices equipped with abdominal probes spanning frequencies from 1 to 6 MHz and transvaginal probes covering frequencies from 2 to 9 MHz. All adnexal lesions were confirmed by surgical pathology. Furthermore, pertinent clinical data, such as age at diagnosis, lesion size, CA125 level, and menopausal status, were documented for subsequent analysis.
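The cohort partition described above (Center C as validation, a random 10% of Centers A and B as test, the rest as training) can be sketched as follows. The `(patient_id, center)` tuple format and the `split_cohorts` helper are illustrative assumptions, not the authors' actual code.

```python
import random

def split_cohorts(patients, seed=42):
    """Partition patients into cohorts: Center C forms the validation
    set, a random 10% of Centers A and B the test set, and the
    remaining 90% the training set. `patients` is a list of
    (patient_id, center) tuples -- an illustrative format."""
    rng = random.Random(seed)
    validation = [pid for pid, center in patients if center == "C"]
    pool = [pid for pid, center in patients if center in ("A", "B")]
    rng.shuffle(pool)                    # randomize before the 10/90 split
    n_test = round(len(pool) * 0.10)
    test, train = pool[:n_test], pool[n_test:]
    return train, validation, test
```

With the reported center sizes (156, 317, and 46 patients), this split yields cohorts of 426, 46, and 47 patients, matching the numbers reported in the Results.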

Algorithm for analysis

Image annotation

First, we used the image annotation tool LabelImg (https://github.com/HumanSignal/labelImg) to draw bounding boxes on all ovarian tumor images, marking the location and extent of each tumor. The boundaries of each lesion area were drawn by two radiologists (with 6 and 3 years of experience) and were manually delineated on the axes of the 2D images. The tumor borders were outlined precisely, with each bounding box adjusted to fit the morphology of the tumor. In addition to the bounding box, each lesion was assigned a corresponding category label, benign or malignant.
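As an illustration of the label format such a workflow produces, LabelImg's YOLO export stores one normalized `class x_center y_center width height` line per box. The helper below (hypothetical, not from the paper) converts a pixel-space box to that format:

```python
def bbox_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h, cls):
    """Convert a pixel-space bounding box to a YOLO label line:
    'class x_center y_center width height', with all coordinates
    normalized to [0, 1] by the image dimensions."""
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{cls} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"
```

For example, a 200 × 100 pixel box at (100, 100) in a 400 × 400 image becomes a line with center (0.5, 0.375) and size (0.5, 0.25).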

Feature extraction

The dataset was preprocessed, which involved steps such as image standardization and data augmentation. Images were first resized to the same resolution. We then randomly applied augmentation techniques such as flipping, cropping, and the addition of noise to generate richer training data and improve the generalization of the model. To normalize the range of values of each phenotypic feature, feature normalization was employed to scale these features to the [0, 1] range. Next, segmentation feature extraction was conducted using DeepLabV3 (Chen et al. 2017). DeepLabV3 uses a convolutional neural network architecture with atrous (dilated) convolutions to capture the regional features of objects. Within the encoder-decoder architecture, the encoder module employs ResNet-101 (Jusman 2023) as its backbone network for extracting low-level features; the decoder module then uses atrous convolution to augment edge features, followed by up-sampling to reconstruct the segmentation output. In addition, we used the YOLOv8 network (Terven et al. 2023) to extract the target detection features. YOLOv8 adopts Darknet-53 (Azim 2022) as its backbone network, improving the inter-channel correlation of the feature channels while preserving fine edge information thanks to its channel attention module and short connections. The output feature maps were post-processed to predict the box coordinates and the classification.
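A minimal sketch of the augmentation and normalization steps described above, assuming 2-D grayscale arrays; the specific crop fraction and noise level are illustrative choices, not values reported by the authors.

```python
import numpy as np

def normalize01(img):
    """Min-max scale pixel values into the [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def augment(img, rng):
    """Randomly apply the augmentations named in the text to a 2-D
    grayscale image: horizontal flip, a corner crop padded back to the
    original size, and additive Gaussian noise (illustrative magnitudes)."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:                    # random horizontal flip
        out = out[:, ::-1]
    if rng.random() < 0.5:                    # random crop, padded back with edge values
        h, w = out.shape
        dy = int(rng.integers(0, h // 10 + 1))
        dx = int(rng.integers(0, w // 10 + 1))
        out = np.pad(out[dy:, dx:], ((dy, 0), (dx, 0)), mode="edge")
    if rng.random() < 0.5:                    # additive Gaussian noise
        out = out + rng.normal(0.0, 5.0, out.shape)
    return np.clip(out, 0.0, 255.0)
```

Because the crop is padded back, every augmented image keeps the input resolution, which the downstream networks require.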

Machine learning model development

Furthermore, we propose a novel dual-model architecture (Fig. 2) that leverages the complementary abilities of DeepLabV3 and YOLOv8. In our approach, DeepLabV3 first performs pixel-accurate segmentation of ovarian tumor regions from US images. ResNet-101 pretrained on ImageNet was employed as the encoder backbone in DeepLabV3, and atrous separable convolutions with varying rates were used to explicitly capture multi-scale information. A lightweight decoder module restores fine spatial details and generates segmentation maps at the full image resolution. These segmented outputs are subsequently fed into YOLOv8 for the downstream classification and detection tasks. Notably, YOLOv8 features a modified Darknet-53 backbone, consisting of residual and convolutional blocks, for feature extraction. YOLOv8 classifies each tumor region as benign or malignant and generates bounding boxes around individual tumor objects. This enables joint analysis of segmentation, classification, and localization within a single framework. To achieve better outcomes, we first trained DeepLabV3 on our US image datasets until convergence. The model outputs were then fed to YOLOv8 during the training phase, and both models were fine-tuned end-to-end through this sequential process. At inference, a new US image is input directly to obtain segmented tumor regions, which are then classified and localized by YOLOv8.
Fig. 2
The overview of the proposed framework. Our approach is divided into two major phases: (1) The segmentation stage in which the DeepLabV3 model is used to obtain the lesion segmentation mask, which is subsequently used to mask the input image. (2) In the classifier phase, YOLOv8 is employed to disentangle the feature maps of a CNN into human-interpretable concepts
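The two-stage data flow described above can be expressed schematically as below, with `segmenter` and `detector` as stand-ins for the trained DeepLabV3 and YOLOv8 models; the callables here are toy placeholders to show the flow, not the real networks.

```python
import numpy as np

def dual_model_inference(image, segmenter, detector):
    """Schematic two-stage pipeline: the segmenter (DeepLabV3 stand-in)
    produces a binary tumor mask, background pixels are zeroed out, and
    the detector (YOLOv8 stand-in) classifies and localizes the masked
    region."""
    mask = segmenter(image)              # stage 1: pixel-level segmentation
    masked = np.where(mask, image, 0.0)  # keep only the segmented region
    return detector(masked)              # stage 2: detection + classification
```

In the real system the two callables would wrap the trained networks from their respective frameworks; the masking step is what couples the segmentation output to the detection input.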

Model training

A pipeline was established to extract tumor ROIs at 512 × 512 pixels from the annotated masks while maintaining the original aspect ratios. To address class imbalance, we oversampled the minority malignant class during training. We trained DeepLabV3 and YOLOv8 sequentially in an end-to-end fashion on our dataset through transfer learning. DeepLabV3 was first pretrained on PASCAL VOC for semantic segmentation and then fine-tuned on our ovarian tumor masks for 50 epochs, using stochastic gradient descent with polynomial learning-rate decay, a batch size of 4, and the Dice coefficient loss function. After segmentation, YOLOv8 was initialized from weights pretrained on COCO for object detection and then optimized on the extracted DeepLabV3 tumor regions for 100 epochs. The proposed dual-model architecture and training procedure present an innovative approach for the joint analysis of US images.
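Two of the training ingredients named above, polynomial learning-rate decay and the Dice coefficient loss, can be written compactly. The decay power of 0.9 is a common default and an assumption here, not a value reported by the authors.

```python
import numpy as np

def poly_lr(base_lr, step, max_steps, power=0.9):
    """Polynomial learning-rate decay: lr = base_lr * (1 - step/max_steps)^power."""
    return base_lr * (1.0 - step / max_steps) ** power

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss (1 - Dice coefficient) on probability maps;
    eps guards against division by zero on empty masks."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
```

The Dice loss directly optimizes region overlap, which is why it is a frequent choice for segmentation targets with small foreground areas such as tumor masks.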

Statistical analysis

All statistical analyses were conducted using SPSS 20.0 (IBM, Armonk, NY, USA). Categorical data were analyzed by the chi-squared test and expressed as frequency and percentage. Continuous data were analyzed by Student's t test or the Mann-Whitney U test and expressed as mean and standard deviation or median and interquartile range. ROC curve analysis was used to evaluate the performance of the proposed dual-model method; the ROC curves were constructed in Python (version 3.8.0). p < 0.05 was considered statistically significant.
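The diagnostic metrics reported in the Results follow directly from the four cells of a binary confusion matrix; a small helper, written here purely for illustration:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, PPV, and NPV from a binary
    confusion matrix (malignant taken as the positive class)."""
    return {
        "sensitivity": tp / (tp + fn),                # true positive rate
        "specificity": tn / (tn + fp),                # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
    }
```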

Results

Patient characteristics

A total of 1619 images from 519 patients who underwent US examination and surgery were enrolled from the three hospitals. The histological profile of the enrolled patients is presented in Table 1. The numbers of women included in the training, validation, and test cohorts were 426, 46, and 47, respectively.
Table 1
Pathology results of 519 adnexal masses that were assigned O-RADS US category 4
Values are presented as histopathological finding: No. of patients (%).

Center A (n = 156)
 Benign: benign cyst 5 (3.2); endometriosis cyst 6 (3.8); teratoma 8 (5.1); cystadenoma 24 (15.4); struma ovarii 1 (0.6); inflammation 5 (3.2); cystadenofibroma 4 (2.6); fibroma 6 (3.8); theca-fibroma 1 (0.6)
 Malignant: borderline 46 (29.5); high grade 33 (21.2); immature teratoma 5 (3.2); clear cell carcinoma 5 (3.2); endometrioid cancer 3 (2.0); granular cell tumor 3 (2.0); Sertoli-Leydig cell tumor 1 (0.6)

Center B (n = 317)
 Benign: benign cyst 13 (4.1); endometriosis cyst 21 (6.6); teratoma 17 (5.4); cystadenofibroma 7 (2.2); cystadenoma 61 (19.2); struma ovarii 10 (3.2); Brenner tumor 2 (0.6); inflammation 5 (1.6); fibroma 4 (1.3); theca-fibroma 30 (9.5); microcystic stromal tumour 1 (0.3)
 Malignant: borderline 71 (22.4); high grade 32 (10.1); low grade 2 (0.6); immature teratoma 3 (1.0); clear cell carcinoma 14 (4.4); endometrioid cancer 5 (1.6); granular cell tumor 9 (2.8); Sertoli-Leydig cell tumor 1 (0.3); metastasis 8 (2.5); malignant mixed Mullerian tumour 1 (0.3)

Center C (n = 46)
 Benign: serous cyst 2 (4.3); endometriosis cyst 6 (13.0); teratoma 5 (10.9); cystadenoma 22 (47.8); struma ovarii 1 (2.2); fibroma 1 (2.2); sclerosing stromal tumor 1 (2.2)
 Malignant: borderline 6 (13.0); high grade 1 (2.2); metastasis 1 (2.2)
Center A, the Second Affiliated Hospital of Fujian Medical University; Center B, Fujian Cancer Hospital; Center C, Nanping First Hospital Affiliated to Fujian Medical University
The clinical characteristics and laboratory results of the patients are shown in Table 2. Overall, 269 (51.8%) lesions were benign and 250 (48.2%) were malignant. Among the study variables, the largest diameter of the lesion (median 108 (IQR 75.5–142.6) vs. 120.5 (77.7–171.7) mm, p = 0.024) differed significantly between the benign and malignant groups. Malignant tumors were significantly more prevalent among women with elevated CA125 levels (p < 0.001). Nevertheless, no statistically significant differences were detected in age at diagnosis, menopausal status, or tumor location between the benign and malignant groups.
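The categorical comparisons here rely on the chi-squared test; for a 2 × 2 contingency table such as the CA125 counts, the Pearson statistic can be computed directly with a stdlib-only sketch:

```python
def chi2_statistic(table):
    """Pearson chi-squared statistic for a 2 x 2 contingency table
    given as [[a, b], [c, d]] (rows = categories, columns = groups)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # under independence
            chi2 += (observed - expected) ** 2 / expected
    return chi2
```

Applied to the CA125 counts from Table 2 ([[176, 105], [93, 145]]), the statistic is far above the 1-df critical value of 10.83 for p = 0.001, consistent with the highly significant difference reported.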
Table 2
Comparison of clinical features between benign and malignant O-RADS US 4 adnexal lesions

Characteristic                    Benign (n = 269)    Malignant (n = 250)   p-value
Age at diagnosis                  47.0 ± 15.9         47.2 ± 14.5           0.866
Largest diameter of lesion (mm)   108 (75.5–142.6)    120.5 (77.7–171.7)    0.024
Menopausal status                                                           0.631
  Premenopausal                   145 (53.9)          140 (56.0)
  Postmenopausal                  124 (46.1)          110 (44.0)
Location                                                                    0.38
  Left                            126 (46.8)          118 (47.2)
  Right                           119 (44.2)          101 (40.4)
  Bilateral                       24 (9.0)            31 (12.4)
Serum CA125 level (U/ml)                                                    <0.001
  ≤ 35                            176 (65.4)          105 (42.0)
  > 35                            93 (34.6)           145 (58.0)

Model performance

The performance of the detection DL model is shown in Fig. 3. The average precision of the precision-recall curve was 98.68% (95% CI: 0.95–0.99) for benign masses (Fig. 3A) and 96.23% (95% CI: 0.92–0.98) for malignant masses (Fig. 3B). When applied to US imaging of adnexal masses, the model showed the potential to identify masses in both the benign and malignant categories (Fig. 4).
Fig. 3
The performance of the adnexal mass detection model. (A) The precision-recall curve of the detection model evaluating the performance for benign masses, indicating an average precision (AP) of 0.98 (95% CI: 0.95–0.99). (B) The precision-recall curve of the detection model evaluating the performance for malignant masses, displaying an AP of 0.96 (95% CI: 0.92–0.98)
Fig. 4
Images with O-RADS US category 4 masses detected using the model. (A) High-grade serous carcinoma in a 70-year-old female patient, (B) Borderline serous tumor in a 57-year-old female patient, (C) Brenner tumor (borderline) in a 69-year-old female patient, (D) Endometriosis cyst in a 22-year-old female patient, (E) Teratoma in a 22-year-old female patient, (F) Mucinous cystadenoma in a 47-year-old female patient. The red and blue bounding boxes within each image indicate the adnexal masses delineated by the detection model
The DL model discriminated well between the benign and malignant groups, with an AUC of 0.96 (95% CI: 0.94–0.97) in the training set, 0.93 (95% CI: 0.89–0.94) in the validation set, and 0.95 (95% CI: 0.91–0.96) in the test set (Fig. 5). Further analysis indicated that the sensitivity, specificity, accuracy, positive predictive value, and negative predictive value in the training set were 0.943, 0.957, 0.951, 0.966, and 0.936, respectively, whereas those for the validation set were 0.905, 0.935, 0.927, 0.919, and 0.931, respectively. In addition, the sensitivity, specificity, accuracy, positive predictive value, and negative predictive value for the test set were 0.925, 0.955, 0.941, 0.956, and 0.927, respectively (Table 3).
Fig. 5
ROC curves of the classification model for ovarian cancer classification
Table 3
Diagnostic performance of the deep learning classification model in training, validation, and test cohorts
Set              AUC                        Sensitivity   Specificity   Accuracy   PPV     NPV
Training set     0.96 (95% CI: 0.94–0.97)   0.943         0.957         0.951      0.966   0.936
Validation set   0.93 (95% CI: 0.89–0.94)   0.905         0.935         0.927      0.919   0.931
Test set         0.95 (95% CI: 0.91–0.96)   0.925         0.955         0.941      0.956   0.927

PPV, positive predictive value; NPV, negative predictive value

Discussion

The substantial similarity in ultrasonographic characteristics between malignant and benign ovarian lesions poses a diagnostic challenge for sonologists. Nevertheless, prior research has demonstrated that O-RADS US can accurately detect ovarian malignancies, indicating outstanding diagnostic accuracy (Hack et al. 2022). However, the risk of malignancy in O-RADS 4 ranges from 10% to <50%, implying that some benign lesions are classified in this category. Correct classification of an adnexal lesion is therefore important for improving personalized management. Xu et al. incorporated the qualitative parameters of contrast-enhanced ultrasound (CEUS) to reassign the O-RADS category, and the overall sensitivity increased to 90.2% (Xu et al. 2023). In this study, we evaluated the performance of a DL model in classifying O-RADS US category 4 lesions as benign or malignant, and the model showed acceptable diagnostic performance with an AUC of 0.95.
We deployed a DL method for distinguishing O-RADS 4 lesions. DeepLabV3 is well-suited for precise semantic segmentation owing to its powerful encoder-decoder design. Chen et al. developed DeepLab models to facilitate the segmentation of images at fine scales (Chen et al. 2018). However, semantic segmentation alone provides neither classification nor localization of tumors. Object detection models such as YOLO have achieved great success on natural images. In a previous study, Xiao et al. employed YOLOv3 to identify lung cancer in CT scans (Xiao et al. 2023). Meanwhile, YOLOv8 has been shown to rapidly classify and detect objects owing to its robust architecture (Redmon 2018). In this study, we leveraged the strengths of these models to conduct an in-depth analysis of US images. Our innovative dual-model architecture merges the features of DeepLabV3 and YOLOv8, enabling concurrent segmentation, classification, and detection of ovarian tumors: DeepLabV3 first segments tumor regions, whose outputs are then classified and localized by YOLOv8. This novel dual-model architecture enables efficient and accurate image analysis.
Artificial intelligence has been demonstrated to improve diagnostic accuracy for ovarian tumors. One study found that deep neural networks analyzing ultrasound images can discriminate between benign and malignant ovarian masses with diagnostic accuracy comparable to that of expert examiners (Christiansen et al. 2021). A recent multicenter study developed a deep convolutional neural network (DCNN) model for detecting ovarian cancer with high performance (Gao et al. 2022). By integrating DeepLabV3 and YOLOv8, our DL system achieved a higher AUC. However, we cannot directly compare our results with those of previous studies, because our focus was on a specific population subset and the ovarian tumor datasets utilized differ from those in previous investigations.
In addition, we investigated the clinical features of benign and malignant O-RADS 4 US lesions. CA125 levels were higher in malignant masses than in benign masses (p < 0.05), which is consistent with recent literature (Wong et al. 2023). Not surprisingly, the largest diameter of the lesion was significantly greater in malignant masses than in benign masses (p = 0.024). This finding may be ascribed to the fact that ovarian tumors often lack typical symptoms and are hence detected at an advanced stage.
This study has several limitations that should be acknowledged. First, this was a retrospective investigation, which may carry inherent biases; a prospective study should be performed to improve the DL model's reliability. Second, we did not include clinical factors in the DL model. Third, we used only one DL method to construct the model; in future work, we plan to explore and compare the performance of various DL methods. Moreover, we did not assess the diagnostic accuracy of our DL model against US-based models such as the IOTA ADNEX model, so a comparative analysis of diagnostic performance between our model and the IOTA ADNEX model will be imperative.

Conclusions

In conclusion, this study demonstrates that the US image-based DL model may be used as a tool for distinguishing between benign and malignant ovarian tumors of O-RADS 4 lesions.

Declarations

Competing interests

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Title
Developing a deep learning model for predicting ovarian cancer in Ovarian-Adnexal Reporting and Data System Ultrasound (O-RADS US) Category 4 lesions: A multicenter study
Authors
Wenting Xie, Wenjie Lin, Ping Li, Hongwei Lai, Zhilan Wang, Peizhong Liu, Yijun Huang, Yao Liu, Lina Tang, Guorong Lyu
Publication date
01.07.2024
Publisher
Springer Berlin Heidelberg
Published in
Journal of Cancer Research and Clinical Oncology / Issue 7/2024
Print ISSN: 0171-5216
Electronic ISSN: 1432-1335
DOI: https://doi.org/10.1007/s00432-024-05872-6
References

Andreotti RF, Timmerman D, Benacerraf BR, Bennett GL, Bourne T, Brown DL, Coleman BG, Frates MC, Froyman W, Goldstein SR et al (2018) Ovarian-Adnexal Reporting Lexicon for Ultrasound: a White Paper of the ACR Ovarian-Adnexal Reporting and Data System Committee. J Am Coll Radiol 15:1415–1429. https://doi.org/10.1016/j.jacr.2018.07.004
Andreotti RF, Timmerman D, Strachowski LM, Froyman W, Benacerraf BR, Bennett GL, Bourne T, Brown DL, Coleman BG, Frates MC et al (2020) O-RADS US risk stratification and management system: a consensus guideline from the ACR Ovarian-Adnexal Reporting and Data System Committee. Radiology 294:168–185. https://doi.org/10.1148/radiol.2019191150
Arezzo F, Cormio G, La Forgia D, Santarsiero CM, Mongelli M, Lombardi C, Cazzato G, Cicinelli E, Loizzi V (2022) A machine learning approach applied to gynecological ultrasound to predict progression-free survival in ovarian cancer patients. Arch Gynecol Obstet 306:2143–2154. https://doi.org/10.1007/s00404-022-06578-1
Azim T (2022) Breast cancer identification using improved DarkNet53 model. In: Innovations in Bio-Inspired Computing and Applications: proceedings of the 13th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2022), held during December 15–17, vol 649, p 338
Boehm KM, Aherne EA, Ellenson L, Nikolovski I, Alghamdi M, Vázquez-García I, Zamarin D, Roche L, Liu K, Patel Y et al (2022) Multimodal data integration using machine learning improves risk stratification of high-grade serous ovarian cancer. Nat Cancer 3:723–733. https://doi.org/10.1038/s43018-022-00388-9
Chen LC, Papandreou G, Schroff F, Adam H (2017) Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587
Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2018) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell 40:834–848. https://doi.org/10.1109/TPAMI.2017.2699184
Chen H, Yang BW, Qian L, Meng YS, Bai XH, Hong XW, He X, Jiang MJ, Yuan F, Du QW et al (2022) Deep learning prediction of ovarian malignancy at US compared with O-RADS and expert assessment. Radiology 304:106–113. https://doi.org/10.1148/radiol.211367
Christiansen F, Epstein EL, Smedberg E, Åkerlund M, Smith K, Epstein E (2021) Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: comparison with expert subjective assessment. Ultrasound Obstet Gynecol 57:155–163. https://doi.org/10.1002/uog.23530
Dang Thi Minh N, Van N, Duc TD, Nguyen Tuan H, Tra MDT, Tuan GD, Nguyen Tai D (2024) IOTA simple rules: an efficient tool for evaluation of ovarian tumors by non-experienced but trained examiners - a prospective study. Heliyon 10:e24262. https://doi.org/10.1016/j.heliyon.2024.e24262
Gao Y, Zeng S, Xu X, Li H, Yao S, Song K, Li X, Chen L, Tang J, Xing H et al (2022) Deep learning-enabled pelvic ultrasound images for accurate diagnosis of ovarian cancer in China: a retrospective, multicentre, diagnostic study. Lancet Digit Health 4:e179–e187. https://doi.org/10.1016/S2589-7500(21)00278-8
Hack K, Gandhi N, Bouchard-Fortier G, Chawla TP, Ferguson SE, Li S, Kahn D, Tyrrell PN, Glanc P (2022) External validation of O-RADS US risk stratification and management system. Radiology 304:114–120. https://doi.org/10.1148/radiol.211868
Jusman Y (2023) Comparison of prostate cell image classification using CNN: ResNet-101 and VGG-19. In: 2023 IEEE 13th International Conference on Control System, Computing and Engineering (ICCSCE), pp 74–78
Konstantinopoulos PA, Matulonis UA (2023) Clinical and translational advances in ovarian cancer therapy. Nat Cancer 4:1239–1257. https://doi.org/10.1038/s43018-023-00617-9
Na I, Noh JJ, Kim CK, Lee JW, Park H (2024) Combined radiomics-clinical model to predict platinum-sensitivity in advanced high-grade serous ovarian carcinoma using multimodal MRI. Front Oncol 14:1341228. https://doi.org/10.3389/fonc.2024.1341228
Pelayo M, Sancho-Sauco J, Sánchez-Zurdo J, Perez-Mies B, Abarca-Martínez L, Cancelo-Hidalgo MJ, Sainz-Bueno JA, Alcázar JL, Pelayo-Delgado I (2023) Application of ultrasound scores (subjective assessment, Simple Rules risk assessment, ADNEX model, O-RADS) to adnexal masses of difficult classification. Diagnostics (Basel) 13:2785. https://doi.org/10.3390/diagnostics13172785
Pozzati F, Sassu CM, Marini G, Mascilini F, Biscione A, Giannarelli D, Garganese G, Fragomeni SM, Scambia G, Testa AC et al (2023) Subjective assessment and IOTA ADNEX model in evaluation of adnexal masses in patients with history of breast cancer. Ultrasound Obstet Gynecol 62:594–602. https://doi.org/10.1002/uog.26253
Redmon J (2018) YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767
Sadeghi MH, Sina S, Omidi H, Farshchitabrizi AH, Alavi M (2024) Deep learning in ovarian cancer diagnosis: a comprehensive review of various imaging modalities. Pol J Radiol 89:30–48. https://doi.org/10.5114/pjr.2024.134817
Sadowski EA, Rockall A, Thomassin-Naggara I, Barroilhet LM, Wallace SK, Jha P, Gupta A, Shinagare AB, Guo Y, Reinhold C (2023) Adnexal lesion imaging: past, present, and future. Radiology 307:e223281. https://doi.org/10.1148/radiol.223281
Shi Y, Li H, Wu X, Li X, Yang M (2023) O-RADS combined with contrast-enhanced ultrasound in risk stratification of adnexal masses. J Ovarian Res 16:153. https://doi.org/10.1186/s13048-023-01243-w
Siegel RL, Miller KD, Fuchs HE, Jemal A (2022) Cancer statistics, 2022. CA Cancer J Clin 72:7–33. https://doi.org/10.3322/caac.21708
Taddese AA, Tilahun BC, Awoke T, Atnafu A, Mamuye A, Mengiste SA (2024) Deep-learning models for image-based gynecological cancer diagnosis: a systematic review and meta-analysis. Front Oncol 13:1216326. https://doi.org/10.3389/fonc.2023.1216326
Terp SK, Stoico MP, Dybkær K, Pedersen IS (2023) Early diagnosis of ovarian cancer based on methylation profiles in peripheral blood cell-free DNA: a systematic review. Clin Epigenetics 15:24. https://doi.org/10.1186/s13148-023-01440-w
Terven J, Córdova-Esparza DM, Romero-González JA (2023) A comprehensive review of YOLO architectures in computer vision: from YOLOv1 to YOLOv8 and YOLO-NAS. Mach Learn Knowl Extr 5(4):1680–1716
Vara J, Manzour N, Chacón E, López-Picazo A, Linares M, Pascual MÁ, Guerriero S, Alcázar JL (2022) Ovarian Adnexal Reporting Data System (O-RADS) for classifying adnexal masses: a systematic review and meta-analysis. Cancers 14:3151. https://doi.org/10.3390/cancers14133151
Wang H, Liu C, Zhao Z, Zhang C, Wang X, Li H, Wu H, Liu X, Li C, Qi L et al (2021) Application of deep convolutional neural networks for discriminating benign, borderline, and malignant serous ovarian tumors from ultrasound images. Front Oncol 11:770683. https://doi.org/10.3389/fonc.2021.770683
Wang Y, Lin W, Zhuang X, Wang X, He Y, Li L, Lyu G (2024) Advances in artificial intelligence for the diagnosis and treatment of ovarian cancer (review). Oncol Rep 51:46. https://doi.org/10.3892/or.2024.8705
Wheeler V, Umstead B, Chadwick C (2023) Adnexal masses: diagnosis and management. Am Fam Physician 108:580–587
Wong BZY, Causa Andrieu PI, Sonoda Y, Chi DS, Aviki EM, Vargas HA, Woo S (2023) Improving risk stratification of indeterminate adnexal masses on MRI: what imaging features help predict malignancy in O-RADS MRI 4 lesions? Eur J Radiol 168:111122. https://doi.org/10.1016/j.ejrad.2023.111122
Zurück zum Zitat Wu M, Zhang M, Cao J, Wu S, Chen Y, Luo L, Lin X, Su M, Zhang X (2023) Predictive accuracy and reproducibility of the O-RADS US scoring system among sonologists with different training levels. Arch Gynecol Obstet 308:631–637. https://doi.org/10.1007/s00404-022-06752-5CrossRefPubMed
Zurück zum Zitat Xiao H, Xue X, Zhu M, Jiang X, Xia Q, Chen K, Li H, Long L, Peng K (2023) Deep learning-based lung image registration: a review. Comput Biol Med 165:107434. https://doi.org/10.1016/j.compbiomed.2023.107434CrossRefPubMed
Zurück zum Zitat Xu J, Huang Z, Zeng J, Zheng Z, Cao J, Su M, Zhang X (2023) Value of contrast-enhanced Ultrasound parameters in the evaluation of Adnexal Masses with ovarian–adnexal reporting and Data System Ultrasound. Ultrasound Med Biol 49:1527–1534. https://doi.org/10.1016/j.ultrasmedbio.2023.02.015CrossRefPubMed
Zurück zum Zitat Yang Y, Wang H, Liu Z, Su N, Gao L, Tao X, Zhang R, Gu Y, Ma L, Wang R et al (2023) Effect of differences in O-RADS lexicon interpretation between senior and junior sonologists on O-RADS classification and diagnostic performance. J Cancer Res Clin Oncol 149:12275–12283. https://doi.org/10.1007/s00432-023-05108-zCrossRefPubMedPubMedCentral
Zurück zum Zitat Yao F, Ding J, Hu Z, Cai M, Liu J, Huang X, Zheng R, Lin F, Lan L (2021) Ultrasound-based radiomics score: a potential biomarker for the prediction of progression-free survival in ovarian epithelial cancer. Abdom Radiol 46:4936–4945. https://doi.org/10.1007/s00261-021-03163-zCrossRef
Zurück zum Zitat Zheng L, Cui C, Shi O, Lu X, Li Y-k, Wang W, Li Y, Wang Q (2020) Incidence and mortality of ovarian cancer at the global, regional, and national levels, 1990–2017. Gynecol Oncol 159:239–247. https://doi.org/10.1016/j.ygyno.2020.07.008CrossRefPubMed
