Article

Artificial Intelligence in Fractured Dental Implant Detection and Classification: Evaluation Using Dataset from Two Dental Hospitals

1 Department of Periodontology, Veterans Health Service Medical Center, Seoul 05368, Korea
2 Department of Prosthodontics, Veterans Health Service Medical Center, Seoul 05368, Korea
3 Department of Periodontology, Daejeon Dental Hospital, Institute of Wonkwang Dental Research, Wonkwang University College of Dentistry, Daejeon 35233, Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this study.
Diagnostics 2021, 11(2), 233; https://doi.org/10.3390/diagnostics11020233
Submission received: 24 December 2020 / Revised: 28 January 2021 / Accepted: 2 February 2021 / Published: 3 February 2021
(This article belongs to the Special Issue Clinical Diagnosis Using Deep Learning)

Abstract

Fracture of a dental implant (DI) is a rare mechanical complication but a critical cause of DI failure and explantation. The purpose of this study was to evaluate the reliability and validity of three different deep convolutional neural network (DCNN) architectures (VGGNet-19, GoogLeNet Inception-v3, and an automated DCNN) for the detection and classification of fractured DIs using panoramic and periapical radiographic images. A total of 21,398 DIs were reviewed at two dental hospitals, and 251 intact and 194 fractured DI radiographic images were identified and included as the dataset in this study. All three DCNN architectures achieved a fractured DI detection accuracy of over 0.90 AUC and, in all but one configuration, a classification accuracy of over 0.80 AUC. In particular, the automated DCNN architecture using periapical images showed the highest and most reliable detection (AUC = 0.984, 95% CI = 0.900–1.000) and classification (AUC = 0.869, 95% CI = 0.778–0.929) accuracy, outperforming the fine-tuned, pre-trained VGGNet-19 and GoogLeNet Inception-v3 architectures. All three DCNN architectures showed acceptable accuracy in the detection and classification of fractured DIs, with the best performance achieved by the automated DCNN architecture using only periapical images.

1. Introduction

Dental implants (DIs) have shown high survival and success rates, making them an indispensable and predictable treatment modality for restoring missing teeth [1]. In a recent systematic review of DI rehabilitation outcomes, the 10-year survival rate was reported as 96.4% (95% CI = 95.2–97.5%), and the overall cumulative survival rate over a 15-year follow-up was reported as 82.6% [1,2]. As the number of DIs in function grows, however, various biological (including peri-implant mucositis and peri-implantitis) and mechanical (including chipping, screw loosening and fracture, and ceramic and fixture fracture) complications can be expected to increase and to require multiple re-interventions [3].
Among mechanical complications, fracture of a DI is almost impossible to repair or modify; it is therefore a critical cause of DI failure and explantation. Biomechanical and physiological overload, together with stress from a non-passive prosthesis fit, are considered the most common risk factors for DI fracture [4,5]. Recent studies have shown that various clinical variables (including age, sex, diameter, length, placement position, presence of a bone graft, fixture material (CP4 or alloy), polished or unpolished cervical feature, butt or conical abutment connection, micro- or macro-thread, and platform switching) may affect the fracture of DIs, and that the diameter, position, history of bone graft, and micro-thread presence of the DI are significantly related to the occurrence of DI fractures [6,7]. In a systematic review of long-term results of more than 5 years, the fracture rate was reported as 0.18%, and a recent 12-year follow-up study found a frequency of 0.92% among 19,006 DIs in 5124 patients [6,7]. Because DI fracture is relatively rare and often asymptomatic, early detection is a difficult and challenging task in actual clinical practice. When a DI fracture is undiagnosed or diagnosed late, post-traumatic and inflammatory reactions that induce severe bone loss around the DI will inevitably occur [7].
Artificial intelligence, and specifically deep learning and neural network-related technologies, has developed significantly over the last 10 years and is now widely applied in the medical and dental fields [8,9]. Deep convolutional neural networks (DCNNs) are a family of deep learning methods that use a cascade of multiple layers of nonlinear transformations to generate high-level abstractions, increasing their versatility for identifying representative patterns or features [10,11]. DCNNs have recently grown in popularity and become the state-of-the-art technology for medical image analysis, including detection, segmentation, and classification [12].
In orthopedic and trauma surgery, DCNNs have been used successfully to detect and classify various types of human bone fractures and, in particular, have shown excellent diagnostic accuracy for hip, proximal humerus, ankle, and femur fractures [13,14,15,16]. In dental practice, one recent study sought to improve the detection accuracy of vertical root fractures on dental radiographic images, but as far as we are aware, no research has addressed DI fracture [17]. Therefore, the aim of this study was to evaluate the reliability and validity of deep learning for the detection and classification of DI fracture, based on three different DCNN architectures, using panoramic and periapical radiographic images.

2. Materials and Methods

The study design and protocol were reviewed and authorized by the Institutional Review Board of the Veterans Health Service Medical Center (VHSMC, approval no. BOHUN 2020-03-012-001, 13 April 2020) and Daejeon Dental Hospital, Wonkwang University (WKUDH, approval no. W2011/002-001, 23 April 2020), and the need for informed or written consent was waived as part of the study approval. This study was conducted in compliance with the revised Declaration of Helsinki and followed the STROBE guidelines for the conduct and reporting of observational studies [18,19].

2.1. Dataset

We retrospectively collected the dataset from January 2006 to December 2015 at VHSMC and from April 2007 to December 2019 at WKUDH. A total of 21,398 DIs in 7281 patients were reviewed through dental electronic records, clinical photographs, and dental digital radiographic images by two participating board-certified periodontists (DWL and JHL) and one board-certified prosthodontist (SYK). All periapical images were obtained using the standard paralleling technique, and radiographic images with severe noise, haziness, or distortion were excluded by the three dental professionals mentioned above. One periodontist (JHL) then manually cropped the anonymized DICOM-format DI images (panoramic images with a pixel resolution of 2868 × 1504 and periapical images with a pixel resolution of 1876 × 1402) to the regions of interest, using radiographic image analysis software (INFINITT PACS, INFINITT Healthcare and OsiriX 10.0 64-bit version, Pixmeo SARL). In total, 251 intact and 198 fractured DIs were identified. The fractured DIs were classified into three groups, following a previous study that analyzed fracture patterns (Type I, horizontal and vertical fractures limited within and around the crestal module; Type II, vertical fracture beyond the crestal module; and Type III, horizontal fracture over the crestal module) [20]. However, very few type-III fractured DIs (n = 4) were found during dataset collection; therefore, only type-I and type-II fractured DIs (n = 194) were included in this study. The details and numbers of the panoramic and periapical images for each intact and fractured DI group are shown in Table 1.
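The cropping itself was performed interactively in INFINITT PACS and OsiriX; a minimal scripted sketch of the same region-of-interest (ROI) cropping step is shown below. The file names, crop coordinates, and output format are hypothetical and used only for illustration.

```python
# A minimal sketch of the ROI-cropping step. The study cropped images
# interactively in INFINITT PACS / OsiriX; the file names, coordinates,
# and output format here are hypothetical.
import pydicom
import numpy as np
from PIL import Image

def crop_implant_roi(dicom_path, box, out_path):
    """Read an anonymized DICOM radiograph and save a rectangular
    region of interest around one implant as an 8-bit PNG."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array                  # e.g., 2868 x 1504 for panoramic images
    x0, y0, x1, y1 = box                     # manually chosen per implant
    roi = pixels[y0:y1, x0:x1].astype(np.float32)
    # Rescale intensities to 0-255 before saving as a standard image file.
    roi = (roi - roi.min()) / max(roi.max() - roi.min(), 1.0) * 255.0
    Image.fromarray(roi.astype(np.uint8)).save(out_path)

# Hypothetical usage: the box would come from the periodontist's markup.
crop_implant_roi("pano_0001.dcm", (1200, 600, 1424, 824), "implant_0001.png")
```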

2.2. Preprocessing

All included radiographic images were resized to 224 × 224 pixels for the VGGNet-19 and automated DCNN architectures and to 299 × 299 pixels for the GoogLeNet Inception-v3 architecture. The dataset was randomly divided into training (60%), validation (20%), and test (20%) subsets for model development and accuracy evaluation. Preprocessing included pixel normalization, and one-hot encoding of the labels was applied to reduce irregularities in the dataset. The training dataset was randomly augmented 100-fold using rotation (range of 30°), width and height shifting (range of 0.2), zooming (range of 0.2), and horizontal and vertical flips. No augmentation was performed on the validation and test datasets.
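A minimal sketch of this augmentation policy, using the Keras ImageDataGenerator, is shown below; the parameter values mirror the text, while the directory layout and batch size are assumptions.

```python
# A minimal sketch of the augmentation policy described above, using the
# Keras ImageDataGenerator. Parameter values mirror the text; the
# directory layout and batch size are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,        # pixel normalization to [0, 1]
    rotation_range=30,        # rotation range of 30 degrees
    width_shift_range=0.2,    # width shifting range of 0.2
    height_shift_range=0.2,   # height shifting range of 0.2
    zoom_range=0.2,           # zooming range of 0.2
    horizontal_flip=True,     # horizontal flip
    vertical_flip=True,       # vertical flip
)
eval_gen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation for validation/test

train_flow = train_gen.flow_from_directory(
    "data/train", target_size=(224, 224),         # hypothetical path and VGGNet-19 input size
    class_mode="categorical", batch_size=10,      # categorical mode yields one-hot labels
)
val_flow = eval_gen.flow_from_directory(
    "data/val", target_size=(224, 224),
    class_mode="categorical", batch_size=10,
)
```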

2.3. Architecture of the DCNN

We trained three different DCNN architectures and compared their accuracy in detecting and classifying the types of fractured DIs (Figure 1):
  • The VGGNet-19 architecture is a 19-layer DCNN model developed by the Visual Geometry Group at the University of Oxford, which achieved a 7.3% top-5 error rate in the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [21].
  • The GoogLeNet architecture, which achieved an excellent 6.7% top-5 error rate in the 2014 ILSVRC, consists of 22 layers and 9 inception modules; Inception-v3 is a deeper refinement of this design [22].
  • The automated DCNN architecture was designed to search for an optimized DCNN model and to tune hyperparameters efficiently (including the number of convolutional layers, learning rate, dropout rate, batch size, number of epochs, and optimizer type) [23]. All automated DCNN analyses were conducted using Neuro-T version 2.1.1 (Neurocle Inc., Seoul, Korea); an open-source sketch of this kind of search is shown after this list.
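Neuro-T itself is proprietary, so its internal search cannot be reproduced here. As a rough open-source analogue, the Auto-Keras library cited above [23] performs a comparable automated architecture and hyperparameter search; the data and trial budget in the sketch below are placeholders.

```python
# Illustrative sketch of automated model and hyperparameter search with
# the open-source Auto-Keras library [23]. The study itself used the
# proprietary Neuro-T software, so this is an analogue, not its code;
# the data and trial budget below are placeholders.
import numpy as np
import autokeras as ak

x_train = np.random.rand(20, 224, 224, 1).astype("float32")  # placeholder images
y_train = np.random.randint(0, 2, size=20)                   # placeholder labels

clf = ak.ImageClassifier(max_trials=10, overwrite=True)      # evaluate 10 candidate models
clf.fit(x_train, y_train, epochs=25, validation_split=0.2)   # 25 epochs, as in the final model
best_model = clf.export_model()                              # best Keras model found by the search
best_model.summary()
```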
The VGGNet-19 and Inception-v3 architectures used transfer learning from models pre-trained on approximately 1.28 million ImageNet images and on the 11,980 DI images of a dataset we built previously [24]. For training, the top layers of VGGNet-19 and Inception-v3 were truncated and replaced with a new fully connected softmax classification and output layer matching the number of target categories. Training used stochastic gradient-based optimization with the Adam optimizer, an initial learning rate of 0.0001, and a decay rate of 0.001, implemented with the Keras application programming interface in Python [25]. The models were trained for a maximum of 2000 epochs with a dropout probability of 0.5 to avoid overfitting, and the final models were those with the best performance on the validation datasets. The automated DCNN architecture automatically created candidate deep learning models and searched for optimal hyperparameters during training and inference. The final automated model consisted of 18 layers with no dropout, an Adam optimizer, and L2 regularization; the batch size was set to 10 and the number of epochs to 25.
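A minimal Keras sketch of this transfer-learning setup is given below. The input size, class count, dropout probability, optimizer, and learning/decay rates follow the text; the frozen base and the 256-unit dense layer are assumptions for illustration.

```python
# A minimal sketch of the transfer-learning setup described above:
# a pre-trained VGG19 base with the top truncated, a new softmax head,
# dropout of 0.5, and Adam with learning rate 0.0001 and decay 0.001.
# The frozen base and 256-unit dense layer are assumptions.
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models, optimizers

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep pre-trained convolutional weights fixed initially

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                      # dropout probability of 0.5
    layers.Dense(2, activation="softmax"),    # intact vs. fractured DI
])

model.compile(
    # 'decay' is the legacy Keras learning-rate decay parameter.
    optimizer=optimizers.Adam(learning_rate=1e-4, decay=1e-3),
    loss="categorical_crossentropy",          # matches the one-hot encoded labels
    metrics=["accuracy"],
)
model.summary()
```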

3. Results

3.1. Detection of Fractured DIs

Among all architectures and image types, the automated DCNN architecture using periapical images achieved the best detection performance, with an AUC of 0.984 (95% CI = 0.900–1.000), sensitivity of 0.880, specificity of 1.000, and Youden index of 0.880. The fine-tuned, pre-trained VGGNet-19 architecture using panoramic images achieved the lowest performance, with an AUC of 0.902 (95% CI = 0.765–0.973), sensitivity of 0.944, specificity of 0.818, and Youden index of 0.762. The detection accuracy for fractured DIs is detailed in Table 2, and Figure 2 shows the ROC curves of the three DCNN architectures using panoramic-only, periapical-only, and combined panoramic and periapical images.
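For reference, the reported metrics can be derived from classifier output scores with standard tools; a minimal scikit-learn sketch is shown below, with dummy labels and scores standing in for the actual test data.

```python
# A minimal sketch of how AUC, sensitivity, specificity, and the Youden
# index can be derived from classifier scores using scikit-learn. The
# y_true / y_score arrays below are illustrative dummy data, not the
# study's test set.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                  # 1 = fractured DI
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6, 0.2]) # predicted fracture probability

auc = roc_auc_score(y_true, y_score)                  # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Youden index J = sensitivity + specificity - 1, maximized over thresholds.
j = tpr - fpr
best = int(np.argmax(j))
print(f"AUC = {auc:.3f}, sensitivity = {tpr[best]:.3f}, "
      f"specificity = {1 - fpr[best]:.3f}, Youden index = {j[best]:.3f}")
```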

3.2. Classification of Types of Fractured DIs

The automated DCNN architecture again achieved the highest classification performance using periapical images, with an AUC of 0.869 (95% CI = 0.778–0.929), sensitivity of 0.900, specificity of 0.911, and Youden index of 0.811. The fine-tuned, pre-trained Inception-v3 architecture achieved the second-highest performance using periapical images, with an AUC of 0.853 (95% CI = 0.769–0.916), sensitivity of 1.000, specificity of 0.677, and Youden index of 0.677. The VGGNet-19 architecture using panoramic images achieved the lowest performance, with an AUC of 0.745 (95% CI = 0.504–0.910), sensitivity of 0.700, specificity of 0.800, and Youden index of 0.500. The classification accuracy for types of fractured DIs is detailed in Table 3, and Figure 3 shows the ROC curves of all three DCNN architectures using panoramic-only, periapical-only, and combined panoramic and periapical images.

4. Discussion

Artificial intelligence and deep learning have progressed and expanded rapidly, showing promising applications for dental image analysis in recent years. In particular, as newly developed DCNN models and algorithms continue to be adopted in implant dentistry, they may become an important adjunct for diagnosis, treatment, and prognosis assessment [26]. Recent DCNN-related studies confirmed that various types of DIs with different shapes, lengths, or dimensions can be effectively detected and classified using panoramic and periapical images [27,28,29].
Automated DCNN architectures that automatically find well-performing, specialized models and optimal hyperparameters are receiving increasing attention in computer science, but research based on automated DCNN architectures in the medical and dental fields remains scarce [30,31]. Our most recent research showed that an automated DCNN architecture was highly accurate (AUC = 0.954, 95% CI = 0.933–0.970) in classifying six morphologically similar types of DIs from panoramic and periapical images, and achieved better classification accuracy (AUC = 0.961, 95% CI = 0.941–0.976) than most of the 25 participating dental professionals, including board-certified periodontists, periodontal residents, and residents not specialized in periodontology [24].
The VGGNet-19 and GoogLeNet Inception-v3 architectures, with transfer learning and fine-tuning of pretrained weights, are already being actively used and show highly consistent and predictable outcomes in the fields of periodontology, restorative dentistry, and oral surgery [32,33,34]. All three deep learning algorithms applied in the current study achieved a fractured DI detection accuracy of over 0.90 AUC, and in particular, automated DCNN using periapical images showed the best accuracy performance (AUC = 0.984, 95% CI = 0.900–1.000), compared to the modified VGGNet-19 (AUC = 0.946, 95% CI = 0.842–0.990) and GoogLeNet Inception-v3 (AUC = 0.979, 95% CI = 0.892–0.999) architectures.
Different types of fractured DIs can be examined on dental radiographs, but their similar appearance makes accurate classification difficult, and considerable clinical experience is required for proper classification of DI fracture types. Except for the VGGNet-19 architecture using panoramic images (AUC = 0.745, 95% CI = 0.504–0.910), all of the included DCNN architectures achieved a classification accuracy of over 0.80 AUC; in particular, the automated DCNN architecture using periapical images showed the highest and most reliable classification accuracy (AUC = 0.869, 95% CI = 0.778–0.929). However, although a total of 21,398 DIs in 7281 patients were reviewed at the two dental hospitals, only four radiographic images were classified as type III. Type-III DI fractures were therefore excluded from the dataset, which is one of the drawbacks of the current study.
Our previous studies confirmed that, regardless of the type of dataset used for DCNN model training (panoramic-only, periapical-only, or combined panoramic and periapical images), there is no statistically significant difference in accuracy for the identification of DIs [24,29]. Consistent with those studies, the present results indicated that classification accuracy did not differ significantly among the panoramic-only, periapical-only, and combined image datasets across the three DCNN architectures. Nevertheless, regardless of the DCNN architecture, the periapical-only dataset consistently showed the highest accuracy on average. This is likely because periapical images have higher resolution and sharpness than panoramic images, so using periapical images as the dataset is expected to be more effective in improving the detection and classification of fractured DIs.
The current study has several limitations and future directions that need to be considered. First, because the prevalence and incidence of DI fracture are very low, it is not easy to obtain a sufficiently large dataset of fractured DI images. In this study, although more than 20,000 radiographic images were reviewed at two dental hospitals, only 194 fractured DI radiographic images could be included in the dataset. Collecting a larger and higher-quality dataset from more dental hospitals is the most important prerequisite for clinical use in the field of implant dentistry. Second, the use of low-resolution image datasets for training and validating the DCNN architectures is another limitation of this study. Owing to limited resources, including computing power and storage capacity, we used cropped and resized low-resolution panoramic and periapical images. Additional studies are necessary to confirm whether higher accuracy could be achieved with a high-resolution image dataset.

5. Conclusions

Within the limitations of this study, the VGGNet-19, GoogLeNet Inception-v3, and automated DCNN architectures showed acceptable accuracy in the detection and classification of fractured DIs, with the best performance achieved by the automated DCNN architecture using only periapical radiographic images. Further prospective clinical evidence is necessary to determine the feasibility of applying DCNN architectures in dental practice.

Author Contributions

Conceptualization, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; methodology, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; software, J.-H.L.; validation, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; formal analysis, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; investigation, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; resources, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; data curation, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; writing—original draft preparation, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; writing—review and editing, D.-W.L., S.-Y.K., S.-N.J. and J.-H.L.; visualization, D.-W.L. and S.-Y.K.; supervision, S.-N.J. and J.-H.L.; project administration, J.-H.L.; funding acquisition, S.-Y.K. and J.-H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by a VHS Medical Center Research Grant, Republic of Korea (grant VHSMC20021) and a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2019R1A2C1083978).

Institutional Review Board Statement

The study design and protocol were reviewed and authorized by the Institutional Review Board of the Veterans Health Service Medical Center (VHSMC, approval no. BOHUN 2020-03-012-001, 13 April 2020) and Daejeon Dental Hospital, Wonkwang University (WKUDH, approval no. W2011/002-001, 23 April 2020), and the need for informed or written consent was waived as part of the study approval. This study was conducted in compliance with the revised Declaration of Helsinki and followed the STROBE guidelines for the conduct and reporting of observational studies.

Informed Consent Statement

Patient consent was waived due to the retrospective design of the study.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Howe, M.S.; Keys, W.; Richards, D. Long-term (10-year) dental implant survival: A systematic review and sensitivity meta-analysis. J. Dent. 2019, 84, 9–21. [Google Scholar] [CrossRef] [PubMed]
  2. Adler, L.; Buhlin, K.; Jansson, L. Survival and complications: A 9- to 15-year retrospective follow-up of dental implant therapy. J. Oral Rehabil. 2020, 47, 67–77. [Google Scholar] [CrossRef] [PubMed]
  3. Stavropoulos, A.; Bertl, K.; Eren, S.; Gotfredsen, K. Mechanical and biological complications after implantoplasty-a systematic review. Clin. Oral Implant. Res. 2019, 30, 833–848. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Gealh, W.C.; Mazzo, V.; Barbi, F.; Camarini, E.T. Osseointegrated implant fracture: Causes and treatment. J. Oral Implantol. 2011, 37, 499–503. [Google Scholar] [CrossRef] [PubMed]
  5. Stoichkov, B.; Kirov, D. Analysis of the causes of dental implant fracture: A retrospective clinical study. Quintessence Int. 2018, 49, 279–286. [Google Scholar] [PubMed]
  6. Lee, D.W.; Kim, N.H.; Lee, Y.; Oh, Y.A.; Lee, J.H.; You, H.K. Implant fracture failure rate and potential associated risk indicators: An up to 12-year retrospective study of implants in 5124 patients. Clin. Oral Implant. Res. 2019, 30, 206–217. [Google Scholar] [CrossRef] [PubMed]
  7. Jung, R.E.; Zembic, A.; Pjetursson, B.E.; Zwahlen, M.; Thoma, D.S. Systematic review of the survival rate and the incidence of biological, technical, and aesthetic complications of single crowns on implants reported in longitudinal studies with a mean follow-up of 5 years. Clin. Oral Implant. Res. 2012, 23 (Suppl. 6), 2–21. [Google Scholar] [CrossRef] [PubMed]
  8. Carin, L.; Pencina, M.J. On deep learning for medical image analysis. JAMA 2018, 320, 1192–1193. [Google Scholar] [CrossRef]
  9. Hwang, J.J.; Jung, Y.H.; Cho, B.H.; Heo, M.S. An overview of deep learning in the field of dentistry. Imaging Sci. Dent. 2019, 49, 1–7. [Google Scholar] [CrossRef]
  10. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep learning for computer vision: A brief review. Comput. Intell. Neurosci. 2018, 2018, 7068349. [Google Scholar] [CrossRef]
  11. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  12. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef] [PubMed]
  13. Chung, S.W.; Han, S.S.; Lee, J.W.; Oh, K.S.; Kim, N.R.; Yoon, J.P.; Kim, J.Y.; Moon, S.H.; Kwon, J.; Lee, H.J.; et al. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm. Acta Orthop. 2018, 89, 468–473. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Adams, M.; Chen, W.; Holcdorf, D.; McCusker, M.W.; Howe, P.D.; Gaillard, F. Computer vs human: Deep learning versus perceptual training for the detection of neck of femur fractures. J. Med. Imaging Radiat. Oncol. 2019, 63, 27–32. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Cheng, C.T.; Ho, T.Y.; Lee, T.Y.; Chang, C.C.; Chou, C.C.; Chen, C.C.; Chung, I.F.; Liao, C.H. Application of a deep learning algorithm for detection and visualization of hip fractures on plain pelvic radiographs. Eur. Radiol. 2019, 29, 5469–5477. [Google Scholar] [CrossRef] [Green Version]
  16. Olczak, J.; Emilson, F.; Razavian, A.; Antonsson, T.; Stark, A.; Gordon, M. Ankle fracture classification using deep learning: Automating detailed AO Foundation/Orthopedic Trauma Association (AO/OTA) 2018 malleolar fracture identification reaches a high degree of correct classification. Acta Orthop. 2020, 1–7. [Google Scholar] [CrossRef]
  17. Fukuda, M.; Inamoto, K.; Shibata, N.; Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol. 2020, 36, 337–343. [Google Scholar] [CrossRef]
  18. Morris, K. Revising the declaration of Helsinki. Lancet 2013, 381, 1889–1890. [Google Scholar] [CrossRef]
  19. von Elm, E.; Altman, D.G.; Egger, M.; Pocock, S.J.; Gotzsche, P.C.; Vandenbroucke, J.P.; Initiative, S. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. J. Clin. Epidemiol. 2008, 61, 344–349. [Google Scholar] [CrossRef] [Green Version]
  20. Lee, J.H.; Kim, Y.T.; Jeong, S.N.; Kim, N.H.; Lee, D.W. Incidence and pattern of implant fractures: A long-term follow-up multicenter study. Clin. Implant. Dent. Relat. Res. 2018, 20, 463–469. [Google Scholar] [CrossRef]
  21. Liu, S.; Deng, W. Very deep convolutional neural network based image classification using small training sample size. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 730–734. [Google Scholar] [CrossRef]
  22. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv 2016, arXiv:1602.07261. [Google Scholar]
  23. Jin, H.; Song, Q.; Hu, X. Auto-keras: An efficient neural architecture search system. arXiv 2019, arXiv:1806.10282. [Google Scholar]
  24. Lee, J.H.; Kim, Y.T.; Lee, J.B.; Jeong, S.N. A performance comparison between automated deep learning and dental professionals in classification of dental implant systems from dental imaging: A multi-center study. Diagnostics 2020, 10, 910. [Google Scholar] [CrossRef] [PubMed]
  25. Keras: The Python Deep Learning Library. Available online: https://keras.io/ (accessed on 1 June 2020).
  26. Shan, T.; Tay, F.R.; Gu, L. Application of artificial intelligence in dentistry. J. Dent. Res. 2020. [Google Scholar] [CrossRef] [PubMed]
  27. Kim, J.E.; Nam, N.E.; Shim, J.S.; Jung, Y.H.; Cho, B.H.; Hwang, J.J. Transfer learning via deep neural networks for implant fixture system classification using periapical radiographs. J. Clin. Med. 2020, 9, 1117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Sukegawa, S.; Yoshii, K.; Hara, T.; Yamashita, K.; Nakano, K.; Yamamoto, N.; Nagatsuka, H.; Furuki, Y. Deep neural networks for dental implant system classification. Biomolecules 2020, 10, 984. [Google Scholar] [CrossRef]
  29. Lee, J.H.; Jeong, S.N. Efficacy of deep convolutional neural network algorithm for the identification and classification of dental implant systems, using panoramic and periapical radiographs: A pilot study. Medicine 2020, 99, e20787. [Google Scholar] [CrossRef]
  30. Faes, L.; Wagner, S.K.; Fu, D.J.; Liu, X.; Korot, E.; Ledsam, J.R.; Back, T.; Chopra, R.; Pontikos, N.; Kern, C.; et al. Automated deep learning design for medical image classification by health-care professionals with no coding experience: A feasibility study. Lancet Digit. Health 2019, 1, e232–e242. [Google Scholar] [CrossRef] [Green Version]
  31. Waring, J.; Lindvall, C.; Umeton, R. Automated machine learning: Review of the state-of-the-art and opportunities for healthcare. Artif. Intell. Med. 2020, 104, 101822. [Google Scholar] [CrossRef]
  32. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J. Periodontal Implant. Sci. 2018, 48, 114–123. [Google Scholar] [CrossRef] [Green Version]
  33. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef] [PubMed]
  34. Lee, J.H.; Kim, D.H.; Jeong, S.N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020, 26, 152–158. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Schematic illustration of deep convolutional neural network (DCNN) applications. Dataset prepared from anonymized raw panoramic and periapical radiographic images, and all included dental implants (DIs) were manually cropped and labeled. Training process was based on three different DCNN architectures to compare accuracy performance to detect and classify types of fractured DIs.
Figure 2. Receiver operating characteristic (ROC) curves for detection of fractured DIs, consisting of (a–c) 40 panoramic images, (d–f) 49 periapical images, and (g–i) 89 panoramic and periapical images. Plots include 95% confidence bounds.
Figure 3. ROC curves for classification of types of fractured DIs, consisting of (a–c) 19 panoramic images, (d–f) 20 periapical images, and (g–i) 39 panoramic and periapical images. Plots include 95% confidence bounds.
Table 1. Number of panoramic and periapical radiographic images for intact and fractured dental implants (DIs). Dataset collected from two dental hospitals: Veterans Health Service Medical Center and Daejeon Dental Hospital, Wonkwang University.
Dataset                      Frequency    Percentage (%)
Intact DIs
  Panoramic images           110          43.8
  Periapical images          141          56.2
Fractured DIs, Type I
  Panoramic images           41           48.8
  Periapical images          43           51.2
Fractured DIs, Type II
  Panoramic images           52           47.3
  Periapical images          58           52.7
Fractured DIs were classified as follows: Type I, horizontal and vertical fractures limited within and around the crestal module of the implant fixture; Type II, vertical fracture beyond the crestal module of the implant fixture.
Table 2. Detection accuracy of fractured DIs between three different DCNN architectures.
Variables                          AUC      95% CI         SE       Sensitivity    Specificity    Youden Index
Panoramic images
  VGGNet-19                        0.902    0.765–0.973    0.049    0.944          0.818          0.762
  GoogLeNet Inception-v3           0.920    0.790–0.982    0.045    0.833          0.909          0.742
  Automated DCNN                   0.960    0.845–0.997    0.040    1.000          0.954          0.954
Periapical images
  VGGNet-19                        0.946    0.842–0.990    0.039    0.920          0.960          0.880
  GoogLeNet Inception-v3           0.979    0.892–0.999    0.014    0.920          0.920          0.840
  Automated DCNN                   0.984    0.900–1.000    0.012    0.880          1.000          0.880
Panoramic and periapical images
  VGGNet-19                        0.929    0.854–0.972    0.037    0.933          0.933          0.866
  GoogLeNet Inception-v3           0.967    0.906–0.993    0.015    1.000          0.866          0.866
  Automated DCNN                   0.972    0.913–0.995    0.014    0.866          0.966          0.833
DCNN, deep convolutional neural network; AUC, area under the curve; CI, confidence interval; SE, standard error.
Table 3. Classification accuracy of types of fractured DIs between three different DCNN architectures.
Variables                          AUC      95% CI         SE       Sensitivity    Specificity    Youden Index
Panoramic images
  VGGNet-19                        0.745    0.504–0.910    0.122    0.700          0.800          0.500
  GoogLeNet Inception-v3           0.805    0.569–0.945    0.110    1.000          0.600          0.600
  Automated DCNN                   0.810    0.575–0.948    0.106    0.800          0.800          0.600
Periapical images
  VGGNet-19                        0.833    0.745–0.900    0.058    0.900          0.744          0.644
  GoogLeNet Inception-v3           0.853    0.769–0.916    0.040    1.000          0.677          0.677
  Automated DCNN                   0.869    0.778–0.929    0.085    0.900          0.911          0.811
Panoramic and periapical images
  VGGNet-19                        0.804    0.648–0.912    0.074    0.900          0.700          0.600
  GoogLeNet Inception-v3           0.815    0.661–0.920    0.077    0.901          0.749          0.650
  Automated DCNN                   0.829    0.677–0.929    0.072    0.850          0.850          0.700
DCNN, deep convolutional neural network; AUC, area under the curve; CI, confidence interval; SE, standard error.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

