
Open Access 28.12.2022 | Original Research

An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input

Authors: Lin Liu, Chen Liang, Yuzhou Xue, Tingqiao Chen, Yangmei Chen, Yufan Lan, Jiamei Wen, Xinyi Shao, Jin Chen

Published in: Dermatology and Therapy | Issue 2/2023

Abstract

Introduction

The diagnosis of melasma is often based on the naked-eye judgment of physicians. However, this is a challenge for inexperienced physicians and non-professionals, and incorrect treatment might have serious consequences. Therefore, it is important to develop an accurate method for melasma diagnosis. The objective of this study is to develop and validate an intelligent diagnostic system based on deep learning for melasma images.

Methods

A total of 8010 images from the VISIA system, comprising 4005 images of patients with melasma and 4005 images of patients without melasma, were collected for training and testing. Deep learning models inspired by four high-performance architectures (i.e., DenseNet, ResNet, Swin Transformer, and MobileNet) were evaluated as binary melasma/non-melasma classifiers. Furthermore, because VISIA captures five image modes for each shot, we fused these modes via multichannel image input in different combinations to explore whether multimode images could improve network performance.

Results

The proposed network based on DenseNet121 achieved the best performance with an accuracy of 93.68% and an area under the curve (AUC) of 97.86% on the test set for the melasma classifier. The results of the Gradient-weighted Class Activation Mapping showed that it was interpretable. In further experiments, for the five modes of the VISIA system, we found the best performing mode to be “BROWN SPOTS.” Additionally, the combination of “NORMAL,” “BROWN SPOTS,” and “UV SPOTS” modes significantly improved the network performance, achieving the highest accuracy of 97.4% and AUC of 99.28%.

Conclusions

In summary, deep learning is feasible for diagnosing melasma. The proposed network not only has excellent performance with clinical images of melasma, but can also acquire high accuracy by using multiple modes of images in VISIA.
Key Summary Points
To reduce misdiagnosis and missed diagnosis, an effective and accurate method for melasma diagnosis is necessary.
On the basis of deep learning, we developed an intelligent diagnostic model for melasma.
Our model was trained with a large sample of melasma and non-melasma facial images and acquired a high accuracy and area under the curve.
In further experiments, we found that multichannel image input obtained by fusing multiple modes of images in VISIA increased our network performance.
More data from multiple centers and improved applicability are needed before the model can become a valuable tool in clinical practice.

Introduction

Melasma is a common acquired pigmentation disorder characterized by symmetrical brown macules and patches with irregular borders on the face, which has a negative effect on the appearance and self-esteem of patients [1–3]. Its pathophysiology is complex and incompletely understood, but it is believed to relate to genetic and environmental factors [4]. Melasma mainly affects women and people with high pigmentation phenotypes. The prevalence of melasma is higher in East Asians, Indians, Latin Americans, and Hispanics [5–7]. However, the specific epidemiological data are still unclear.
The diagnosis of melasma usually depends on the naked-eye judgment of physicians according to the clinical characteristics of lesions. However, for pigmented skin lesions, the diagnostic ability of non-dermatologists is not at a level comparable to that of dermatologists [8, 9]. Melasma and other atypical hyperpigmentation, such as nevus of Ota, are often missed or misdiagnosed [10]. Thus, correct diagnosis of melasma with the naked eye alone may require considerable clinical experience, especially in complicated facial conditions. In addition, the use of assistive diagnostic tools, such as Wood's lamps and dermoscopy, is time consuming and often insufficient to accurately distinguish melasma from other pigmentation disorders [10]. These tools also require assessment by physicians, which can be a subjective process.
The misdiagnosis and missed diagnosis of melasma might have undesirable effects on patients, for example when a treatment such as CO2 laser that is required for other skin diseases but not acceptable for melasma is applied [11–13]. The treatment of melasma should be selected cautiously because of its high rate of recurrence [14]. Moreover, improper treatment under a misjudgment of melasma might result in serious sequelae, such as pigmentation and scarring after CO2 laser [11–13]. In addition, the remoteness of certain regions and a lack of knowledge lead some patients with melasma to seek help from beauty salons and estheticians instead of dermatologists. Owing to the lack of professional expertise and accurate diagnostic tools, such non-professionals usually cannot make a correct diagnosis and may choose the wrong treatment. Thus, it is necessary to develop an accurate and rapid diagnostic method for melasma.
The purpose of this study was to develop and validate an intelligent diagnostic system for melasma images on the basis of deep learning and provide a reference for accurate and rapid diagnosis of melasma. In this study, we collected a large number of clinical melasma images and evaluated the performances of four deep learning models in melasma and non-melasma binary classifiers. We further conducted image fusion via multichannel image input and found an improvement in network performance.

Methods

This study was approved by the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University (no. 2022-K349) and performed in accordance with the Declaration of Helsinki of 1964. In the absence of any exclusion criteria, we retrospectively collected images of all patients with melasma who visited the Dermatology Clinic of Chongqing Medical University between January 2017 and September 2021. All images were stored in a VISIA imaging system (Canfield Scientific, NJ, USA). A similar number of images of patients without melasma were randomly selected from the VISIA system. Because the task addressed in this study, judging the presence or absence of melasma from a facial image, required the binary classifier to work in a variety of situations, the "non-melasma" images were obtained from patients with non-pigmentary diseases (such as rosacea and acne), patients with pigmentary diseases other than melasma (such as freckles, lentigines, and nevus of Ota), and healthy people.
The VISIA system included an imaging chamber with a 15-megapixel camera that produced images with a resolution of 3128 × 4171 pixels. Using three types of light source, i.e., standard incandescent light, ultraviolet (UV) light, and polarized light, five images of different modes were obtained for each shot. The "NORMAL" mode was taken under standard incandescent light and used to identify spots, wrinkles, texture, and pores. The "UV SPOTS" and "PORPHYRINS" modes were taken under UV light to detect UV spots and porphyrins, respectively. The "BROWN SPOTS" and "RED AREAS" modes were taken under polarized light to observe brown spots and prominent blood vessels, respectively [15]. Thus, five different modes of images were obtained in one shot, i.e., "NORMAL", "UV SPOTS", "PORPHYRINS", "BROWN SPOTS", and "RED AREAS", which show different skin characteristics (Supplementary Material Fig. S1).
A total of 4005 melasma and 4005 non-melasma images were collected. Detailed clinical data of patients were not collected owing to confidentiality requirements and inapplicability. The diagnosis of all patients was based on the discussion of three experienced dermatologists in accordance with the images, which was regarded as the ground truth in this study. We randomly divided all images by patient into training and test sets at a ratio of approximately 2:1, ensuring that images of the same patient did not appear simultaneously in the training and test sets (Fig. 1). To achieve a balanced distribution, the numbers of melasma and non-melasma images were kept approximately equal. Thus, there were 2650 melasma and 2670 non-melasma images in the training set, and 1355 melasma and 1335 non-melasma images in the test set. The images of the training set were augmented by rotation, random erasing, and gray-level adjustment. The resolution of all images was then adjusted to 480 × 640 pixels.
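The augmentation and resizing steps described above can be sketched with torchvision; the parameter values (rotation angle, erasing probability, jitter strength) are illustrative assumptions rather than the authors' reported settings, and ColorJitter stands in for the gray-level adjustment:

```python
# A minimal sketch of the preprocessing pipeline, assuming torchvision.
# Rotation, random erasing, and gray-level adjustment are named in the
# text; all parameter values below are illustrative assumptions.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((640, 480)),           # 480 x 640 px images; Resize takes (height, width)
    transforms.RandomRotation(degrees=15),   # rotation augmentation
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # stands in for gray-level adjustment
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),         # random erasing operates on tensors
])

test_transform = transforms.Compose([        # test images are only resized
    transforms.Resize((640, 480)),
    transforms.ToTensor(),
])
```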
Our task was to build a binary classifier with "melasma" as the positive class and "non-melasma" as the negative class. Considering that there are five different modes of images in one shot, both single-mode and multimode binary classifiers were studied. The images of the "NORMAL" mode are the same as those seen by clinicians with the naked eye; therefore, we used this mode to explore a network for direct and rapid diagnosis. Four deep learning models, i.e., MobileNetv2, Swin Transformer, ResNet50, and DenseNet121, were used to build the binary classifiers, and the best-performing model checkpoints were saved. To visualize the features selected by the network, we used Gradient-weighted Class Activation Mapping (Grad-CAM) to demonstrate the interpretability of the optimal network via gradient-based localization. Subsequently, we investigated the differences in network performance for the four other image modes: "UV SPOTS," "PORPHYRINS," "BROWN SPOTS," and "RED AREAS." Next, we studied multimode images to examine whether they had the potential to further improve the performance of our diagnostic system: we fused different modes of the same shot through a multichannel input and fed the integrated multimode features to the network. A flowchart of the network for multimode images is shown in Fig. 2. For data analysis, the performance of all models on the test set was evaluated using accuracy, area under the curve (AUC), sensitivity, and specificity. All analyses were conducted using Python 3.7.3. All patient images shown in the figures were anonymized by covering the eyes manually for privacy purposes.
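As a concrete illustration of the multichannel fusion, the sketch below stacks the RGB images of several modes along the channel axis and widens the first convolution of a torchvision DenseNet121 accordingly. This is a minimal sketch of the idea described above; apart from the torchvision layer names, the details are assumptions:

```python
# Multimode input sketch: images from k VISIA modes of the same shot are
# concatenated channel-wise (3 * k channels), and DenseNet121's first
# convolution is replaced to accept them. The 2-way head corresponds to
# the melasma / non-melasma classes.
import torch
import torch.nn as nn
from torchvision import models

def build_multimode_densenet(num_modes: int) -> nn.Module:
    model = models.densenet121(weights=None)
    in_ch = 3 * num_modes                      # each mode is an RGB image
    model.features.conv0 = nn.Conv2d(in_ch, 64, kernel_size=7,
                                     stride=2, padding=3, bias=False)
    model.classifier = nn.Linear(model.classifier.in_features, 2)
    return model

# Fuse e.g. NORMAL + BROWN SPOTS + UV SPOTS into one 9-channel input
model = build_multimode_densenet(num_modes=3)
normal = torch.rand(1, 3, 640, 480)            # dummy tensors for illustration
brown = torch.rand(1, 3, 640, 480)
uv = torch.rand(1, 3, 640, 480)
logits = model(torch.cat([normal, brown, uv], dim=1))  # shape (1, 2)
```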

Results

Performance of Four Models

We examined the receiver operating characteristic (ROC) curves of the four models trained in this study; the results on the test set are shown in Fig. 3. In terms of AUC (an indicator of the confidence of the prediction results and an important performance index for binary classifiers), the DenseNet121 model outperformed the others with a value of 97.86%. In addition, confusion matrices were used to visualize the performances of the four models (Supplementary Material Fig. S2). The ResNet50 model achieved the highest sensitivity, 97.14%, but a relatively poor specificity of 88.76%, resulting in an accuracy of 91.45% on the test set. In contrast, the DenseNet121 model performed well in identifying both negative (95.88% specificity) and positive (94.29% sensitivity) samples. After comparing the performances of the four deep learning models, we found that the network based on DenseNet121 achieved the highest accuracy, 93.68% (Table 1). Therefore, among these four deep learning models, DenseNet121 was regarded as the optimal model for melasma diagnosis on the basis of clinical images.
Table 1  Performance of the four deep learning models on the test set

Model             Accuracy   AUC      Sensitivity   Specificity
MobileNetv2       0.8587     0.9498   0.8000        0.9213
Swin Transformer  0.8848     0.9313   0.8857        0.8652
ResNet50          0.9145     0.9666   0.9714        0.8876
DenseNet121       0.9368     0.9786   0.9429        0.9588

AUC, area under the curve
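For reference, the four indices reported in Table 1 can be computed from test-set predictions as follows. This is a generic scikit-learn sketch with made-up labels, not the authors' evaluation script:

```python
# Accuracy, AUC, sensitivity, and specificity from binary predictions.
# y_true: 1 = melasma, 0 = non-melasma; y_prob: predicted melasma probability.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

y_true = np.array([1, 1, 0, 0, 1, 0])            # illustrative labels
y_prob = np.array([0.9, 0.8, 0.3, 0.1, 0.6, 0.4])
y_pred = (y_prob >= 0.5).astype(int)             # 0.5 decision threshold (assumed)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   ", accuracy_score(y_true, y_pred))
print("AUC        ", roc_auc_score(y_true, y_prob))
print("sensitivity", tp / (tp + fn))             # true-positive rate
print("specificity", tn / (tn + fp))             # true-negative rate
```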

Interpretability of Optimal Model

Subsequently, Grad-CAM was used to examine model interpretability. In the Grad-CAM results presented in Fig. 4, red regions represent areas activated by the network, whereas blue regions represent areas that were not. Activation focused on melasma lesions, which mainly appeared on the cheek and malar area. Notably, in images of patients with melasma coexisting with other facial skin conditions (such as seborrheic keratosis and post-acne hyperpigmentation), the network focused more on melasma lesions than on the other disorders.
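A self-contained sketch of the Grad-CAM computation for a DenseNet121 backbone is shown below: gradients of the target-class logit with respect to the last convolutional feature maps are pooled into channel weights, which then weight those maps into a coarse heatmap. The choice of layer and the class index are assumptions; the paper does not specify its exact setup:

```python
# Grad-CAM sketch for DenseNet121, replicating the model's forward pass
# so the feature maps stay in the autograd graph.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights=None).eval()
x = torch.rand(1, 3, 640, 480)                    # dummy input image

feats = model.features(x)                         # last conv feature maps
out = F.adaptive_avg_pool2d(F.relu(feats), 1).flatten(1)
logits = model.classifier(out)                    # mirrors DenseNet.forward

grads = torch.autograd.grad(logits[0, 1], feats)[0]  # class index 1 = melasma (assumed)
w = grads.mean(dim=(2, 3), keepdim=True)          # channel weights: GAP of gradients
cam = F.relu((w * feats).sum(dim=1, keepdim=True))   # weighted sum of feature maps
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```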

Network Performance Using Multimode Input

Since the different modes of the VISIA system show different skin characteristics, we investigated the performance of the network on each image mode. Of the five modes, "BROWN SPOTS" had the best performance, with an accuracy of 94.42% and an AUC of 98.57% (Table 2 and Fig. 5). The accuracy and AUC of "UV SPOTS" were 93.49% and 97.55%, respectively, similar to those of the "NORMAL" mode. The "PORPHYRINS" and "RED AREAS" modes performed slightly worse, with accuracies of 88.29% and 82.34%, respectively. The confusion matrices are shown in Supplementary Material Fig. S3.
Table 2  Network performance on single-mode input

Image mode    Accuracy   AUC      Sensitivity   Specificity
NORMAL        0.9368     0.9786   0.9429        0.9588
BROWN SPOTS   0.9442     0.9857   0.9714        0.9288
UV SPOTS      0.9349     0.9755   0.8857        0.9288
PORPHYRINS    0.8829     0.9400   0.9429        0.8727
RED AREAS     0.8234     0.8811   0.8857        0.8165

AUC, area under the curve
Having obtained the ranking of the five modes in our network, we further explored the performance of the network on multimode input. On the basis of the single-mode results, we evaluated the following combinations: "NORMAL + BROWN SPOTS," "NORMAL + BROWN SPOTS + UV SPOTS," "NORMAL + BROWN SPOTS + UV SPOTS + PORPHYRINS," and "NORMAL + BROWN SPOTS + UV SPOTS + PORPHYRINS + RED AREAS." Among these combinations, "NORMAL + BROWN SPOTS + UV SPOTS" achieved the highest accuracy, 97.4% (Table 3), and its AUC of 99.28% was slightly higher than those of the others (Fig. 6). Supplementary Material Fig. S4 shows the confusion matrices for these multimode combinations.
Table 3  Network performance on multimode input

Combination                                                Accuracy   AUC      Sensitivity   Specificity
NORMAL + BROWN SPOTS                                       0.9535     0.9887   0.9429        0.9438
NORMAL + BROWN SPOTS + UV SPOTS                            0.9740     0.9928   0.9714        0.9625
NORMAL + BROWN SPOTS + UV SPOTS + PORPHYRINS               0.9610     0.9919   0.9429        0.9700
NORMAL + BROWN SPOTS + UV SPOTS + PORPHYRINS + RED AREAS   0.9498     0.9898   0.9143        0.9476

AUC, area under the curve
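Sketching how the Table 3 combinations could be assembled and evaluated: the mode lists below are taken from the paper, build_multimode_densenet comes from the earlier sketch, and train_and_evaluate is a hypothetical placeholder for the training and evaluation loop, not a function from the study:

```python
# Each combination is a list of VISIA modes whose images are concatenated
# channel-wise before being fed to the network.
combinations = [
    ["NORMAL", "BROWN SPOTS"],
    ["NORMAL", "BROWN SPOTS", "UV SPOTS"],
    ["NORMAL", "BROWN SPOTS", "UV SPOTS", "PORPHYRINS"],
    ["NORMAL", "BROWN SPOTS", "UV SPOTS", "PORPHYRINS", "RED AREAS"],
]

for modes in combinations:
    model = build_multimode_densenet(num_modes=len(modes))  # from the sketch above
    # Hypothetical helper: trains on channel-stacked images of the listed
    # modes and reports test-set metrics.
    # acc, auc = train_and_evaluate(model, modes)
```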

Discussion

In recent years, deep learning has gained widespread attention for medical diagnosis, grading, and efficacy evaluation. In particular, deep learning shows superior performance in image classification and recognition tasks and has been applied in the field of dermatology. A previous study reported an artificial intelligence-assisted decision-making system for skin tumors with a recognition rate of 91.2% for benign and malignant skin tumors [16]. Lim et al. used a convolutional neural network to grade the severity of facial images of patients with acne and obtained a best classification accuracy of 67% [17]. Deep learning has also been shown to have excellent diagnostic or classification capabilities for rosacea, psoriasis, eczema, and atopic dermatitis [18–21].
So far, there have been few reports on the application of deep learning to melasma. One study used voting-based probabilistic linear discriminant analysis to classify non-tumorous skin pigmentation disorders, including melasma, with an accuracy of 67.7% for melasma [22]. Another study presented a spatial compounding-based denoising convolutional neural network for quantifying and evaluating melanin in optical coherence tomography images of melasma [23]. However, there is still a lack of research on large training datasets and high-accuracy diagnostic systems for melasma facial images. In this study, a large number of images were used as the training set for deep learning models, and we developed an accurate diagnostic system based on DenseNet121 for clinical melasma facial images. In a further experiment with multimode image input, we fused different modes of images in the VISIA system and fed them to the network to simulate how a dermatologist uses multiple modes of images to diagnose melasma. With the "NORMAL + BROWN SPOTS + UV SPOTS" combination, we achieved a high accuracy of 97.4% and an AUC of 99.28%.
In this study, we chose four deep learning models for comparison: three traditional convolutional neural networks (MobileNetv2, ResNet50, and DenseNet121) and a novel network (Swin Transformer). In previous studies, MobileNetv2 was able to run on lightweight computing devices and achieved high accuracy in the classification of skin disease images [24]; ResNet50 showed superior performance for segmentation and classification in multiple skin lesion diagnostics [25]; DenseNet121 was also used to segment skin and lesions [26]; and Swin Transformer, a novel fine-grained recognition framework, showed more powerful and robust features in medical image segmentation [27]. We therefore selected these mainstream, high-performance deep learning networks to explore their performance on the melasma diagnosis task. Our results indicated that, among MobileNetv2, ResNet50, and DenseNet121, DenseNet121 performed slightly better than ResNet50, while MobileNetv2 performed worst. In recent studies, DenseNet and ResNet have been compared for the identification of ductal carcinoma in situ and microinvasion of the breast using ultrasound images [28], the recognition of digital dental X-ray images [29], and the classification of glaucomatous fundus images [30]. In each of these applications, DenseNet was reported to perform better than ResNet. ResNet bypasses signals from one layer to the next through identity connections and combines features by summing them before passing them into a layer. In contrast, to ensure maximum information flow, each layer of DenseNet takes additional input from all preceding layers, passes its own feature maps to all subsequent layers, and combines features by concatenating them. These properties of DenseNet appear to be useful for our clinical images of melasma and non-melasma. Similarly, on another set of images with pigmented facial skin lesions, DenseNet also received a better performance evaluation than ResNet, which is consistent with our results [31]. On the other hand, Swin Transformer was proposed in 2021 as a novel model for computer vision that constructs hierarchical feature maps and has linear computational complexity with respect to image size. At present, only a few studies have reported the application of Swin Transformer to medical images [32, 33]. To the best of our knowledge, this is the first study to apply Swin Transformer to clinical dermatology images. Although Swin Transformer was not the best model for our task, its accuracy reached 88.48%. The medical application of Swin Transformer is worthy of further study.
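The difference between the two connection patterns can be made concrete with toy blocks; these are simplified illustrations of the summation-versus-concatenation contrast, not the actual ResNet50 or DenseNet121 blocks:

```python
# A residual block sums its input with the new features (width unchanged),
# while a dense block concatenates them, so later layers see all earlier
# feature maps directly and channels accumulate.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.relu(x + self.conv(x))      # combine by summation

class DenseBlock(nn.Module):
    def __init__(self, ch, growth):
        super().__init__()
        self.conv = nn.Conv2d(ch, growth, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.cat([x, torch.relu(self.conv(x))], dim=1)  # combine by concatenation

x = torch.rand(1, 16, 32, 32)
print(ResidualBlock(16)(x).shape)   # torch.Size([1, 16, 32, 32]) -- width unchanged
print(DenseBlock(16, 8)(x).shape)   # torch.Size([1, 24, 32, 32]) -- channels accumulate
```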
In clinical practice, dermatologists usually combine clinical information about patients, such as age, medical history, dermoscopy results, and multimode images from the VISIA system, to make the final diagnosis. This increases the likelihood of a correct decision, and it is worth investigating whether the same holds for artificial intelligence. Previous studies have shown that multiple types of information can improve the diagnostic ability of deep learning models. In a study by Tschandl et al., both dermoscopic images and clinical close-ups were used to train the network, and the combination was found to yield better results than the individual modalities [34]. Jin et al. found that multiple extracted histological features, including nuclei, mitosis, epithelial, and tubular cells, could further improve the detection of lymph node metastasis in patients with breast cancer [35]. This study used a multichannel image input method to fuse multiple image modes from the VISIA system and found that it could improve the accuracy of our network to some extent. As the amount of image information increased, the network performance improved. However, the optimal combination was "NORMAL + BROWN SPOTS + UV SPOTS," which had slightly higher accuracy and AUC than the other combinations and the single modes. The "PORPHYRINS" and "RED AREAS" modes did not appear to improve network performance under this multimode input method. This is noteworthy because it was previously believed that more information in the training data would improve network performance [36]. The five modes have the following characteristics: (1) the "NORMAL" mode identifies spots by their color and contrast with the surrounding skin; (2) the "UV SPOTS" mode is generated by the selective absorption of UV light by epidermal melanin; (3) the "BROWN SPOTS" mode reflects deeper deposition of melanin under cross-polarized light; (4) the "PORPHYRINS" mode is photographed under UV light, exploiting the fact that porphyrins fluoresce under UV light; and (5) the "RED AREAS" mode measures hemoglobin content through cross-polarized light [15]. Since the pigmentation of melasma is considered to involve both the epidermis and dermis [37], the "NORMAL," "UV SPOTS," and "BROWN SPOTS" modes might be the most useful for melasma diagnosis. Although the classification basis of a deep learning model remains unrevealed and is regarded as a black box, the information from the "PORPHYRINS" and "RED AREAS" modes might confuse our multichannel input network during information fusion, thereby degrading performance. The specific reason for this outcome requires further investigation.
On the basis of common clinical images, we developed a straightforward and rapid melasma diagnosis network, thus avoiding time-consuming invasive or noninvasive methods such as Wood's lamps, dermoscopy, and reflectance confocal microscopy. In subsequent clinical translation, we expect to develop a remote diagnostic tool for smartphones or the web on the basis of single-mode imaging for convenient self-diagnosis by patients, as well as downstream medical software for the VISIA system based on multimode image input, aiming to provide an accurate assistive diagnostic tool.
However, this study has some limitations. First, the data were obtained from a single center, and it would be better to validate our network in multiple centers to include more patients with different skin conditions. Second, owing to the fixed background and lighting, almost all image backgrounds were black. Thus, in future practical applications, it might be necessary to add a component to our diagnostic system that masks the background and better simulates various imaging conditions. Third, this study was conducted in Southwest China, and the Fitzpatrick types of all included patients were III and IV. Since higher phototypes exhibit more melanocytes [38], the network performance might vary in people of different Fitzpatrick phototypes; this requires further exploration on datasets of different skin types. In addition, our model was not able to quantify melasma severity on the basis of the Melasma Area and Severity Index, which would require higher-performance networks and more sophisticated algorithms. Finally, our results lacked further validation of model use, i.e., whether it could aid the diagnosis of melasma by non-dermatologists. This needs further investigation.

Conclusions

This study is the first to use a large sample of melasma images to compare the diagnostic performance of multiple deep learning models and to develop a diagnostic system for melasma. Multimode image combinations were evaluated using images taken under different lighting conditions in the VISIA system, which further increased diagnostic accuracy. Our study could provide a basis for the development of clinical diagnostic applications for melasma and other skin disorders. However, more clinical images of patients from multiple centers are needed to further improve the proposed diagnostic system.

Acknowledgements

Funding

This study was supported by the National Natural Science Foundation of China (no. 82073462). The journal's Rapid Service Fee was funded by the authors.

Author Contributions

Conceptualization, Lin Liu and Chen Liang. Data collection and processing, Tingqiao Chen, Yufan Lan, and Jiamei Wen. Interpretation of data, Yangmei Chen. Software, Yuzhou Xue and Chen Liang. Writing of the original draft, Lin Liu. Revision and finalization, Xinyi Shao and Jin Chen. All authors contributed to the article and approved the submitted version.

Disclosures

Lin Liu, Chen Liang, Yuzhou Xue, Tingqiao Chen, Yangmei Chen, Yufan Lan, Jiamei Wen, Xinyi Shao, and Jin Chen have nothing to disclose.

Compliance with Ethics Guidelines

This study was performed in accordance with the Declaration of Helsinki of 1964 and was approved by the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University (no. 2022-K349). All patient photographs were anonymized by covering the eyes manually for privacy purposes.

Medical Writing/Editorial Assistance

Editage provided English-language editing assistance, which was funded by the authors.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
References
1. Balkrishnan R, McMichael AJ, Camacho FT, et al. Development and validation of a health-related quality of life instrument for women with melasma. Br J Dermatol. 2003;149(3):572–7.
2. Pandya AG, Hynan LS, Bhore R, et al. Reliability assessment and validation of the Melasma Area and Severity Index (MASI) and a new modified MASI scoring method. J Am Acad Dermatol. 2011;64(1):78–83.
3. Ikino JK, Nunes DH, Silva VP, Fröde TS, Sens MM. Melasma and assessment of the quality of life in Brazilian women. An Bras Dermatol. 2015;90(2):196–200.
4. Passeron T, Picardo M. Melasma, a photoaging disorder. Pigment Cell Melanoma Res. 2018;31(4):461–5.
5. Handel AC, Miot LD, Miot HA. Melasma: a clinical and epidemiological review. An Bras Dermatol. 2014;89(5):771–82.
6. Perez M, Luke J, Rossi A. Melasma in Latin Americans. J Drugs Dermatol. 2011;10(5):517–23.
7. Aishwarya K, Bhagwat P, John N. Current concepts in melasma - a review article. J Skin Sexually Transm Dis. 2022;2(1):13–7.
8. Brochez L, Verhaeghe E, Bleyen L, Naeyaert JM. Diagnostic ability of general practitioners and dermatologists in discriminating pigmented skin lesions. J Am Acad Dermatol. 2001;44(6):979–86.
9. Wagner RF Jr, Wagner D, Tomich JM, Wagner KD, Grande DJ. Diagnoses of skin disease: dermatologists vs nondermatologists. J Dermatol Surg Oncol. 1985;11(5):476–9.
10. Zeng X, Qiu Y, Xiang W. In vivo reflectance confocal microscopy for evaluating common facial hyperpigmentation. Skin Res Technol. 2020;26(2):215–9.
11. Angsuwarangsee S, Polnikorn N. Combined ultrapulse CO2 laser and Q-switched alexandrite laser compared with Q-switched alexandrite laser alone for refractory melasma: split-face design. Dermatol Surg. 2003;29(1):59–64.
12. Lai D, Zhou S, Cheng S, Liu H, Cui Y. Laser therapy in the treatment of melasma: a systematic review and meta-analysis. Lasers Med Sci. 2022;37(4):2099–110.
13. Kim C, Gao JC, Moy J, Lee HS. Fractional CO2 laser and adjunctive therapies in skin of color melasma patients. JAAD Int. 2022;8:118–23.
14. Artzi O, Horovitz T, Bar-Ilan E, et al. The pathogenesis of melasma and implications for treatment. J Cosmet Dermatol. 2021;20(11):3432–45.
15. Goldsberry A, Hanke CW, Hanke KE. VISIA system: a possible tool in the cosmetic practice. J Drugs Dermatol. 2014;13(11):1312–4.
16. Li CX, Fei WM, Shen CB, et al. Diagnostic capacity of skin tumor artificial intelligence-assisted decision-making software in real-world clinical settings. Chin Med J. 2020;133(17):2020–6.
17. Lim ZV, Akram F, Ngo CP, et al. Automated grading of acne vulgaris by deep learning with convolutional neural networks. Skin Res Technol. 2020;26(2):187–92.
18. Zhao Z, Wu CM, Zhang S, et al. A novel convolutional neural network for the diagnosis and classification of rosacea: usability study. JMIR Med Inform. 2021;9(3):e23415.
19. Zhao S, Xie B, Li Y, et al. Smart identification of psoriasis by images using convolutional neural networks: a case study in China. J Eur Acad Dermatol Venereol. 2020;34(3):518–24.
20. Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol. 2018;138(7):1529–38.
21. Huang K, He X, Jin Z, et al. Assistant diagnosis of basal cell carcinoma and seborrheic keratosis in Chinese population using convolutional neural network. J Healthc Eng. 2020;2020:1713904.
22. Liang Y, Sun L, Ser W, et al. Classification of non-tumorous skin pigmentation disorders using voting based probabilistic linear discriminant analysis. Comput Biol Med. 2018;99:123–32.
23. Chen IL, Wang YJ, Chang CC, et al. Computer-aided detection (CADe) system with optical coherent tomography for melanin morphology quantification in melasma patients. Diagnostics (Basel). 2021;11(8):1498.
24. Srinivasu PN, SivaSai JG, Ijaz MF, Bhoi AK, Kim W, Kang JJ. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors (Basel). 2021;21(8):2852.
25. Al-Masni MA, Kim DH, Kim TS. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput Methods Programs Biomed. 2020;190:105351.
26. Munthuli A, Intanai J, Tossanuch P, et al. Extravasation screening and severity prediction from skin lesion image using deep neural networks. Annu Int Conf IEEE Eng Med Biol Soc. 2022;2022:1827–33.
27. Du H, Wang J, Liu M, Wang Y, Meijering E. SwinPA-Net: Swin Transformer-based multiscale feature pyramid aggregation network for medical image segmentation. IEEE Trans Neural Netw Learn Syst. 2022.
28. Zhu M, Pi Y, Jiang Z, et al. Application of deep learning to identify ductal carcinoma in situ and microinvasion of the breast using ultrasound imaging. Quant Imaging Med Surg. 2022;12(9):4633–46.
30. Singh H, Saini SS, Lakshminarayanan V. Rapid classification of glaucomatous fundus images. J Opt Soc Am A Opt Image Sci Vis. 2021;38(6):765–74.
31. Yang Y, Ge Y, Guo L, et al. Development and validation of two artificial intelligence models for diagnosing benign, pigmented facial skin lesions. Skin Res Technol. 2021;27(1):74–9.
32. Peng L, Wang C, Tian G, et al. Analysis of CT scan images for COVID-19 pneumonia based on a deep ensemble framework with DenseNet, Swin Transformer, and RegNet. Front Microbiol. 2022;13:995323.
33. Chi J, Sun Z, Wang H, Lyu P, Yu X, Wu C. CT image super-resolution reconstruction based on global hybrid attention. Comput Biol Med. 2022;150:106112.
34. Tschandl P, Rosendahl C, Akay BN, et al. Expert-level diagnosis of nonpigmented skin cancer by combined convolutional neural networks. JAMA Dermatol. 2019;155(1):58–65.
35. Jin YW, Jia S, Ashraf AB, Hu P. Integrative data augmentation with U-Net segmentation masks improves detection of lymph node metastases in breast cancer patients. Cancers. 2020;12(10):2934.
36. Li Y, Kong AW, Thng S. Segmenting vitiligo on clinical face images using CNN trained on synthetic and internet images. IEEE J Biomed Health Inform. 2021;25(8):3082–93.
37. Kang HY, Bahadoran P, Suzuki I, et al. In vivo reflectance confocal microscopy detects pigmentary changes in melasma at a cellular level resolution. Exp Dermatol. 2010;19(8):e228–33.
38. Chan IL, Cohen S, da Cunha MG, Maluf LC. Characteristics and management of Asian skin. Int J Dermatol. 2019;58(2):131–43.
Metadata
Title: An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input
Authors: Lin Liu, Chen Liang, Yuzhou Xue, Tingqiao Chen, Yangmei Chen, Yufan Lan, Jiamei Wen, Xinyi Shao, Jin Chen
Publication date: 28.12.2022
Publisher: Springer Healthcare
Published in: Dermatology and Therapy | Issue 2/2023
Print ISSN: 2193-8210
Electronic ISSN: 2190-9172
DOI: https://doi.org/10.1007/s13555-022-00874-z
