Introduction

Chest radiographs are among the most frequently performed imaging examinations in radiology. They have also been widely used in computer vision research, because chest radiography is a standardized technique and, compared with other radiological examinations such as computed tomography or magnetic resonance imaging, covers a smaller set of relevant pathologies. Although many artificial neural networks for the classification of chest radiographs have been developed, the topic remains the subject of intensive research.

Only a few groups design their own networks from scratch; most use established architectures such as ResNet-50 or DenseNet-121 (with 50 and 121 denoting the number of layers in the respective network)1,2,3,4,5,6. These networks have usually been pre-trained on large, openly available datasets, such as ImageNet, and are therefore already able to recognize numerous image features. When training a model for a new task, such as the classification of chest radiographs, using a pre-trained network may improve training speed and accuracy, since important image features that have already been learned can be transferred to the new task and do not have to be learned again. However, the feature space of freely available datasets such as ImageNet differs from that of chest radiographs, as they contain color images and many more categories. The ImageNet Challenge distinguishes 1,000 possible categories per image, while CheXpert, a large freely available dataset of chest radiographs, only distinguishes between 14 categories (or classes)7, and the COVID-19 Image Data Collection only differentiates between three classes8. Although the ImageNet Challenge showed a trend towards higher accuracies with an increasing number of layers in a CNN, this may not be necessary for a medical image classification task.

In radiology, sometimes only small regions of an image are decisive for the diagnosis. Images therefore cannot be scaled down arbitrarily, as the required information would otherwise be lost. Yet the more complex a CNN, the more resources are required for training and deployment. Because increasing the resolution of the input images sharply increases memory usage during training of large neural networks with many parameters, the mini-batch size has to be reduced earlier and more strongly (to values between 2 and 16), which may affect optimizers such as stochastic gradient descent.

Therefore, it remains to be determined which of the available artificial neural networks designed for and trained on the ImageNet dataset performs best for the classification of chest radiographs. The hypothesis of this work is that the number of layers in a CNN is not necessarily decisive for good performance on medical data: CNNs with fewer layers might perform similarly to deeper, more complex networks while requiring fewer resources. We therefore systematically investigate the performance of sixteen openly available CNNs on the CheXpert dataset and the COVID-19 Image Data Collection.

Methods

Data preparation

The freely available CheXpert dataset consists of 224,316 chest radiographs from 65,240 patients. Fourteen findings have been annotated for each image: no finding, enlarged cardiomediastinum, cardiomegaly, lung opacity, lung lesion, edema, consolidation, pneumonia, atelectasis, pneumothorax, pleural effusion, pleural other, fracture and support devices. Each finding can be annotated as present (1), absent (NA) or uncertain (−1). Similar to previous work on the classification of the CheXpert dataset3,9, we trained the networks on a subset of labels: cardiomegaly, edema, consolidation, atelectasis and pleural effusion. As we aim at comparing networks rather than maximizing the precision of a single network, every image with an uncertainty label was excluded from this analysis; other approaches such as zero imputation or self-training were not adopted. Furthermore, only frontal radiographs were used, leaving 135,494 images from 53,388 patients for training. CheXpert offers an additional dataset of 235 images (201 images after excluding uncertainty labels and lateral radiographs), annotated by two independent radiologists, which is intended as an evaluation dataset and was used for this purpose.
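
To illustrate the label handling described above, the following minimal Python/pandas sketch shows one possible way to derive such a training subset from the CheXpert metadata; the file name train.csv, the column names and the treatment of blank labels as absent are assumptions based on the publicly distributed CSV format, not details taken from the study.

```python
import pandas as pd

# The five findings used for training (subset of the 14 CheXpert labels).
FINDINGS = ["Cardiomegaly", "Edema", "Consolidation", "Atelectasis", "Pleural Effusion"]

# Column names assumed to follow the CSV shipped with CheXpert.
df = pd.read_csv("CheXpert-v1.0/train.csv")

# Keep frontal radiographs only.
df = df[df["Frontal/Lateral"] == "Frontal"]

# Exclude every image that carries an uncertainty label (-1) for any of the
# five findings; no zero imputation or self-training is applied.
uncertain = (df[FINDINGS] == -1).any(axis=1)
df = df[~uncertain].copy()

# Treat remaining blank entries (finding not mentioned) as absent.
df[FINDINGS] = df[FINDINGS].fillna(0).astype(int)

print(f"{len(df)} frontal images without uncertainty labels remain for training")
```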

The COVID-19 Image Data Collection is a dataset of chest radiographs focusing on the novel coronavirus SARS-CoV-2 and the associated COVID-19 pneumonia. The dataset is still under active development; at the time of our analysis it consisted of 46,754 chest radiographs, of which 30,174 represent normal cases without pneumonia, 16,384 are cases of non-COVID-19 pneumonia and 196 are radiographs of confirmed COVID-19 pneumonia. We split the collection into a dataset for training and validation consisting of 43,754 cases (28,240 normal, 15,333 non-COVID-19 pneumonia and 181 COVID-19) and a test dataset of 3,000 cases (1,934 normal, 1,051 non-COVID-19 pneumonia and 15 COVID-19).

Data augmentation

For the first and second training session, the images were scaled to 320 × 320 pixels using bilinear interpolation, and pixel values were normalized. During training, multiple image transformations were applied: flipping of the images along the horizontal and vertical axes, rotation of up to 10°, zooming of up to 110%, random lighting changes and symmetric warping.
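
As a minimal sketch of this augmentation pipeline, the snippet below uses the fastai v1 API; the magnitudes chosen for lighting and warping, the validation split and the use of ImageNet normalization statistics are assumptions, as they are not reported above.

```python
from fastai.vision import ImageList, get_transforms, imagenet_stats

# Augmentations mirroring the description above (fastai v1); the lighting and
# warp magnitudes are assumed values.
tfms = get_transforms(
    do_flip=True,      # flip along the horizontal axis
    flip_vert=True,    # additionally flip along the vertical axis
    max_rotate=10.0,   # rotate by up to 10 degrees
    max_zoom=1.1,      # zoom by up to 110 %
    max_lighting=0.2,  # add random lighting changes
    max_warp=0.2,      # apply symmetric warping
)

# Bundle the filtered CheXpert table `df` (see the data-preparation sketch)
# into a DataBunch: bilinear rescaling to 320 x 320 px and normalization.
data = (ImageList.from_df(df, path=".", cols="Path")
        .split_by_rand_pct(0.1)          # assumed validation split
        .label_from_df(cols=FINDINGS)    # multilabel target
        .transform(tfms, size=320)
        .databunch(bs=32)
        .normalize(imagenet_stats))
```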

Model training

Sixteen different convolutional neural networks (CNNs) from six architecture families (ResNet, DenseNet, VGG, SqueezeNet, Inception v4 and AlexNet) were trained on the two datasets1,2,10,11,12,13. All training was done using the Python programming language (https://www.python.org, version 3.8) with the PyTorch (https://pytorch.org) and FastAI (https://fast.ai) libraries on a workstation running Ubuntu 18.04 with two Nvidia GeForce RTX 2080 Ti graphics cards (11 GB of RAM each)14,15. In the first training session, the batch size was held constant at 16 for all models; in the second session it was increased to 32 for all networks. We used two different batch sizes because the maximum batch size is limited mainly by the available GPU RAM and can therefore only be increased to a limited extent, especially for larger and thus more memory-demanding networks. Particularly with increased image resolution, lowering the batch size becomes the major limitation on network performance.
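
For reference, the sketch below lists the torchvision constructors corresponding to the compared architectures; which exact VGG variants (with or without batch normalization) were used is an assumption, and Inception v4 is not part of torchvision but available from third-party packages such as pretrainedmodels.

```python
import torchvision.models as models

# ImageNet-pretrained backbones compared in this study (torchvision constructors).
# Inception v4 is not included in torchvision and would come from a third-party
# package such as `pretrainedmodels`.
ARCHITECTURES = {
    "AlexNet":        models.alexnet,
    "VGG-13":         models.vgg13,
    "VGG-16":         models.vgg16,
    "VGG-19":         models.vgg19,
    "ResNet-18":      models.resnet18,
    "ResNet-34":      models.resnet34,
    "ResNet-50":      models.resnet50,
    "ResNet-101":     models.resnet101,
    "ResNet-152":     models.resnet152,
    "DenseNet-121":   models.densenet121,
    "DenseNet-161":   models.densenet161,
    "DenseNet-169":   models.densenet169,
    "DenseNet-201":   models.densenet201,
    "SqueezeNet 1.0": models.squeezenet1_0,
    "SqueezeNet 1.1": models.squeezenet1_1,
}

def build_backbone(name):
    """Return the ImageNet-pretrained backbone; the classification head is
    replaced later when the learner is built."""
    return ARCHITECTURES[name](pretrained=True)
```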

Each model was trained for eight epochs; during the first five epochs, only the classification head of each network was trained. Thereafter, the model was unfrozen and trained as a whole for three additional epochs. Before training and after the first five epochs, the optimal learning rate was determined16. For CheXpert, it was between 1e−1 and 1e−2 for the first five epochs and between 1e−5 and 1e−6 for the remainder of the training; for the COVID-19 Image Data Collection, it was between 1e−2 and 1e−3 for the first five epochs and between 1e−5 and 1e−6 for the remainder. We trained a multilabel classification head for each model on the CheXpert dataset and a multi-class head on the COVID-19 Image Data Collection. Since the performance of a neural network is subject to minor random fluctuations, training was repeated five times. The predictions on the validation dataset were then exported as comma-separated values (CSV) for evaluation.
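
A minimal fastai v1 sketch of this two-stage schedule is given below; `data` is the DataBunch from the augmentation sketch above, and the use of one-cycle training as well as the exact learning-rate values are assumptions within the ranges reported.

```python
import pandas as pd
from fastai.vision import cnn_learner, models

# Two-stage schedule, sketched for one architecture (ResNet-50).
learn = cnn_learner(data, models.resnet50, pretrained=True)

# Epochs 1-5: only the randomly initialised classification head is trained.
learn.freeze()
learn.lr_find()                       # learning-rate range test (ref. 16)
learn.fit_one_cycle(5, max_lr=1e-1)   # CheXpert: roughly 1e-1 to 1e-2

# Epochs 6-8: unfreeze the backbone and fine-tune the whole network.
learn.unfreeze()
learn.lr_find()
learn.fit_one_cycle(3, max_lr=1e-5)   # roughly 1e-5 to 1e-6

# Export validation-set predictions as CSV for the evaluation in R.
preds, _ = learn.get_preds()
pd.DataFrame(preds.numpy(), columns=data.classes).to_csv("predictions.csv", index=False)
```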

Evaluation

Evaluation was performed in the “R” statistical environment using the “tidyverse” and “ROCR” libraries17,18,19. The predictions of the five models of each network architecture on the validation dataset were pooled so that the models could be evaluated as a consortium. For each individual prediction as well as for the pooled predictions, receiver operating characteristic (ROC) curves and precision-recall curves (PRC) were plotted and the areas under each curve were calculated (AUROC and AUPRC). AUROC and AUPRC were chosen because they allow a comparison of different models independent of a chosen classification threshold. Sensitivity and specificity were calculated with an individual cut-off for each network, chosen such that the sum of sensitivity and specificity was the highest achievable for the respective network.
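
The study performs this evaluation in R with the ROCR package; the sketch below illustrates an equivalent workflow in Python with scikit-learn, where pooling the five runs is implemented as averaging their predicted probabilities, which is one possible reading of evaluating the models “as a consortium”.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def evaluate_pooled(y_true, run_predictions):
    """Pool the predicted probabilities of the five training runs (here: mean)
    and compute AUROC, AUPRC and the cut-off maximising sensitivity + specificity."""
    pooled = np.mean(run_predictions, axis=0)
    auroc = roc_auc_score(y_true, pooled)
    auprc = average_precision_score(y_true, pooled)  # area under the PRC

    # Threshold-independent metrics above; the operating point below is the
    # cut-off with the highest sum of sensitivity and specificity.
    fpr, tpr, thresholds = roc_curve(y_true, pooled)
    best = np.argmax(tpr + (1 - fpr))
    return {"AUROC": auroc, "AUPRC": auprc, "cut-off": thresholds[best],
            "sensitivity": tpr[best], "specificity": 1 - fpr[best]}

# Example usage: y_true = binary labels of the validation images for one finding,
# runs = list of five probability arrays exported from the five trainings.
# print(evaluate_pooled(y_true, runs))
```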

Results

The CheXpert validation dataset consists of 234 studies from 200 patients that were not used for training and contain no uncertainty labels. After excluding lateral radiographs (n = 32), 202 images from 200 patients remained. The dataset shows class imbalances (positives per finding: cardiomegaly 33%, edema 21%, consolidation 16%, atelectasis 37%, pleural effusion 32%), so the AUPRC is reported in addition to the AUROC. The performance of the tested networks is compared with the AUROC reported by Irvin et al.3; however, only values for the AUROC, not the AUPRC, are provided there.

In most cases, the best results were achieved with a batch size of 32, so all values reported below refer to models trained with this batch size; results achieved with the smaller batch size of 16 are explicitly marked as such.

Area Under the Receiver Operating Characteristic Curve

On the CheXpert dataset, deeper CNNs generally achieved higher AUROC values than shallower networks (Table 1 and Figs. 1, 2, 3). Regarding the pooled AUROC for the detection of the five pathologies, ResNet-152 (0.882), DenseNet-161 (0.881) and ResNet-50 (0.881) performed best (Irvin et al. CheXpert baseline 0.889)3. Broken down by individual findings, the most accurate detection of atelectasis was achieved by ResNet-18 (0.816, batch size 16), ResNet-101 (0.813, batch size 16), VGG-19 (0.813, batch size 16) and ResNet-50 (0.811). For the detection of cardiomegaly, the four best models surpassed the CheXpert baseline of 0.828 (ResNet-34 0.840, ResNet-152 0.836, DenseNet-161 0.834, ResNet-50 0.832). For consolidation, the highest AUROC was achieved by ResNet-152 (0.917), ResNet-50 (0.916) and DenseNet-161 (0.913). Pulmonary edema was most accurately detected by DenseNet-161 (0.923), DenseNet-169 (0.922) and DenseNet-201 (0.922). For pleural effusion, the four best models were ResNet-152 (0.937), ResNet-101 (0.936), ResNet-50 (0.934) and DenseNet-169 (0.934), all of which performed better than the CheXpert baseline of 0.928.

Table 1 Area under the receiver operating characteristic curves—CheXpert.
Figure 1

ROC curves for AlexNet, DenseNet and Inception v4 models. The colored lines represent a single training, black lines represent the pooled performance over five trainings. The figure was generated in R20.

Figure 2

ROC-curves for the models with ResNet architectures. The colored lines represent a single training, black lines represent the pooled performance over five trainings. The figure was generated in R20.

Figure 3

ROC-curves for the models with Squeezenet and VGG architectures. The colored lines represent a single training, black lines represent the pooled performance over five trainings.

On the COVID-19 dataset, the AUROC did not differ substantially between the models, with pooled values ranging from 0.98 to 0.998 (Table 2, Figures S1–S3 in the Supplementary Information). The highest pooled AUROC of 0.998 was achieved by DenseNet-169 and DenseNet-201, with AUROC values of 1.00 for the detection of COVID-19 and 0.997 for the detection of non-COVID-19 pneumonia or the absence of pneumonia.

Table 2 Area under the receiver operating characteristic curves—COVID-19 Image Data Collection.

Area Under the Precision Recall Curve and Sensitivity and Specificity

For the AUPRC, CNNs with fewer convolutional layers could achieve higher values than deeper network architectures (Table S1 and Figures S1–S3 in the Supplementary Information). The highest pooled AUPRC values were achieved by VGG-16 (0.709), AlexNet (0.701) and ResNet-34 (0.688). For atelectasis, VGG-16 and AlexNet both achieved the highest AUPRC of 0.732, followed by ResNet-34 with 0.652. Cardiomegaly was most accurately detected by SqueezeNet 1.0 (0.565), AlexNet (0.565) and VGG-13 (0.563). SqueezeNet 1.0 also achieved the highest AUPRC for consolidation (0.815), followed by ResNet-152 (0.810) and ResNet-50 (0.809). The best classification of pulmonary edema was achieved by DenseNet-169, DenseNet-161 (both 0.743) and DenseNet-201 (0.742). Finally, for pleural effusion, ResNet-101 and ResNet-152 achieved the highest AUPRC of 0.591, followed by ResNet-50 (0.590). For an overview of sensitivities and specificities (including confidence intervals), please refer to Tables 3 and 4.

Table 3 Sensitivity on the CheXpert dataset.
Table 4 Specificity on the CheXpert dataset.

On the COVID-19 dataset, very high AUPRC values were reached with all sixteen CNN architectures for the detection of non-COVID-19 pneumonia or the absence of pneumonia (Table S2 in the Supplementary Information, Figs. 4, 5, 6). For the detection of COVID-19 pneumonia, however, performance was heterogeneous: DenseNet-121 reached an AUPRC of only 0.329, whereas VGG-19 achieved 0.925. It should be noted that there were only 15 COVID-19 cases in the 3,000-image test dataset, so even a single misclassification likely had a major impact on the measured performance.

Figure 4

Illustration of the precision recall curves for AlexNet, DenseNet and Inception v4 models. The colored lines represent a single training, black lines represent the pooled performance over five trainings. The figure was generated in R20.

Figure 5

Illustration of the precision recall curves for models with ResNet architectures. The colored lines represent a single training, black lines represent the pooled performance over five trainings. The figure was generated in R20.

Figure 6

Illustration of the precision recall curves for models with Squeezenet and VGG architectures. The colored lines represent a single training, black lines represent the pooled performance over five trainings. The figure was generated in R20.

Training time

Fourteen different network architectures were trained ten times each with a multilabel classification head (five times each at batch sizes of 16 and 32, with an input image resolution of 320 × 320 pixels) and once with a binary classification head for each finding, resulting in 210 individual training runs. Overall, training took 340 h. As expected, training deeper networks required more time than training shallower networks. At an image resolution of 320 × 320 pixels, training AlexNet required the least time, with 2:29–2:50 min per epoch and a total duration of 20:25 min at a batch size of 32. With the smaller batch size of 16, the time per epoch rose to 2:59–3:06 min and the total duration to 24:01 min. In contrast, at a batch size of 16, training DenseNet-201 took the longest, at 5:11:22 h with epochs requiring between 40:58 and 41:00 min. At a batch size of 32, training DenseNet-169 required the most time, at 3:05:49 h (epochs between 20:57 and 27:01 min). Increasing the batch size from 16 to 32 led to an average acceleration of training by 29.9% ± 9.34%. Table 6 gives an overview of training times.

Table 5 Sensitivity and specificity for the detection of COVID-19 or non-COVID-19 pneumonia.

On the COVID-19 Image Data Collection, training of one epoch took between 03:52 and 11:33 min, with little difference in epoch duration between the models. This is probably primarily because, in contrast to the CheXpert dataset, in which all images are available at a resolution of 320 × 320 px, the images of the COVID-19 Image Data Collection had to be down-scaled to 320 × 320 px on the fly, which likely represented the performance bottleneck of the training. Considering the nevertheless very short training times, we refrained from downsizing the images to 320 × 320 px in advance (Table 6).

Table 6 Duration of training for the different models.

Discussion

In the present work, different artificial neural network architectures are analyzed with respect to their performance in the classification of chest radiographs. We could show that deeper neural networks do not necessarily perform better than shallower networks. Instead, an accurate classification of chest radiographs may be achieved with comparably shallow networks, such as AlexNet (8 layers), ResNet-34 or VGG-16.

The use of CNNs with fewer layers has the advantage of lower hardware requirements and shorter training times compared with their deeper counterparts. Shorter training times allow more hyperparameters to be tested and facilitate the overall training process. Lower hardware requirements also enable the use of higher image resolutions. This could be relevant for the evaluation of chest radiographs with a native resolution of 2,048 × 2,048 to 4,280 × 4,280 px, where specific findings, such as a small pneumothorax, require higher-resolution input images, because the crucial information regarding their presence could otherwise be lost during downscaling. Furthermore, shorter training times might simplify the integration of improvement methods into the training process, such as the implementation of ‘human in the loop’ annotations. ‘Human in the loop’ means that the training of a network is supervised by a human expert, who may intervene and correct the network at critical steps. For example, the human expert may check the misclassifications with the highest loss for incorrect labels, thus effectively reducing label noise; as sketched below, such a review queue can be built directly from the exported per-image losses. With shorter training times, such feedback loops can be executed faster. In the CheXpert dataset, which was used as the groundwork for the present analysis, the image labels were generated with a purpose-built natural language processing tool, which did not produce perfect labels. For example, the F1 scores for the mention and subsequent negation of cardiomegaly were 0.973 and 0.909, and the F1 score for an uncertainty label was 0.727. It can therefore be assumed that there is a certain amount of noise in the training data, which might affect the accuracy of models trained on it. Implementing a human-in-the-loop approach to partially correct this label noise could further improve the performance of networks trained on the CheXpert dataset21.

Our findings differ from previous work, in which deeper network architectures, mainly DenseNet-121, were used to classify the CheXpert dataset6,9,22. The authors of the CheXpert dataset achieved an average overall AUROC of 0.8893 using a DenseNet-121, which was not surpassed by any of the models in our analysis, although the differences between the best-performing networks and the CheXpert baseline were smaller than 0.01. It should be noted, however, that the hyperparameters in our analysis were probably not selected as carefully as in the original CheXpert paper by Irvin et al., since the focus of this work was on comparing different architectures rather than optimizing one specific network. Keeping all other hyperparameters constant across the models might also have affected certain architectures more than others, thus lowering the comparability between the networks we evaluated.
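
To illustrate such a feedback loop, the hedged sketch below ranks validation images by their per-image binary cross-entropy so that an expert could re-check the labels of the highest-loss cases first; this is only a schematic example under assumed inputs, not part of the published analysis.

```python
import numpy as np

def review_queue(probabilities, labels, n=50, eps=1e-7):
    """Return the indices of the n images with the highest per-image loss,
    i.e. the cases a human expert would re-check first for label errors."""
    p = np.clip(probabilities, eps, 1 - eps)
    loss = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))  # binary cross-entropy
    return np.argsort(loss)[::-1][:n]  # highest loss first
```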

The comparability of our approach with the CheXpert baseline might also be limited because we adopted a different evaluation method and excluded lateral radiographs. Including lateral radiographs makes the dataset more diverse and possibly more challenging for the models. On the other hand, some findings, such as small effusions, can only be seen on lateral radiographs and would thus be missed by our models, leading to a lower accuracy in comparison with the CheXpert baseline.

Pham et al. also used a DenseNet-121 as the basis for their model and reported the most accurate model on the CheXpert dataset, with a mean AUROC of 0.940 for the five selected findings6. These good results are probably due to the hierarchical structure of their classification framework, which takes correlations between different labels into account, and to a label-smoothing technique that also allows the use of uncertainty labels (which were excluded in the present work). Allaouzi et al. likewise used a DenseNet-121 and created three different models for the classification of CheXpert and ChestX-ray14, yielding an AUC of 0.72 for atelectasis, 0.87–0.88 for cardiomegaly, 0.74–0.77 for consolidation, 0.86–0.87 for edema and 0.90 for effusion22. Except for cardiomegaly, we achieved better values with several models (e.g. ResNet-34, ResNet-50, AlexNet, VGG-16). This suggests that complex deep networks are not necessarily superior to shallower networks for chest radiograph classification. At least for the CheXpert dataset, methods that optimize the handling of uncertainty labels and exploit the hierarchical structure of the data appear to be important for improving model performance. Sabottke et al. trained a ResNet-32 for the classification of chest radiographs and are therefore one of the few groups using a smaller network9. With an AUROC of 0.809 for atelectasis, 0.925 for cardiomegaly, 0.888 for edema and 0.859 for effusion, their network did not perform as well as some of the networks tested here. Raghu et al. employed a ResNet-50, an Inception-v3 and a custom-designed small network; similar to our findings, they observed that smaller networks showed performance comparable to deeper networks7.

Regarding the COVID-19 Image Data Collection, only few analyses have been published so far owing to the novelty of this dataset. Nearly all tested networks performed similarly, again showing that very deep and complex neural network architectures are not necessarily needed for the analysis of chest radiographs. Farooq and Hafeez published a model with an accuracy of 100% for the detection of COVID-19; however, they only had eight cases in their dataset, even fewer than in the present analysis23. The original COVID-Net achieved a sensitivity of 91% for the detection of COVID-19, which was surpassed by all models in our analysis24. However, the dataset was substantially smaller at the time COVID-Net was trained, which could have affected its accuracy.

A limitation of the present work is that only two openly available datasets were used. Consequently, overfitting and thus a lower generalizability of the results cannot be excluded and should be considered when interpreting our findings. However, this is a common problem in deep learning research25.

Conclusion

In the present work, we showed that increasing the complexity and depth of artificial neural networks for the classification of chest radiographs is not always necessary to achieve state-of-the-art results. In contrast to many previous studies, which mostly used a 121-layer DenseNet, we achieved comparable results with networks consisting of far fewer layers (e.g. eight layers for AlexNet). Especially with limited hardware, such networks could be advantageous because they can be trained faster and more efficiently.