Background
Pneumothorax (PTX) is an acute pulmonary condition in which air abnormally accumulates in the pleural space between the chest wall and the lung, impairing respiration [1, 2]. According to a previous study in the United States, PTX can occur in a variety of clinical settings and in individuals of any age, with a 35% recurrence rate in men [3]. PTX can cause pleuritic chest discomfort and dyspnea, and in severe cases may precipitate a life-threatening emergency with cardio-respiratory collapse, requiring immediate intervention and subsequent prevention [4].
The screening and diagnosis of pneumothorax usually rely on chest radiographs, which are formed by differences in the absorption of ionizing X-ray radiation among the tissues of the chest [5]. Because chest radiographs project all three-dimensional anatomical cues of the chest onto a two-dimensional plane, pneumothoraces in chest X-rays may be very subtle and may overlap with the ribs or clavicles. Identifying pneumothorax on a chest X-ray is therefore difficult and depends largely on the experience of the radiologist. Failure by radiologists to detect PTX at early examination is one of the leading causes of PTX death [2]. Consequently, an automatic algorithm is highly desirable to reduce missed diagnoses and to help radiologists identify PTX accurately and promptly.
Conventional PTX detection methods mainly extract local and global texture cues [6], features from the phase stretch transform (PST) [2], and local binary patterns (LBP), and then employ a support vector machine (SVM) to classify the presence or absence of pneumothorax [7]. These algorithms, which rely on hand-crafted features and require prior knowledge for feature engineering, are suited to detecting regular organs and lesions whose shape, appearance, and data distribution can be modeled consistently. However, their modeling capability is very limited when the shape and size of PTX vary greatly and its characteristics are not distinctive.
Recently, deep learning-based technologies, especially convolutional neural networks (CNNs), have shown great potential in medical image analysis [8, 9]. Several deep CNN algorithms have been proposed for PTX identification with image-level annotation. Wang et al. [10] released a large-scale chest X-ray dataset with image-level annotations and proposed a deep CNN for the classification of 14 abnormalities (including PTX) on chest X-rays; this study is a milestone of PTX detection in the era of deep learning. Later studies [11–14] proposed more accurate classification networks for the 14 chest diseases, and the studies of [4, 15] proposed methods dedicated to PTX detection. Although these deep learning-based methods have demonstrated effectiveness in PTX identification with image-level annotation, image-level labels make the localization of pneumothorax on chest X-rays insufficiently precise. Since segmentation of the PTX region can help identify large pneumothoraces for an automatic triaging scheme [16], accurate segmentation with pixel-level annotation is crucial for precise localization of pneumothorax. However, because pixel-level annotations of PTX are difficult to obtain, studies on PTX segmentation remain scarce.
Lesion segmentation in medical images is a fundamental tool supporting lesion analysis and treatment planning. An automatic and accurate segmentation tool can better assist radiologists in quantitative image analysis and support precise diagnosis. In this study, we create a large chest X-ray dataset for pneumothorax with pixel-level annotations by radiologists and explore an automatic segmentation algorithm for PTX identification using fully convolutional networks (FCNs) [17]. FCNs were introduced as a natural extension of CNNs that formulates semantic segmentation as a pixel-wise classification problem. FCNs and their extensions, such as U-Net [18], have achieved remarkable performance on tasks such as the segmentation of the lungs, clavicles, and heart in chest radiographs [19], brain tumor segmentation [20], and estimation of the cardiothoracic ratio [21]. However, PTX areas in chest X-rays may be very subtle, vary in shape, and overlap with the ribs or clavicles; the PTX segmentation task therefore suffers from pixel imbalance and multi-scale problems.
In this study, we propose a fully convolutional multi-scale scSE-DenseNet framework for PTX segmentation and diagnosis with pixel-level annotation on chest X-rays. The framework consists of three modules: (1) a fully convolutional DenseNet (FC-DenseNet), which is parameter-efficient and serves as the backbone of the framework; (2) a multi-scale module that captures the variability of viewpoint-related objects and learns the relationships across image structures at multiple scales; and (3) a scSE module, incorporated into each convolution layer of the dense blocks of FC-DenseNet, which adaptively recalibrates feature maps to emphasize useful features while suppressing less useful ones without adding many parameters. To tackle the pixel imbalance problem [22], we also introduce a spatially weighted cross-entropy loss (SW-CEL) that penalizes target-area, background, and boundary pixels with different weights. The proposed method not only reduces the impact of class imbalance but also better delineates boundary areas, enabling accurate segmentation and diagnosis of pneumothorax. This study extends our preliminary work [23] by redesigning the automatic segmentation and diagnosis framework for PTX, adding extensive experiments to evaluate the automatic segmentation and diagnosis of PTX, and discussing the effects of different growth rates and loss functions on PTX segmentation.
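To make the feature-recalibration idea concrete, the following NumPy sketch illustrates a concurrent spatial and channel squeeze-and-excitation (scSE) block in the spirit of the original scSE formulation. The weight arrays `w1`, `w2`, and `w_sp` stand in for parameters that would be learned during training, and the reduction ratio and max-fusion of the two branches are assumptions for illustration, not necessarily the exact configuration used in our framework.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scse_block(x, w1, w2, w_sp):
    """Concurrent spatial & channel squeeze-and-excitation (scSE) sketch.

    x    : feature map, shape (C, H, W)
    w1   : (C//r, C) weights of the channel-squeeze FC layer
    w2   : (C, C//r) weights of the channel-excitation FC layer
    w_sp : (C,) weights of the 1x1 convolution for spatial squeeze
    (all weights are hypothetical stand-ins for learned parameters)
    """
    # cSE branch: squeeze spatially, excite per channel
    z = x.mean(axis=(1, 2))                      # (C,) global average pool
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))      # (C,) channel gates in (0, 1)
    cse = x * s[:, None, None]
    # sSE branch: squeeze channels with a 1x1 conv, excite per pixel
    q = sigmoid(np.tensordot(w_sp, x, axes=1))   # (H, W) spatial gates in (0, 1)
    sse = x * q[None, :, :]
    # scSE: element-wise maximum of the two recalibrated maps
    return np.maximum(cse, sse)
```

The channel branch gates entire feature maps while the spatial branch gates individual pixel positions; taking the element-wise maximum lets the stronger recalibration dominate at each location.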
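The weighting scheme behind the SW-CEL can be sketched as follows in NumPy. The weight values `w_fg`, `w_bg`, `w_bd` and the 4-neighbour boundary definition are illustrative assumptions rather than the exact settings used in our experiments; the point is that each pixel's cross-entropy term is scaled by a weight map that up-weights the target area and, most strongly, its boundary.

```python
import numpy as np

def softmax(logits, axis=0):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def boundary_mask(mask):
    """Pixels whose label differs from any 4-neighbour (illustrative definition)."""
    padded = np.pad(mask, 1, mode='edge')
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    return (mask != up) | (mask != down) | (mask != left) | (mask != right)

def sw_cross_entropy(logits, target, w_fg=2.0, w_bg=1.0, w_bd=4.0):
    """Spatially weighted CE: per-pixel cross-entropy scaled by a weight map.

    logits : (C, H, W) class scores; target : (H, W) integer label map.
    w_fg, w_bg, w_bd are hypothetical weights for target, background,
    and boundary pixels respectively.
    """
    probs = softmax(logits, axis=0)
    h, w = target.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    ce = -np.log(probs[target, rows, cols] + 1e-12)   # per-pixel CE, (H, W)
    weights = np.where(target == 1, w_fg, w_bg).astype(float)
    weights[boundary_mask(target)] = w_bd             # boundary overrides both
    return (weights * ce).sum() / weights.sum()
```

Because boundary pixels carry the largest weight, misclassifications along the PTX border are penalized harder than those deep inside the target or background, which is what helps the network delineate the boundary despite class imbalance.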