Published in: Journal of Digital Imaging 6/2013

Open Access | 01.12.2013

Model-Based Pancreas Segmentation in Portal Venous Phase Contrast-Enhanced CT Images

Authors: Matthias Hammon, Alexander Cavallaro, Marius Erdt, Peter Dankerl, Matthias Kirschner, Klaus Drechsler, Stefan Wesarg, Michael Uder, Rolf Janka

DOI: https://doi.org/10.1007/s10278-013-9586-7

Abstract

This study aims to automatically detect and segment the pancreas in portal venous phase contrast-enhanced computed tomography (CT) images. The institutional review board of the University of Erlangen-Nuremberg approved this study and waived the need for informed consent. Discriminative learning is used to build a pancreas tissue classifier incorporating spatial relationships between the pancreas and surrounding organs and vessels. Furthermore, discrete cosine and wavelet transforms are used to build texture features to describe local tissue appearance. Classification is used to guide a constrained statistical shape model to fit the data. The algorithm to detect and segment the pancreas was evaluated on 40 consecutive CT data that were acquired in the portal venous contrast agent phase. Manual segmentation of the pancreas was carried out by experienced radiologists and served as reference standard. Threefold cross validation was performed. The algorithm-based detection and segmentation yielded an average surface distance of 1.7 mm and an average overlap of 61.2 % compared with the reference standard. The overall runtime of the system was 20.4 min. The presented novel approach enables automatic pancreas segmentation in portal venous phase contrast-enhanced CT images which are included in almost every clinical routine abdominal CT examination. Reliable pancreatic segmentation is crucial for computer-aided detection systems and an organ-specific decision support.
Notes
Matthias Hammon and Alexander Cavallaro contributed equally to this work.

Introduction

Automatic pancreas segmentation in 3D computed tomography (CT) data enables computer-aided detection (CAD) that potentially supports radiologists in the daily routine. CAD systems could assist in detecting malignant pancreatic pathologies, such as hypoattenuating lesions in contrast-enhanced CT images that might turn out to be adenocarcinomas or cystic pancreatic lesions possibly indicating intraductal papillary mucinous neoplasms. Diagnosing dilated pancreatic ducts or inflamed pancreatic tissue could also be facilitated. An early and reliable detection of pancreatic lesions is crucial for the patient because certain types of pancreatic cancer, such as the ductal adenocarcinoma, have high mortality rates with a 5-year survival below 5 % and are among the most difficult cancers to treat when diagnosed at an advanced stage [1].
However, it is extremely difficult to accurately segment the pancreatic tissue. In many cases, it is challenging even for experienced radiologists to visually distinguish the pancreatic tissue from surrounding organs, such as the small bowel, or from adjacent lymphatic tissue. Because large parts of the small bowel are highly variable in position, it may contact the pancreas at almost any topographic location. The pancreatic head is commonly adjacent to the duodenum, which makes this region particularly difficult to assess. Furthermore, surrounding organs like the liver, the stomach, and the spleen are often difficult to discern from the pancreatic tissue. In some cases, however, contrast agent saturation can help to differentiate these organs from the pancreas. The lobulated nature of the pancreatic tissue poses a further challenge for automatic segmentation. Figure 1 illustrates the described problems on example images.
A detailed review of the literature showed that, to date, only a few publications on the automatic segmentation of the pancreas exist. Shimizu et al. proposed two different approaches. With the single-phase approach, an accuracy of a 32.5 % overlap was obtained [2]. Second, they proposed an algorithm that extracts the pancreas from contrast-enhanced multiple-phase CT data and obtained an accuracy of a 57.9 % overlap [3]. Kitasaka et al. [4] proposed a method to extract the pancreas from four-phase CT data. Segmentation quality was judged based on visual inspection as FINE in 12 cases, MEDIUM in 6 cases, and POOR in 4 cases. Wolz et al. [5] recently reported an accuracy of a 49.6 % overlap.
In case of a dedicated examination of the pancreas or the liver, multiphasic CT scans may be required [6,7]. However, a scan in the portal venous contrast agent phase is included in every clinical routine abdominal CT examination with the exception of a few selected examinations, such as computed tomography angiography or in the case of patient-specific limitations (e.g., contrast agent allergy or renal impairment). Therefore, we trained and evaluated the software on portal venous CT scans.
An automatic segmentation of the pancreas is essential to facilitate CAD systems. Segmentation, or rather exclusion, of the pancreas may also create new or improve existing CAD systems, such as automatic segmentation approaches for other abdominal structures like the intestine or abdominal lymph nodes. To our knowledge, no robust automatic segmentation solutions exist yet for these structures. Knowing where the pancreas is located also facilitates customized, organ-specific decision support.
In this work, an algorithm that automatically detects and segments the pancreas in single-phase portal venous contrast-enhanced CT images is proposed. The system was evaluated on 40 consecutive CT scans regarding the algorithm's accuracy and runtime.

Materials and Methods

The institutional review board of the University of Erlangen-Nuremberg approved this study and waived the need for informed consent.
The software was developed within the framework of the German Theseus-Medico research program, a recently completed 5-year nationwide multicenter research project. Physicians, university health care professionals, and computer scientists collaborated in a joint venture. The Theseus-Medico software platform has been developed in the course of this program to support physicians in accurate and efficient patient diagnostics and patient monitoring in various application areas including pancreas segmentation.

CT Imaging Technique

CT imaging was performed with a Somatom Sensation® 64 scanner (Siemens AG, Healthcare Division, Erlangen, Germany) with the following parameters: craniocaudal abdominal scan, 120 kV; Care Dose® (Siemens AG, Healthcare Division, Erlangen, Germany); pitch, 0.9; collimation, 0.6 mm; inter-slice spacing, 5 mm; and soft recon kernel. Images were acquired in the portal venous contrast agent phase (intravenous application of weight-adapted, warmed Imeron® 400 (Bracco Imaging, Konstanz, Germany), followed by a saline flush, with a flow rate of 3 ml/s through a 20-gauge catheter in an antecubital vein).

Manual Segmentation

Two radiologists (a radiological resident with 3 years of experience (M.H.) and a board-certified radiologist with 16 years of experience (A.C.)) collected the CT data and manually segmented the pancreas in the CT images in consensus. These delineations have been taken as the reference standard in all tests.

Detection of Support Structures

Physicians naturally incorporate topographical knowledge in order to distinguish the pancreas from its surrounding tissue and organs. In particular, the location of the pancreas is estimated in relation to the liver and the spleen. Furthermore, the pathway of the pancreas usually follows the anterior margin of the splenic vein. This anatomical knowledge was transferred and applied by the algorithm (Fig. 2, right).
The detection process starts with the previously described automatic detection and segmentation of the liver and the spleen by adapting statistical shape models to the CT data [8] (Fig. 2, left). The major vessels between both organs are then extracted (Fig. 2, middle). A classifier is built that encodes the spatial relation between the pancreas and the vessels. Additionally, local texture features around the vessels are incorporated. In the last step, a statistical shape model of the pancreas is adapted to the CT image to create a final segmentation (Fig. 2, right).
Axis-aligned bounding boxes around the segmented liver and spleen are created, and a rough pancreas bounding box is estimated. This box is a bounding box around the liver and spleen boxes. From this area, the segmented areas of liver and spleen are subtracted. Subsequent calculations are then limited to this region.
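A minimal sketch of this step, assuming the liver and spleen segmentations are available as binary NumPy volumes; function and variable names are illustrative and not taken from the original implementation:

```python
import numpy as np

def bounding_box(mask):
    """Axis-aligned bounding box (min and max voxel index per axis) of a binary mask."""
    coords = np.argwhere(mask)
    return coords.min(axis=0), coords.max(axis=0)

def pancreas_search_region(liver_mask, spleen_mask):
    """Rough pancreas search region: the box enclosing both the liver and the
    spleen bounding boxes, with the liver and spleen voxels themselves removed."""
    lo_l, hi_l = bounding_box(liver_mask)
    lo_s, hi_s = bounding_box(spleen_mask)
    lo, hi = np.minimum(lo_l, lo_s), np.maximum(hi_l, hi_s)
    region = np.zeros(liver_mask.shape, dtype=bool)
    region[lo[0]:hi[0] + 1, lo[1]:hi[1] + 1, lo[2]:hi[2] + 1] = True
    return region & ~(liver_mask.astype(bool) | spleen_mask.astype(bool))
```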
The main vessels between liver and spleen are detected. This includes the portal vein as well as the splenic vein and the superior mesenteric vein. For the segmentation of tube-like structures, the vessel segmentation approach described in Ref. [9] is applied. Three different filter scales between 2 and 4 mm are used to ensure that the complete vessel system is extracted.
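The paper uses the GPU-based vessel segmentation of Ref. [9]; as a rough stand-in, the idea of multi-scale enhancement of tube-like structures can be illustrated with scikit-image's Frangi vesselness filter. The vesselness threshold and the conversion of the 2–4 mm scales to voxel units (isotropy assumed) are illustrative assumptions:

```python
import numpy as np
from skimage.filters import frangi

def extract_vessels(ct_volume, region, spacing_mm, threshold=0.15):
    """Multi-scale enhancement of tube-like structures inside the search region.
    The 2-4 mm filter scales are converted to voxel units via the mean voxel
    spacing; the vesselness threshold is an illustrative value."""
    sigmas_vox = [s / float(np.mean(spacing_mm)) for s in (2.0, 3.0, 4.0)]
    vesselness = frangi(ct_volume.astype(float), sigmas=sigmas_vox,
                        black_ridges=False)        # contrast-enhanced vessels are bright
    return (vesselness > threshold) & region
```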
The pancreas is usually located close to two vessel branches: the superior mesenteric vein and the splenic vein. These branches have to be extracted from the previously segmented vessel system. In particular, three branch points p1, p2, and p3 are extracted (Fig. 2, middle). p1 denotes the first major branch point of the superior mesenteric vein, p2 is the branch point between portal and splenic vein, and p3 denotes the end of the splenic vein. p3 can be computed directly, without further processing, as the intersection point of the segmented vessel system with the spleen segmentation mask. For detecting p1 and p2, a graph representation of the vessel system is built. Using the method described in Ref. [10], the vessel system is decomposed into a set of sub-branches. Each branch contains two end points. The branch that is partially inside the liver mask is selected. The end point of this branch that is not inside the liver is selected as p2. From p2, there is one direct branch to p3 and an additional branch denoting the superior mesenteric vein. Therefore, the end point of this second branch is p1. The cropped vessel system from p1 to p3 represents a vessel pathway that is used in the following to build a spatial anatomy descriptor.
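A simplified sketch of this landmark selection, assuming the decomposition of Ref. [10] already yields the sub-branches as arrays of voxel coordinates; the distance tolerance and tie-breaking rules are illustrative simplifications of the selection logic described above:

```python
import numpy as np

def select_landmarks(branches, liver_mask, spleen_mask, vessel_mask, tol=3.0):
    """Pick p1, p2, p3 from a list of vessel sub-branches, each given as an
    (N, 3) array of voxel coordinates (e.g. from the decomposition of Ref. [10])."""
    def inside(points, mask):
        pts = np.asarray(points, dtype=int)
        return mask[pts[:, 0], pts[:, 1], pts[:, 2]]

    # p3: intersection of the segmented vessel system with the spleen mask
    p3 = np.argwhere(vessel_mask.astype(bool) & spleen_mask.astype(bool))[0].astype(float)

    # p2: end point (outside the liver) of the branch running partially inside the liver
    portal = next(b for b in branches
                  if inside(b, liver_mask).any() and not inside(b, liver_mask).all())
    e0, e1 = np.asarray(portal[0], float), np.asarray(portal[-1], float)
    p2 = e0 if not inside([portal[0]], liver_mask)[0] else e1

    # p1: far end point of the remaining branch attached to p2 that does not lead to p3
    def end_points(b):
        return np.asarray(b[0], float), np.asarray(b[-1], float)
    def touches(b, q):
        return min(np.linalg.norm(e - q) for e in end_points(b)) <= tol
    smv = next(b for b in branches
               if b is not portal and touches(b, p2) and not touches(b, p3))
    a, c = end_points(smv)
    p1 = c if np.linalg.norm(a - p2) < np.linalg.norm(c - p2) else a
    return p1, p2, p3
```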

Spatial Anatomy Descriptor

Appearance-based features that are applied globally to the image are often inaccurate for pancreas detection, because surrounding tissue may have the same texture. Therefore, in this work, the spatial relationship between the extracted vessel path and the position of pancreas tissue is learnt. The vessel path v(l) ∈ R3 from p1 to p3 is parameterized by its normalized length \( l \in \{0, 1, \ldots, 100\} \). The feature vector
$$ F(t) = \left( D(t),\, d(t),\, L(t) \right) $$
maps the spatial relationship between the vessel path and the pancreas tissue using only a small set of features. Let D(t) be the vector field of the distance transform of the vessel path at position t ∈ R3 and d(t) the signed distance from t to the closest point in the vessel path. Let v(L(t)) be the intersection of −D(t) with the vessel path. For performance reasons, F(t) is only computed for d(t) < N mm. Using 40 ground truth datasets, N = 80 was determined as the maximum distance of the pancreas tissue in the ground truth datasets plus an additional safety margin of 30 %. Therefore, this distance should fully cover the pancreas also in unseen datasets. In contrast to probabilistic atlas-based registration, no time-consuming and potentially error-prone deformable registration is needed. The feature space of F(t) also leads to a better de-correlation of the data in comparison to the Cartesian coordinate system of a probabilistic atlas. Since the distance of the pancreas to v(l) does not deviate much over l, the proposed length-distance representation yields a much more compact data distribution. Simpler classification models can therefore be used.
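A minimal sketch of the spatial anatomy descriptor using SciPy's Euclidean distance transform; the sign of d(t), the orientation of D(t), and the volume path_length_param (a hypothetical volume holding the normalized length l at each path voxel) are assumptions made for illustration:

```python
import numpy as np
from scipy import ndimage

def spatial_anatomy_features(path_mask, path_length_param, spacing_mm, n_max=80.0):
    """Spatial anatomy descriptor components for every voxel t:
    d(t): distance (mm) to the closest point on the vessel path (unsigned here),
    D(t): vector from t to that closest path point,
    L(t): normalized path length l of the closest path point.
    Features are only kept where d < n_max mm (80 mm = maximum pancreas distance
    observed in the ground truth plus a 30 % safety margin)."""
    spacing = np.asarray(spacing_mm, dtype=float)
    d, idx = ndimage.distance_transform_edt(~path_mask.astype(bool),
                                            sampling=spacing,
                                            return_indices=True)
    grid = np.indices(path_mask.shape)
    D = (idx - grid) * spacing.reshape(3, 1, 1, 1)          # per-voxel offset in mm
    L = path_length_param[idx[0], idx[1], idx[2]]
    valid = d < n_max
    return D, d, L, valid
```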

Texture Descriptors

Texture features are computed in order to describe the appearance of the pancreas tissue around the vessel path v(l). Analogous to the spatial anatomy features described before, local features are created, i.e., the features are computed along perpendicular vectors around v(l) (Fig. 3). The pancreas usually follows v(l). The variance of the local texture between v(l) and a position t should therefore depend directly on the distance of t to v(l). Two texture descriptors are created that encode dominant frequency characteristics of the local texture around v(l). Let P(t) be a vector of length N with sampled intensities from v(L(t)) to t. P(t) is padded with zeros in case d(t) < N. The frequency descriptor B(t),
$$ B_{k}(t) = \sum_{n=1}^{N} P_{n}(t)\, \cos\left( \frac{\pi}{N}\, k \left( n + 0.5 \right) \right) $$
maps the intensity distribution along D(t) by computing the discrete cosine transform of the intensity profile perpendicular to the vessel path. In the experiments, the first k = 0, …, 10 coefficients were kept to build B. Since the pancreas tissue is relatively homogeneous and lies close to the vessel path v(l), the proportion of high frequencies should be low at small distances from v(l) and larger if the texture becomes inhomogeneous, e.g., at the transition from pancreas to small bowel.
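A sketch of the frequency descriptor B(t) for a single voxel, assuming nearest-neighbor sampling of the intensity profile and zero-padding to a fixed length; SciPy's DCT-II corresponds to the cosine sum above up to a constant factor:

```python
import numpy as np
from scipy.fft import dct

def frequency_descriptor(ct_volume, t, closest_path_point, n_samples=80, n_coeff=11):
    """DCT descriptor B(t) for one voxel: intensities are sampled on the line from
    the closest vessel-path point v(L(t)) to the voxel t, the profile is zero-padded
    to a fixed length, and the coefficients k = 0..10 of its DCT are kept."""
    t = np.asarray(t, dtype=float)
    p0 = np.asarray(closest_path_point, dtype=float)
    n_steps = max(2, min(int(np.ceil(np.linalg.norm(t - p0))) + 1, n_samples))
    steps = np.linspace(0.0, 1.0, n_steps)
    coords = np.round(p0[None, :] + steps[:, None] * (t - p0)[None, :]).astype(int)
    profile = np.zeros(n_samples)
    profile[:n_steps] = ct_volume[coords[:, 0], coords[:, 1], coords[:, 2]]
    return dct(profile, type=2)[:n_coeff]
```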
The second texture descriptor W(t) encodes the number of intensity variation peaks along D(t) by applying the Mexican hat wavelet to the volume V and integrating its negative and positive responses separately. Let the mapping c: [0,1] → R3 be defined as c(p) = t − pD(t).
$$ W_{\mu}^{\pm} = \int_{p=0}^{1} \left[ \left( V * \nabla^{2} G \right)\left( c(p) \right) \right]^{\pm} \, \mathrm{d}p $$
\( G = \frac{1}{\sqrt{(2\pi)^{3}\mu^{3}}}\, e^{-\frac{x^{2}+y^{2}+z^{2}}{\mu^{3}}} \) is a Gaussian and \( \nabla^{2} \) is the Laplace operator. The standard deviation of G is varied using two scales \( \mu \in \{1, 2\} \) (in millimeters). This descriptor will accumulate more responses the more different the tissue between t and v(l) is. In case of pancreas tissue, the detector will usually accumulate two strong responses: one from vessel or fat to pancreas and the other from pancreas to other tissue.
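A sketch of the W descriptor, using SciPy's Laplacian-of-Gaussian filter as the Mexican hat response; the scales are given in voxels here and the line sampling is a nearest-neighbor simplification:

```python
import numpy as np
from scipy import ndimage

def mexican_hat_volumes(ct_volume, scales=(1.0, 2.0)):
    """Laplacian-of-Gaussian ("Mexican hat") filtered volumes, one per scale mu
    (given in voxels here; the paper uses 1 mm and 2 mm)."""
    return [ndimage.gaussian_laplace(ct_volume.astype(float), sigma=mu) for mu in scales]

def wavelet_descriptor(log_volumes, t, closest_path_point, n_steps=50):
    """W_mu^- and W_mu^+ for one voxel t: negative and positive Mexican hat
    responses are integrated separately along c(p) = t - p*D(t), i.e. on the
    line from t back to its closest vessel-path point."""
    t = np.asarray(t, dtype=float)
    p0 = np.asarray(closest_path_point, dtype=float)
    p = np.linspace(0.0, 1.0, n_steps)
    coords = np.round(t[None, :] - p[:, None] * (t - p0)[None, :]).astype(int)
    dp = 1.0 / (n_steps - 1)
    feats = []
    for vol in log_volumes:
        profile = vol[coords[:, 0], coords[:, 1], coords[:, 2]]
        feats.append(np.minimum(profile, 0.0).sum() * dp)   # W_mu^- (negative responses)
        feats.append(np.maximum(profile, 0.0).sum() * dp)   # W_mu^+ (positive responses)
    return np.array(feats)
```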
The final pancreas tissue feature vector is
$$ Z(t) = \left( F(t),\, B(t),\, W_{\mu}^{-}(t),\, W_{\mu}^{+}(t),\, H(t) \right) $$
H contains low-level features at t, such as intensity, gradient, and nonlinear combinations thereof, as described in Ref. [11].

Final Detector

The spatial anatomy and texture features are used to build a boosted classifier. Here, AdaBoost [12] is used. As weak classifiers, 200 classification and regression trees [13] of maximum depth 5 have been chosen. These values yielded slightly better results than using fewer trees (100) or tree stumps; however, further increasing the number of trees or the tree depth did not improve the results. The strong classifier is trained with Z(t) to learn the probability \( p\left( l = 1 \mid Z(t) \right) \). Simple thresholding \( p\left( l = 1 \mid Z(t) \right) > \epsilon \) would lead to an incoherent labeling since no neighborhood relations between voxels are considered. In order to approximate a globally optimal classification, belief propagation is used with a term X, which incorporates the probability of the classifier and global priors from the segmentation masks, as well as a term U to incorporate voxel neighborhood relations. The labels \( q \in \{0, 1, 2\} \) have to be found that satisfy
$$ \min_{q} E(q) = \sum_{i}^{\left| I \right|} X\left( I_{i}, q_{i} \right) + \sum_{i}^{\left| I \right|} \sum_{j}^{N_{i}} U\left( q_{i}, q_{j} \right) $$
\( \left| I \right| \) is the number of voxels in V and \( I \in R^{3\left| I \right|} \) is the set of voxel coordinates. \( N_{i} \) denotes the set of neighbors of the voxel at position \( I_{i} \). X is defined as
$$ X\left( t, q \right) = p\left( l = 1 \mid Z(t) \right) + G\left( t, q \right) $$
where
$$ G\left( t, q \right) = \begin{cases} 1, & \text{if } q \in \left\{ 1, 2 \right\} \text{ and } t \in M \\ 0, & \text{else} \end{cases} $$
Here, M is the binary vessel path mask. U is defined as \( U\left( q_{1}, q_{2} \right) = {\left| q_{1} - q_{2} \right|}^{2} \) to penalize transitions between non-neighboring classes. A multiscale approach [14] is used to solve this minimization problem in linear time. The resulting label q = 2 denotes the pancreas tissue classification.
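The boosted tissue classifier itself maps directly onto standard tooling; a hedged sketch with scikit-learn, using 200 CART weak learners of depth 5 as stated above (the belief-propagation step is not reproduced here, and function names are illustrative):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_pancreas_detector(Z_train, y_train):
    """Boosted tissue classifier as described above: AdaBoost with 200 CART
    weak learners of maximum depth 5, trained on the feature vectors Z(t)
    (y = 1 for pancreas voxels, 0 otherwise)."""
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=200)
    clf.fit(Z_train, y_train)
    return clf

def pancreas_probability(clf, Z):
    """p(l = 1 | Z(t)) per voxel; used as the unary term of the subsequent
    belief-propagation labeling instead of being thresholded directly."""
    return clf.predict_proba(np.asarray(Z))[:, 1]
```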

Model-Based Adaptation

The final segmentation of the pancreas is created by adapting a statistical shape model of the pancreas to the image data. The model is positioned and adapted based on the classification output created in the previous section. The statistical shape model is built according to the approach of Cootes et al. [15] and consists of 1,692 landmarks (Fig. 4). The relatively low number of landmarks was chosen because the pancreas is a small organ that covers only a limited number of slices in most CT images. Point correspondences used to create the shape model have been established using spherical parameterization [16]. In order to position the model in the CT image, it is registered with a mesh that is created from the label q = 2 using the Marching Cubes algorithm. The registration is done using the approach described in Ref. [17]. After registration of the statistical shape model with the mesh, a constrained free-form deformation is applied [8] to yield the final segmentation of the pancreas.
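A sketch of the model initialization, assuming the classification label q = 2 is available as a binary mask and the mean shape as an array of the 1,692 landmarks; a simple centroid-and-scale alignment stands in for the model-based registration of Ref. [17]:

```python
import numpy as np
from skimage import measure

def initialize_shape_model(pancreas_label_mask, mean_shape_landmarks, spacing_mm):
    """Rough shape model initialization: a surface mesh is extracted from the
    classification label (q = 2) with Marching Cubes and the mean shape is
    aligned to it by matching centroid and scale."""
    verts, faces, _, _ = measure.marching_cubes(pancreas_label_mask.astype(np.uint8),
                                                level=0.5, spacing=tuple(spacing_mm))
    model = np.asarray(mean_shape_landmarks, dtype=float)
    scale = verts.std(axis=0).mean() / model.std(axis=0).mean()
    aligned = (model - model.mean(axis=0)) * scale + verts.mean(axis=0)
    return aligned, verts, faces
```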

Evaluation

Abdominal single-phase CT data acquired in the portal venous contrast agent phase from 40 consecutive patients were included. Threefold cross validation was used for performance evaluation. For each fold, the statistical shape model as well as the classifiers were learned on the training data and evaluated on the test data. The Jaccard index was calculated to show the overlap of the segmentation result and the reference standard.
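The overlap measure can be sketched directly, assuming binary masks for the automatic result and the manual reference:

```python
import numpy as np

def jaccard_index(segmentation, reference):
    """Jaccard index (overlap) between the binary segmentation result and the
    manual reference standard, in percent as reported in Tables 1 and 2."""
    seg, ref = segmentation.astype(bool), reference.astype(bool)
    return 100.0 * np.logical_and(seg, ref).sum() / np.logical_or(seg, ref).sum()
```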

Results

Quantitative results of the proposed segmentation approach are shown in Table 1. The metrics used for evaluation have been taken from Ref. [18]: the volume difference between reference and generated binary segmentations in percent as well as the average distance, RMS distance, and maximum distance between reference and generated surfaces in millimeters. The second row shows the results using the detector only; the third row shows the results using the full processing pipeline including statistical shape model adaptation. The surface distance to the reference standard averaged over 40 cases is 1.7 ± 0.71 mm. The average overlap is 61.2 ± 9.08 %. The average volume difference deviation is 5.62 ± 3.47 %. The root-mean-square (RMS) distance deviation is 3.10 ± 1.13 mm, and the maximum distance deviation is 16.13 ± 5.18 mm. Figures 5 and 6 show an exemplary segmentation of the software in an unseen dataset. A comparison to other pancreas segmentation methods is given in Table 2. Runtime results of the proposed method in seconds for each processing step on an Intel Quad Core 2.93 GHz CPU are shown in Table 3. The overall runtime is 20.4 min on average. The organ segmentation takes 90 s, the vessel segmentation 5 s, and the landmark detection 3 s. Taking 1,069 s (17.8 min), the feature computation and classification is the most time-consuming step. The belief propagation takes 35 s and the model adaptation 25 s.
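A sketch of how the surface error metrics of Ref. [18] can be computed from binary masks via distance transforms; the surface extraction by morphological erosion is an illustrative simplification:

```python
import numpy as np
from scipy import ndimage

def surface_distances(seg, ref, spacing_mm):
    """Symmetric surface distances between generated and reference segmentations:
    for each surface voxel of one mask, the distance to the other mask's surface
    is read from a Euclidean distance transform; average, RMS and maximum follow."""
    def surface(mask):
        return mask & ~ndimage.binary_erosion(mask)
    s_seg, s_ref = surface(seg.astype(bool)), surface(ref.astype(bool))
    dist_to_ref = ndimage.distance_transform_edt(~s_ref, sampling=spacing_mm)
    dist_to_seg = ndimage.distance_transform_edt(~s_seg, sampling=spacing_mm)
    d = np.concatenate([dist_to_ref[s_seg], dist_to_seg[s_ref]])
    return d.mean(), np.sqrt(np.mean(d ** 2)), d.max()
```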
Table 1
Volume and surface error metrics (see [18]) for the proposed pancreas segmentation method using threefold cross validation on 40 consecutive portal venous phase contrast-enhanced CT data

Method        | Overlap (%)  | Volume difference deviation (%) | Average distance deviation (mm) | RMS distance deviation (mm) | Max distance deviation (mm)
Detector only | 52.2 ± 10.31 | 10.53 ± 3.73                    | 2.58 ± 0.77                     | 4.69 ± 1.15                 | 23.62 ± 5.45
Final result  | 61.2 ± 9.08  | 5.62 ± 3.47                     | 1.70 ± 0.71                     | 3.10 ± 1.13                 | 16.13 ± 5.18

Results are shown using the tissue detector only (2nd row) and using the whole segmentation pipeline (3rd row). Jaccard index was calculated to show the overlap of the segmentation result and the reference standard
Table 2
Listing of state of the art methods to automatically segment the pancreas in CT images

Method                       | Number of evaluated CT data | Needed CT scans per case | Runtime (min) | Accuracy
Shimizu et al. level set [2] | 10                          | 1 (noncontrast)          | –             | 32.5 % overlap
Shimizu et al. atlas [3]     | 20                          | 3 (multiple-phase)       | 45            | 57.9 % overlap
Kitasaka et al. [4]          | 22                          | 4 (multiple-phase)       | –             | Visual inspection: 12 high, 6 medium, and 4 no overlap
Wolz et al. atlas [5]        | 100                         | –                        | –             | 49.6 % overlap
Proposed method              | 40                          | 1 (portal venous)        | 20.4          | 61.2 % overlap

Jaccard index was calculated to show the overlap of the segmentation result and the reference standard
"–" unpublished/unavailable information
Table 3
Runtime (in seconds) of the single steps of the proposed segmentation pipeline (Intel Quad Core 2.93 GHz CPU)

Organ segmentation | Vessel segmentation | Landmark detection | Feature computation and classification | Belief propagation | Model adaptation | Total
90                 | 5                   | 3                  | 1,069                                   | 35                 | 25               | 1,224

Discussion

Previously proposed algorithms to segment the pancreas are based on multiple-phase contrast-enhanced CT data or show limited accuracy. Since abdominal CT examinations in the clinical routine predominantly acquire portal venous phase contrast-enhanced images, this work presents an approach for automatic segmentation of the pancreas in these single-phase CT images. To meet this challenging task, discriminative learning is used to build a pancreas tissue classifier that incorporates spatial relationships between the pancreas and the surrounding organs and vessels. Furthermore, discrete cosine and wavelet transforms are used to build texture features in order to describe local tissue appearance. Classification is then used to guide a constrained statistical shape model to fit the data.
The results of the evaluated algorithm show an average surface distance of 1.7 mm and an average overlap of 61.2 % compared with the reference standard. The proposed method is about twice as accurate as the best previously described method applying single-phase CT data [2]. Notably, our algorithm's accuracy is slightly better than that of the best method utilizing multiple-phase CT scans [3]. This can possibly be explained by the different approaches used and the larger number of CT data included in the evaluation of the algorithm. Interestingly, the proposed method is about twice as fast (20.4 versus 45 min) as other approaches, possibly because no time-consuming atlas-based registration was implemented. However, it has to be noted that the method presented in 2010 will probably run faster on modern computing architectures. An overview of the results of state of the art pancreas segmentation methods is shown in Table 2.
In contrast to prior work, we do not primarily rely on tissue appearance but incorporate topographical anatomical knowledge, which radiologists also use to distinguish the pancreas from adjacent tissue that partially shows low contrast and a similar texture. This is done by detecting clinically meaningful support structures and building a classifier that models local spatial relationships between the pancreas and the support structures. Furthermore, texture descriptors based on wavelets and the cosine transform are proposed to model local appearance. The resulting classification is used to guide a statistical shape model for fine segmentation. Discriminative learning techniques are used to detect the pancreas based on both local texture appearance and topographical anatomical knowledge. It is known that atlas-based registration methods are generally very inaccurate in the abdominal area due to the large inter-subject variation of the intestine, contrast enhancement, and organ anatomy.
Shimizu et al. [2] proposed a simultaneous segmentation framework for 12 organs including the pancreas based on a combination of atlas-guided segmentation and level sets. Evaluation on ten noncontrast-enhanced CT scans showed an average overlap of 32.5 % for the pancreas. As the authors note, the algorithm would need major revisions to make it applicable to contrast-enhanced CT data. The authors recently proposed a fully automatic pancreas segmentation system using three-phase contrast-enhanced CT data [3]. They first register the data of the multiple-phase CT scans to a common space, followed by a landmark-based deformable registration with a certain patient chosen as reference. A rough segmentation is gained through patient-specific probabilistic atlas-guided segmentation. An intensity-based classifier is used together with morphological operations for the final pancreatic segmentation. Evaluation on 20 cases showed an average overlap of 57.9 %. As the authors conclude, their approach requires three-phase contrast-enhanced CT data because, in comparison to single-phase scans, more information regarding the contrast-enhancement behavior is available to guide segmentation. Kitasaka et al. [4] proposed a method to extract the pancreas from four-phase CT data based on an estimation of organ distributions using expectation maximization and subsequent fine segmentation using a modified region growing algorithm. Applying a 3-point scale to 22 cases, segmentation quality was judged based on visual inspection as fine in 12 cases, medium in 6 cases, and poor in 4 cases, where fine represents little over- and under-extraction and poor no overlap at all. Wolz et al. [5] recently presented an automated segmentation of several abdominal/retroperitoneal organs (liver, spleen, pancreas, and kidneys) using hierarchically weighted subject-specific atlases. They evaluated the algorithm on 100 CT scans and also reported the Jaccard index (JI) to show the overlap of the segmentation result and the reference standard. The JI for the pancreas segmentation was 61.2 % in our study, while Wolz et al. reported 49.6 %. The overall runtime of their system for all organs was around 3 h on a machine with eight Intel Xeon cores clocked at 3 GHz and 32 GB RAM. However, neither the runtime for standalone pancreas segmentation nor the contrast agent phase of the evaluated CT scans was reported. It would be very interesting to compare the performance of both algorithms on the same set of CT scans.
An advantage of the presented algorithm is that it automatically segments the pancreas in portal venous phase contrast-enhanced CT data. Since a portal venous phase scan is performed during almost every CT examination of the abdomen, the algorithm can be utilized on the majority of abdominal CT examinations acquired in the clinical routine. In contrast, multiple-phase CT examinations are only performed in particular situations.
A robust pancreatic segmentation in commonly acquired CT data is necessary to develop CAD systems. In addition, this algorithm can be applied to improve detection rates and segmentation performance for other challenging abdominal structures, especially enlarged retroperitoneal and mesenteric lymph nodes or the intestine, which varies considerably in shape and location.
Our study faces some limitations. The algorithm's overall computational time is roughly 20 min, which constitutes a challenge for applying the system as a pre-processing step in a clinical setting. Table 3 shows the runtime of the single steps of the proposed segmentation pipeline. The feature computation and classification requires by far the longest time (17.8 of 20.4 min). Additional work is planned to further reduce the system's runtime.
For the evaluation of the algorithm, CT data with nonpathologic pancreases were included. Especially if the pancreas segmentation is to serve as the first step in a CAD system for pathologic changes, the algorithm should also be evaluated on CT data of pancreases showing pathologic changes such as cancer or inflammation. However, none of the other proposed approaches included CT data showing pathologic changes either.

Conclusions and Outlook

A method for the automatic segmentation of the pancreas in portal venous phase contrast-enhanced CT images was presented. The method consists of several steps that incorporate shape as well as texture characteristics of the pancreas and its surrounding structures in order to deal with the often low-contrast appearance of the pancreas in CT images. Cross validation on 40 CT datasets of consecutive patients showed an average surface distance of 1.7 mm and an average overlap of 61.2 %. Pancreatic segmentation in commonly acquired portal venous phase CT images is crucial for the development of CAD systems and facilitates organ-specific decision support. Further work needs to be done to reduce the computational time, and the algorithm should be applied to CT data showing pathologic changes.

Acknowledgments

This work was done in the framework of the Theseus-Medico research program supported by the German Federal Ministry of Economics and Technology (project numbers: 01MQ07026/01MQ07018). Parts of this research were done for Fraunhofer IDM@NTU, which is funded by the National Research Foundation (NRF) and managed through the multi-agency Interactive and Digital Media Programme Office (IDMPO) hosted by the Media Development Authority of Singapore (MDA).

Conflict of Interest

None
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

References
1. Ghaneh P, Costello E, Neoptolemos JP: Biology and management of pancreatic cancer. Postgrad Med J 84:478–497, 2008
2. Shimizu A, Ohno R, Ikegami T, Kobatake H, Nawano S, Smutek D: Segmentation of multiple organs in non-contrast 3D abdominal CT images. Int J Comput Assist Radiol Surg 2:135–142, 2007
3. Shimizu A, Kimoto T, Kobatake H, Nawano S, Shinozaki K: Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography. Int J Comput Assist Radiol Surg 5:85–98, 2010
4. Kitasaka T, Sakashita M, Mori K, Suenaga Y, Nawano S: A method for extracting pancreas regions from four-phase contrasted 3D abdominal CT images. Int J Comput Assist Radiol Surg 3:40, 2008
5. Wolz R, Chu C, Misawa K, Mori K, Rueckert D: Multi-organ abdominal CT segmentation using hierarchically weighted subject-specific atlases. MICCAI LNCS 7510:10–17, 2012
6. McNulty NJ, Francis IR, Platt JF, Cohan RH, Korobkin M, Gebremariam A: Multi-detector row helical CT of the pancreas: effect of contrast-enhanced multiphasic imaging on enhancement of the pancreas, peripancreatic vasculature and pancreatic adenocarcinoma. Radiology 220:97–102, 2001
7. Miller FH, Butler RS, Hoff FL, Fitzgerald SW, Nemcek Jr AA, Gore RM: Using triphasic helical CT to detect focal hepatic lesions in patients with neoplasms. AJR Am J Roentgenol 171:643–649, 1998
8. Erdt M, Kirschner M, Steger S, Wesarg S: Fast automatic liver segmentation combining learned shape priors with observed shape deviation. In: IEEE International Symposium on Computer-Based Medical Systems (CBMS), 2010, pp 249–254
9. Erdt M, Raspe M, Suehling M: Automatic hepatic vessel segmentation using graphics hardware. Medical Imaging and Augmented Reality 5128:403–412, 2008
10. Drechsler K, Laura CO: Hierarchical decomposition of vessel skeletons for graph creation and feature extraction. In: IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2010, pp 456–461
11. Zheng Y, Barbu A, Georgescu B, Scheuering M, Comaniciu D: Fast automatic heart chamber segmentation from 3D CT data using marginal space learning and steerable features. In: 11th Int. Conf. on Computer Vision (ICCV), 2007, pp 1–8
12. Hastie T, Tibshirani R, Friedman J: The elements of statistical learning: data mining, inference, and prediction. The Mathematical Intelligencer 27:83–85, 2005
13. Breiman L, Friedman JH, Olshen RA, Stone CJ: Classification and regression trees. Chapman & Hall, New York, 1984
14. Felzenszwalb PF, Huttenlocher DP: Efficient belief propagation for early vision. Int J Comput Vis 70:41–54, 2006
15. Cootes TF, Taylor CJ, Cooper DH, Graham J: Active shape models—their training and application. Comput Vis Image Underst 61:38–59, 1995
16. Kirschner M, Wesarg S: Construction of groupwise consistent shape parameterizations by propagation. In: Dawant BM et al. (eds.) Medical Imaging 2010: Image Processing, 762352, Bellingham, WA, SPIE, 2010, pp 14–16
17. Kirschner M, Wesarg S: Automatische Initialisierung von Formmodellen mittels modellbasierter Registrierung. Bildverarbeitung für die Medizin. Informatik aktuell, 2011. doi:10.1007/978-3-642-19335-4_16
18. Heimann T, van Ginneken B, Styner MA, Arzhaeva Y, Aurich V, Bauer C, Beck A, Becker C, Beichel R, Bekes G, Bello F, Binnig G, Bischof H, Bornik A, Cashman PM, Chi Y, Cordova A, Dawant BM, Fidrich M, Furst JD, Furukawa D, Grenacher L, Hornegger J, Kainmüller D, Kitney RI, Kobatake H, Lamecker H, Lange T, Lee J, Lennon B, Li R, Li S, Meinzer HP, Nemeth G, Raicu DS, Rau AM, van Rikxoort EM, Rousson M, Rusko L, Saddi KA, Schmidt G, Seghers D, Shimizu A, Slagmolen P, Sorantin E, Soza G, Susomboon R, Waite JM, Wimmer A, Wolf I: Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans Med Imaging 28:1251–1256, 2009