Introduction

Evolution and innovation in CT image reconstruction are often driven by advances in CT system designs, which in turn are driven by clinical demands. From a historical perspective, for example, the fan-beam axial filtered back-projection (FBP) algorithm was the dominant mode of image reconstruction for CT for decades until the introduction of helical or spiral CT, which forced the development of new reconstruction algorithms to compensate for the motion artifacts induced by helical acquisition. The development of the helical scanner itself, on the other hand, was driven by the clinical desire to cover an entire human organ in a single breath-hold. The fan-beam helical algorithm was later replaced by cone-beam FBP, a change necessitated by the introduction of the multi-slice scanner, which itself was born to meet the clinical need for routinely achieving isotropic spatial resolution. With the huge success of CT as a diagnostic imaging modality came increased public awareness of CT-induced radiation and the need to reduce the associated risks; this inspired the development of low-dose clinical protocols and techniques which, in turn, accelerated the recent development of statistical iterative reconstruction.

Besides application-specific reconstruction techniques, image reconstruction for x-ray CT can be generally classified into two major categories: analytical reconstruction and iterative reconstruction. The analytical reconstruction approaches, in general, try to formulate the solution as a closed-form equation. Iterative reconstruction formulates the final result as the solution either to a set of equations or to an optimization problem, which is then solved in an iterative fashion (the focus of this chapter is on the iterative solution of an optimization problem). Naturally, there are pros and cons to each approach. As a general rule of thumb, analytical reconstruction is considered computationally more efficient while iterative reconstruction can improve image quality. It should be pointed out that many reconstruction algorithms do not fall into these two categories in the strict sense. These algorithms can be generally classified as hybrid algorithms that leverage advanced signal-processing, image-processing, and analytical reconstruction approaches. Given the limited scope of this chapter, discussions on these algorithms are omitted.

Analytical Reconstruction

Cone-Beam Step-and-Shoot Reconstruction

The concept of “cone-beam CT” is not as clearly defined as one may think. In the early days of multi-slice CT development, there was little doubt that a 4-slice scanner should be called a multi-slice CT or multi-row CT, and not a cone-beam CT. As the number of slices and the z-coverage increase, the angle, θ, formed by two planes defined by the first and the last detector rows (Fig. 1) increases, and it becomes increasingly difficult to decide where to draw the line between a multi-slice scanner and a cone-beam scanner.

Fig. 1 Illustration of cone-beam geometry

The step-and-shoot cone-beam (SSCB) acquisition refers to the data collection process in which the source trajectory forms a single circle. During the data acquisition, a patient remains stationary while the x-ray source and detector rotate about the patient. The advantage of SSCB is its simplicity and robustness, especially when data collection needs to be synchronized with a physiological signal, such as an EKG. If the anticipated data acquisition is not aligned with the desired physiological state, as in the case of arrhythmia in coronary artery imaging, the CT system can simply wait for the next physiological cycle to collect the desired data. This acquisition results in a significant reduction in radiation dose compared with low pitch helical acquisitions [1••, 2]. When the z-coverage is sufficiently large, the entire organ, such as a heart or a brain, can be covered during a single gantry rotation to further minimize the probability of motion [3].

The inherent drawback of the SSCB is its incomplete sampling, since the data acquisition does not satisfy the necessary condition for exact reconstruction [4]. This data incompleteness can be viewed in terms of Radon space [5] or in terms of the local Fourier transform of a given image voxel. In the case of the SSCB acquisition, image artifacts arise from three major causes: missing frequencies, frequencies mishandled during the reconstruction, and axial truncation. Missing frequencies (Fig. 2a, low in-plane and high in z spatial frequencies) refer to a small void region in the three-dimensional Fourier space of the collected data for all image locations off the central slice, a region that grows for slices farther from the central plane. Mishandled frequencies (Fig. 2a, shaded region) refer to improper handling of the redundant information in Fourier space during reconstruction. Axial truncation (Fig. 2b) is caused by the geometry of cone-beam sampling, in which the object is not completely contained within the sampled cone for all views. Figure 3b, c depict coronal images of a CD-phantom (similar to a Defrise phantom) scanned in helical and SSCB modes, respectively. Since the parallel CDs are stacked horizontally with air gaps in between, the corresponding coronal image should exhibit a pattern similar to Venetian blinds. The image formed with the SSCB mode clearly shows deficiencies in areas away from the detector center-plane.

Fig. 2 Sampling challenges for step-and-shoot cone-beam geometry: illustration of missing frequencies and mishandled data in Fourier space (a) and axial truncation (b)

Fig. 3 Reformatted images of a CD-phantom. a Phantom. b Coronal image of a helical scan. c Coronal image of a single step-and-shoot scan

When the cone angle is moderate, various approximate reconstruction algorithms have been shown to effectively produce clinically acceptable images. One of the most commonly used reconstruction algorithms is the Feldkamp–Davis–Kress (FDK) reconstruction. This algorithm differs from the conventional fan-beam reconstruction in replacing the two-dimensional back-projection with a three-dimensional back-projection that follows the x-ray path as it traverses the object, and in including a cosine weighting in the cone direction to account for the path-length differences of oblique rays [6]. Extensions of the FDK algorithm have been introduced, each addressing one of the three major issues with reconstruction from an arc trajectory.
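
To make the weighting and filtering steps concrete, the following minimal sketch (Python/NumPy) applies FDK-style cosine pre-weighting and row-by-row ramp filtering to a single flat-detector projection. The detector coordinates (u, v), the source-to-detector distance D, and the unapodized ramp filter are illustrative assumptions rather than details of any particular scanner.

```python
import numpy as np

def fdk_weight_and_filter(proj, us, vs, D):
    """Sketch of FDK-style processing of one cone-beam projection (flat detector).

    proj : 2D array of line integrals, shape (n_rows, n_cols), indexed by (v, u)
    us, vs : detector coordinates (mm) of the columns and rows
    D : source-to-detector distance (mm)
    Returns the cosine-weighted, row-by-row ramp-filtered projection,
    ready for three-dimensional (voxel-driven) back-projection.
    """
    U, V = np.meshgrid(us, vs)
    # cosine weighting: accounts for the longer path of oblique rays
    weighted = proj * D / np.sqrt(D**2 + U**2 + V**2)

    # row-by-row ramp filtering along the u (fan) direction, with zero-padding
    n = us.size
    freqs = np.fft.fftfreq(2 * n, d=us[1] - us[0])
    ramp = np.abs(freqs)                      # ideal ramp; apodized in practice
    padded = np.zeros((proj.shape[0], 2 * n))
    padded[:, :n] = weighted
    filtered = np.fft.ifft(np.fft.fft(padded, axis=1) * ramp, axis=1).real
    return filtered[:, :n]
```

Each filtered projection would then be back-projected three-dimensionally along the cone-beam ray paths and accumulated over all view angles.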

Several solutions to the first problem of the missing data have been proposed [7, 8]. These algorithms generally compensate for the missing information by approximating it via frequency-domain interpolation or two-pass corrections to estimate the missing frequencies, and are often sufficient to provide clinically acceptable image quality. An example of such a correction is shown in Fig. 4.

Fig. 4 Reformatted images of a simulated chest phantom reconstruction, where the vertical axis is orthogonal to the scan plane. a FDK reconstruction. b Two-pass reconstruction compensating for the missing frequencies shown in Fig. 2a

The second issue of properly weighting the collected data is important in the case of a short-scan acquisition, where not all frequency values are measured the same number of times. One solution to this problem is the application of an exact image reconstruction framework using arc-based cone-beam reconstruction with equal weighting (ACE) [9••, 10, 11]. The ACE algorithm (Fig. 5) equally weights the measured frequencies by introducing non-horizontal filtering lines in the detector, and it applies to any length of arc. In the case of a full scan, the algorithm is equivalent to the FDK algorithm [6] plus Hu's correction [12] based on the derivative along the z direction in the detector. Another approximation, based on an exact framework, for the circular trajectory utilizes the back-projection filtration (BPF) framework [13–15], where the data is differentiated, back-projected, and filtered along approximate filtering lines [16]. A study compared these algorithms for circular short-scan data and showed that the ACE method provides visually superior image quality to the other algorithms [17].
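
The ACE weighting itself relies on non-horizontal filtering lines and is beyond the scope of a short example. As a simpler, widely known illustration of the redundancy-weighting idea for a fan-beam short scan, the sketch below computes Parker-type weights that smoothly down-weight doubly measured rays; the exact transition function and the variable names are illustrative and differ across implementations.

```python
import numpy as np

def parker_weights(betas, gammas, gamma_max):
    """Sketch of Parker-style redundancy weights for a fan-beam short scan.

    betas : projection (view) angles in [0, pi + 2*gamma_max], radians
    gammas : fan angles of the detector channels, radians
    gamma_max : half fan-angle of the system, radians
    Returns an array of shape (len(betas), len(gammas)).
    """
    B, G = np.meshgrid(betas, gammas, indexing="ij")
    eps = 1e-9
    w = np.ones_like(B)

    # first overlap region: ramp up from zero
    start = B < 2.0 * (gamma_max - G)
    w_start = np.sin(np.pi / 4.0 * B / np.maximum(gamma_max - G, eps)) ** 2
    # second overlap region: ramp back down to zero
    end = B > np.pi - 2.0 * G
    w_end = np.sin(np.pi / 4.0 * (np.pi + 2.0 * gamma_max - B)
                   / np.maximum(gamma_max + G, eps)) ** 2

    w = np.where(start, w_start, w)
    w = np.where(end, w_end, w)
    return w
```

Multiplying each measured view by such a weight before filtering ensures that conjugate ray pairs contribute with a combined weight of one over the short scan.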

Fig. 5 Slices through the Shepp–Logan phantom with window parameters of [0.98, 1.05]. Note the improvement in the soft tissue away from the central slice as well as the reduction in streaky cone-beam artifacts (arrows) caused by improper weighting. The vertical direction in these images is the z direction, orthogonal to the plane of the source trajectory

The third problem of axial data truncation has been handled by the selective use of measured projection data in the corner region to avoid relying too heavily on extrapolated data. This has been accomplished by weighting conjugate rays differently based on their cone-angle [18] or by essentially using multiple rotated short scan reconstructions in the corner region [19, 20].

Further reduction of artifacts can be achieved with modified data acquisition trajectories. Several modified trajectories, such as circle-plus-arc, circle-plus-line, dual-circle, or saddle trajectories (Fig. 6; [21–28]), try to resolve the first problem listed above (missing frequency data). In Fig. 7, a phantom known to demonstrate this problem is shown for a single-circle trajectory on the left and a dual-circle trajectory on the right. Although these 'circle-plus' trajectories meet the conditions for mathematically exact reconstruction, their utilization in a real clinical environment still faces significant challenges because of many practical considerations: the large mass of a CT gantry, the high gantry rotation speed, and the desire to minimize overall data acquisition time.

Fig. 6 Alternative source trajectories to enable complete sampling: circle-plus-line (a), circle-plus-arc (b), dual-circle (c), and saddle (d)

Fig. 7 An example of exact reconstruction from a source trajectory of two concentric circles. On the left is the contribution to the final image from the horizontal circle only, in the center is the contribution from the vertical circle only, and the sum is on the right. Reprinted from [24], with permission

Another relatively recent discovery is the fact that, in order to reconstruct a given region of interest (ROI), one does not necessarily need an angular range of 180° plus the fan angle of the system [10, 29]. The 2D sufficiency condition is similar to Tuy's condition [4] for 3D exact reconstruction, in that any line which can be drawn through the ROI must intersect the source trajectory at least once. This condition can be exploited to reduce the temporal window of acquisition for off-center ROI imaging [30, 31].
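
As a rough numerical illustration of this 2D sufficiency condition (not taken from the cited works), the sketch below samples the lines passing through a disc-shaped ROI and checks that each one intersects a circular source arc; the sampling densities and function names are illustrative assumptions.

```python
import numpy as np

def on_arc(theta, a0, a1):
    """True if angle theta lies on the arc from a0 to a1 (0 < a1 - a0 < 2*pi)."""
    return (theta - a0) % (2 * np.pi) <= (a1 - a0) % (2 * np.pi)

def arc_covers_roi(R, a0, a1, roi_center, roi_radius, n_phi=360, n_s=200):
    """Numerical test of the 2D sufficiency condition: every line that passes
    through the disc-shaped ROI must intersect the source arc at least once.

    R : radius of the source circle; the measured arc spans angles [a0, a1]
    roi_center, roi_radius : center (x, y) and radius of the ROI disc
    """
    cx, cy = roi_center
    for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
        nx, ny = np.cos(phi), np.sin(phi)
        c_proj = cx * nx + cy * ny
        # all lines with normal angle phi that cross the ROI disc
        for s in np.linspace(c_proj - roi_radius, c_proj + roi_radius, n_s):
            if abs(s) > R:                 # line misses the source circle entirely
                return False
            d = np.arccos(np.clip(s / R, -1.0, 1.0))
            hits = ((phi - d) % (2 * np.pi), (phi + d) % (2 * np.pi))
            if not any(on_arc(t, a0, a1) for t in hits):
                return False
    return True
```

The sampling densities trade accuracy for run time; for simple geometries an analytical test is of course possible.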

Multi-Slice Helical Reconstruction

Since the introduction of helical scanning over two decades ago, significant research efforts have been devoted to the subject of cone-beam helical reconstruction, and the majority of the clinical protocols have switched from step-and-shoot to helical mode. In the helical mode of data acquisition, the table is indexed along the z-axis while the gantry rotates about the patient to collect projection measurements.

Approaches to analytic helical cone-beam reconstruction can be generally divided into two major categories: exact reconstruction methods and approximate reconstruction methods. The exact reconstruction methods, as the name implies, try to derive analytical solutions that reproduce the scanned object with mathematical exactness if the input projections are the true line integrals [32]. One of the most exciting advances in the past few years is the exact FBP algorithm first derived by Katsevich [9••, 33••, 34]. This algorithm falls into the category of FBP with a shift-invariant one-dimensional filter, and the flow of the reconstruction is similar to that of the fan-beam FBP. Since its invention, many new algorithms have been proposed and developed to allow more general trajectories, more efficient use of the projection samples, and region-of-interest reconstruction [14, 35–39]. The advantage of these algorithms is of course their mathematical exactness. However, to date, these algorithms have yet to be implemented widely in commercial products, mainly due to their noise properties, limited robustness to patient motion, and lack of flexibility. For example, in a cardiac acquisition, a half-scan is typically used to improve the temporal resolution. In clinical practice, a sub-volume of the heart (in z) needs to be reconstructed for each half-scan acquisition to keep the overall acquisition time to a minimum. This is difficult to achieve with the exact reconstruction algorithms.

The second algorithm category is the approximate reconstruction approaches derived from the original FDK algorithm [40–48]. Each projection is weighted, filtered on a row-by-row basis, and back-projected three-dimensionally into the object space. Although these algorithms are approximate in nature, they do offer distinct advantages, such as volume reconstruction in a single half-scan acquisition. Because of their flexibility, their ability to intrinsically handle data from long objects, and their computational efficiency, the approximate reconstruction algorithms are still the dominant force behind most commercial CT reconstruction engines.

It should be pointed out that the non-exact algorithms often borrow techniques developed for the exact reconstruction algorithms. For example, the Katsevich algorithm performs the projection filtration along a set of κ-curves (instead of row-by-row), as shown in Fig. 8, and each filtered "virtual row" falls along a tangent to the helix. The same approach can be incorporated into the approximate reconstruction to significantly improve image quality. Figure 9 shows a computer-simulated phantom reconstructed with (1) an approximate algorithm with tangential filtering [47] and (2) the Katsevich algorithm: comparable image quality between the exact and approximate reconstruction algorithms can be observed.

Fig. 8 An example of the κ-curves used in the filtering operation of the Katsevich algorithm (helical pitch 87/64)

Fig. 9 Simulated body phantom with a 64 × 0.625 mm detector configuration and 63/64 helical pitch. a Modified FDK algorithm with tangential filtering and 3D weighting. b Katsevich reconstruction algorithm

Iterative Reconstruction

Over the years, many algorithmic approaches have been proposed to reduce image artifacts and noise during the image generation process [49–51]. Noise and artifact reductions are necessary steps toward dose reduction in CT. The recent burst of research activity in iterative reconstruction can also be credited to the increased awareness of the radiation dose generated by a CT scan [52–59], and to advancements in reconstruction hardware capability.

The use of iterative reconstruction to solve the inverse problem of x-ray computed tomography has a long history. As a matter of fact, the reconstruction of the very first clinical image utilized an iterative technique called the algebraic reconstruction technique (ART) to solve a very large system of linear equations [60, 61]. Although the objective of ART at the time was to find a solution to a complicated inverse problem and did not involve complex modeling of the CT system, it nonetheless demonstrated the efficacy of IR techniques for x-ray CT.
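
The core of ART is the Kaczmarz update, in which the current image estimate is projected, one ray at a time, onto the hyperplane defined by that ray's measurement. A minimal sketch is shown below, assuming the problem is posed as A x = b with an explicit (dense) system matrix purely for clarity; practical implementations compute the matrix rows on the fly by ray tracing.

```python
import numpy as np

def art_reconstruct(A, b, n_iters=10, relax=0.5):
    """Basic ART (Kaczmarz) iteration for A x = b.

    A : (n_rays, n_voxels) system matrix, each row describing one ray
    b : (n_rays,) measured line integrals
    relax : relaxation factor in (0, 2); values < 1 damp noise amplification
    """
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            # project the current estimate onto the hyperplane of ray i
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
    return x
```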

If we look beyond CT reconstruction, statistical IR has been used extensively in single photon emission computed tomography (SPECT) and positron emission tomography (PET) to combat photon starvation, and it has shown great benefits in noise reduction and improved accuracy of the reconstructed images. Although the success of IR in SPECT and PET seemingly implies a straightforward transfer of such technology to CT, in reality the transition has been met with great challenges due to the significant differences between the modalities. First, in x-ray CT, pre-processing and calibration steps are numerous and critical to the production of CT images, and these correction steps significantly change the statistical properties of the measured projection samples. If proper modeling is not performed, the desired noise or dose reduction is difficult to achieve. Second, spatial resolution is much higher in x-ray CT than in either SPECT or PET. Achieving high spatial resolution in IR requires carefully modeling the optics of the system as well as using an "edge-preserving" regularization design to control image properties and avoid needlessly sacrificing spatial resolution for noise reduction. Third, the large amount of data and the complexity of the inverse CT problem and its associated models require a carefully crafted optimization cost function whose robust and stable solution must be computed iteratively, which typically demands a long reconstruction time. This has prevented wide application of IR in clinical practice until recent advancements in computer hardware and fast algorithms.

Unlike the analytical reconstruction algorithms where each projection sample is weighted, filtered, and back-projected to formulate an image, iterative reconstruction arrives at the final solution in an iterative manner: the initial reconstructed images are refined and modified iteratively until certain criteria are met. The criteria are written in the form of a cost function to be minimized, which measures the fit of the image to the data according to a model of the imaging system. However, the enormous size of the optimization problem as well as the complexity of the model makes it difficult to solve, and the cost function needs to be minimized iteratively. Each iteration typically involves the forward- and back-projection of an intermediate image volume. The forward-projection step simulates the x-ray interactions with the “object” (the intermediate images) and produces a set of synthesized projections. The synthesized projections are compared against the real projection measurements collected by the CT scanner according to a statistical metric, and the differences between the two are attributed to the “error” or “bias” in the intermediately estimated image volume. The error is then used to update the intermediate image volume to reduce the discrepancy between the image and the acquired data. The new intermediate image is then forward-projected again in the next iteration, and, provided the choice of an adequate cost function and a globally convergent optimization algorithm, the image volume will converge to an estimate near the minimizer of the cost function after several iterations.
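
The loop just described can be written compactly for one common choice of cost function, a penalized weighted least-squares objective minimized by plain gradient descent. The sketch below is schematic only: `forward_project`, `back_project`, and `grad_penalty` stand in for the system operators and regularizer gradient, and the fixed step size is chosen purely for illustration (practical algorithms use the acceleration strategies discussed later).

```python
import numpy as np

def pwls_gradient_descent(y, weights, forward_project, back_project,
                          grad_penalty, beta=1e-3, step=1e-4, n_iters=50,
                          x0=None):
    """One simple realization of the iterative loop:
    forward-project, compare with the data, back-project the error, update.

    y : measured projection data (after calibration and log conversion)
    weights : per-ray statistical weights (higher = more trusted)
    forward_project(x) -> synthesized projections of the image estimate x
    back_project(r)    -> adjoint of forward_project
    grad_penalty(x)    -> gradient of the regularization term
    """
    x = np.zeros_like(back_project(y)) if x0 is None else x0.copy()
    for _ in range(n_iters):
        residual = forward_project(x) - y            # synthesize and compare
        grad = back_project(weights * residual)      # data-fit gradient
        grad += beta * grad_penalty(x)               # regularization gradient
        x -= step * grad                             # update the image
        x = np.maximum(x, 0.0)                       # attenuation is non-negative
    return x
```

Any of the regularizers discussed later (e.g., an edge-preserving penalty) can be supplied through `grad_penalty`.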

In the context of this chapter, due to limited space, we consider only statistical IR as defined in an optimization sense, whereby the reconstructed result is achieved by minimizing a cost function that determines the desired characteristics of the final image estimate as it relates to the measurement data, and the iterative approach mostly determines the speed of the convergence to that solution. Other IR formulations exist, which do not always provide the same properties of predictable convergence and stable results.

Advantages and Requirements

To understand the advantages of a model-based iterative reconstruction (MBIR) over other approaches, let us examine a typical CT system and the assumptions being made to derive the FBP algorithm, or analytical reconstruction algorithms in general. Figure 10 depicts a schematic diagram of a CT scanner with intentionally enlarged x-ray source, detector, and image voxel for easier illustration. In the derivation of an FBP algorithm, a series of assumptions are made regarding the physical dimension of the x-ray focal spot, the detector cell, and the image voxels in order to make the mathematics manageable. Specifically, the size of the x-ray focal spot is assumed to be infinitely small and therefore can be approximated as a point source. Although the detector-cell spacing is correctly considered in the derivation of the reconstruction filters, the shape and dimension of each detector cell are ignored, and all x-ray photon interactions are assumed to take place at a point located at the geometric center of the detector cell. Similarly, the image voxel’s shape and size are ignored and only the geometric center of the voxel is considered. These assumptions lead to the formation of a single pencil beam representing the line integral of the attenuation coefficients along the path which connects the x-ray source and a detector cell. In addition, each of the projection measurements is assumed to be accurate and is not influenced by the fluctuation induced by photon statistics or electronics noise.

Fig. 10 Schematic diagram of a CT system

The CT system model traditionally used by analytical algorithms clearly does not represent physical reality. For most clinical CT scanners, the typical focal spot size is about 1 mm × 1 mm, and the detector cell spacing is slightly larger than 1 mm with an active area of 80–90 % of the spacing. The image voxel is normally modeled as rectangular in shape, with its dimensions determined by the reconstruction field-of-view (FOV) and the slice thickness (due to the limited scope of this chapter, other image voxel shapes, such as blobs, are not discussed). For example, for a 512 × 512 matrix image representing a 50 cm FOV with a 5 mm slice thickness, the reconstructed voxel is 0.98 mm × 0.98 mm × 5 mm in x, y, and z. Because each projection sample is acquired over a finite sampling duration (less than a millisecond on modern scanners) with limited x-ray flux, each measurement at a detector cell has finite photon statistics. The statistical fluctuation is further complicated by the fact that the electronics used in the data acquisition system have an inherent noise floor which contributes to the overall noise in the measurement. There is little doubt that the simplifications made in the derivation of FBP have an impact on reconstructed image quality.
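
For reference, the in-plane voxel dimension quoted above follows directly from the field of view and the matrix size:

\[
\Delta x = \Delta y = \frac{\mathrm{FOV}}{N} = \frac{500\ \mathrm{mm}}{512} \approx 0.98\ \mathrm{mm},
\qquad \Delta z = 5\ \mathrm{mm}.
\]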

MBIR Algorithms

One way of incorporating an accurate system model, which would be mathematically intractable in an analytical reconstruction, is to use a technique called model-based iterative reconstruction (MBIR). In 1982, an expectation–maximization (EM) algorithm incorporating only the noise properties of emission tomography was proposed for SPECT and published in a classic paper [62••]. The development of IR techniques for x-ray computed tomography, however, started a few years later due to its technical complexities and the perception that less benefit was possible than in emission CT [63•, 64••, 65, 66••, 67••, 68–70, 71•, 72–75].

The accuracy of the modeling is critical to the quality of the MBIR reconstruction. There are three key models in an IR algorithm: the forward model, the noise model, and the image model. The forward model accounts for the system optics and all geometry-related effects. An accurate system model is necessary to ensure that modeling errors do not grow during the iterative convergence process and form artifacts in the reconstructed images. One example of an accurate forward model is the casting of multiple pencil rays through the x-ray focal spot, the image voxel, and the detector cell. These pencil beams mimic the different x-ray photon paths going through the object, and the complicated CT optics is approximated by a weighted summation of many simple line-integral models, as depicted in Fig. 11 [76]. One strategy to ensure unbiased coverage of different points on the focal spot, different locations on a detector cell, and different sub-regions inside an image voxel is to sub-divide the focal spot, detector cell, and voxel into equal-sized small elements. Needless to say, this process is time consuming.
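
A schematic sketch of this sub-division idea is given below: the focal spot and the detector cell are each sampled at several points, and the detector reading is modeled as the average of the resulting simple line integrals (sub-dividing the voxel follows the same pattern). The helper `line_integral` is a placeholder for any single-ray projector (e.g., Siddon-style ray tracing) and, like the parameter names, is an illustrative assumption.

```python
import numpy as np

def cast_subrays(volume, spot_center, spot_size, cell_center, cell_size,
                 line_integral, n_sub=3):
    """Approximate one detector reading by averaging n_sub * n_sub sub-rays.

    spot_center, cell_center : 3D positions (NumPy arrays) of the focal-spot
        and detector-cell centers
    spot_size, cell_size : 3D extent vectors of each element (same units)
    line_integral(volume, p0, p1) -> line integral of `volume` from p0 to p1
    """
    offsets = (np.arange(n_sub) + 0.5) / n_sub - 0.5     # uniform sub-sampling
    total = 0.0
    for ds in offsets:                                    # sample the focal spot
        src = spot_center + ds * spot_size
        for dd in offsets:                                # sample the detector cell
            det = cell_center + dd * cell_size
            total += line_integral(volume, src, det)
    return total / n_sub**2
```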

Fig. 11 Illustration of modeling the system optics with ray casting

An alternative approach is to model the "shadow" cast by each image voxel onto the detector and to rely on point-response functions to perform the optics modeling, as shown in Fig. 12 [71•, 77•]. In this approach, only one ray is cast between the center of the focal spot and the center of the image voxel to locate the point of intersection on the detector, and a response function is used to distribute the contribution of the image voxel to the different detector cells during the forward-projection process. The response function is non-stationary and changes with the voxel location to account for different magnifications and orientations. Compared to the ray-casting approach, this method is computationally more efficient, and it can incorporate advanced system modeling such as the detector's effective area, cross-talk, or the focal spot energy distribution.

Fig. 12 Illustration of modeling the system optics with a response function

Note that accurate system modeling implies accounting for spatially-varying behavior where the projection of an image voxel onto the detector depends on its shape and position relative to the x-ray source, contrary to simplistic models used in FBP-type reconstruction. Using this accurate system model, it is also important that the forward- and back-projection operators remain adjoint to each other in order to prevent the propagation of errors in the reconstruction.

The modeling of the noise distribution in the projections is not as straightforward as one may think, complicated by the fact that the CT projection measurements undergo complicated pre-processing and calibration processes prior to their use in the reconstruction. Although the original x-ray photon statistics approximately follow a Poisson distribution (approximate because of the poly-energetic x-ray source used in CT), the projection data after the calibration process no longer exhibit this behavior. The same complication applies to the electronic noise after the pre-processing steps. If a simplistic model is used to approximate the statistical properties, suboptimal results are likely. It should be pointed out that the level of bias or error introduced by a simplified model increases as the x-ray flux received at the detector decreases, and the image quality degrades accordingly. This is particularly undesirable given that one of the major driving forces behind MBIR is dose reduction, which naturally reduces the x-ray flux at the detector.

Once the projection noise is properly modeled, the philosophy behind MBIR is to treat each projection sample differentially based on its estimated noise variance. In general, a higher confidence is placed on samples with high signal-to-noise ratio (SNR) and lower weights are placed on samples with low SNR. This is in direct contrast to the noise treatment in analytical reconstruction where all samples are treated equally from a statistical point of view.
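
As a rough illustration of this differential treatment (actual noise models are considerably more involved, as noted above), a common approximation assigns each post-log sample a weight derived from its expected detected photon count, reduced when electronic noise becomes significant; the symbols and default values below are illustrative.

```python
import numpy as np

def statistical_weights(p, incident_counts, sigma_e=5.0):
    """Sketch of per-ray weights for a weighted least-squares data term.

    p : post-log projection values (line integrals)
    incident_counts : unattenuated photon counts I0 for each ray
    sigma_e : standard deviation of electronic noise, in count units
    Higher detected counts (high SNR) yield larger weights.
    """
    detected = incident_counts * np.exp(-p)      # expected transmitted counts
    # variance of a post-log value is roughly (detected + sigma_e^2) / detected^2,
    # so the weight is its reciprocal
    return detected**2 / (detected + sigma_e**2)
```

Such weights could play the role of the `weights` array in the earlier loop sketch.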

The image model is somewhat more subjective. The general approach is to model the image volume as a Markov random field, which results in a cost-function term that penalizes noise-induced intensity fluctuations in the neighborhood of a voxel. By carefully examining the human anatomy, one naturally concludes that the attenuation characteristics of a voxel-sized element inside the human body are not completely isolated from or independent of the surrounding elements. Therefore, if a voxel's CT number deviates strongly from those of its neighbors, it should be penalized and modified. The term used to enforce such conditions in the cost function is often called the regularization term in IR.

Although the concept looks simple, several challenges need to be addressed in the design of the regularization term. Differentiating noise from real structures in the image can be quite difficult, since real object structures are often buried in the noise and there is little prior information available about the object itself. The general approach taken by researchers is to construct an "edge-preserving prior" that smooths noise while preserving structural edges [71•, 78, 79]. Another challenge is maintaining uniform image quality across the volume. In CT reconstruction, the noise distribution in the image varies as a function of the local attenuation; highly attenuated regions tend to have higher noise. A spatially varying design of the regularization term is therefore required to achieve uniform image resolution and noise behavior [80]. Another difficulty is properly balancing the regularization and data-likelihood terms in the cost function to achieve the desired image quality (IQ): designing for robust IQ behavior across a range of clinical scenarios without end-user interaction can pose a significant challenge.
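
One widely used family of edge-preserving penalties applies a Huber-type function to the differences between neighboring voxels: small (noise-like) differences are penalized quadratically, while large (edge-like) differences are penalized only linearly. The sketch below computes the gradient of such a penalty and is illustrative only; the transition parameter `delta` and the neighborhood definition are assumptions, not any particular vendor's design.

```python
import numpy as np

def huber_penalty_grad(x, delta=1e-3):
    """Gradient of a Huber-type edge-preserving penalty on an image array.

    Small differences between neighboring voxels are penalized quadratically
    (smoothing noise); large differences are penalized linearly, so genuine
    edges are not blurred away. `delta` sets the noise/edge transition.
    """
    grad = np.zeros_like(x)
    for axis in range(x.ndim):
        d = np.diff(x, axis=axis)                 # neighbor differences along axis
        # Huber influence function: linear beyond |d| > delta, quadratic below
        psi = np.clip(d, -delta, delta)
        pad = [(0, 0)] * x.ndim
        pad[axis] = (1, 0)
        psi_lo = np.pad(psi, pad)                 # psi of the "previous" neighbor
        pad[axis] = (0, 1)
        psi_hi = np.pad(psi, pad)                 # psi of the "next" neighbor
        grad += psi_lo - psi_hi
    return grad
```

This gradient could serve as the `grad_penalty` callable in the earlier iterative-loop sketch.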

The MBIR cost function typically incorporates all the models discussed above. High spatial resolution is usually a direct result of an accurate forward model combined with appropriate regularization. For demonstration, Fig. 13a, b depict a clinical example of standard FBP and MBIR reconstructions of the same projection data. The MBIR image exhibits not only significantly reduced noise but also much improved spatial resolution as demonstrated by the visibility of fine anatomical structures.

Fig. 13 Example of coronal images reconstructed with a FBP and b MBIR

It is clear from the discussion that fully modeled IR is computationally much more intensive than analytical reconstruction methods. Although it provides superb image quality in terms of spatial resolution, noise reduction, and artifact correction, the time delay in the image generation process is not negligible. In today's clinical environment, "real-time" image reconstruction is often demanded, and the CT operator expects all images to become available shortly after the completion of the data acquisition.

Extensive research has been conducted over the years to speed up IR algorithms. Fast hardware implementation, distributed parallel processing, and efficient algorithm convergence properties are equally important to achieving this goal. To speed up convergence, one approach is to improve the update strategy at each iteration. There are two types of update strategies: global updates and local updates. Algorithms using the global update strategy update the entire image volume at once. These algorithms are relatively easy to parallelize. However, since the inverse problem is typically ill-conditioned, given the large number of unknowns relative to the measurements and the wide range of data statistics, simple global update algorithms such as gradient descent and conjugate gradient tend to require a large number of iterations to converge. Therefore, various acceleration techniques such as preconditioner-based methods [81] and ordered-subset (OS) methods [75] have been proposed to accelerate the convergence of global update algorithms. The OS algorithms perform their updates based on subsets of the projection data: at each iteration, the image update starts with one subset of the data and then cycles through the remaining subsets until all projection data have been used. Today, this acceleration technique is used extensively in commercial PET and SPECT scanners.
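
A schematic of one ordered-subsets pass is sketched below, applied to the same weighted least-squares gradient step used earlier: each sub-update uses only one angular subset of the views, scaled by the number of subsets to approximate the full gradient. The subset-restricted operators and the step size are assumptions for illustration.

```python
import numpy as np

def os_update(x, y_subsets, w_subsets, fp_subsets, bp_subsets, step=1e-4):
    """One ordered-subsets pass: cycle through view subsets, updating after each.

    y_subsets, w_subsets : lists of projection data / weights, one per subset
    fp_subsets, bp_subsets : matching forward/back-projectors restricted to each subset
    """
    n_sub = len(y_subsets)
    for s in range(n_sub):
        residual = fp_subsets[s](x) - y_subsets[s]
        # scale by the number of subsets so the step approximates the full gradient
        grad = n_sub * bp_subsets[s](w_subsets[s] * residual)
        x = np.maximum(x - step * grad, 0.0)
    return x
```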

In contrast to global updates, local updates focus on updating a single voxel or a subset of voxels at a time. These voxels are selected according to a pattern, possibly random, and their updates are estimated based on all of the projection data. After the first set of voxels is updated, another set is selected and updated, in sequential fashion. An example of such an algorithm is the iterative coordinate descent (ICD) algorithm [63•, 64••, 71•, 72]. If we define a single update of all voxels in the image as one full iteration, the local update strategy generally takes only a few full iterations to reach acceptable convergence. The convergence speed can be further improved by exploiting the flexibility to update certain voxels more frequently than others in the so-called non-homogeneous update strategy [72, 82].
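
The sketch below illustrates one such single-voxel update for a quadratic (penalized weighted least-squares) data term, with the running projection-space residual kept in sync after every change. The column `a_j` of the system matrix is assumed to be available (in practice it is computed on the fly), and the penalty derivatives are supplied by the regularizer; all names are illustrative.

```python
import numpy as np

def icd_voxel_update(x, j, a_j, residual, weights, beta,
                     penalty_grad_j, penalty_curv_j):
    """Update a single voxel j and keep the projection-space residual in sync.

    x : current image (flattened); residual = A @ x - y
    a_j : column j of the system matrix (footprint of voxel j on the detector)
    penalty_grad_j, penalty_curv_j : first and second derivative of the
        regularizer with respect to voxel j, evaluated at the current image
    """
    num = a_j @ (weights * residual) + beta * penalty_grad_j
    den = a_j @ (weights * a_j) + beta * penalty_curv_j
    step = -num / den
    new_val = max(x[j] + step, 0.0)              # enforce non-negativity
    residual += a_j * (new_val - x[j])           # keep the residual consistent
    x[j] = new_val
    return x, residual
```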

From an implementation point of view, however, it can be difficult to parallelize a local update algorithm across a large number of threads to take full advantage of massively parallel computer architectures. The global update strategy is just the opposite: it is easier to parallelize, but typically needs a larger number of iterations to reach convergence. Historically, the computational complexity of MBIR was one of the most important factors that prevented its application to CT. Although the same issue is still present today, the magnitude of the obstacle has been significantly reduced. With renewed research interest, this issue is likely to be resolved in the near future through the combination of advanced computer architectures and improved algorithm efficiency. It should be noted that the investigation of MBIR for CT is still in its infancy, and new clinical applications of MBIR are likely to be discovered.

Discussion and Conclusion

Given the limited scope of this chapter, it has been necessary to focus on only a few major areas of tomographic reconstruction. Unfortunately, many exciting developments that address special clinical needs could not be adequately covered. For example, despite novel scanner hardware developments, cardiac coronary artery imaging still poses significant challenges to CT due to the presence of irregular and rapid vessel motion. Motion-induced artifacts appear even in patients with low heart rates, resulting from the relatively high velocities of some of the coronary arteries [83]. A recently developed advanced algorithm appears to offer a good solution to this issue [84••]. The motion-correction algorithm characterizes the vessel motion using images reconstructed from slightly different cardiac phases and utilizes these motion vectors to correct for the imaging artifacts. Figure 14 depicts an example of a patient's cardiac images without (a) and with (b) motion compensation. The improvement in image quality is obvious.

Fig. 14 Advanced motion correction example (heart rate: 84 bpm). a Conventional FBP algorithm. b With advanced motion correction

New developments in other areas, such as the generation of material-density images and monochromatic images in dual-energy CT, variable-pitch helical acquisition and reconstruction, metal artifact reduction algorithms, and region-of-interest scanning and reconstruction, are equally exciting. As technology evolves, it is expected that tomographic reconstruction algorithms and other advanced algorithmic approaches will play an even greater role in CT.