Background
Because humans have no physiological pathway for eliminating excess body iron, surplus iron from gut absorption or transfusion is stored in a stable haemosiderin form, but once this storage capacity is exhausted free iron can be left unbound. This free ferrous iron can catalyse cellular Fenton reactions, creating highly reactive hydroxyl radicals that cause mitochondrial dysfunction [1], apoptosis and other cellular dysfunction [2–4]. The most common cause of iron overload is the transfusion dependent anaemias, in which cardiac iron toxicity and contractile dysfunction can lead to heart failure, arrhythmias and death [5, 6].
Cardiovascular magnetic resonance (CMR) T2* assessment of myocardial iron burden at 1.5 Tesla (T) [7] plays a crucial role in the early diagnosis and management of patients with cardiac iron overload [8]. It has been validated against tissue iron concentration [6, 9], and there is a powerful relation between low T2* values and the future occurrence of heart failure [5]. It is used worldwide, with excellent intercentre reproducibility of T2* values [10, 11], which allows consistency of clinical management between centres and a coherent approach to international guidelines [12]. Data also show that adoption of T2* CMR leads to a substantial reduction in mortality in thalassaemia, which has been attributed to the adoption of aggressive iron chelation tailored to the heart before the onset of heart failure [13].
Against this background of substantial gains in the clinical management of transfusion dependent patients over the last 15 years, new advances in hardware and software technology have emerged. Increasingly, 3T scanners are entering clinical practice, and these may offer advantages over conventional 1.5T magnets for iron burden quantification. Unfortunately, 3T CMR may suffer from increased artefacts that make myocardial T2* assessment challenging, particularly in patients with a high degree of iron overload [14, 15]. Another more recent advance is improved T1 mapping of the heart by CMR. The Modified Look-Locker inversion recovery (MOLLI) sequence and its shortened version (ShMOLLI), as well as a number of other sequence variants, have now been used to assess myocardial T1. Native T1 measurements have shown promise in the assessment of diffuse interstitial fibrosis [16], and may help characterise myocardial infiltration in amyloidosis or Anderson-Fabry disease [17, 18]. T1 measurements can also be combined with gadolinium for quantification of extracellular myocardial volume, which is promising for the assessment of prognosis in heart disease [19, 20] and the measurement of myocardial amyloid burden [21]. Native T1 has been shown to shorten with increasing tissue iron concentration in vitro [22], and we recently reported the use of T1 in thalassaemia patients at 1.5T [23]. T1 is therefore a credible technique to evaluate for iron measurement. We further investigated the relationship between native T1 at 1.5T and 3T against the current gold standard technique of black-blood (BB) T2* at 1.5T in normal subjects and a spectrum of patients with transfusion dependent anaemia.
Discussion
These data show reasonable correlations between cardiac T1 and cardiac T2* measures across a wide range of cardiac iron loading, albeit with significant scatter, and there is good agreement with our previous data comparing T1 and T2* at 1.5T in thalassaemia patients [23] and with the more recent data from Sado et al. [29], although with differences as discussed below. There is therefore a prima facie case for the notion that T1 has a potential role in iron measurement. The question that arises is what the relative advantages and disadvantages of using T1 might be.
The advantages of T1 CMR include: A) at low iron levels, T1 is less affected by the non-iron influences that affect T2* measurements, which makes T1 appear more suitable for clinical conditions in which low iron levels may be responsible for disease pathogenesis; B) T1 has good inter-study reproducibility, which appears superior to that of T2* and may be of benefit for longitudinal patient follow-up and clinical trials. Counterbalancing views to these advantages are: A) the dominant pathologies associated with iron dysfunction by prevalence are those involving substantial iron overload (e.g. transfusion dependent anaemias, haemochromatosis), and for these conditions there is rather little obvious merit in defining iron levels moving within the non-significant range, when movements of T2* are robustly identifiable in the low-risk mild to moderate iron overload range of cardiac T2* between 10 and 20 ms. Nonetheless, some more unusual pathologies are apparently associated with iron dysfunction at low iron levels [23, 30], for which T1 might be a more suitable iron assay than T2*. B) The improved inter-study reproducibility of T1 measurements is potentially valuable. This being said, the performance of cardiac T2* could hardly be described as weak: the relative CoVs at 1.5T for T1 and T2* in this paper were 1.79 % vs 4.55 %, with near identical ICCs of 0.993 vs 0.994. Both parameters therefore appear to be strong performers in absolute terms. Certainly, sample sizes in the order of 60 have been used in randomised controlled trials using T2* as the primary endpoint that showed a significant difference between the performance of competing iron chelators [31, 32], and a smaller sample size may not be clinically credible. However, T1 might have an advantage if two drugs with similar efficacy were to be compared. This would also depend on other factors, including the standardisation of T1 measurements, as addressed further below. It should be noted that improvements in T2* technology may improve its reproducibility [33].
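For readers unfamiliar with the reproducibility statistics quoted above, inter-study CoV and ICC figures of this kind are typically derived from paired scan-rescan measurements. A minimal sketch, using synthetic data rather than this study's measurements, might look like this:

```python
import numpy as np

# Illustrative sketch of scan-rescan reproducibility statistics (CoV, ICC).
# The data below are synthetic, not the study's measurements.
rng = np.random.default_rng(0)
n = 40
truth = rng.normal(1000, 100, size=n)       # between-subject spread (e.g. T1 in ms)
scan1 = truth + rng.normal(0, 15, size=n)   # first scan with measurement error
scan2 = truth + rng.normal(0, 15, size=n)   # repeat scan with measurement error

# Inter-study coefficient of variation (%): SD of the paired differences
# divided by sqrt(2) times the overall mean.
cov = np.std(scan1 - scan2, ddof=1) / (np.sqrt(2) * np.mean([scan1, scan2])) * 100

# One-way random-effects intraclass correlation coefficient, ICC(1,1):
# between-subject variance relative to total variance.
subj_mean = (scan1 + scan2) / 2
grand_mean = np.mean([scan1, scan2])
ms_between = 2 * np.sum((subj_mean - grand_mean) ** 2) / (n - 1)
ms_within = np.sum((scan1 - subj_mean) ** 2 + (scan2 - subj_mean) ** 2) / n
icc = (ms_between - ms_within) / (ms_between + ms_within)

print(f"CoV = {cov:.2f} %, ICC = {icc:.3f}")
```

With small measurement error relative to the between-subject spread, the CoV is low and the ICC approaches 1, which is the pattern reported above for both T1 and T2*.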
It has also been suggested that cardiac T1 might have improved utility for identifying patients with cardiac iron loading through a better delineation of the lower limit of normal [29]. It is difficult to sympathise with this view. The normal range for cardiac iron has received rather little attention from basic science, with the most commonly cited paper, from Collins and Taylor published in 1987, indicating a mean concentration in eight normal hearts of 0.34 mg/g dry weight (dw) (range 0.29–0.47) [34]. From the work of Carpenter et al. [6], the accepted mean normal myocardial T2* value of 40 ms yields a myocardial iron concentration of 0.5 mg/g dw, which is in reasonable accord with the upper end of this range. It is remarkable that such a level of agreement should occur given that the methodologies of myocardial iron measurement are completely different and were performed some 24 years apart, and this lends credence to the T2* calibration. The definition of the normal range of cardiac iron from T2* is, however, philosophically challenging. Early attempts to justify the lower limit of normal were made on the simple basis of taking the mean measurement in normal subjects and subtracting two standard deviations. However, it is now known that T2* is not normally distributed, and this approach, which labels the lower ~5 % of the normal population as having “cardiac iron overload”, is neither statistically sensible nor clinically satisfying. A more pragmatic approach is to sample a normal population and take the lowest T2* value obtained as the lower threshold (assuming that the normal population does not include a subject with occult iron loading). For myocardial T2*, this has generally been accepted as 20 ms, although other values can be, and have been, suggested. A further persuasive argument for the definition of a normal range is to associate the T2* with outcomes, because this is patient focussed rather than test focussed. The work from Kirk et al. clearly showed that heart failure occurs in a very high proportion of cases only when myocardial T2* is <10 ms (a myocardial iron equivalent of 2.7 mg/g dw). This threshold of severe iron loading is therefore beyond question. However, to suggest that a hard and fast rule is needed to define the threshold for the onset of mild cardiac iron overload is somewhat misguided and unhelpful. Since the range of myocardial T2* between 20 ms and 10 ms is large and easily measured, it serves as a valuable early warning zone that problems are likely in the future should the T2* drop further. Whether this range starts at 20 ms or at a slightly higher value takes no account of the continuous nature of iron loading, and basing clinical decisions on such an arbitrary threshold would imply a vital importance that it does not have. Questions of whether T1 or T2* better identifies a threshold for early cardiac iron loading can therefore be considered moot. The position that T1 is better for identifying the lower limit of normal myocardial T1 is also problematic unless the measurement can be standardised, with, for example, Sado et al. reporting a value of 904 ms [29] vs 939 ms in this report.
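The two T2*-to-iron correspondences quoted above (40 ms at the normal mean, equivalent to 0.5 mg/g dw; 10 ms at the severe threshold, equivalent to 2.7 mg/g dw) imply a power-law relationship. A small sketch, reconstructed from those two quoted points only and not from the published calibration itself, shows how intermediate thresholds such as the 20 ms lower limit of normal then translate into iron concentration:

```python
import math

# Power-law calibration [Fe] = a * T2*^b (mg/g dry weight), solved from the
# two points quoted in the text: 40 ms <-> 0.5 mg/g dw and 10 ms <-> 2.7 mg/g dw.
# This is an illustrative reconstruction, not the published fit.
b = math.log(2.7 / 0.5) / math.log(10 / 40)
a = 0.5 / 40 ** b

def iron_mg_per_g(t2star_ms: float) -> float:
    """Myocardial iron concentration (mg/g dw) implied by a T2* value in ms."""
    return a * t2star_ms ** b

# On this curve the 20 ms lower limit of normal maps to roughly 1.2 mg/g dw.
for t2s in (40, 20, 10):
    print(f"T2* = {t2s:2d} ms -> ~{iron_mg_per_g(t2s):.2f} mg/g dw")
```

The curve passes through both quoted points by construction; the point of the sketch is simply that iron loading varies continuously over the 10–20 ms range, which is why treating any single threshold as a hard rule is unhelpful.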
Another potential advantage of T1 measurement of iron concentration identified in this paper is the use of T1 at 3T. There has been a substantial increase in the use of 3T for MR of the brain and musculoskeletal system, and a number of centres no longer have 1.5T scanners available, which causes problems for thalassaemia patients. The two most likely solutions to this problem are to calibrate T2* at 3T against 1.5T, on which progress has already been made [14, 15, 35], or to consider using T2 or T1 mapping. In this paper, we show that T1 mapping at 3T is feasible and causes only rather modest prolongation of T1 values (median normal at 1.5T vs 3T: 1014 ms vs 1165 ms). The scatter of T1 vs T2* values at 3T (Fig. 3a) is similar to that at 1.5T (Fig. 2a), and the inter-study reproducibility of T1 at 3T (1.45 %) was slightly superior to that at 1.5T (1.79 %). This approach may be worth exploring further, because T2* imaging at 3T suffers from increased artefacts and shorter T2* values, which can be problematic compared with imaging at 1.5T.
A problem associated with the current implementation of T1 mapping is the variation of absolute T1 values between sequences and scanners. This is illustrated in this paper, where the median value of T1 at 1.5T in normal subjects was 1014 ms, which compares with the mean of 968 ms from Sado et al. [29]. Sado et al. used the shortened form of MOLLI (ShMOLLI) as well as MOLLI, and they found a 25 ms difference between their techniques [29]. This difference is not insignificant, being almost one standard deviation of their normal range (32 ms). These differences arise from the complex nature of measuring T1, which is affected by the sequence used [36] and by a large number of technical parameters that are difficult to standardise [27]. Work is ongoing to try to resolve these issues [37, 38], but T1 mapping for iron assessment would currently require each centre to establish its own normal range. This issue is less problematic for marked iron loading, however, because the dynamic range of the decrement in T1 reaches ~300 ms, although the direct comparison of T1 values between centres would remain difficult. It is also clear that the direct comparison of T1 against T2* values, which are understood clinically, relies considerably on the modelling used to relate these two parameters, and this is currently unresolved. The three papers that have modelled T1 against T2* at 1.5T have found significantly different regression lines due to data scatter, such that, for example, a T2* of 10 ms approximates to a T1 of 620 ms by Feng et al. [23], 650 ms by Sado et al. [29], and 720 ms in the current paper. This situation differs from that of T2*, which proved straightforward to transfer between centres [10], between different manufacturers' scanners [39, 40], and between centres internationally [11, 39], yielding excellent intercentre agreement of T2* values at an early stage of development [11, 39].
Some further technical issues need to be considered in the use of T1 for iron assessment. Diffuse fibrosis raises native T1, whereas myocardial iron lowers it. There is debate about the prevalence of myocardial fibrosis in thalassaemia [41, 42], but its occurrence would increase the T1 value, causing iron to be underestimated. T2* does not appear to be significantly affected by myocardial fibrosis [43, 44]. A further confounding issue for T1 measurements is that with increasing iron loading, T2* effects influence the measurement of T1 [16, 23], which affects its accuracy. T1 measurements also need to be performed on carefully selected regions of interest in the myocardium that exclude tissues with very different T1, such as blood and fat, to prevent erroneous T1 values [45]. Finally, there is a report that T1 varies by gender and age, with values increased by 24 ms in females up to the age of 45 years [45], which would be unwelcome if substantiated. All these issues need further consideration.
Further troublesome issues also exist with the validation of T1. Clearly, this paper and others have shown that T1 can be used to measure tissue iron concentration, but in humans the only comprehensive validation in the heart has been performed using T2*. Myocardial T2* measured at 1.5T is firmly established as the gold standard measurement of myocardial iron concentration [12] for a number of important reasons: 1) the scan is fast (a short single breath-hold), which makes it easy for patients to tolerate (even children) and therefore efficient in its use of MR scanner time; 2) it is robust in clinical practice, and scan failures are unusual; 3) absolute T2* values are highly reproducible between different manufacturers' scanners and sequences, allowing individual centres to directly compare patients' results [39]; 4) the inter-study reproducibility is very good, allowing small sample sizes for clinical treatment studies in the order of 60 patients; 5) the T2* technique is calibrated directly against human heart tissue [6, 9], whereas attempts to calibrate T1 against human heart tissue have to date failed [46] as a result of the effects of formalin on T1 relaxation; 6) the relation between the cardiac T2* value and future cardiac events is well described and powerful [5]; 7) the technique is widely disseminated in clinical practice around the world [47], which is not the case for T1; and 8) a reduction in cardiac mortality has been demonstrated using T2* as a guide to aggressive iron chelation therapy tailored to the heart in advance of the onset of heart failure [13]. Any MR scanning alternative to T2* therefore faces a substantial burden of re-validation and outcomes analysis to reach the same level of clinical confidence in use. Another approach to this problem is simply to assume that putative alternatives such as T1 would be calibrated against conventional T2* values at 1.5T and therefore directly cross-referenced to the established cardiac T2* literature. Such an approach has merit, but it certainly leaves room for doubt as to the veracity of the measurement, because there is no guarantee that T1 and T2* measure identical chemical iron species with identical effects on outcome. The additional problem, already mentioned, is that there are significant difficulties in identifying the best fit curve between T1 and T2*, which leads to uncertainty.
Competing interests
DJP is a consultant to ApoPharma and a director of CVIS. The other authors declare that they have no competing interests.
Authors’ contributions
Data were acquired by MHA, RW, GCS, TH, PD. Data analysis was performed by MHA, DA, GCS, TH, VV, DJP. The manuscript was drafted by DA, DJP, JB and MHA. All authors contributed to the design and intellectual content of this study, and have read and approved the final manuscript.