Background
Infection with Plasmodium falciparum (P. falciparum) causes over 1 million deaths each year [1]. The risks of death and clinical illness are highest in young children (<5 years old), whereas adults living in endemic areas show reduced prevalence of infection, reduced parasitemia, and reduced incidence of clinical illness. This resistance to infection and illness with age is often referred to as ‘naturally acquired immunity’, and understanding its mechanisms may facilitate the development of a vaccine for the control of malaria. Studies of naturally acquired immunity rely on identifying variation in susceptibility in the population, and then characterizing the differences in immune responses between susceptible and resistant individuals. If immune responses associated with resistance can be identified, these may provide useful targets in the development of vaccines.
A key feature in the study of naturally acquired immunity is the identification of individuals who are relatively protected from infection or illness. If immune responses can be characterized at baseline, and subsequent infection rates identified, then it is possible to retrospectively identify those responses most closely associated with protection. Prospective cohort studies offer the opportunity to measure immune responses at baseline, and to investigate these as predictors of either infection (parasitemia) or clinical illness. Susceptibility may be measured as the presence or absence of infection or clinical episodes in a fixed time period, as the number of episodes in a period, or as the time to an episode. An alternative approach to studying malaria susceptibility and resistance is through a prospective study of time-to-infection in a cohort of individuals treated to eliminate malaria and then undergoing natural exposure in an endemic setting. By observing which baseline immunological factors predict a delay in the time-to-infection, the aim is to detect protective immune responses. Such studies have been used to explore the relationship between antibody responses and protection from both infection and clinical episodes [2-4].
Although these are generally referred to as time-to-infection studies, very different results can be obtained depending on whether infection is detected by microscopic examination of blood, or by more sensitive PCR techniques [5-7]. Since these two techniques give different times of ‘infection’, it is probably more accurate to discuss these studies as measuring ‘time-to-detection’ of infection (using a particular detection method). Thus, it is important to understand that current ‘time-to-infection’ studies are always measuring ‘time-to-detection’. If we had a sensitive enough assay, the time of initiation of infection and the time of detection would coincide. However, in the absence of this, we will use the term ‘time-to-initiation’ to refer to the time until initiation of blood-stage infection, and ‘time-to-detection’ to refer to what is usually described as ‘time-to-infection’.
In time-to-infection (detection) studies a major assumption is that delayed acquisition of infection (or clinical disease) is the result of the level of immune protection. However, the timing of when infection or disease is first detected depends on two major factors. The first is the random timing of when a particular individual experiences a new infection (from an infectious bite from a mosquito). The second is how the immune system subsequently modifies the outcome of the bite to determine whether and when infection or clinical illness is detected. For example, liver-stage immunity may reduce the probability that an infectious mosquito bite results in a blood-stage infection (and only a small fraction of infected mosquito bites are thought to reach the blood stage [8,9]). Similarly, blood-stage immunity may delay the timing of parasite detection or clinical illness after the initiation of blood-stage infection, and may reduce peak parasite levels or the clinical manifestations of infection [5]. It is generally assumed that immunity plays a role in determining differences in time-to-infection, and thus that time-to-infection can be used as a correlate of immunity [2-4]. The major effect of pre-erythrocytic immunity would be to delay the average time-to-initiation of infection. Blood-stage immunity would not change time-to-initiation, but would change time-to-detection, because slower parasite growth would increase the delay between initiation and detection.
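To make these two contributions concrete, the following minimal sketch (not the model fitted in this study; all parameter values are hypothetical and purely illustrative) treats time-to-initiation as an exponential waiting time set by a constant force of infection, and adds a detection delay given by exponential blood-stage growth from an assumed inoculum up to an assay-specific detection threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative parameters (not fitted values from this study)
force_of_infection = 0.03     # new blood-stage infections per person per day
inoculum = 10.0               # parasites/mL at initiation of blood-stage infection
growth_rate = 0.7             # per-day exponential growth rate (faster in young children)
pcr_threshold = 1e2           # parasites/mL detectable by a PCR-like assay
microscopy_threshold = 1e5    # parasites/mL detectable by a microscopy-like assay

n = 1000
# Random timing of initiation: exponential waiting time set by the force of infection
time_to_initiation = rng.exponential(1.0 / force_of_infection, n)

def detection_delay(threshold, inoculum, growth_rate):
    """Days for exponentially growing parasitemia to rise from the inoculum to the threshold."""
    return np.log(threshold / inoculum) / growth_rate

# Time-to-detection = random time-to-initiation + growth-dependent detection delay
t_detect_pcr = time_to_initiation + detection_delay(pcr_threshold, inoculum, growth_rate)
t_detect_micro = time_to_initiation + detection_delay(microscopy_threshold, inoculum, growth_rate)

print(f"median time-to-initiation: {np.median(time_to_initiation):.1f} days")
print(f"median time-to-detection (PCR-like): {np.median(t_detect_pcr):.1f} days")
print(f"median time-to-detection (microscopy-like): {np.median(t_detect_micro):.1f} days")
```

Under these illustrative numbers, the growth-dependent delay adds only a few days before PCR-like detection but roughly two weeks before microscopy-like detection, compared with a median time-to-initiation of several weeks set purely by chance.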
We have previously analysed the mechanisms of naturally acquired immunity by studying the dynamics of infection in individuals of different ages [5]. We found that the growth rate of parasites in blood-stage infection decreased with age, and that this decrease in growth rate explains the differences in time-to-detection observed in individuals of different ages [5,10]. Our modelling suggested that time-to-initiation of blood-stage infection was not significantly different between age groups, and thus found little evidence for pre-erythrocytic immunity delaying time-to-initiation. By contrast, we found that blood-stage growth decreased with age and that this decreased growth explained the delayed time-to-detection with age. Understanding how heterogeneity in blood-stage immunity and parasite growth rate affects time-to-infection studies is important for interpreting immune correlates arising from these studies.
Herein, we have analysed the kinetics of infection in a treatment-time-to-infection (detection) study performed in Kenya [2], in order to understand the ability of this approach to identify differences in susceptibility or resistance to infection. We argue that, in most cases, the major factor determining the time-to-detection is simply the random timing of when infection happened to be initiated. We show that, depending on the age cohort and the method used to detect infection, stratifying individuals based on time-to-detection may not be useful in identifying individuals who are more susceptible or more immune to infection. As a result, the timing of infection between individuals often carries little information about the level of immunity of the individuals concerned. We illustrate how the sensitivity of the parasite-detection method also plays an important role in determining how powerful this technique is at estimating the level of immune protection; paradoxically, the higher the sensitivity of the detection method, the lower its ability to discern differences in parasite growth rate. Overall, our analysis suggests that time-to-infection studies need to be interpreted with caution, and that alternative approaches, such as direct measurement of parasite growth rate, may be much more sensitive at detecting differences in acquired immunity to P. falciparum infection.
Discussion
Identifying naturally acquired immune responses that are able to control parasite growth in the infected host and reduce the frequency of clinical malaria provides a potential avenue for the development of novel vaccination strategies. A number of investigators have studied how baseline (pre-treatment) immune responses affect time-to-infection after treatment [2,4]. This has also been studied using time-to-infection from cohorts that have naturally cleared malaria infection during the dry season in areas of seasonal transmission [23,24]. The underlying premise of such studies is that time-to-infection is determined by the level of immunity of the host. This assumption is supported by the fact that, when stratified by age, older individuals in endemic areas are consistently observed to have a longer time-to-infection, and this delay is thought to be due to naturally acquired immunity [5,25].
The association between age and time-to-infection suggests that this is a useful correlate of naturally acquired immunity. However, since the phenomenon of acquired resistance with age and exposure is well known, comparing immune responses and time-to-infection in different age groups seems rather laborious if the same information can be obtained simply from date of birth. The major utility of time-to-infection studies would be in differentiating individuals of similar age and level of exposure who differ in their levels of immunity. By identifying the differences in immune responses between such ‘exposure-matched’ individuals with different levels of protection, we should be able to identify protective responses and antigens. By comparing narrow age cohorts in localized geographical areas, we may be able to identify such responses. However, there are two major problems with this approach. First, such narrow cohorts may also not differ greatly in their levels of immunity, so a study design that is very sensitive to small differences in susceptibility may be required. Second, we are most interested in differing levels of immunity and protection in young children, as they are most at risk of clinical illness. However, since children also have the highest parasite growth rates, they are the most difficult population in which to identify differences in time-to-infection due to differences in growth rates. A major question is whether time-to-infection studies are sensitive enough to detect such differences in immunity.
Herein, we have analysed data from a time-to-infection cohort in Kenya in order to test whether such an approach is able to differentiate varying levels of protection in a group of age-matched individuals in an endemic area. We find that when infection is detected by microscopy, time-to-detection identifies adults with slower parasite growth rates; however, it does not do this well in children. When infection is detected using a more sensitive PCR approach, more adults are detected as being infected (and infection is detected earlier), but time-to-detection is less useful at identifying individuals with slower parasite growth rates. This analysis demonstrates that time-to-infection studies are very sensitive to the distribution of parasite growth rates in the group being studied, as well as to the method used to detect parasites. Detection by microscopy segregates individuals by parasite growth rate better than detection by PCR does (and an even higher detection threshold would segregate them better still). However, in either case, the rapid growth of parasites in children makes it very difficult to identify differences in growth rates in children using this method.
Using a simulation approach, we investigated how different rates of infection would affect the ability of time-to-detection studies to sort individuals based on parasite growth rate. This illustrates that choosing populations with higher underlying infection rates will always lead to a greater role for parasite growth rates in determining time-to-detection, and thus make the approach more sensitive at sorting individuals by growth rate. Similarly, using a parasite-detection assay with a higher detection threshold will increase the effect of parasite growth on time-to-detection, and thus will also increase this sensitivity.
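The sketch below illustrates this simulation logic (it is not the simulation reported here; the cohort size, growth-rate distribution, and thresholds are assumed for illustration rather than taken from the Kenyan data). It varies the force of infection and the detection threshold and asks how strongly individual growth rates correlate with simulated times-to-detection.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def rank_correlation(force_of_infection, threshold, n=2000,
                     inoculum=10.0, mean_growth=0.5, sd_growth=0.15):
    """Spearman correlation between individual growth rate and simulated
    time-to-detection (all parameter values are illustrative assumptions)."""
    growth = np.clip(rng.normal(mean_growth, sd_growth, n), 0.05, None)
    t_init = rng.exponential(1.0 / force_of_infection, n)       # random timing of initiation
    t_detect = t_init + np.log(threshold / inoculum) / growth   # growth-dependent delay
    # A strongly negative value means slow growers are reliably detected later,
    # so sorting by time-to-detection recovers the ordering of growth rates.
    return spearmanr(growth, t_detect).correlation

for foi in (0.01, 0.05, 0.2):          # lower vs higher force of infection (per day)
    for thr in (1e2, 1e5):             # PCR-like vs microscopy-like detection threshold
        print(f"FOI={foi:.2f}/day, threshold={thr:.0e}: rho={rank_correlation(foi, thr):.2f}")
```

In runs of this kind, the (negative) correlation strengthens as either the force of infection or the detection threshold increases, consistent with the argument above.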
The mechanisms of naturally acquired immunity are generally divided into pre-erythrocytic versus blood-stage immunity. Pre-erythrocytic immunity affects the proportion of infectious bites that initiate blood-stage infection by blocking infected bites prior to or during the liver stages, thus affecting time-to-initiation (of blood-stage infection). Blood-stage immunity affects the growth rate of parasites in blood and hence the time from initiation of infection to detection. Our modelling of time-to-infection (Figure 6) reveals an inherent limitation of using such approaches to study naturally acquired immunity. Since so much of the outcome is determined by the random time-to-infection, it is very difficult to determine immune effects on parasite growth rate unless they are large. This also has potential implications for studies using time-to-infection as a means to assess vaccine effects on blood-stage parasite growth, as these may have very limited power to detect changes in time-to-detection. For studies of liver-stage vaccines the problem is slightly different, given that it is changes in the infection rate that are the primary concern (the limitations of the statistical power of such studies have been dealt with elsewhere [26]). In our previous studies [5,10], we found no evidence for differences in infection rate with age and showed that measured differences in parasite growth rate with age explain the observed differences in time-to-infection for different age groups. Moreover, the good fit of an exponential model of time-to-infection in children suggests little effect of pre-erythrocytic immunity. Tran et al. [23] have recently used a similar study design and PCR detection to show no difference in time-to-detection between age groups. We note that differences in infection rate with age may make it easier to detect differences in growth rate in groups with a higher infection rate (Figure 6). It is important to understand how time-to-infection studies can be used to understand differences in parasite growth rates, both in naturally acquired immunity and in studies of vaccination.
It is important to note that many studies of vaccine efficacy rely on time-to-clinical-episode, rather than time-to-infection. Infection and parasite growth are prerequisites for a clinical episode, and thus time-to-infection may still confound such studies. However, one approach to reducing this effect is to restrict the analysis of clinical episodes to individuals with demonstrated infection [27]. We note that, in our study, there were too few clinical episodes in the 10-week monitoring interval to allow a separate assessment of time-to-episode.
An important question is why, given the limited power of time-to-infection studies, many studies have reported significant associations between time-to-infection and both pre-erythrocytic and blood-stage immunity. One answer lies in the aggregation of age groups in many studies. In our analysis, we considered relatively narrowly stratified age groups. If we pool all age groups, it is relatively simple to show large differences in time-to-detection with age. Since immunity also varies with age (and exposure), it is obvious that, in the cohort as a whole, time-to-detection will correlate with the accumulation of immunity with age. However, since age is such a strong confounder here, it is questionable what additional information time-to-infection adds; one could simply have correlated immune response with age and presumably reached similar conclusions. Moreover, since immunity accumulates with age and exposure, it is not possible to disentangle which immune responses are simply the result of prolonged exposure and which may actually be playing a role in reducing parasite growth rates. Ideally, we would like to identify protective responses in young children who have similar levels of exposure but differ in the phenotype or specificity of their immune responses and in their infection outcomes. However, in young children, we predict that time-to-infection studies are not able to discriminate differences in blood-stage immunity and parasite growth rates unless infection rates are extremely high.
Time-to-infection studies are only one approach to identifying susceptible and resistant individuals. Other approaches include observing the presence or absence of infection in a given time interval, or the time to presence of clinical malaria (rather than simply infection). We note that if the underlying time to acquisition of infection is a random process, then the presence or absence of infection in a given time interval is also random. For example, if we truncated our study at day 28 (Figure 1A), we would see 41% of children aged 1 to 4 infected and 59% uninfected. However, whether children were in one group or the other would be due to the random distribution of time-to-initiation. Similarly, time-to-clinical-malaria depends on time-to-infection, the rate of parasite growth, and the underlying sensitivity to clinical malaria. Thus, the random time-to-infection may still play a dominant role. We note that others have suggested studying the rate of clinical episodes only in individuals shown to be infected [27-29]. Interestingly, this is sometimes used as a measure to decrease heterogeneity in exposure [30]. However, we suggest that this may also have the effect of reducing the impact of the random factor of when or whether infection occurred, even in the presence of homogeneous levels of exposure. Further work is clearly required to determine the role of random factors versus host factors in studies of resistance to clinical malaria.
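To illustrate why the day-28 split is essentially a chance outcome, consider the simplest model consistent with the figure quoted above: a constant, identical hazard of detected infection for every child (an assumption made here purely for illustration, ignoring the growth-dependent detection delay). The fraction detected by time t is then 1 − e^(−λt), and the observed 41% at day 28 fixes λ without invoking any between-child differences in immunity:

$$
1 - e^{-28\lambda} = 0.41 \;\Rightarrow\; \lambda = \frac{-\ln(0.59)}{28} \approx 0.019\ \mathrm{day}^{-1}.
$$

Under this homogeneous model, which side of day 28 a given child falls on reflects only the random draw of the waiting time, not that child's level of immunity.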
Detecting differences in parasite growth rate is difficult using time-to-infection studies, since the random timing of infection is often the major factor determining time-to-infection. Human challenge studies provide a much simpler approach for identifying differences in growth rate, as the time of infection is known. Since all participants are infected synchronously, any delay between individuals can be attributed either to a reduced initial burden of infection or to a reduced subsequent growth rate. Thus, both time-to-detection and serial measurement of parasitemia can be used to estimate growth rates following infection [13,31,32]. Previous studies have shown major differences in parasite growth rates in naïve versus exposed populations [33], and similar studies could, in principle, be used to correlate prior immune responses with in vivo parasite growth rates following natural infection. Alternatively, the direct measurement of parasite growth rates in time-to-infection studies provides a more direct way to identify differences in blood-stage immunity than using time-to-detection in these studies. Since time-to-infection studies involve regular sampling for infection, if parasites can be detected (by PCR) in two or more sequential samples, then parasite growth rates can be directly estimated, independent of when infection was initiated. Given the significant limitations of time-to-infection studies in detecting differences in either infection rates [26] or growth rates (illustrated here), we propose that direct measurement of parasite growth rates in vivo will be a much more useful correlate of immune control than time-to-infection itself.
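As a sketch of this direct-estimation idea (the function name, sampling times, and parasitemia values below are hypothetical, and exponential growth over the sampling interval is assumed), a per-day growth rate can be obtained from a log-linear fit to two or more sequential quantitative PCR measurements:

```python
import numpy as np

def growth_rate_from_samples(times_days, parasitemias):
    """Estimate an exponential growth rate (per day) by a log-linear
    least-squares fit to >=2 sequential quantitative PCR measurements.
    Assumes exponential growth across the sampling interval."""
    times = np.asarray(times_days, dtype=float)
    log_p = np.log(np.asarray(parasitemias, dtype=float))
    slope, _intercept = np.polyfit(times, log_p, 1)
    return slope

# Hypothetical weekly qPCR measurements (parasites/mL) for one individual
rate = growth_rate_from_samples([0, 7, 14], [50, 2_000, 90_000])
print(f"growth rate ~ {rate:.2f}/day; fold-change per 48 h ~ {np.exp(2 * rate):.1f}")
```

Because the estimate uses only the slope of log-parasitemia between positive samples, it does not depend on when the infection was initiated.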
Competing interests
All authors declare that they have no competing interests.
Authors’ contributions
KC, JV, JWK, and AMM designed and implemented the field study. MP and MPD developed the concepts, designed the approach, carried out the statistical analysis and mathematical modelling of the data, and wrote the manuscript. All authors read and approved the final manuscript.