Background
Replicating studies is a vital part of the scientific process of accumulating knowledge [
1]. It has been recommended that at least two rigorous trials should have shown an intervention to be efficacious, in order to avoid building recommendations on findings due to chance or specific to a time, place or person [
2]. It follows that users of research evidence are encouraged to base their decisions on systematic syntheses of research, e.g. systematic reviews and meta-analyses, rather than on individual studies [
3,
4]. This puts replication at the centre of the research-to-practice pathway.
Replications can be direct (an exact copy of the study) or conceptual, which involves testing the intervention with different methods or, more commonly, in different contexts, thus investigating the generalisability of the findings [
5,
6]. Direct replication requires adherence, whereas conceptual replications imply some type of adaptation to the original intervention, target population or setting. In a globalised world, conceptual replications may involve testing the intervention in a new country, where differences in terms of care systems, norms, regulations and cultures are to be expected [
7]. Here, researchers need to consider whether it is possible to follow the original intervention protocol or whether adaptations are needed in order to make the intervention work in the new context. Adaptations may be particularly relevant for interventions that consist of several components that interact with each other as well as with factors related to the implementation and context where they are set (i.e. complex interventions) [
8]. Previous research into the adherence and adaptation dilemma has focussed on how and why professionals adapt evidence-based methods, but little is known about how researchers approach this issue.
According to the principle of programme uniqueness [
9], interventions are developed and evaluated under circumstances that differ from those in which they will be used (e.g., in funding, homogeneity of patients and training of staff). It can be argued that this makes direct replication of interventions impossible; instead, re-testing of interventions will involve changes in various aspects of the intervention or context [
9].
This, in combination with the fact that interventions are seldom sufficiently described to determine the degree of adherence and adaptations [
10‐
14], poses challenges for knowledge accumulation, because what seems like the same intervention may in fact comprise fundamentally different versions of it. The more variation in how the interventions are composed (and the less that is known about it), the greater the challenge of determining how to categorise interventions in systematic reviews [
8]. Unknown variation between interventions that on paper are the same may result in erroneous conclusions when results from individual studies are synthesised. In addition, information about variations in intervention components, such as when they are adapted to fit different contexts, is lost [
8]. Such information is central for decision makers and professionals who need to sort out what methods work in their context. Thus, the way in which adaptation and adherence are approached and reported in replication studies has implications for what is being reproduced and, in the end, what interventions patients receive and the likelihood that these will benefit them. However, there is a lack of studies on how adaptation and adherence are approached in replication studies.
Empirical findings indicate that interventions might require adaptations when used in new contexts, such as a new country, in order to obtain positive outcomes. For example, a recent meta-analysis evaluating the effects of evidence-based youth psychotherapies showed that the interventions were no longer effective when they were applied, without any adaptations, in other countries [
15]. The authors concluded that evidence-based methods may not generalise well beyond their culture of origin and that adaptations may be needed. Another meta-analysis comparing interventions that were applied in a new country with or without adaptations showed that although non-adapted versions were effective, the adapted versions were more effective [
16]. This is in line with meta-analyses and reviews from the field of culturally adapted interventions covering psychotherapy, substance abuse and family interventions, showing that culturally adapted interventions are at least as effective as non-adapted ones, while generally being superior in recruiting and retaining minority groups (see, for example, [
17‐
19]). A recent systematic review of evidence-based psychotherapies showed similar results, indicating that adapted interventions were generally effective, albeit few of them were tested against the original protocol [
20].
In line with these findings, several researchers argue that adaptations are necessary at all steps of the research-to-practice pathway [
21,
22]. Yet, while there is emerging knowledge about what type of adaptations professionals make and why (e.g., [
12,
23,
24]), less is known about adaptations when researchers evaluate interventions in new contexts. Thus, this study aims to describe adaptations that researchers make when conducting replication studies and to explore how they reason about adaptations and adherence, and how they report adaptations in scientific journals. To our knowledge, this is the first study on researchers’ views on adherence and adaptations in replication studies.
Discussion
This study addresses how researchers who replicate interventions in another country reason about adherence and adaptations. The findings suggest that evaluating interventions in new contexts introduces a conflict between adhering to the original intervention protocol and making adaptations to make the intervention fit the context. The resolution of this conflict was influenced by the researchers’ implicit aim of the inquiry, the nature of the evidence underlying the intervention, the context to which the intervention was transferred and the target stakeholders’ (professionals and clients) views. Reporting also involved conflict, as the wish to be transparent clashed with practical constraints such as word limits. How the conflict between adherence and adaptations is approached has several implications for intervention research, as well as for how findings are synthesised across studies (e.g., in systematic reviews and meta-analyses). This is discussed below.
Despite the fact that all included studies were replications in that they involved evaluating a previously tested intervention in a new context, the findings indicate that the researchers’ implicit aims with their studies could differ. Two types of implicit aims emerged: 1) to replicate an intervention by repeating a previous study as closely as possible, albeit in a new context (i.e. a conceptual replication), or 2) to incrementally improve an intervention, making sure learning from previous experiences was harvested when testing in a new context. These two aims seem to reflect two divergent approaches to knowledge accumulation. Those stating that the aim of the study was to replicate a previous intervention emphasised that successful reproduction of the original study is a necessary step to ensure that findings are trustworthy, before being spread and implemented in practice. With this approach to knowledge accumulation, adherence is a core feature. The approach aligns with the way the research-to-practice pathway is currently set up, with its focus on internal validity before external validity and the gradual process of establishing efficacy through a series of distinct steps, from development of interventions and feasibility studies to direct and conceptual replications, first under optimal conditions (efficacy studies) and then under normal conditions (effectiveness studies) [
2,
27].
The second approach, incrementally improving the intervention, has other implications for the accumulation of knowledge and the research-to-practice pathway. Rather than aiming to establish a stable knowledge base through repetition, the researchers described how each study was set up to refine the knowledge base, which is in line with Rogers’ notion of reinvention [
28]. This kind of replication study is, thus, neither a direct replication (test of the exact same intervention in the same type of context but by another research group), nor a typical conceptual replication, where previously tested interventions are re-tested in different populations or contexts [
5,
6]. Instead, this strategy for accumulation of knowledge can be described as incremental, based on the emphasis on continuously improving interventions, and ultimately outcomes, over testing the boundaries or generalisability of the original intervention. Incremental accumulation thus does not follow the distinct steps of first developing and then testing interventions, but rather a continuous development-testing loop.
The strategy for knowledge accumulation – accumulation through replication (direct or conceptual) or incremental accumulation – has implications for how interventions are designed, analysed and reported. Whereas investigating the intervention as an entity may be feasible for replications without adaptations, we suggest that adaptations and the subsequent incremental approach to knowledge accumulation require more focus on understanding the impact of different intervention components and the intervention’s interaction with context. This would change the research question from whether an intervention as a whole is efficacious to which intervention components, and which combinations of components and contexts, result in a certain outcome, as well as what works for whom, when and why.
Increased understanding of intervention components and of interactions between components and contextual factors can be achieved either through alternative ways of analysing data, without changing the design of the trial, or through alternative trial designs. Alternative ways of analysing the data include dismantling studies, component analysis, mediation and moderation analysis, realist evaluation, and Bayesian statistics [
29‐
32]. Examples of alternative designs are adaptive (flexible) designs, factorial designs, randomised micro-trials and hybrid designs, as well as tailored interventions and implementations [
33‐
37]. These suggestions are in line with calls for designs that make intervention studies more responsive to societal changes [
38]. It is also in line with a recent review of adaptations to evidence-based psychotherapies, which concluded that alternative designs are needed to illuminate the impact of specific adaptations and gave specific design recommendations for how this can be achieved [
20]. Such studies may, for example, include testing adapted versus non-adapted versions in the same trial [
7]. None of the researchers in this study reflected on this possibility. Rather, they seemed to treat the adaptation-adherence dilemma as a matter of discourse rather than an empirical question to be tested.
Incremental accumulation of knowledge may also call for complementary strategies to research synthesis. To reconcile the need to both summarise evidence and retain details about interventions and the context in which they are used, a number of alternative strategies have been developed. For example, by integrating programme logic models with systematic reviews, core components, change mechanisms and contextual influences can be explicated [
39,
40]. Specific varieties of such approaches are realist synthesis and qualitative comparative analysis, which aim to illuminate what works, for whom, when and under what circumstances [
41,
42]. Other approaches that may be useful when knowledge accumulation is incremental are meta-regression, which allows analysis of moderators across studies, network meta-analysis and mixed treatment comparisons [
43]. The latter two are methods that allow three or more interventions (or versions of interventions) to be compared, instead of only contrasting effects between intervention and control groups [
44,
45].
In addition to the findings showing that the researchers’ implicit aim of the inquiry influenced how they reasoned about adherence and adaptations, the researchers were also influenced by the nature of the evidence underlying the intervention, the context to which the intervention was transferred and the target stakeholders’ (professionals and clients) views. All three themes can be described as dealing with the need to create a practical, philosophical or cultural fit between the intervention and the context where it was set. Adaptations can be viewed as the tool to create this fit. This expands previous findings reporting that creating fit between intervention and context is an important reason why practitioners make adaptations [
23,
24]. It also shows that the fit concept, which has previously been studied extensively in organizational research, including fit between the organisational environment (e.g., work processes) and people [
46], and between interventions and an organisation and its members [
47], is also applicable in the context of intervention research. As adaptations may be a way to achieve fit, they may be integral to making interventions work in new settings. Overall, the fact that researchers may need to consider adaptations regardless of their strategy for knowledge accumulation indicates that the adherence and adaptation issue is central for understanding how interventions are conducted not only in clinical practice but also in replication studies. It also underlines the importance of describing interventions, contexts and adaptations in greater detail, as encouraged in recent reporting guidelines [
48].
The respondents did not describe any efforts to control, monitor or support how adaptations were made by the professionals involved in the intervention studies. This was despite the fact that they often anticipated that the professionals were going to make adaptations. This is in contrast to how the researchers, for example, used manuals to support adherence. The combination of using manuals to support adherence whilst neglecting to control, monitor or support adaptations may increase the risk of adaptations being done ad hoc or in a way that is conceptually inconsistent with the intervention, something that is common in clinical practice (e.g., [
23]). This calls for intervention researchers to focus more on monitoring adaptations, both planned and unplanned, so that these can be described and analysed. There is a need to support professionals in conducting adaptations so that they are made proactively and in line with the logic of the intervention. There are several frameworks that can be used in this regard, providing systematic, theoretically guided approaches to adaptations (e.g., [
49‐
52]). There is also a need to support professionals in monitoring adaptations as the intervention unfolds, providing them with a feedback system that makes it possible to manage adaptations in the light of client progress [
53‐
55].
Reporting adaptations in scientific journals evoked a conflict between the norm of transparency and the practical reality. The respondents described how, in theory, transparency was non-negotiable and all adaptations made to interventions should be reported. However, in practice, minor adaptations were not mentioned; word limits and the fear of obscuring the story made overly detailed descriptions impossible. As most adaptations were perceived as minor, many were left out. However, this is risky, particularly when core components are not known. Even though the adaptations may be perceived as minor by the researcher, they may in fact be critical to intervention success or to the professionals aiming to use the method in their practice [
56]. Overall, the reporting of adherence and adaptations in peer-reviewed articles did not seem to do justice to the deliberations that the researchers vocalised.
Methodological considerations
This study is exploratory in nature. Several limitations might have biased the results and our conclusions. One is that the participants were identified based on information available in articles; only principal investigators of studies reporting adaptations were invited to participate. This might have biased the sample towards researchers who were more aware of the need to report adaptations or who were published in journals that encouraged this type of reporting. It is also possible that their answers were influenced by social desirability, and, as always with interviews, it was their subjective experience that was in focus; it is possible that others involved in the different studies would have provided contrasting perspectives.
In addition, some of the studies were conducted more than 20 years ago, raising the issue of memory bias. Some respondents pointed out that adaptation was not on the agenda at the time of their study. Thus, it is possible that the reporting of adaptations may be more frequent in later studies, or that current knowledge about adaptations might have distorted the original motives for adaptation. Furthermore, this study primarily deals with psychological and social interventions (behavioural health interventions) in health and social care, and all studies were conducted in Sweden. Nevertheless, the sample did cover a broad range of different interventions with different target groups and thus did not focus on only one specific method.
Conclusions
This study adds to the limited knowledge about how and why researchers make adaptations and how they reason about adherence and adaptations when conducting replication studies. The findings show that adherence and adaptations are related to implicit assumptions about the role of the trial and suggest that it matters whether the goal is 1) to test an intervention in a new context to confirm or disconfirm previous findings, or 2) to expand or limit the application of the intervention, making sure learning from previous experiences is harvested by improving the intervention. The latter goes beyond what is usually involved in so-called conceptual replications, because not only the context where the study is set varies, but also the intervention. As the goal is improvement rather than reproduction, we call this strategy for accumulation of knowledge “incremental accumulation”. We suggest that direct and conceptual replications and incremental accumulation require different approaches to adaptation and adherence; adherence being central to direct replications and adaptations to incremental accumulation. As incremental replications may involve variation in intervention components as well as variation in, and interaction with, the context, methodologies and designs that allow this variation to be studied, not controlled, are warranted.
To be able to accumulate findings from incremental replications in systematic reviews, alternative approaches are needed also at this stage of the research-to-practice pathway. By increasing awareness of the implicit aims underlying replication of interventions, a more systematic consideration of how best to accumulate knowledge in systematic reviews can be achieved. In addition, regardless of the type of replication, the findings suggest that interventions often need to be adapted to fit the context of application, for practical or cultural reasons. Thus, adaptations need to be monitored and reported, not only adherence. Lastly, the participants acknowledged that professionals and clients may often make adaptations. This points to a greater need for the research community to provide structured support for adaptations, not only for adherence.