Background

Rehabilitation interventions are often complex and, as a result, their investigation can be particularly demanding [1, 2]. Complex interventions can be defined as those made up of a number of components, or active ingredients, that interact with each other and with outside factors to bring about changes in outcomes [3]. Complex interventions are regarded as having inherent heterogeneity [4]. They are often offered multiple times to multiple participants; the location and site of delivery can vary; and they can be delivered to individuals, families or combinations thereof [5]. Similarly, they are often designed as a series of sessions, allowing individuals time to learn and comprehend their content [6]. Rehabilitation interventions are complex and present a number of specific challenges:

- They often involve complex behavioural treatments, in contrast to passive or surgical treatments [7].
- They are often delivered face to face, where personal interactions and relationships play an important role in influencing patient engagement.
- They are linked treatment plans which need to be tailored to patients’ needs and wider social circumstances.
- They are context specific, defined by the interaction between the individual and the environment [8]. In other words, rehabilitation interventions can be shaped by the wider environmental and therapeutic milieu in which they are practised.
Evidence in process evaluation research
Methods
Formal consensus – Study design
Statements under consideration
| Area of interest | N of statements |
| --- | --- |
| Complex interventions and theoretical approaches | 4 |
| Context | 3 |
| Recruitment | 10 |
| Description of intervention staff | 4 |
| Description of intervention | 5 |
| Preparing and assessing intervention staff | 7 |
| Delivery of the trial intervention | 10 |
| Understanding and interpreting process evaluation results | 4 |
| Methodology | 10 |
Phase I - nominal group meeting
Data analysis
Phase II - second round of feedback
Data analysis
Results
Expert panel
| Current research role | Background | Phase I | Phase II |
| --- | --- | --- | --- |
| Professor of Clinical Biostatistics | Biostatistics | √ | |
| Doctoral Research Fellow | Speech pathology and therapy | √ | |
| Professor of Stroke and Older People’s Care | Nursing | √ | |
| Honorary Research Associate | Nursing | √ | |
| Senior Research Fellow | Nursing | √ | |
| Professor in Exercise Physiology | Exercise physiology | √ | |
| Reader in Psychology | Psychology | √ | |
| Clinical Senior Lecturer | Medical sciences | √ | |
| Professor of Stroke Medicine | Medical sciences | √ | |
| Research Officer | Physiotherapy | √ | |
| Theme | Description |
| --- | --- |
| The practicalities of doing research – being realistic about what ‘can be done’ | All participants agreed that there is a degree of compromise which affects what can realistically be achieved when evaluating processes. Participants expressed a desire not only to rate recommendations in terms of the need for them to be included in the guidelines, but also to rank these statements in terms of their relative importance. |
| Standpoint – the role of theory, concepts and roles | Participants stressed that any guidelines should explain the assumptions that underpin them. Participants’ epistemological and ontological stances strongly influenced their views on the proposed recommendations and their understanding of the guidelines’ content. Likewise, participants expressed differing views regarding the role that theory plays when designing and carrying out a process evaluation. They considered that, for guidelines to work, the underlying assumptions need to be clearly stated; in this way, the rehabilitation researcher can make an informed decision when following the proposed guidelines. |
| Investigating tailoring and ‘making connections’ | Participants identified the need for a process evaluation to investigate the level of tailoring and its impact on outcomes. They discussed in depth the challenges of assessing the degree of tailoring taking place when trialling a rehabilitation intervention. Participants widely agreed that, in the everyday running of a trial, it is unrealistic to assume complete consistency in the way professionals deliver the proposed rehabilitation intervention. |
| Who is the end user? | Participants unanimously agreed that all process evaluations should have clear aims and objectives, and that these will differ according to the type of trial under evaluation and the timing of the evaluation. The proposed guidelines need to state who the end users are; rehabilitation researchers are then responsible for tailoring the recommendations to best fit their evaluation aims. Participants agreed that the process evaluation guidelines would need to be tailored not only to a particular process evaluation, but also to end users’ needs. |
The consensus guidelines
- Theoretical work: addresses issues relating to the theoretical underpinnings of the trialled intervention. Researchers are guided to review the theoretical underpinnings not only of the rehabilitation intervention but also of the implementation approach. For example, Byng et al. [30] carried out a process evaluation of an intervention to improve primary healthcare for patients with long-term mental illness, following a realist evaluation approach. They reported that realist evaluation enabled the team to identify the interactions taking place not only between intervention components but also with the external context in which they were embedded.
- Design and methods: describes a number of steps aimed at treating a process evaluation as a piece of research in its own right. Researchers are advised to provide a clear definition of the chosen process evaluation terminology, define clear aims and objectives, and provide a detailed description of the selected data collection methods and their timing. Finally, the guidelines recommend that researchers address the interactions between process and outcome measures. For example, a number of protocols for process evaluations have been published alongside the main trial’s protocol [31, 32].
- Context: addresses the importance of understanding and accounting for contextual factors, their role and their potential impact on processes and outcomes over time. An example is the process evaluation of a randomized controlled trial (RCT) of a programme for caregivers of inpatients after stroke (the TRACS study) [33]. This evaluation investigated the impact that contextual factors had during the process of embedding the intervention into the routine practice of a stroke unit. The researchers explored in detail contextual factors such as organisational history and policies, team relationships, responsibility sharing and staff engagement.
- Recruitment and retention: the process evaluation should review the outcome evaluation’s recruitment and retention procedures in order to identify potential barriers and facilitators. It should also clearly describe the strategies and criteria informing the recruitment of participants into the process evaluation. Scianni et al. [34] reviewed their recruitment procedures in detail and identified transport to and from the health setting as the main barrier to participation in a trial investigating the impact of gait training for stroke survivors.
- Intervention staff: firstly addresses the need to investigate the characteristics of the staff delivering the intervention and to identify how these can affect intervention implementation and impact. Secondly, it recommends that the process evaluation review the training provided to intervention staff in order to identify possible impacts on outcomes. For example, Chung [35], in a study assessing the impact of a reminiscence programme for older adults with dementia, provided a detailed description of the training component and the expected learning outcomes. Intervention staff’s knowledge of delivering the programme was assessed using quizzes and questionnaires.
- Delivery of the intervention: the guidelines recommend that process evaluation researchers focus on tailoring and investigate the strategies in place to guide and measure it. In addition, researchers should investigate barriers and enablers to implementation by reviewing the strategies in place to improve or support the fidelity of the rehabilitation intervention. The process evaluation should also review the strategies in place to measure ‘dose delivered’ and ‘dose received’. Finally, participants’ experiences and the acceptability of the intervention should be investigated. To date, it is rare for research studies to provide intervention providers with clear guidance on how to assess the ‘right amount’ of tailoring [27]. However, studies such as Mayo et al. [36] set an example by investigating how an exercise programme post-stroke was tailored to patients’ needs whilst keeping to the protocol guidelines.
- Results: addresses the need to describe in detail the synthesis of process and outcome evaluation results. This synthesis should be informed by the theoretical underpinnings of both the outcome evaluation and its implementation. For example, in their study of a rehabilitation intervention for adults with brain injury, Letts and Dunal [37] developed a logic model through consensus work, which integrated information on processes and outcomes.
| Section | No | Recommendation |
| --- | --- | --- |
| Theoretical work | 1.1 | Review and state the theoretical underpinnings of the rehabilitation intervention under investigation |
| | 1.2 | Review and state the theoretical underpinnings of the implementation approach of the rehabilitation intervention under investigation |
| | 1.3 | Describe in depth the structure of the rehabilitation intervention in terms of its components and their potential interactions |
| Design and Methods | 2.1 | Provide a clear definition of chosen terminology (e.g. adherence, fidelity, integrity, etc.) |
| | 2.2 | Have a defined scope and clear aims and objectives – a process evaluation protocol should be produced |
| | 2.3 | Clearly describe and justify the use of a set of measures and evaluation criteria for the process evaluation |
| | 2.4 | Provide a detailed description and justification of selected process evaluation data collection methods |
| | 2.5 | Clearly explain and justify the chosen timings for process evaluation data collection |
| | 2.6 | Collect relevant/appropriate data from both intervention and control sites |
| | 2.7 | Use a variety of methods and strategies to gather data, including both qualitative and quantitative approaches |
| | 2.8 | Aim to publish process evaluation results alongside outcome evaluation results (in order to reduce the chance of bias) |
| | 2.9 | Address the interactions between process and outcome evaluations (e.g. researchers should decide whether to take the risk of threatening the outcome evaluation by evaluating processes, or to accept that there will be tailoring, which can be guided through the process evaluation) |
| Context | 3.1 | Clearly describe and investigate contextual factors and their potential impact on the process and outcome evaluation. The role of context in shaping both implementation (e.g. how it is done) and impact (whether it works) should be clearly investigated |
| | 3.2 | Account for the dynamic nature of context – investigate contextual changes and their potential impact on the process and outcome evaluation over time |
| Recruitment and Retention | 4.1 | Review the outcome evaluation’s recruitment procedures in order to identify potential recruitment barriers and facilitators |
| | 4.2 | Review the strategies that the outcome evaluation has in place to maximize participant retention levels |
| | 4.3 | Clearly describe the strategies and criteria informing the recruitment of participants into the process evaluation |
| | 4.4 | Investigate the barriers and facilitators to the recruitment of participants into the process evaluation |
| Intervention staff | 5.1 | Review the characteristics of the outcome evaluation intervention staff (e.g. level of skill, experience, number, demographics, motivations and perceptions regarding the outcome evaluation) and identify those potentially impacting on intervention delivery and impact |
| | 5.2 | Review the training provided to intervention staff in order to identify possible impacts on outcomes. Explore issues such as: does the training define performance criteria and a set of goals to achieve? Is skill acquisition/competence of intervention staff assessed post training? Does the training include systems to maintain and support staff’s skills over time? |
| | 5.3 | Review the outcome evaluation’s strategies for assessing the competence of intervention staff over time in order to identify possible learning curve effects |
| Delivery of the intervention | 6.1 | Investigate any strategies in place to guide, inform and measure the tailoring of the outcome evaluation intervention |
| | 6.2 | Review and assess the quality of any implementation strategies to improve/support the fidelity of the proposed intervention |
| | 6.3 | Investigate, in detail, barriers and enablers to the implementation and delivery of the intervention, and evidence surrounding the chances of implementation failure |
| | 6.4 | Review the strategies in place to measure the ‘dose delivered’ |
| | 6.5 | Review the strategies in place to measure the ‘dose received’ |
| | 6.6 | Investigate in detail participants’ experiences and the acceptability of the intervention |
| Results | 7.1 | Describe in detail the synthesis of process evaluation and outcome evaluation results |
| | 7.2 | The theoretical underpinnings of both the outcome evaluation intervention and its implementation should inform the explanations and the synthesis of process and outcome evaluation results |