Background
Term | Definition |
---|---|
Context | In realist terms, context is any system, structure or condition that affects outcomes, including individuals’ attributes and social interactions [3] |
Mechanism | Mechanisms are what make an intervention work: “They are not the observable machinery of program activities, but the response that interaction with a program activity or resource triggers (or does not trigger) in the reasoning and behaviour of participants” [70] |
Process effects | These are proximal impacts that influence intervention outcomes or are of evaluative interest for other reasons (e.g. they help explain unexpected variation in implementation); others use the term ‘formative outcomes’ [84]; desired process effects are those the investigators consider to be prerequisites for a successful intervention |
Programme theory | This is defined as “An explicit theory or model of how an intervention contributes to a set of specific outcomes through a series of intermediate results” [85]; programme theory should be plausible, useful and consistent with the evidence |
Proposition | |
Realist process evaluation | Process evaluation helps explain how an intervention had its effects [7]; realist process evaluation applies realist principles to this process and investigates causal patterns (known as demi-regularities) to show how intervention strategies may be operating under what conditions to generate process effects for which groups [3] |
Retroduction | This is a form of analysis that “involves constant shuttling between theory and empirical data, using both inductive and deductive reasoning” [88] |
Understanding interventions
The study being evaluated: SPIRIT
The role of process evaluation
Aims
Methods
Realist evaluation
Initial programme theory
SPIRIT will engage and motivate agency leaders to ‘own’ the intervention using audit feedback, deliberative goal-setting and programme tailoring. This agency-driven approach will generate a priority-focused programme that offers locally relevant practice support and accommodates differences in agencies’ values, goals, resources and remits. The programme will comprise a suite of andragogical activities, tools, and connections across the research-policy divide that provide resources and build knowledge, skills and relationships. It will be supported via modelling and opinion leadership by agency leaders and dynamic external experts. CEOs will promote SPIRIT in their agencies and liaison people will facilitate the tailoring and implementation. These strategies will act synergistically to stimulate and resource participants at different organisational levels, leading to changes in values, practice behaviours and agency processes. This will facilitate increased use of research in policy processes.
Process effects
Data collection
- Semi-structured interviews with 5–9 participants from each agency early in the intervention period (n = 33) and post-intervention (n = 43). Interviewees were purposively selected for maximum variation in work roles, attitudes to research and experiences of SPIRIT in order to explore the breadth of dimensions expected to influence interactions with the intervention [7]. Open-ended questions and prompts explored interviewees’ work practices and contexts, and their experiences and perceptions of SPIRIT, including their explanations for any change. The interview questions are available elsewhere [40]. This combination of context-, causal- and impact-focused questions across diverse participants was used to refine theory about what was working (or not), for whom and in what circumstances.
- Observations of intervention workshops (n = 59), and informal opportunistic conversations with participants before and after workshops. Workshops were audio recorded and field notes were written immediately afterwards. A checklist was used for fidelity coding through which we monitored the extent to which ‘essential elements’ of the intervention were delivered (detailed elsewhere [59]).
- Anonymous participant feedback forms (n = 553). These comprised Yes/No ratings on six statements: (1) The workshop was interesting, (2) The workshop was relevant to my work, (3) The workshop was realistic about the challenges and constraints of our work, (4) The presenter had appropriate knowledge and skills, (5) It is likely that I will use information from this workshop in my work, (6) It is likely that SPIRIT will benefit my agency (Additional file 1). Some workshops had additional items, e.g. the forms for audit feedback forums included items about the clarity of the data and participants’ confidence that SPIRIT would be adequately tailored for their agency. All forms contained three open-ended questions: (1) ‘What worked well?’, (2) ‘What could be improved?’ and (3) ‘Any other comments?’ Forms were distributed prior to intervention workshops and completed immediately afterwards.
- Formal and informal interviews with the people implementing SPIRIT and the commissioned presenters.
- Limited access to information from the interviews conducted as part of SPIRIT’s outcome evaluation. These interviews focused on (1) organisational support for research use (n = 6), and (2) the role of research in the development of a recent policy or programme (n = 24). We reviewed transcripts from the first round of interviews (prior to the intervention), but thereafter were blinded to these data so that they would not influence the ongoing process evaluation analysis.
Data management and analysis
Qualitative data
Quantitative data
Realist analysis
Results
Implementation
Process effects
Desired process effects for the trial | Observed process effects | Supporting data sources |
---|---|---|
1. Leaders espouse SPIRIT and its goals | All CEOs disseminated initial information about their agency’s participation in SPIRIT, but only four had a continuing visible role in supporting the intervention, e.g. sending updates and attending workshops; some executive members participated in each site, but to very different extents, ranging from a half-hour ‘drop-in’ to repeated and enthusiastic participation; many managers talked about SPIRIT in team meetings and encouraged their staff to attend | Interviews at two time points (early-intervention ‘context’ and post-intervention ‘perceptions and impact’), ad hoc conversations with participants |
2. Liaison people facilitate the intervention effectively | The use of a liaison person was very effective in the sites where the liaison person was enthusiastic about SPIRIT; four of the six worked hard to promote, tailor and administer the intervention, harnessing insider knowledge and using creative strategies, whereas the other two did not tailor or promote the intervention as thoroughly and expressed negative views to colleagues about SPIRIT | Observations of workshops, interviews and conversations as above, feedback from the SPIRIT team about their communications with liaison people |
3. Targeted policymakers participate in, and are receptive to, intervention activities | Participation levels were good in that they met the SPIRIT team’s expectations for each site; each agency targeted different groups for different components so proportions and types of participants varied, but liaison people were satisfied with attendance and were occasionally surprised by very high numbers; attendance at workshops averaged between 11 and 20 participants per workshop, with between 102 and 158 total occasions of attendance across the six sites; there was full participation in other activities (e.g. trialling the commissioned research services); receptivity varied tremendously within, but especially between, agencies: see next section for more details, including possible reasons | Quantitative fidelity data from observations (using checklists and sign-in sheets), observations, interviews and conversations as above |
4. Participants actively contribute to the content of those activities | Where there was opportunity, participants contributed greatly to workshop content via questions, discussion and case examples; interactivity was limited on some occasions in all agencies, usually because the presenter provided few opportunities; in larger groups, more senior staff tended to dominate, but other participants said this was still useful. Some liaison people helped craft workshop content and provided agency-based case examples; one agency co-presented a workshop; the agency staff nominated to test the research commissioning service were actively involved | Observations of workshops, including descriptive accounts of interactions and dynamics |
5. Participants identify potentially useful ideas, techniques and/or resources | 94% of those who completed a feedback form said they found workshops to be both relevant to their work and realistic about policy challenges and constraints; many interviewees identified specific benefits from SPIRIT, including improved awareness of useful researchers and research resources, understanding of the evidence relating to a policy problem and access to existing agency resources | Participant feedback forms, observations of workshops, interviews and ad hoc conversations with participants and liaison people |
6. Participants use, or plan to use, these ideas, techniques and/or resources | Workshops facilitated less discussion than intended about how learning might be applied, but 95% of participants who completed a feedback form agreed, “It is likely that I will use information from this workshop in my work”; some interviewees said they planned to use ideas or resources, and a few had done so, especially newer staff; three liaison people had managerial-approved plans underway for research-focused education and/or systems improvement, e.g. mandated consideration of research in policy proposals; two agencies had plans to use their commissioned research products | |
Desired process effects for the evaluation | Observed process effects | Supporting data sources |
7. Liaison people facilitate data collection effectively | All liaison people facilitated data collection sufficiently, although it was occasionally delayed and required prompting; where liaison people championed SPIRIT, they used additional strategies to encourage participation in data collection; in one agency, this achieved a 100% response rate | Outcome measures completion figures, interviews with participants and liaison people, feedback from SPIRIT team |
8. Targeted participants take part in data collection | In all agencies, there was full participation in the two interview-based measures, but more variable responses to the anonymous online survey; response rates dipped in the second measurement point, but stabilised after the survey was shortened; overall, the online survey response rate was 56% and there was a mean 74% response rate for process evaluation feedback forms; only three-quarters of invitees took part in a process evaluation interview | Outcome measures completion figures, interviews with participants and liaison people |
9. The benefits of the intervention are judged to outweigh the burdens of the trial | Interviewees differed considerably in their assessments of the intervention, but where they felt it had benefits, these were deemed to outweigh the trial’s burdens; this included the liaison people who championed SPIRIT from the start; workshops with high-profile and dynamic ‘service-orientated’ presenters were especially valued; nearly 98% of all feedback form respondents agreed with the statement, “It is likely that SPIRIT will benefit my agency” | Early-intervention and post-intervention interviews, ad hoc conversations with participants and liaison people, feedback form data |
How were these process effects generated?
Mechanism 1
Mechanism 2
Mechanism 3
Mechanism 4
Mechanism 5
Mechanism 6
Mechanism 7
Mechanism 8
Mechanism 9
Mechanism interactions and feedback
Revised programme theory
Initial programme theory (a-contextual) | Revised programme theory (contextually contingent) |
---|---|
SPIRIT will engage and motivate agency leaders to ‘own’ the intervention using audit feedback, deliberative goal-setting and programme tailoring – this agency-driven approach will generate a priority-focused programme that offers locally relevant practice support and accommodates differences in agencies’ values, goals, resources and remits. The programme will comprise a suite of andragogical activities, tools and connections across the research-policy divide that provide resources and build knowledge, skills and relationships, and will be supported via modelling and opinion leadership by agency leaders and dynamic external experts. CEOs will promote SPIRIT in their agencies and liaison people will facilitate the tailoring and implementation – these strategies will act synergistically to stimulate and resource participants at different organisational levels, leading to changes in values, practice behaviours and agency processes. This will facilitate increased use of research in policy processes | Where agencies have an existing orientation to use academic research and are on a trajectory of improved use with perceived room for improvement, SPIRIT will be used to complement or trigger organisational initiatives. Where liaison people and agency leaders believe in the value of the intervention and have confidence in the measures, they will play a pivotal role in tailoring the intervention and championing its goals. Leaders will be motivated by deliberative audit feedback and goal-setting. In all sites, ownership will be increased by greater consultation, collaboration and choice. Agency-attuned communications will be vital in explaining goals, conveying value and addressing concerns. Andragogical activities, tools and connections across the research-policy divide will be valued in all agencies where they leverage existing strengths and address local concerns pragmatically. Staff will make use of these opportunities where they see concrete benefits, and newer staff may benefit most |