Applying Qualitative Methods to RE-AIM Dimensions.
Qualitative research provides meaning and understanding and is used in both exploratory and explanatory research, in contrast to quantitative methods, which rely on numbers and address statistical outcomes. In general, qualitative methods help explain how and why results on individual RE-AIM dimensions, or patterns of results across dimensions (e.g., high reach and low effectiveness), occur. A wide variety of qualitative techniques and approaches can be used to address RE-AIM issues. As the focus of this paper is not to provide a comprehensive description of qualitative data collection and analysis methods, we refer the reader to excellent texts. [13–16] Rather than prescribing a single strategy, the methods selected should be tailored to the setting, research questions, and resources available. Table 1 provides simple translational questions that can be used to inquire about RE-AIM issues by and with clinicians and community members. [17] In summary, a variety of methods are conducive to qualitative exploration of RE-AIM dimensions, including interviews, observations, focus groups, photovoice, digital storytelling, and ethnography. Analysis methods are also varied and depend on the research or evaluation issue and question; choices include grounded theory, thematic analysis, matrix analysis, and immersion crystallization. Below we describe how qualitative methods can be used to address each RE-AIM dimension and the key issues involved. Table 2 provides examples of questions and possible qualitative methods for each RE-AIM dimension.
Table 2
RE-AIM Elements and Qualitative Data Questions and Examples

Dimension | Example questions | Example qualitative methods |
Reach | What factors contribute to the participation/non-participation of the participants? What might have been done to get more of the target audience to participate? | Focus groups and/or interviews at baseline and post-program to determine contributors to use of the program. Consider user-centered design and acceptability principles. |
Effectiveness | Did the intervention work to effect the outcomes noted? What other factors contributed to the results? Are the outcomes found accurate? Are the results meaningful? | Ethnography in the setting to observe and reflect on outcomes as they are occurring. Key informant interviews to add participant reflections on the observed outcomes. |
Adoption | What factors contributed to the organization and its individuals taking up the intervention? What barriers interacted with the intervention to prevent adoption? Was adoption partial or complete? Why did some staff members in these organizations participate while others did not? | Key informant interviews with organizational leaders and “on the ground” implementers: before the intervention, to identify their intentions and concerns; during it, to understand adoption barriers and facilitators in real time; and at the end, to explain the level of adoption and reflect on the intervention experience, gleaning multiple perspectives from those who did and did not adopt, and to what extent. |
Implementation | How was the intervention implemented? By whom and when? What influenced implementation or lack of implementation? What combination of implementation effects affected the outcome results? How and why was the program or policy adapted or modified over time? | Photovoice with participants to document their experiences as they move through the intervention. Critical incident analysis during the intervention to determine the interaction of the intervention with contextual and personal factors. Observation at baseline and during the intervention as a fidelity check and to deepen understanding of issues as they emerge in practice. |
Maintenance | Is the intervention being implemented (and adapted) after the core intervention period? What is sustained, what is discontinued, what is modified, and why? | Interviews and observation post-program to determine whether the intervention is continuing and why. |
Reach.
Standard means of assessing reach are to describe the number and percent of participants who take part in a desired initiative. From a qualitative perspective, the key issues concerning reach are understanding why people accept or decline participation, and describing characteristics of participants versus non-participants that are not available from quantitative data or records. For example, if the desired goal is to reach all patients with diabetes and a hemoglobin A1c level over 8, the quantitative measure of reach would be the number or percent participating in the initiative out of the total eligible. Knowing that 25% of patients are participating indicates the degree of penetration achieved by the initiative, but does not help to understand the situations in, and characteristics of, the reached population that distinguish them from non-participants. Often, quantitative approaches have been used to describe reach in terms of the demographics of the reached versus non-reached population. For example, perhaps the reach was 25%, but three quarters of the participants were female, Caucasian, and privately insured; thus, the program largely misses male patients and those insured by Medicaid. These data represent identifiable characteristics of participants that provide a more comprehensive picture of who is missing. However, there are often characteristics that affect participation that are not routinely collected, are not readily available from EMRs or other databases, or are not easy to quantify. Perhaps reach is limited by factors such as lack of trust in health care providers, disinterest in taking medication, or social determinants of health barriers faced by non-participants, such as lack of transportation or family support. These factors are difficult to ascertain without qualitative inquiry.
To thoroughly understand reach, it is often necessary to conduct more in-depth and qualitative work to identify root cause issues of suboptimal reach.
Effectiveness.
Effectiveness focuses on important clinical or behavioral outcomes of interest and is most frequently summarized quantitatively. Key qualitative issues relevant to effectiveness are understanding whether various stakeholders find the effectiveness findings meaningful, why interventions produce different patterns of results across different RE-AIM dimensions, reasons for differences in results across subgroups, and why unanticipated negative results are observed. Continuing the example above, with effectiveness defined as lowering hemoglobin A1c for patients with diabetes, assume that 50% were able to lower their A1c to under 8 and that the mean A1c reduction from the intervention was 0.8%. However, this is just the beginning of an answer to the question: “Was intervention X effective?”
Qualitative methods can contribute to assessing effectiveness in several ways. The first is understanding whether quantitative effectiveness findings are meaningful to various stakeholders (e.g., clinicians, patients). By meaningful we mean two things: first, whether the measured outcome is valuable to the stakeholder (i.e., it provides information that helps them make decisions and/or achieve their respective goals); and second, whether the actual quantitative change is large enough to make the intervention worthwhile. In other words, what outcomes are of value to which stakeholders, and how did the intervention fare on these factors? In our scenario, the question might be: was an average A1c reduction of 0.8% meaningful to clinicians and participants?
Corollary qualitative questions include: “Is reduction in A1c levels an appropriate indicator of an effective intervention for managing diabetes for the clinician and the patient? Does it provide them with information to decide what to do next and whether they are approaching or have achieved their personal (i.e., patient) or practice (i.e., clinician) goals?” Second, is the amount of change (a 0.8% A1c reduction) sufficient to make the intervention worthwhile for routine use? Does this change lead to enough improvement in participants’ everyday life or quality of life to make participation in the intervention worthwhile? Although surveys can be used to answer some of these questions, they often lack the depth of response needed to be insightful.
Qualitative measures add understanding to differential or heterogeneous results. Quantitative subgroup analyses might identify subgroups of participants who were more successful in achieving A1c reductions, but qualitative methods are best suited to answering why and how these groups differ in their level of success. Quantitative analyses alone are unlikely to identify the more nuanced sociocultural or practical features that are major contributors to program effectiveness. In our example, we might find that participants who perceived the intervention more favorably experienced better results than those who had more initial reservations about the diabetes initiative.
With any intervention, in addition to planned, intended effects, there may also be unintended effects or consequences, either positive or negative. These are highly relevant results, and it is important to understand the total pattern of results, both intended and unintended. Qualitative methods can help identify unintended outcomes that were not measured quantitatively but emerge in qualitative reports from clinicians and participants or are identified through observation. In our example, observations conducted by the research team during implementation might reveal that, as a result of participating in the diabetes intervention, patients spend less time with their physicians during the visit discussing other medical concerns, and/or physicians are less likely to initiate discussion about the patient’s emotional/mental health status (i.e., a shift in priorities during the visit).
Adoption.
Adoption is quantitatively operationalized as the number or proportion of settings and implementing staff who agree to participate in the intervention. The key qualitative issues in adoption parallel those of reach, but at the levels of settings and staff/implementers. It is important to understand why different organizations - and staff members within these organizations - choose to participate or not, and to understand complex or subtle differences among those organizations and staff members in terms of underlying dynamics and processes. For example, compatibility with mission and current priorities, external factors, and changing context (e.g., policy changes, new regulations, competing demands) often affect whether organizations and key agents within an organization choose to participate. Quantitative methods can identify standard organizational characteristics associated with participation (e.g., size, prior experience with related innovations, employee turnover rates), but cannot provide a full or detailed understanding of key and usually unmeasured issues. Often, empirical data are not available on key organizational factors (e.g., leadership, reasons for trying a new program).
Qualitative methods are extremely instructive for understanding reasons for adoption or lack of adoption across targeted staff and their settings. To identify a staff member’s rationale for participating or not in an initiative, semi-structured interviews can be extremely illuminating. Questions can range from more superficial and straightforward interview questions, such as “Please tell me your thoughts about participating in initiative X. Why did you not participate in initiative X?”, to more in-depth probing with specialized interview techniques that get at specific factors related to uptake of the intervention in a deep and detailed way.
For example, cognitive task analysis [18] is a collection of methods that allows much greater understanding of organizational representatives’ thinking about an issue, including how they make decisions as a group. Central is the concept of a mental model, or how one conceptualizes what something is and how it will work. [19] Such issues are critical to understanding decision making around participation and commitment to participation. A key aspect of interviews for understanding adoption is to purposefully select key informants who speak from different perspectives, ranging from individuals “in the trenches” with little authority to organizational leaders, and including those fulfilling different tasks, to provide triangulation among roles for a broader, deeper understanding.
Beyond interviewing, observation can often prove insightful in understanding the forces underpinning adoption. Observation may include a tour of the physical site to see the layout, structure, and space; it may include participant observation and/or role shadowing, in which the observer interacts with participants to explain what is happening and why. A formal ethnographic approach may or may not be used. Observation paired with interviews, where possible, is likely to be highly valuable because it may reveal inconsistencies between participants’ responses to interview questions and what they actually do in practice.
In our example of the diabetes intervention, perhaps the intervention was taken up by the three physicians but not the two physician assistants. Interviews and observation could reveal that the physician assistants in this setting only provide care for patients in acute situations and thus do not have the opportunity to refer patients to a diabetes management program. Examining adoption qualitatively allows for greater understanding of the factors influencing adoption at both the organizational and staff levels.
Implementation.
Implementation is quantitatively measured through indicators that include fidelity to the intervention protocol, adaptations made to the original intervention or implementation strategies, the cost (especially replication cost) [20, 21] to deliver the program, and the percent of key strategies that are delivered. Many sub-issues in investigating implementation lend themselves to qualitative inquiry. In fact, implementation is the RE-AIM dimension where qualitative understanding is most needed and often more meaningful than quantitative information. These issues include understanding the conditions under which consistency and inconsistency occur across staff, settings, time, and different components of program or policy delivery.
The traditional view of understanding implementation is that of fidelity. [22, 23] Knowing the extent to which fidelity (e.g., delivery of key components of a program) is achieved is an important aspect of understanding the contribution of the intervention to observed outcomes. If an effective intervention is not implemented well, then it is likely that its effects are diminished. Fidelity is usually measured by having delivery staff or observers complete checklists noting which intervention core components are delivered. While useful, additional inquiry using qualitative methods is often necessary to understand the how, why, and to what extent questions regarding implementation of an intervention.
Implementation can be understood at a deeper level by using specific interview techniques in which a participant walks through, step by step, a recent encounter with an actual patient and then answers questions in multiple passes about the people involved, the communication involved, the tools and resources needed, and other aspects. Implementation may also be better understood through observation and shadowing, with extensive field notes, to record what people do as they go through their day. In our diabetes example, perhaps a research assistant shadows the diabetes educator and discovers that many patients are receiving only two sessions instead of the recommended four. Interviewing reveals that the diabetes educator is overwhelmed with too many patients and has coped by reducing the number of contacts with each patient.
In understanding nuances in implementation, [23] the importance of documenting and understanding adaptations is increasingly recognized. [24, 25] In studies of scale-up or replication, interventions are almost never delivered and integrated precisely the way they were in prior efficacy studies or intervention guides. Increased understanding is needed of how and why programs are altered over time, by whom, for what reasons, and with what results. Qualitative methods, together with quantitative data in an iterative, mixed methods approach, can be essential to understanding adaptations. [26] Adaptations are often not negative; in many cases, making adaptations to improve the fit between the intervention and the local context improves the outcomes of the intervention in that setting. [27, 28] Understanding not just what was adapted, but when, why, and by whom, will provide far greater information about the contributions to the outcomes, as well as guidance for future scale-up efforts. Finally, understanding decision makers’ perspectives on the types of costs, resources, and burden involved in delivering a program, the person’s or organization’s values, and how they construe “return on investment” is important when implementing, adapting, or discontinuing (de-implementing) programs, and can often best be illuminated through qualitative methods. [29]
Implementation issues are ideal for qualitative methods. If resources permit, an iterative combination of written survey responses to standard questions about the intervention and its implementation, along with interviews of key informants and observations (such as role shadowing), can be triangulated to form a very thorough picture of not just how well an intervention is implemented, but why and how. This information is extremely useful in informing how the intervention may be translated to other settings and how barriers and difficulties can be overcome.
Maintenance.
Understanding program sustainability and the reasons why (a) individual benefits continue or fade, and (b) the organization delivering the intervention decides to continue or discontinue it, is important for future program design and scale-up. Maintenance is often not assessed, as grant funding runs out and sustainability suffers. [3] Therefore, planning for sustainability beyond grant funding is an important issue to address both in the initial intervention design and planned implementation strategies, and after the formal evaluation of a new intervention is over. This is increasingly required and especially important in pragmatic studies. [30] Qualitative methods, coupled with early and ongoing stakeholder engagement throughout a study, can help illuminate sustainability problems early and allow implementers to plan for and address them as needed. Brief interviews with those who continue, discontinue, or adapt an intervention can also be very informative. In our example of an intervention for diabetes, we might use interviews with stakeholders to identify existing infrastructure that could support ongoing use of the intervention and embed the intervention into that infrastructure. Case studies describing successfully sustained interventions in their contexts may provide lessons learned for others on how to plan for sustainability.