Contributions to the literature
- Published rapid qualitative analysis approaches often use transcripts; our approach shows how notes, verified against audio recordings, can be used to ensure rigor while saving time and eliminating transcription costs.
- Published rapid qualitative analysis approaches often use inductive analysis; our approach shows how to conduct deductive rapid analysis using the Consolidated Framework for Implementation Research (CFIR), which allows researchers to compare results more easily across studies.
- CFIR users have reported that the framework is difficult to use because traditional analysis approaches are resource intensive; the rapid analysis approach described here may facilitate use of the CFIR by experienced users.
Background
Methods
Evaluation background
Methods for the traditional and rapid approaches
Data collection: semi-structured interviews
Data analysis: traditional and rapid approaches
 | Traditional Deductive CFIR Approach (Cohort A) | Rapid Deductive CFIR Approach (Cohort B) |
---|---|---|
Data Management | | |
 | Create MS Word CFIR Facility Memo Template. Create project and codebook in qualitative software program. See Table 2 and Additional File 2. | N/A |
aCreate MS Excel CFIR Construct by Facility Matrix Template (CFIR constructs as rows and facilities as columns). See Additional File 3. | | |
Time | 1 h/project set-up | .5 h/project set-up |
 | bTranscribe audio recordings. | N/A |
 | De-identify and import transcripts into software program. | N/A |
Time | .5 h/interview | 0 h/interview |
 | Copy and paste summaries, ratings, and rating rationales into matrix. See Table 3 and Additional File 3. | N/A |
Time | .5 h/facility | 0 h/facility |
Total Time | 1 h/project set-up + (.5 h/interview + .5 h/facility) | .5 h/project set-up |
Data Collection | | |
aConduct and record semi-structured interviews. See Additional File 1. | | |
Total Time | 1 h/interview | 1 h/interview |
Data Analysis: Coding and Adjudication (process repeated for each interview) | | |
 | Primary analyst: Code verbatim transcript independently in qualitative software program and use comments as needed. | |
Time | 1.5 h/interview | 1.72 h/interview |
 | Secondary analyst: Code verbatim transcript independently and use comments as needed. | Secondary analyst: Review notes in matrix, listen to audio recording, and use comments and different colored text to highlight additional notes, edits, quotes, or timestamps. |
Time | 2.5 h/interview | 1.70 h/interview |
 | Primary analyst: Review coding for differences and meet with secondary analyst to reach consensus. | Primary analyst: Review notes for differences and meet with secondary analyst to reach consensus. |
Time | 1.5 h/interview | .5 h/interview |
Total Time | 5.5 h/interview | 3.92 h/interview |
Data Analysis: Rating and Adjudication (process completed for each facility) | | |
 | Export coded data and aggregate in facility memo; memos averaged 108 pages/facility. See Table 2 and Additional File 2. | N/A |
 | Primary Analyst: Review all data (all participants in facility) in facility memo and write summary for each CFIR construct and the facility overall. See Table 3. | Primary Analyst: Review all notes (all participants in facility) in facility column in matrix (see above); data is already in note form and the facility summary has been written. See Table 3 and Additional File 3. |
 | Primary Analyst: Rate each CFIR construct in facility memo and provide rating rationale. | Primary Analyst: Rate each CFIR construct in facility column in matrix and provide rating rationale. |
Time | 8 h/facility | 1.69 h/facility |
 | Secondary Analyst: Review facility memo and edit summaries, ratings, and rating rationales. | Secondary Analyst: Review facility column in matrix and edit ratings and rating rationales. |
Time | 4 h/facility | 1.23 h/facility |
 | Primary analyst: Review facility memo for differences and meet with secondary analyst to reach consensus. | Primary analyst: Review facility column in matrix for differences and meet with secondary analyst to reach consensus. |
Time | 2 h/facility | 1 h/facility |
Total Time | 14 h/facility | 3.92 h/facility |
Data Interpretation | | |
aReview and interpret data by facility; write facility-level summaries. | | |
aReview and interpret data by construct; organize facilities by implementation outcomes and identify constructs that manifested positively across facilities, negatively across facilities, or distinguished between facilities with high and low implementation success. | | |
Total Time | 100 h/project | 100 h/project |
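As a sanity check, the per-interview and per-facility analysis totals in the table can be reproduced from the component steps. The sketch below uses the hour values reported above; the step labels and variable names are ours, not part of the original table:

```python
# Recompute the per-unit analysis totals from the table.
# Hours per step: (traditional, rapid); values copied from the table above.
coding_steps = {  # hours per interview
    "primary analyst coding/notes": (1.5, 1.72),
    "secondary analyst coding/review": (2.5, 1.70),
    "consensus meeting": (1.5, 0.5),
}
rating_steps = {  # hours per facility
    "primary analyst summaries and ratings": (8, 1.69),
    "secondary analyst review": (4, 1.23),
    "consensus meeting": (2, 1),
}

def totals(steps):
    """Sum the traditional and rapid columns, rounding away float noise."""
    trad = round(sum(t for t, _ in steps.values()), 2)
    rapid = round(sum(r for _, r in steps.values()), 2)
    return trad, rapid

print(totals(coding_steps))  # (5.5, 3.92) h/interview, as reported
print(totals(rating_steps))  # (14, 3.92) h/facility, as reported
```

Both totals match the "Total Time" rows: 5.5 vs. 3.92 h per interview and 14 vs. 3.92 h per facility.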
Analysts:
Facility:
Interview participants:

High-level facility summary:
[Provide high-level summary of the facility]

I. Innovation characteristics

A. Innovation source
RATING: OVERALL __ (ANALYST 1 __, ANALYST 2 __)
Summary:
[Provide summary of data.]
Rationale:
[Provide a rationale for rating.]
Data:
[Copy coded data from software.]

B. Evidence, strength, and quality
RATING: OVERALL __ (ANALYST 1 __, ANALYST 2 __)
Summary:
[Provide summary of data.]
Rationale:
[Provide a rationale for rating.]
Data:
[Copy coded data from software.]
Approach | Traditional approach (cohort A) | Rapid approach (cohort B) |
---|---|---|
Inner setting | | |
Leadership engagement (LE) | aOverall rating −2 Summary: The implementation leader tried to brief the [Leadership Role 1] when she returned from the DoE Base Camp, but “she was very busy that week, so I was told to maybe meet with the [Mid-Level Leadership Role 1] instead.” The [Key Stakeholder 1] believes one of the biggest barriers to implementation was unstable and acting leadership; most of the leadership team was acting or missing during implementation, which has required them to brief and re-brief new leadership. Rationale: Leadership was minimally engaged throughout implementation, which [Key Stakeholder 1] felt was a big barrier to implementation, warranting a −2 rating. | Overall rating +2 Summary: bP1: Leadership was very engaged. P2: The [P2] was responsible for “dislodging” barriers up the chain as necessary, e.g., reaching out to leadership to support training. He states that site leadership “mandated” or “deeply inspired” them to set time aside to be trained. P3: She felt leadership was very engaged based on (1) [Leadership Role 1] bidding; (2) [Leadership Role 2] encouraging staff to participate with [EBI Name] Day; (3) [Leadership Role 3] adding it to the pay-for-performance plan. Rationale: Leadership provided ongoing tangible support and incentives, warranting a +2 rating. |
Available resources (AR) | Overall rating: X Summary: Time was limited both for implementation and administration of the practice; it was a collateral duty for the implementation leader and given that [department] was short-staffed, [Role 1] had limited time to complete assessments. However, they did have funding to buy [equipment]; the [Key Stakeholder 1] was able to give them money from another VA program. Rationale: Important resources were both available (funding) and unavailable (dedicated time), warranting an X rating. | Overall rating +1 Summary: P1: It was hard for the implementation leaders to have time “carved out”; if there was one “pearl” from her, it is that bids should include time. She should not have to advocate for them to have time. Even if they were ultimately supported, she knows the implementation leader experienced frustration related to lack of time in the beginning. P2: Site had equipment already in place. Rationale: Although the implementation leader did not initially have dedicated time, important resources were ultimately available to support implementation (equipment, dedicated time), warranting a +1 rating. |
Data interpretation: facility and construct analyses
Methods for comparing traditional and rapid approaches
Comparing time and transcription costs
Comparing effectiveness and rigor
Results
Comparing traditional and rapid approaches
Time and transcription costs
Hours | Traditional CFIR approach (cohort A) | Rapid CFIR approach (cohort B) | Differences in hours |
---|---|---|---|
aTotal data | 50 interview audio hours across 16 facilities | 50 interview audio hours across 16 facilities | 0 h |
Data management | 34 total hours: 1 h/project set-up; 25 h (.5 h × 50 interviews); 8 h (.5 h × 16 facilities) | .5 total hours: .5 h/project set-up; 0 h × 50 interviews; 0 h × 16 facilities | 33.5 h |
Data collection | 50 total hours | 50 total hours | 0 h |
Data analysis: interviews | 275 total hours (5.5 h × 50 interviews) | 196 total hours (3.92 h × 50 interviews) | 79 h |
Data analysis: facilities | 224 total hours (14 h × 16 facilities) | 63 total hours (3.92 h × 16 facilities) | 161 h |
Data interpretation | 100 total hours | 100 total hours | 0 h |
Total hours | 683 h | 409.5 h | 273.5 h |

Transcription cost | Traditional CFIR approach (cohort A) | Rapid CFIR approach (cohort B) | Differences in cost |
---|---|---|---|
Transcription | $7250 ($145/h × 50 h) | $0 | $7250 |
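The project-level totals and savings follow directly from the phase totals reported above. The sketch below recomputes them; phase labels and variable names are ours, and all hour and dollar figures are taken from the tables:

```python
# Recompute project-level totals from the per-phase totals in the tables above.
# Each phase maps to (traditional hours, rapid hours).
phase_hours = {
    "data management": (34, 0.5),
    "data collection": (50, 50),
    "data analysis: interviews": (275, 196),
    "data analysis: facilities": (224, 63),
    "data interpretation": (100, 100),
}
traditional = sum(t for t, _ in phase_hours.values())
rapid = sum(r for _, r in phase_hours.values())
hours_saved = traditional - rapid

# Transcription savings: rapid approach avoids transcribing 50 audio hours.
transcription_saved = 145 * 50  # $145/h × 50 h

print(traditional, rapid, hours_saved, transcription_saved)
# 683 409.5 273.5 7250
```

These match the reported totals: 683 vs. 409.5 hours (273.5 hours saved) and $7250 in avoided transcription costs.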
Data management
Data collection: semi-structured interviews
Data analysis
Data interpretation
Effectiveness and rigor
Domain | Traditional CFIR approach | Rapid CFIR approach |
---|---|---|
Effectiveness: evaluation objectives | ||
Ability to identify and describe implementation determinants | Yes | Yes |
Ability to provide rapid feedback to operational partners | No (preliminary results only) | Yes |
Rigor: evaluation processes | ||
Credibility | ||
Analyst authority: We had analysts with expertise in both qualitative methods and the CFIR | Yes | Yes |
Data accuracy: We used two analysts/interview and maintained access to the raw data in order to verify the accuracy of data, especially quotations | Yes (transcripts and audio recordings) | Yes (audio recordings) |
Data organization: We used matrices, allowing us to parse out and synthesize data as needed | Yes | Yes |
Dependability | ||
Data comparability: We used the same interviewers and semi-structured interview guide (based on the CFIR) to ensure data was comparable across participants and facilities | Yes | Yes |
Coding comparability: We used the same analysts and framework to ensure coding was comparable across participants and facilities | Yes | Yes |
Analysis audit trail: We documented key phases of analysis and edits in memos and/or matrices | Yes | Yes |
Confirmability | ||
Data triangulation: We interviewed multiple participants at each site, allowing us to triangulate data | Yes | Yes |
Team reflexivity: We held weekly meetings to discuss discrepancies and refinements to coding processes | Yes | Yes |
Discussion
- The team could eliminate the second analyst entirely or use a second analyst on only a subset of interviews, e.g., the first 10 interviews or a random sample.
- The team could include in the matrix only the CFIR constructs expected to be most relevant to the research question.
- The team could obtain project artifacts, e.g., meeting minutes, to analyze in place of interviews.
- The team could omit the rating process following coding.