Participants
We will interview four informants at each facility, selected on the basis of availability, from the following groups: the facility director, the associate chief of staff (ACOS) for primary care, one full-time primary care physician and/or physician extender, and one full-time primary care nurse, for up to 64 interviews (or until saturation across key informant types is reached). New or part-time employees would not have sufficient exposure to the EPRP to make informed appraisals of its utility; therefore, we will only target full-time primary care providers with at least three years in their current position. We will query the Personnel Accounting Integrated Database (PAID) to identify eligible participants who meet these criteria. Primary care participants will then be randomly selected from the resulting list and invited to participate via email. We will confirm the eligibility of potential participants upon invitation to participate. Every effort will be made to ensure a balanced representation of informants across facilities. If a participant declines to participate, we will ask them to refer us to another suitable candidate.
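For illustration only, a minimal Python sketch of this selection-and-replacement procedure follows; the roster fields and role labels are hypothetical placeholders and do not reflect the actual PAID schema.

```python
import random

# Hypothetical roster records; field names are illustrative, not the PAID schema.
roster = [
    {"name": "A", "facility": "F01", "role": "primary care physician",
     "full_time": True, "years_in_position": 5},
    # ...one record per employee returned by the PAID query
]

def eligible(person):
    """Inclusion criteria from the protocol: full-time, >= 3 years in current position."""
    return person["full_time"] and person["years_in_position"] >= 3

def draw_candidates(roster, facility, role, k=1, seed=42):
    """Randomly order the eligible pool; invite the first k and keep the
    remainder as replacements for decliners."""
    pool = [p for p in roster if p["facility"] == facility
            and p["role"] == role and eligible(p)]
    rng = random.Random(seed)
    rng.shuffle(pool)
    return pool[:k], pool[k:]

invitees, backups = draw_candidates(roster, "F01", "primary care physician")
```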
Procedures
Research team and participant blinding
The proposed research will use a double-blind design. The team statistician will be the only member of the research team with knowledge of which facilities are high, moderate, or low performers. All other team members, including co-investigators and research assistants, will be blinded to minimize bias during the interviewing and data analysis processes. Similarly, participants will be blinded to their facility's performance category to minimize respondent bias.
Interviewer training
Interviewers will receive training consisting of three components, consistent with the Information, Demonstration, Practice (IDP) framework of training delivery [33]:
1. a didactic training session (information) by Dr. Haidet,
2. observation of interviews conducted by Drs. Haidet and Hysong (demonstration), and
3. two mock interviews (practice).
Preparatory facility fact finding
The research team will conduct telephone fact-finding interviews with key contacts at each study facility to gather factual information about the facility's EPRP dissemination process, such as the type of performance measurement data used by the facility or whether the facility uses a locally generated dashboard. This information will (a) provide greater contextual understanding of existing facility processes related to EPRP, (b) help refine the telephone interview guide, and (c) help the research team identify the best strategy for study interviews. Examples of key contacts are the Facility Quality Manager and ACOS and/or their designee(s). We will use a snowball contact-and-referral process until all the requisite factual information is obtained for each facility. For example, if our first contact at the facility is unable to answer one of the fact-finding questions, the research team will request the name of another individual at the facility who is more likely to have the answers, and so on.
Each telephone conversation will take approximately 30 minutes and will cover the dissemination and reporting of EPRP data at the facility. Interviewers will follow a standardized question guide about the facility's EPRP dissemination process.
Participant recruitment
Prospective participants will receive an email inviting them to participate in the study and requesting a preferred contact email, phone number, and a time when a representative of the study team can review the study information form with them and obtain informed consent. Prospective participants who have not responded to the invitation within two weeks will receive a follow-up telephone call inviting them to participate; the aforementioned contact and scheduling information will be collected from these participants at that time. Research team members will email a copy of the study information form in advance and will call participants at their requested appointment time to review the consent form and procedures, answer any questions, and schedule the interview. Should a prospective participant decline the invitation and not provide a recommendation for a substitute participant (see Participants section, above), the next prospective participant on the list of candidates will be invited.
As much as possible given participant schedules, interviews will be scheduled following a maximum variation strategy at the facility level. The statistician will provide specific site names to the study team members for recruitment purposes, ensuring that all four arms are represented in the resulting list of sites. Research team members will then schedule and conduct interviews until all interviews are completed or saturation of information is reached within study arms, whichever comes first.
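A minimal sketch of one such maximum variation ordering follows; the arm labels and site names are hypothetical, and the interleaving rule is one illustrative choice rather than the study's specified procedure.

```python
from itertools import zip_longest

# Hypothetical site lists keyed by blinded arm labels supplied by the statistician.
sites_by_arm = {
    "arm_1": ["site_A", "site_B"],
    "arm_2": ["site_C", "site_D"],
    "arm_3": ["site_E", "site_F"],
    "arm_4": ["site_G", "site_H"],
}

def interleave_sites(sites_by_arm):
    """Order recruitment so consecutive interviews rotate across all four arms,
    a simple maximum-variation ordering."""
    rounds = zip_longest(*sites_by_arm.values())
    return [site for rnd in rounds for site in rnd if site is not None]

schedule = interleave_sites(sites_by_arm)
# -> site_A, site_C, site_E, site_G, site_B, site_D, site_F, site_H
```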
Telephone interviews
Participants will be interviewed individually for one hour by a research assistant at a mutually agreed upon time. All interviews will be audio-recorded with the participant's consent. Each interview will open with two "picture questions," which ask the respondent to provide an example of a feedback strategy that resulted in practice change and an example of a feedback strategy that did not result in practice change. The answers to these two questions will guide the rest of the interview. Based on the participant's initial answers, interviewers will ask follow-up questions that tap the constructs of interest in the study. Additional file 1 presents a preliminary interview protocol listing the constructs of interest and their corresponding proposed questions for each type of key informant. The interviewers need not ask the questions in the order listed nor use all of the probing questions; however, they are required to cover all of the constructs of interest. Participants will answer questions about (a) the types of EPRP information they receive, (b) the types of quality/clinical performance information they actively seek out, (c) opinions and attitudes about the utility of EPRP data (with specific emphasis on the role of targets), (d) how they use the information they receive and/or seek out, and (e) any additional sources of information or strategies they might use to improve the facility's performance. Interview recordings will be sent for transcription the day after the interview; we will begin analyzing transcripts per our data analysis strategy as they are received.
To minimize participant burden, interviews will be scheduled and analyzed concurrently until saturation of information is reached. That is, up to 64 interviews may be conducted, but fewer may be conducted if no new information is encountered. To check for thematic saturation, we will code and analyze interview transcripts as we receive them, rather than wait until all interviews are conducted. Because new codes indicate new concepts of importance, a lack of new codes indicates that additional interviews are generating no new information and that the data are sufficiently saturated. We will end the interview process when a new interview adds fewer new codes than 5% of the total number of existing codes; for example, if 100 codes have been generated after 25 interviews, and the 26th interview adds only four new codes, we will consider the data to have reached saturation and end the interview process.
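This stopping rule reduces to a simple threshold check. The sketch below encodes it, using the worked example from the preceding paragraph:

```python
def reached_saturation(total_codes, new_codes, threshold=0.05):
    """Stop interviewing when the latest transcript contributes fewer new codes
    than `threshold` (here 5%) of the codes already in the codebook."""
    return new_codes < threshold * total_codes

# Worked example from the text: 100 existing codes, 26th interview adds 4 new codes.
assert reached_saturation(total_codes=100, new_codes=4)       # 4 < 5 -> saturated
assert not reached_saturation(total_codes=100, new_codes=6)   # 6 >= 5 -> keep interviewing
```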
Data analysis
Interview recordings will be transcribed and analyzed using techniques adapted from grounded theory [34] and content analysis [35], using Atlas.ti (ATLAS.ti Scientific Software Development GmbH, Berlin, Germany) [36], a qualitative analysis software program. Consistent with grounded-theory techniques, the analysis will consist of three phases: open, axial, and (if appropriate) selective coding.
Coder training
Before the coding process begins, the principal investigator will conduct a training session (consistent with the IDP framework) with the coders and co-investigators to familiarize them with the Atlas.ti software and the initial coding taxonomy. The session will consist of two modules:
1. A didactic module, where the trainees will receive detailed information about the specific a priori codes to be searched for in the texts (e.g., definitions, examples, negative cases), guidelines for identifying new themes and codes, and a demonstration of the Atlas.ti software features and its project-specific use.
2. A practice module, where coder teams will use the mock interviews from their interviewer training practice module to practice coding and calibrate the coders to the taxonomy of utility perceptions, strategies, and data-sharing practices.
In addition, coders will independently code two live transcripts and then convene to discuss their coding decisions, in order to further calibrate the coders on live data.
Open coding
Open coding is concerned with identifying, naming, categorizing, and describing phenomena found in the interview transcripts. The same research assistants who conducted the interviews will conduct the open-coding phase of analysis. Each research assistant will independently code all interview transcripts; each will serve as primary coder for the interviews they conducted and as secondary (i.e., corroborative) coder for interviews they did not conduct. Secondary coding assignments will be distributed so as to maximize the number of different coders reviewing the transcripts of any given site.
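One way to implement such a distribution is a round-robin rotation over the coders who did not conduct a given interview, as in the following sketch; the data layout and coder names are hypothetical.

```python
from collections import defaultdict

def assign_secondary_coders(transcripts, coders):
    """Rotate secondary assignments so no primary coder is always paired with
    the same secondary coder and each site is reviewed by varied coders.
    `transcripts` is a list of (transcript_id, site, primary_coder) tuples."""
    cursor = defaultdict(int)  # per-primary rotation pointer
    plan = []
    for tid, site, primary in transcripts:
        others = [c for c in coders if c != primary]
        secondary = others[cursor[primary] % len(others)]
        cursor[primary] += 1
        plan.append((tid, site, primary, secondary))
    return plan

# Hypothetical example: four coders, each the primary for interviews they conducted.
coders = ["RA1", "RA2", "RA3", "RA4"]
transcripts = [("t1", "site_A", "RA1"), ("t2", "site_A", "RA2"), ("t3", "site_B", "RA1")]
plan = assign_secondary_coders(transcripts, coders)  # RA1's transcripts get RA2, then RA3
```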
Coders will receive an a priori list of codes and code definitions designed to capture the relevant constructs of interest for the study. These include (but are not limited to) feedback cues regarding content and format as specified in Kluger and DeNisi, 1996 (e.g., correct solution information, frequency); feedback characteristics as specified in Hysong et al., 2006 (i.e., timeliness, punitiveness, individualization, customizability); attitudes and mental models about EPRP (e.g., positive/negative, concerns of trust or credibility of feedback); and feedback sources. Based on this list, coders will select relevant passages indicative of a given phenomenon (e.g., timely sharing of performance information) and assign each a label descriptive of the phenomenon in question (e.g., "timeliness"). Coders will specifically review the transcripts for instances of the constructs involved in the research questions (perceptions of EPRP utility as a feedback source; local data collection, dissemination, and evaluation strategies; timeliness, individualization, and nonpunitiveness of data sharing) and capture them as they emerge from the data.
Coders will also have the opportunity to add new codes to the existing list. Proposed codes will be vetted by the research team according to two criteria: (1) whether the codes can be clearly and crisply defined and (2) the extent to which the new codes contribute to the analysis. Codes deemed to be suitable will then be added to the list; previous transcripts will be coded with the new codes as needed.
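For illustration, the evolving codebook can be represented as a mapping from code names to definitions, with proposed additions vetted against the two criteria above. The example codes below are placeholders rather than the study's actual taxonomy, and the vetting judgments are team decisions passed in as booleans.

```python
# Illustrative a priori codebook entries; examples only, not the study's full list.
codebook = {
    "timeliness": "Performance data are shared close in time to the clinical work they reflect",
    "individualization": "Data are reported at the provider level rather than aggregated",
    "punitiveness": "Feedback is framed as sanction rather than as development",
}

def propose_code(codebook, name, definition, crisply_defined, contributes):
    """Vet a proposed code against the team's two criteria. The criteria are
    human judgments made in team discussion; here they arrive as booleans."""
    if crisply_defined and contributes and name not in codebook:
        codebook[name] = definition
        return True   # accepted: recode earlier transcripts with the new code as needed
    return False
```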
Ensuring coding quality
Once a transcript is coded by the primary coder, a secondary coder will independently review the primary coder's assignments and agree or disagree with each code attached to a quote. To minimize potential bias, pairs of coders will rotate; that is, a given primary coder will not have all of his or her transcripts reviewed by the same secondary coder. Complete rotation of primary and secondary coder pairs can be accomplished in 24 (4!) interviews. Counts of "agree" and "disagree" code assignments will be tallied to estimate the extent of inter-rater agreement and to identify codes or themes that require crisper definition. After the secondary coder's review, any disagreements will be resolved by consensus between the primary and secondary coder; disagreements that a coder pair cannot resolve will be presented to the entire team for resolution.
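The agree/disagree tallies support a simple percent-agreement computation, sketched below; the 70% flagging cutoff is illustrative, not a study-defined threshold.

```python
from collections import Counter, defaultdict

def percent_agreement(reviews):
    """Tally the secondary coders' 'agree'/'disagree' verdicts. `reviews` is a
    list of (code, verdict) pairs. Returns overall agreement and the codes whose
    agreement falls below a flag threshold (70% here, an illustrative cutoff)."""
    per_code = defaultdict(Counter)
    for code, verdict in reviews:
        per_code[code][verdict] += 1
    total = sum(sum(tally.values()) for tally in per_code.values())
    agreed = sum(tally["agree"] for tally in per_code.values())
    overall = agreed / total if total else 0.0
    flagged = [code for code, tally in per_code.items()
               if tally["agree"] / sum(tally.values()) < 0.70]
    return overall, flagged
```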
Other data quality checks will be performed, including tests of quotation length over time (significant increases in quotation lengths in later portions of a transcript or in later transcripts could be indicative of coder fatigue) and between raters (to check for biases in coding styles).
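Both checks can be approximated with simple descriptive comparisons, as in the following sketch; the functions and thresholds are illustrative assumptions, not prespecified study analytics.

```python
import statistics

def fatigue_drift(quotation_lengths):
    """Relative change in mean quotation length (in words) between the first and
    second half of a coder's work; a large increase may signal coder fatigue."""
    mid = len(quotation_lengths) // 2
    early = statistics.mean(quotation_lengths[:mid])
    late = statistics.mean(quotation_lengths[mid:])
    return late / early - 1.0  # e.g., 0.25 means quotes grew 25% longer over time

def coder_style_gaps(lengths_by_coder):
    """Each coder's mean quotation length relative to the group mean, to surface
    systematic differences in coding style between raters."""
    grand = statistics.mean(v for lengths in lengths_by_coder.values() for v in lengths)
    return {coder: statistics.mean(lengths) / grand - 1.0
            for coder, lengths in lengths_by_coder.items()}
```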
Axial coding
In this phase, the categories generated during the open coding phase are organized into coherent themes and networks of relationships using the "constant comparative approach" [34]; the investigator team will review the generated codes, noting their frequency and searching for relationships among the codes as they emerge.
Research question 1 asks whether leaders of high-performing facilities have different perceptions about the utility of facility performance data than do leaders of low- or moderate-performing facilities. To explore this research question, the investigator team will review the codes generated in relation to this question and compare the universe of perceptions in high-, low-, and moderate-performing facilities. These will be organized thematically, with separate taxonomies and relational networks (i.e., visual maps of how the codes relate to one another) developed for each facility type. To the extent that the corpus of codes differs across facility types (e.g., more nonoverlapping codes than codes in common), or that similar codes occur with greater or lesser frequency in high- versus low-performing facilities, this will be taken as evidence that high-performing facilities perceive the utility of EPRP data differently than do low-performing facilities.
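Set operations over each subgroup's codes offer a concrete way to quantify nonoverlapping codes versus codes in common; the sketch below uses a Jaccard index, an illustrative choice rather than a study-specified statistic, with hypothetical code sets.

```python
def compare_code_universes(codes_a, codes_b):
    """Compare two facility types' sets of perception codes: shared codes, codes
    unique to each, and the Jaccard overlap (0 = disjoint, 1 = identical)."""
    union = codes_a | codes_b
    shared = codes_a & codes_b
    return {"shared": shared,
            "only_a": codes_a - codes_b,
            "only_b": codes_b - codes_a,
            "jaccard": len(shared) / len(union) if union else 1.0}

# Hypothetical code sets for high- vs. low-performing facilities.
high = {"actionable", "timely", "credible", "goal_referenced"}
low = {"punitive", "delayed", "credible"}
overlap = compare_code_universes(high, low)  # jaccard = 1/6: largely nonoverlapping
```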
Research question 2 asks whether leaders of high-performing facilities employ different strategies than leaders of low- or moderate-performing facilities to collect and disseminate local performance data and to evaluate providers. This research question will be explored using a parallel approach to that of research question 1. The investigator team will review the codes generated in relation to the questions in the interview protocol about feedback strategies and compare the universe of strategies in high-, moderate-, and low-performing facilities. These will be organized thematically, with separate taxonomies and relational networks developed for each facility type. To the extent that there are more discrepant themes across subgroups than themes in common, this will be taken as evidence that (a) facility types differ in the strategies they employ to collect and disseminate local performance data and to evaluate providers and (b) the core category emergent from the selective coding process differs by subgroup.
Selective coding
This stage of the analysis involves selecting one of the categories developed during axial coding as the core category around which the theory best fitting the data is to be built. In other words, there should be a central theme emergent from the data that best explains the relationships among the other categories (codes). Hysong et al.'s model of actionable feedback [15] was developed in this way, with the concept of customizability as the core category. This phase of the analysis will be most useful in exploring research question 3.
Research question 3 asks whether high-performing facilities share their data in more timely, individualized, and nonpunitive ways than do low- or moderate-performing facilities. To explore this research question, the investigator team will review codes generated in relation to the questions in the interview protocol about timeliness, individualization, and actionability of feedback, which captured instances where facilities delivered performance data to their providers in a timely (e.g., at least monthly), individualized (provider-specific, rather than aggregated by clinic or facility), and nonpunitive fashion (e.g., mention of educational, developmental approaches to feeding back the data to providers). These will be organized thematically, with separate taxonomies and relational networks developed for each facility type (axial coding, as described above). From this relational network, a core category will emerge around which a model of feedback can be organized; we will compare each subgroup's emergent model to Hysong et al.'s model of actionable feedback [15]. We expect that the feedback model for the high-performing facilities will more closely resemble the Hysong et al. model than will the models for the lower-performing facilities.
Maximizing confirmability and trustworthiness
Several techniques will be employed to minimize potential biases arising from differences in the experiences of the interviewers and coders. Interviewers and coders will be trained using a standardized training protocol (see the interviewer and coder training sections above); as part of the training, interviewer assumptions and expectations will be documented before any interviews are conducted. Assumptions and impressions generated during coding will be documented alongside the planned codes as the interviews are coded and will be continually compared with the planned codes to check for bias. Lastly, negative case analysis will be conducted to check for evidence contradicting the hypotheses. Together, these strategies will help maximize the confirmability and trustworthiness of the analyses.
Timeline
This research is projected to last three years.