Qualitative data from semi-structured interviews, document review, and focus groups will be imported and analyzed separately in NVivo using qualitative content analysis [77-80], which has been used successfully in similar studies [81-83]. Content analysis enables a theory-driven approach and an examination of both manifest (i.e., the actual words used) and latent (i.e., the underlying meaning of the words) content [72]. Accordingly, analysis will be informed by the guiding conceptual models, with additional patterns, themes, and categories allowed to emerge from the data [72,84]. The first author and a doctoral student research assistant will independently co-code a sample of the transcripts to increase reliability and reduce bias [72,85]. Both coders will participate in frame-of-reference training to ensure a common understanding of the core concepts related to the research aims [82]. Disagreements will be discussed and resolved through consensus. Initially, the coders will review the transcripts to develop a general understanding of the content. ‘Memos’ will be generated to document initial impressions and define the parameters of specific codes. Next, the data will be condensed into analyzable units (text segments), which will be labelled with codes based on a priori themes (i.e., derived from the interview guide or guiding theories) or emergent themes that will be continually refined and compared with one another. For instance, the implementation of change model [36] will be used to develop a priori codes, such as ‘identifying programs and practices’ or ‘planning’, related to implementation decision-making. The CFIR [26] will be used in a similar fashion, contributing a priori codes that serve to distinguish different types of implementation strategies, such as strategies that focus on the ‘inner setting’ or the ‘outer setting’. Finally, the categories will be aggregated into broader themes related to implementation strategy patterns, implementation decision-making, and stakeholders’ perceptions of strategies.
The use of multiple respondents is intentional, as some individuals may be more or less knowledgeable about their organization’s approach to implementation; however, it is possible that participants from a given agency may not endorse the use of the same strategies [86]. The approach to handling such ‘discrepancies’ will be one of inclusion, in that each unique strategy endorsed will be recorded as ‘in use’ at that agency (for an example of this approach, see Hysong et al. [82]). If participants’ responses regarding strategies vary widely within a given organization, this may indicate the lack of a coherent or consistent strategy [86]. The use of mixed methods and multiple sources of data will allow us to make sense of reported variation in strategy use by affording the opportunity to determine the extent to which these sources of data converge [57,64,86,87]. The use of multiple respondents and different sources of data also reduces the threat of bias that is sometimes associated with the collection of retrospective accounts of phenomena such as business strategy [69].