Despite the need for and existence of practices that effectively prevent or treat mental health problems in children and adolescents, such practices are rarely employed in child welfare systems (Usher and Wildfire 2003; Burns et al. 2004; Leslie et al. 2004). In fact, as many as 90% of public youth-service systems, including mental health, education, juvenile justice and child welfare, do not use evidence-based practices (Hoagwood and Olin 2002). Unfortunately, our understanding of the reasons for this apparent gap between science and practice is limited to a few empirical studies and conceptual models that may or may not be empirically grounded (Aarons et al., this issue). In implementation research, mixed method designs have increasingly been utilized to develop a science base for understanding and overcoming barriers to implementation. More recently, they have been used in the design and implementation of strategies to facilitate the implementation of EBPs (Proctor et al. 2009). Mixed method designs focus on collecting, analyzing and merging both quantitative and qualitative data within one or more studies. The central premise of these designs is that the use of quantitative and qualitative approaches in combination provides a better understanding of research issues than either approach alone (Robins et al. 2008). In such designs, qualitative methods are used to explore and obtain depth of understanding of the reasons for success or failure to implement evidence-based practice, or to identify strategies for facilitating implementation, while quantitative methods are used to test and confirm hypotheses based on an existing conceptual model and to obtain breadth of understanding of predictors of successful implementation (Teddlie and Tashakkori 2003).
In this paper, we examine the application of mixed method designs in implementation research in a sample of mental health services research studies published in peer-reviewed journals over the last 5 years. Our aim was to determine how such methods were being used, whether this use was consistent with the conceptual framework outlined by Aarons et al. (this issue) for understanding the phases of implementation, and whether these strategies could offer any guidance for subsequent use of mixed methods in implementation research.
Methods
We conducted a literature review of mental health services research publications over a five-year period (Jan 2005–Dec 2009), using the PubMed Central database. Data were taken from the full text of each research article. Criteria for identification and selection of articles included reports of original research and one of the following: (1) studies that were specifically identified as using mixed methods, either through keywords or description in the title; (2) qualitative studies conducted as part of larger projects, including randomized controlled trials, which also included use of quantitative methods; or (3) studies that “quantitized” qualitative data (Miles and Huberman 1994) or “qualitized” quantitative data (Tashakkori and Teddlie 1998). Per criteria used by McKibbon and Gadd (2004), the mixed method analysis had to be fairly substantial; for example, a simple descriptive analysis of participants’ baseline demographics was not sufficient for an article to be included as a mixed method study. Further, qualitative studies that were not clearly linked to quantitative studies or methods were excluded from our review.
We next assessed the use of mixed methods in each study to determine their structure, function, and process. A taxonomy of these elements of mixed method designs and definitions of terms is provided in Table 1 below. Procedures for assessing the reliability of the classification procedures are described elsewhere (Palinkas et al. 2010). Assessment of the structure of the research design was based on Morse’s (1991) taxonomy, which gives emphasis to timing (e.g., using methods in sequence [represented by a “→” symbol] versus using them simultaneously [represented by a “+” symbol]) and to weighting (e.g., primary method [represented in capital letters, as in “QUAN”] versus secondary method [represented in lowercase letters, as in “qual”]). Assessment of the function of mixed methods was based on whether the two methods were being used to answer the same question or to answer related questions, and whether the intention of using mixed methods corresponded to any of the five types of mixed method designs described by Greene et al. (1989): triangulation or convergence, complementarity, expansion, development, and initiation or sampling. Finally, the process or strategy for combining qualitative and quantitative data was assessed using the typology proposed by Creswell and Plano Clark (2007): merging or converging the two datasets by actually bringing them together, connecting the two datasets by having one build upon the other, or embedding one dataset within the other so that one type of data provides a supportive role for the other dataset.
Table 1
Taxonomy of mixed method designs
Element | Type | Definition
Structure | QUAL → quan | Sequential collection and analysis of qualitative and quantitative data, beginning with qualitative data, for the primary purpose of exploration/hypothesis generation
Structure | qual → QUAN | Sequential collection and analysis of qualitative and quantitative data, beginning with qualitative data, for the primary purpose of confirmation/hypothesis testing
Structure | quan → QUAL | Sequential collection and analysis of quantitative and qualitative data, beginning with quantitative data, for the primary purpose of exploration/hypothesis generation
Structure | QUAN → qual | Sequential collection and analysis of quantitative and qualitative data, beginning with quantitative data, for the primary purpose of confirmation/hypothesis testing
Structure | qual + QUAN | Simultaneous collection and analysis of quantitative and qualitative data for the primary purpose of confirmation/hypothesis testing
Structure | QUAL + quan | Simultaneous collection and analysis of quantitative and qualitative data for the primary purpose of exploration/hypothesis generation
Structure | QUAN + QUAL | Simultaneous collection and analysis of quantitative and qualitative data, giving equal weight to both types of data
Function | Convergence | Using both types of methods to answer the same question, either through comparison of results to see if they reach the same conclusion (triangulation) or by converting a dataset of one type into the other type (e.g., quantitizing qualitative data or qualitizing quantitative data) (transformation)
Function | Complementarity | Using each set of methods to answer a related question or series of questions for purposes of evaluation (e.g., using quantitative data to evaluate outcomes and qualitative data to evaluate process) or elaboration (e.g., using qualitative data to provide depth of understanding and quantitative data to provide breadth of understanding)
Function | Expansion | Using one type of method to answer questions raised by the other type of method (e.g., using a qualitative dataset to explain the results of analysis of a quantitative dataset)
Function | Development | Using one type of method to answer questions that will enable use of the other method to answer other questions (e.g., developing data collection measures, conceptual models or interventions)
Function | Sampling | Using one type of method to define or identify the participant sample for collection and analysis of data representing the other type of method (e.g., selecting interview informants based on responses to a survey questionnaire)
Process | Merge | Merging or converging the two datasets by actually bringing them together (e.g., convergence: triangulation to validate one dataset using another type of dataset)
Process | Connect | Having one dataset build upon the other (e.g., complementarity: elaboration, transformation, expansion, initiation or sampling)
Process | Embed | Conducting one study within another so that one type of data provides a supportive role to the other dataset (e.g., complementarity for evaluation: a qualitative study of implementation process embedded within an RCT of implementation outcomes)
Discussion
Our analysis of the 22 studies uncovered five major reasons for using mixed method designs in implementation research. The first reason was to use quantitative methods to measure intervention and/or implementation outcomes and qualitative methods to understand process. This aim was explicit in 11 of the 22 studies. Qualitative inquiry is highly appropriate for studying process because (1) depicting process requires detailed descriptions of how people engage with one another, (2) the experience of process typically varies for different people, so their experiences need to be captured in their own words, (3) process is fluid and dynamic, so it cannot be fairly summarized on a single rating scale at one point in time, and (4) participants’ perceptions are a key process consideration (Patton 2001).
The second reason was to conduct both exploratory and confirmatory research. In mixed method designs, qualitative methods are used to explore a phenomenon and generate a conceptual model along with testable hypotheses, while quantitative methods are used to confirm the validity of the model by testing the hypotheses (Teddlie and Tashakkori 2003). This combined focus is also consistent with the call by funding agencies (NIMH 2004) and others (Proctor et al. 2009) to develop new conceptual models and new measures to test these models. Several of the studies focused on the development of new measures (Blasinsky et al. 2006; Slade et al. 2008) or conceptual frameworks (Zazzali et al. 2008), or on the development of new interventions or the adaptation of existing ones (Proctor et al. 2007; Henke et al. 2007).
The third reason was to examine both intervention content and context. Many of the studies included in this review used mixed methods to examine the context of implementation of a specific intervention (e.g., Henke et al. 2008; Sharkey et al. 2005; Slade et al. 2008; Whitley et al. 2009). Unlike efficacy studies, where context can be controlled, implementation research occurs in real-world settings distinguished by their complexity and variation in context (Landsverk et al., this issue). Qualitative methods are especially suited to understanding context (Bernard 1988). In contrast, quantitative methods were used to measure aspects of the content of the intervention in addition to the intervention’s outcomes. A particularly important element of content was the degree of fidelity with which the intervention was applied. Schoenwald et al. (this issue) discuss different strategies for the quantitative measurement of fidelity to explain variation in intervention/implementation outcomes.
The fourth reason for using mixed methods was to incorporate the perspective of potential consumers of evidence-based practices (both practitioners and clients) (Proctor et al. 2009). As observed by Aarons et al. (this issue), some models that describe approaches to organizational change and innovation adoption highlight the importance of actively including and involving critical relevant stakeholders during the process of considering and preparing for innovation adoption. Use of qualitative methods gives voice to these stakeholders (Sofaer 1999) and allows partners an opportunity to express their own perspectives, values and opinions (Palinkas et al. 2009). Obtaining such a perspective was an explicit aim of studies by Henke et al. (2008), Proctor et al. (2007), Aarons et al. (2009), and Palinkas and Aarons (2009). A mixed method approach is also consistent with the need to understand patient and provider preferences in the use of Sequential Multiple Assignment Randomized Trial (SMART) designs when testing and evaluating the effectiveness of different strategies to improve implementation outcomes (Landsverk et al., this issue).
Finally, mixed methods were used to compensate for the limitations of one set of methods through the use of another. For instance, convergence or triangulation of quantitative and qualitative data was an explicit feature of the mixed method study of the implementation of SafeCare® in Oklahoma by Aarons et al. (Aarons and Palinkas 2007; Palinkas and Aarons 2009) because of limited statistical power in quantitative analyses that were nested in teams of service providers, a common problem in implementation research (Proctor et al. 2009; Landsverk et al., this issue).
The studies examined in this review represent a continuum of mixed method designs ranging from the simple to the complex. Simple designs were observed in single studies that have a limited objective or scope. For instance, in seeking to determine whether the experience of using an EBP accounted for possible changes in attitudes towards its use, Gioia and Dziadosz (2008) used semi-structured interview and focus group methods to obtain first-hand accounts of practitioners’ experiences in being trained to use an EBP, and a quantitative measure of attitudes towards the use of EBPs to identify changes in attitudes over time. In contrast, complex designs usually involve more than one study, each of which is linked by a set of related objectives. For instance, Bearsley-Smith et al. (2007) describe a protocol for a cluster randomized feasibility trial in which quantitative measures are used in studies designed to evaluate program outcomes (e.g., diagnostic status and clinical severity, client satisfaction) and measure program fidelity, and qualitative methods (clinician focus groups and semi-structured client interviews) are used in studies designed to assess the process of implementation and explain quantitative findings.
In addition to study objectives, complexity of mixed method designs is also related to the context in which the study or studies were conducted. For instance, six of the studies reviewed were embedded in a larger effort known as the National Evidence-Based Practice Implementation Project, which was designed to explore whether EBPs can be implemented in routine mental health service settings and to discover the facilitating conditions, barriers, and strategies that affected implementation (Brunette et al. 2008; Marshall et al. 2008; Marty et al. 2008; Rapp et al. 2009; Whitley et al. 2009). Two additional studies (Aarons and Palinkas 2007; Palinkas and Aarons 2009) were part of a mixed method study of implementation embedded in a statewide randomized controlled trial of the effectiveness of an evidence-based practice for reducing child neglect and out-of-home foster placements. In each instance, the rationale for the use of a mixed method design was determined by its role in the larger project (primary or secondary), resulting in an unbalanced structure and an emphasis on complementarity to understand the process of implementation and on expansion to explain outcomes of the larger project. However, the embedded mixed method study itself often reflected a balanced structure and the use of convergence, complementarity, expansion, and sampling to understand barriers and facilitators of implementation.
Complexity of mixed method designs is also related to the phase of implementation under examination. Mixed method studies of the exploration and adoption phases described by Aarons et al. (this issue) tended to utilize less complex designs characterized by a sequential unbalanced structure for the purpose of seeking convergence through transformation or developing new measures, conceptual frameworks or interventions, and a process of connecting the data. In contrast, studies of the implementation and sustainability phases tended to utilize more complex designs characterized by a simultaneous balanced or unbalanced structure for the purpose of seeking convergence through triangulation, complementarity, expansion and sampling, and a process of embedding the data. Nevertheless, as these studies illustrate, research on any of the four phases of implementation described by Aarons et al. may utilize and benefit from the application of any combination of elements of structure, function and process as long as this combination is consistent with study aims and context.
Our examination of these studies also revealed other noteworthy characteristics of mixed method designs in implementation research. First, the vast majority of studies reviewed utilized observational designs. As Landsverk et al. (this issue) and others (Proctor et al. 2009) have noted, most early research on implementation was observational in nature, relying upon naturalistic case study approaches. More recently, prospective, experimental designs have been used to develop, test and evaluate specific strategies designed to increase the likelihood of implementation (Chamberlain et al. 2008; Glisson and Schoenwald 2005). Second, all of the 22 studies reviewed focused on characteristics of organizations and individual adopters that facilitated or impeded the process of implementation. Only seven studies included a focus on the outer context or the interorganizational component of the inner context of implementation (Aarons et al., this issue). Third, only 2 of the 22 studies (Aarons and Palinkas 2007; Palinkas and Aarons 2009) focused on implementation in child welfare settings. Given the issues in child welfare, such as the lack of professional education focused on evidence-based practices, and the richness of information that mixed methods can elicit, the paucity of studies on implementation in child welfare is surprising.
However, there are ongoing efforts to incorporate mixed method designs in research on the implementation of evidence-based practices that include experimental designs to evaluate implementation strategies and an examination of outer and interorganizational context, and that are situated in child welfare settings. Two such efforts are Using Community Development Teams to Scale-up MTFC in California (Patricia Chamberlain, Principal Investigator) and Cascading Diffusion of an Evidence-Based Child Maltreatment Intervention (Mark Chaffin, Principal Investigator). The first is a randomized controlled trial designed to evaluate the effectiveness of a strategy for implementing Multidimensional Treatment Foster Care (MTFC; Chamberlain et al. 2007), an evidence-based program for out-of-home youth aged 8–18 with emotional or behavioral problems. Mixed methods are being used to examine the structure and operation of system leaders’ influence networks and use of research evidence. The Cascading Diffusion Project is a demonstration grant examining whether or not a model of planned diffusion of an evidence-based practice can develop a network of services with self-sustaining levels of model fidelity and provider competency. A mixed method approach is being employed to describe the relationships between provider staff, system and organizational factors, and their impact on the implementation process. In both projects, qualitative and quantitative methods are being used in a simultaneous, unbalanced arrangement for the purpose of seeking complementarity: quantitative methods are used to achieve breadth of understanding (i.e., generalizability) of both content (i.e., fidelity) and outcomes (i.e., stage of implementation, number of children placed, recidivism), and qualitative methods to achieve depth of understanding (i.e., thick description) of both process and the inner and outer context of implementation, all in an embedded design.
In calling for changes in the current approach to evidence in health care in order to accelerate the improvement of systems of care and practice, Berwick (2008) recommends embracing a wider range of scientific methodologies than the usual RCT experimental design. These methodologies include assessment techniques developed in engineering and used in quality improvement (e.g., statistical process control, time series analysis, simulations, and factorial experiments) as well as ethnography, anthropology, and other qualitative methods. Berwick argues that such methods are essential to understanding the mechanisms and context of implementation and quality improvement. Nevertheless, it is the combining of these methods through mixed method designs that is likely to hold the greatest promise for advancing our understanding of why evidence-based practices are not being used, what can be done to get them into routine use, and how to accelerate the improvement of systems of care and practice.