Implementing Intervention Research into Public Policy
Evidence-based intervention programs in educational contexts have gained considerable importance in recent years. However, transferring these programs into practice and into the wider field of public policy often fails (Fixsen et al. 2013). As a consequence, the field of implementation research has emerged (Rossi and Wright 1984; Ogden and Fixsen 2014). A growing body of implementation research has indicated that an active, long-term, multilevel implementation approach is far more effective than passive forms of dissemination (Ogden and Fixsen 2014). Within the field of implementation research, several theoretical bases and models—implementation frameworks—have been developed (Meyers et al. 2012).
However, intervention research and implementation research have not yet been systematically connected, and different research traditions and research groups are involved in each. Implementation researchers are mostly given mandates by politicians to take on the implementation of already existing interventions. Moreover, implementation research remains rather isolated and is sometimes considered less scientifically valuable than research that develops new interventions (Fixsen et al. 2011). This might be one of the key reasons why there are still many problems in translating programs into widespread community practice (Spoth et al. 2013).
In this paper, we argue for a systematic Integration of Intervention and Implementation research (the “I3-Approach”). This means that researchers design and develop intervention programs based on a field-oriented and participative approach from the very beginning (according to the concept of use-inspired basic research, Stokes 1997; see also Spiel 2009a). It is not merely a matter of transferring a program to practitioners at the end of the research process; the whole conceptualization of an intervention, as well as its evaluation and implementation, should systematically consider the needs of the field (Spiel et al. 2011b) in an integrated way (Beelmann and Karing 2014). Consequently, the perspectives of all stakeholders should be included (Shonkoff 2000). Based on theoretical considerations drawn from the literature and on our own experience with intervention and implementation research, we summarize the most relevant actions to be taken and issues to be considered by researchers in a systematic six-step procedure (PASCIT), thereby proposing such a systematic connection between intervention research and implementation research. We expect that this connection will increase the probability of sustainably implementing evidence-based intervention programs into public policy.
How this systematic connection between intervention and implementation research can be realized is illustrated by means of the ViSC Social Competence Program. The main goal of the ViSC program is to reduce aggression and bullying and to foster social and intercultural competencies. It was embedded in a national strategy for violence prevention. For sustainable implementation, a cascaded train-the-trainer model (researchers-multipliers-teachers-students) was developed and applied. Advantages and limitations of the six-step procedure are discussed at the end of the article, both for the ViSC program and from a general point of view.
Theoretical Background
In recent decades, the evidence-based movement has gained greatly in impact, especially in Anglo-American contexts (Kratochwill and Shernoff 2003). Efforts have been made to make better use of research-based prevention and intervention programs in human service areas such as medicine, employment, child welfare, health, and juvenile justice (Fixsen et al. 2009; Spiel 2009a). As part of this movement, various efforts have been made to define standards of evidence. For example, the Society for Prevention Research has provided standards to assist practitioners, policy makers, and administrators in determining which interventions are efficacious, which are effective, and which are ready for dissemination (for details, see Flay et al. 2005). Other standards are provided by, for instance, the What Works Clearinghouse (see www.whatworks.ed.gov), the Best Evidence Encyclopedia (see www.bestevidence.org), the Campbell Collaboration (see www.campbellcollaboration.org), and the UK-based Evidence for Policy and Practice Information and Co-ordinating Centre (see www.eppi.ioe.ac.uk). Common to these standards is that evidence-based programs are defined by the research methodology used to evaluate them, with randomized trials regarded as the gold standard for establishing evidence (Fixsen et al. 2009).
However, there are considerable differences in the uptake of research findings among public service areas and scientific disciplines (Nutley et al. 2007). In the field of education in particular, the body of evidence-based research has not been implemented extensively, and the adoption of prevention and intervention programs is driven more by ideology than by evidence (Forman et al. 2013; Slavin 2008; Spiel 2009a, b).
Despite differences among public service fields and countries, it has become obvious that the evidence-based movement has not provided the intended benefits to consumers and communities (Fixsen et al. 2009), and implementing these programs into practice and into the wider range of public policy has often failed (Fixsen et al. 2009, 2013). There is broad agreement about one of the central reasons for this disappointment: program evaluation has historically not included any mention or systematic study of implementation (Meyers et al. 2012). As a consequence, the field of implementation research has emerged (Rossi and Wright 1984). In recent years, a growing body of implementation research has indicated that an active, long-term, multilevel implementation approach (i.e., a mission-driven focus) is far more effective than passive forms of dissemination without any active involvement of practitioners (Ogden and Fixsen 2014; Spiel et al. 2012).
Fixsen et al. (2005) defined implementation as the “specific set of activities designed to put into practice an activity or program of known dimensions” (p. 5; a similar definition is provided by Forman et al. 2013). Consequently, implementation science has been defined as “the scientific study of methods to promote the systemic uptake of research findings and evidence-based practices into professional practice and public policy” (Forman et al. 2013, p. 80; see also Eccles and Mittman 2006).
In the last decade, many implementation studies have been conducted and several conceptual models and implementation frameworks have been presented. The review by Meyers et al. (2012) covers 25 such frameworks. The authors found 14 dimensions common to many of these frameworks and grouped them into six areas: (a) assessment strategies, (b) decisions about adaptation, (c) capacity-building strategies, (d) creating a structure for implementation, (e) ongoing implementation support strategies, and (f) improving future applications. According to their synthesis, the implementation process consists of a temporal series of these interrelated steps, which are critical to quality implementation.
Despite these efforts within the field of implementation science, there is agreement among researchers that the empirical support for evidence-based implementation is insufficient (Ogden and Fixsen 2014). Although there is a large body of empirical evidence on the importance of implementation and growing knowledge of the contextual factors that can influence implementation, knowledge of how to increase the likelihood of quality implementation is still needed (Meyers et al. 2012).
Moreover, intervention research and implementation research have not yet been systematically connected. Forman et al. (2013) explicitly pointed out the differences between intervention and implementation activities. Intervention activity refers to the provision of a prevention or intervention program to clients and, in the school context, consists of a group leader conducting the program with targeted students; implementation activity refers to actions taken in the organizational setting to ensure that the intervention delivered to clients is complete and appropriate. Consequently, different research groups with different research traditions are usually involved in the two tasks. Implementation researchers are mostly given mandates by politicians to take on the implementation of already existing interventions. Moreover, implementation research is seen as less exciting than research that develops new interventions (Fixsen et al. 2011). Furthermore, implementation research is very difficult to conduct within the constraints of university research environments (e.g., due to time or financial limitations) and is sometimes even considered less scientifically valuable (Fixsen et al. 2011; Spiel 2009b).
Therefore, we suggest a systematic integration of intervention and implementation research. Consequently, researchers should design and develop intervention programs using a field-oriented and participative approach (according to the concept of use-inspired basic research, Stokes 1997; see also Spiel 2009a). The whole conceptualization of an intervention, as well as its evaluation and implementation, should systematically consider the needs of the field (e.g., Spiel et al. 2011a, b, c), and intervention, evaluation, and implementation should be developed in an integrative way. In order to realize this and to avoid as many foreseeable risks as possible, the perspectives of all stakeholders should be included (Shonkoff 2000). In particular, the perspective of policymakers has to be included (Spiel et al. 2011a, b, c; Davies 2004), as well as analyses of the factors that support or hinder evidence-based policy (Davies 2004, 2012).
In the next section, we propose an approach for the goal-oriented integration of intervention and implementation research.
Discussion
Although there is a large amount of empirical evidence for the importance of implementation, there is still not enough knowledge available on how to increase the likelihood of quality implementation (Meyers et al. 2012). From our perspective, one reason for this insufficiency might be that intervention and implementation research have not been systematically connected until now. In this paper, we proposed the systematic integration of intervention and implementation research. We presented an integrative intervention and implementation approach (I3-Approach), including a procedure with six consecutive steps (PASCIT), from (1) problem recognition to (6) transfer of program implementation. To illustrate the integration of intervention and implementation, we presented a program from our own research focusing on violence prevention in schools. In this section, we discuss the strengths and limitations of this example. Finally, problems and challenges of the I3-Approach and the PASCIT procedure are examined on a general level.
For the development and implementation of the ViSC school program, it was very helpful that the program was part of a national strategy on violence prevention in the public school system, which we ourselves had also developed. Moreover, from our perspective, the establishment of a national strategy was a prerequisite for upscaling the ViSC program, as the necessary implementation capacity and the respective organizational structures (Fixsen et al. 2005) had not been established before. The public discussion of the high rates of bullying in Austria and several incisive events in schools raised policymakers’ awareness of the issue and gave us a window of opportunity for action. A further important step was that the national strategy became part of the coalition agreement of the governing parties, which solved the budget problem. The national strategy also supported the implementation of sound measures for sustainability (for details, see Spiel and Strohmeier 2012; Spiel et al. 2012), and the realization of a randomized controlled trial was defined as the scientific standard within the strategy. To our knowledge, this was the first time that this gold standard was applied in a program financed by the Austrian Federal Ministry for Education. Further strengths of the ViSC program include the fact that adaptation within a defined range is an explicit part of the program (for details, see Strohmeier et al. 2012); the building of organizational capacity through collaboration with, for example, school psychology; the ViSC coaches training; the implementation concept and its evaluation; and the AVEO self-assessment as a feedback mechanism.
However, there are also some limitations. In the ViSC program, we did not work directly with schools but rather trained ViSC coaches. This resulted in lower commitment of the schools and lower implementation quality in the evaluation study (Schultes et al. 2014), but it offered advantages for long-term implementation in the school system. Nevertheless, we recommend convincing politicians and government officials that the initial implementation of such programs should be done under the supervision of researchers and program developers. A further limitation has to do with the cultural context. In Austria, responsibility and commitment are not well established in the educational system (see Spiel and Strohmeier 2012). Of the 155 secondary schools in Vienna invited to participate in the ViSC program, only 34 (about 22%) applied. Considering the high bullying rates in Austria (Craig and Harel 2004), which indicate a need for intervention, this participation rate was low. The low engagement is also evident in the fact that of the 13 schools we asked to serve as control schools, only 5 agreed to do so. In other countries, e.g., Finland (Salmivalli et al. 2013), a greater degree of responsibility in the educational system means that participation rates in such programs are usually much higher.
We tried hard to fulfill all the critical steps identified by Meyers et al. (2012) and were successful with most of them (see above), but it was not possible to realize our intentions in all cases. To prepare the schools for the intervention, we defined criteria for program participation, such as the participation of the entire school staff. The school principals were asked to obtain this consent in advance; however, after the start of the program, it became apparent that consent had not been obtained in several cases. We further recommended that no other programs be conducted simultaneously, which was also disregarded by some schools. Finally, and importantly, the school principals’ leadership was not strong enough. This can be attributed to Austrian school law and cannot be changed easily; school autonomy has been a topic of political discussion in Austria for many years.
Overall, the ViSC program illustrates that intervention and implementation can be systematically combined in the sense of PASCIT. However, it also illustrates that detailed planning of such projects by researchers is difficult and that limitations on different levels—regarding, e.g., cultural contexts—have to be kept in mind.
But why is the systematic combination of intervention and implementation as proposed by PASCIT so difficult? On the surface, the steps seem self-evident. And what is really new in contrast to existing implementation frameworks and transfer concepts? Obviously, most, if not all, components of PASCIT (both within and across the six steps) are already known and have long been considered in intervention and implementation research. The new and demanding challenge, however, is our postulate that they be brought together in an integrative and coordinated way in order to achieve success. The I3-Approach represents a very basic but also very systematic research concept and is more than merely the sum of its steps; ignoring one aspect changes the whole dynamic. RE-AIM (Glasgow et al. 1999) recommended such a systematic view for the evaluation of public health measures; however, it has not yet been implemented comprehensively either. Nevertheless, the validity and utility of PASCIT have to be proven by future research and programs in different fields and cultural contexts.
Obviously, combining intervention and implementation research is very demanding. Therefore, appropriate acknowledgement within the scientific community is essential. Consequently, individual researchers should not be the only ones engaging in this kind of research; universities also have to include it in their missions. We therefore strongly recommend a discussion of success criteria in academia (Fixsen et al. 2011) and a deeper consideration of the social responsibility of academics and universities. The current reward system in science is oriented more toward basic than applied research. Within the applied field, it is predominantly technology and medicine that are financially supported and publicly acknowledged; mission-driven research that takes up problems in society receives less funding and attention. Consequently, the number of researchers engaged in this field is limited, although some advances have been observed in recent years.
A further problem lies in the availability of knowledge. In the social sciences, and particularly in the educational field, it is not easy to obtain robust scientific knowledge: replication studies are rare, and only probabilistic conclusions can be drawn. Here, the development of standards of evidence was of great importance (e.g., Flay et al. 2005). However, the requirements defined in these standards are not as comprehensive as those demanded by the I3-Approach. For example, the evidence standards defined by the Society for Prevention Research (Flay et al. 2005) propose criteria for efficacious and effective interventions and for interventions recognized as ready for broad dissemination, but they do not combine these criteria with the demands of implementation.
As mentioned earlier, the commitment of policymakers is crucial. Researchers need a great deal of persistence as well as knowledge about policymakers’ scope of action. In most cases, however, this is not enough: a window of opportunity is also needed, and researchers have to seize it. Here, the media can be supportive (Spiel and Strohmeier 2012).
To sum up, from our perspective, it is a continuous challenge to convey the criteria for and the advantages of evidence-based practice to politicians, public officials, and practitioners on the one hand, and to promote the recognition of intervention and implementation research in the scientific community on the other. The I3-Approach and its PASCIT steps offer one possible procedure for realizing this. Obviously, other procedures, such as bottom-up approaches, might be possible, especially where implementation capacity (e.g., sufficient competent personnel) and the respective organizational structures are already established. However, we argue for a systematic, strategic procedure rather than an incidental one.