Background
Improving the quality of care in maternal-newborn settings is an international, national, and local priority [1–3]. Despite ongoing improvement efforts, pregnant and birthing people and their infants continue to receive care that is not always aligned with the best available evidence [4, 5]. Moving evidence into practice is complex and remains an ongoing challenge in healthcare. Studies have revealed that individuals and organizations face many barriers to implementing evidence into practice, including a lack of skills with few opportunities for training, unsupportive organizational cultures and the undervaluing of evidence, as well as limited time and physical and human resources [6]. In maternal-newborn care specifically, clinical practice changes can be particularly complex due to the involvement of different healthcare providers (e.g., nurses, physicians, midwives), care that focuses on two different patient populations (i.e., the pregnant or birthing person and the infant), and the fact that some practices are affected by separate hospital units (e.g., birthing unit, mother-baby unit, neonatal intensive care unit).
Implementation science is defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice” [7]. Although the field of implementation science has developed rapidly over the past two decades, its full potential has not yet been realized [8]. There is emerging discussion about how the knowledge gained through implementation science has largely remained in the scientific domain and has not been well translated into practice-based settings such as healthcare [9, 10]. It is important to assess if and how implementation evidence, principles, and tools are being applied, in order to identify opportunities to optimize evidence-informed implementation.
Within maternal-newborn care, there are increasing examples of implementation science applications, focusing on topics such as prioritizing content areas for implementation research [11] and practice [12], identifying and examining the effectiveness of different implementation strategies [13–15], and exploring barriers and facilitators to implementation [15–17]. While this literature makes an essential contribution to advancing our understanding of evidence-informed strategies to implement evidence and the potential challenges, it typically has not focused on the entire implementation process needed to bring about change (i.e., taking a planned action [18] approach to change). Recently, more examples of how teams work through the overall implementation process have been published as best practice implementation reports [19–21]. However, these reports typically focus on a practice change in a single setting, and by virtue of publishing their work, likely over-represent teams that are more familiar with (and potentially more successful in) implementation processes. To complement this existing literature, there is a need to shift from learning about single implementation strategies or single projects to also looking more holistically at how maternal-newborn teams implement practice changes in their day-to-day work.
The province of Ontario, Canada, provides a unique opportunity to learn about the process of moving evidence into practice. Every birthing hospital in the province has access to a perinatal data registry called the Better Outcomes Registry & Network (BORN) Ontario, which captures data on nearly every birth in the province [22]. Contributing hospitals can use their own BORN data to facilitate practice improvements [22], for example, to identify evidence-practice gaps, understand current practice, and monitor and evaluate implementation projects. Although hospitals have access to this large and robust data system, it remains largely unknown what processes teams are using to implement practice changes and how well their processes align with current best practices in implementation science.
In 2012, BORN launched the Maternal Newborn Dashboard (“the dashboard”), which is an online audit and feedback system that maternal-newborn hospital teams can use to facilitate practice change improvements. The dashboard includes six key performance indicators related to practices such as newborn screening, episiotomies, breastfeeding, repeat elective cesarean sections, Group B streptococcus screening, and inductions [23]. In 2014, an evaluation of the dashboard commenced, providing an opportunity to learn how Ontario maternal-newborn hospitals approach practice changes and how they use the dashboard to support their work. One part of the evaluation involved interviews with nursing leaders in Ontario maternal-newborn hospitals about how they implement practice changes.
Using these data, we aimed to understand maternal-newborn leaders’ usual approaches to implementing practice changes in their hospital units, including what steps they do and do not take, and to identify potential areas where the implementation process could be improved.
Discussion
In this study we aimed to explore how maternal-newborn hospital teams implement practice changes in their units. We learned about the usual implementation processes, focusing on what steps teams do and do not take. By comparing the described steps to an implementation framework, the Implementation Roadmap [36], we identified which steps were most frequently discussed (e.g., identifying a problem), which were less frequently discussed (e.g., selecting and adapting evidence, evaluation), and which were discussed frequently but not optimally (e.g., barriers assessment). We identified many strengths, including efforts to work through varied implementation steps, the depth of experiential knowledge, and efforts to engage point-of-care staff. By noting gaps in the implementation process, we identified potential areas where further capacity development and support may be needed.
Across the 22 sites, only one participant described all seven steps, with most describing four or fewer steps, potentially signaling that sites’ implementation processes are not comprehensive. Although we do not know what the sites’ implementation outcomes were (and therefore cannot make inferences about the effectiveness of the different approaches), our previous work identified that teams who were successful in their practice change initiatives were more likely to have used a formal framework (like the Implementation Roadmap) to guide their implementation process [39]. Furthermore, we identified variability across the sites regarding which implementation steps were taken. This is consistent with other studies that have reported differences in what, when, and how implementation steps are taken [40, 41]. There are several potential explanations for this variability. First, we expect that the nature of the change (e.g., size, complexity) may influence the number of steps that teams take, with smaller, simpler changes resulting in fewer steps taken. Second, the urgency of a change may prevent some steps from being taken (e.g., field-testing), such as when units are required to implement changes immediately due to organizational or provincial mandates (as we recently observed during the COVID-19 pandemic). Third, it is likely that the education and training experience of the implementation leader influences the process used. In our study, participants were predominantly nursing leaders who would have been trained in nursing clinical practice. Despite nurses frequently being tasked with improvement work, implementation practice and quality improvement are not typically included in nursing education programs, and there are few opportunities for ongoing professional development in these areas [42, 43]. There is a need to better equip nurses with implementation science knowledge and skills to better position them to translate evidence-informed practices into care [44].
Some of the most frequently identified implementation steps were identifying a problem and best practice, assembling local evidence, and monitoring and evaluation. While it is promising to see so many sites engaging in these steps, it is important to consider how access to the BORN dashboard and registry may have contributed to these high numbers, which may therefore not reflect what is occurring in the broader maternal-newborn system outside of Ontario. The dashboard facilitates identification of a problem by alerting teams with a colored signal (red or yellow) when best practice is not being met; it assists with learning about current practice by allowing users to drill down into specific cases to explore factors that may be driving the observed rates; and it allows users to monitor changes over time by observing changes in their rates and colored signals [23, 45]. Other settings may not have access to a similar data system designed to facilitate and improve care; instead, many teams rely on data systems designed to collect data for clinical and administrative purposes rather than for monitoring and evaluation [46, 47]. While our findings speak to the value of a dashboard for facilitating specific steps in the implementation process, they also highlight the need for teams to actively engage in implementation steps beyond using a dashboard. For instance, although many participants reported monitoring their dashboard (a largely passive activity), no participants described developing an evaluation plan beforehand, and only one participant described undertaking a process and impact evaluation. More attention to active evaluation planning, consideration of broader outcomes (e.g., implementation outcomes, service outcomes, and client outcomes [48]), and resources to support evaluation are needed to better assess the effect of practice change initiatives.
Although assessing barriers was one of the most frequently mentioned steps, study participants rarely described a comprehensive or theory-based approach, with some relying on experiential knowledge of barriers acquired through past projects. While the application of tacit knowledge is particularly useful in familiar situations [49], each practice change initiative is unique, and relying solely on knowledge gained through past projects risks missing opportunities to learn about other relevant, current factors. In addition, few participants described selecting their implementation strategies based on the identified barriers or evidence. This challenge has been described elsewhere, with teams selecting strategies based on familiarity or the ISLAGIATT (“it seemed like a good idea at the time”) principle [50, 51]. While participants provided some examples of implementation strategies, this list is far from exhaustive compared with the wide-ranging implementation strategies identified in the literature [52]. In our study, the most frequently identified implementation strategies were educational in nature, which aligns with previous literature [53, 54]. However, the barriers to practice change are often multi-factorial (e.g., at the individual, interpersonal, organizational, and system levels), going beyond individual knowledge deficits. This requires implementation strategies that are tailored to the change being implemented, the identified multi-level barriers, and the implementation context [55]. Given there is evidence to suggest that tailoring implementation strategies to identified barriers can positively influence practice changes [56], there are opportunities to build further capacity in this area.
Fewer than half of the participants named a process or framework that they use to guide the implementation process. Several study participants stated they used a framework or process but could not name it. Of those who did identify a process or framework, none identified a comprehensive implementation framework (e.g., a planned action framework [18]) that guided the full implementation process. Unsurprisingly, the most frequently identified processes and frameworks were grounded in quality improvement approaches (e.g., Lean, PDSA). Recently, there has been increased interest in the intersection between quality improvement and implementation science, with calls for the two complementary fields to better align [57, 58]. Adding implementation science to existing quality improvement approaches may have several benefits, including an increased emphasis on using evidence-informed practices and a focus on applying theory-driven and systematic approaches to assessing determinants of practice and selecting implementation strategies [36]. We assert that implementation science can enhance (not replace) these existing quality improvement approaches and tools, providing a systematic and comprehensive approach for teams.
We identified examples of how teams engaged point-of-care staff to varying degrees in the implementation process, with most providing examples of two-way exchanges between the implementation working group and staff. Governance structures such as unit councils were identified as a means to facilitate this engagement. The COVID-19 pandemic may have resulted in changes to shared governance, with clinical priorities quickly shifting and “nonessential” activities such as council meetings sometimes suspended [59]. Because our data were collected pre-pandemic, it is unknown what shared governance structures remain in place and how this may have changed staff engagement. Our future work will explore the existing shared governance infrastructure and how it is used to facilitate engagement of point-of-care staff in the implementation process. While our study provided many examples of how point-of-care staff are engaged in practice changes, only one study participant described engaging patients or families, and six described using patient education and engagement as an implementation strategy. Given the limited examples of engaging patients in the working group itself, there remain opportunities for earlier partnership with patients in the implementation process. Indeed, patients and caregivers can contribute meaningfully to the implementation process and can be a powerful motivator for change [60].
Limitations and strengths
We acknowledge this study has several limitations, many of which are inherent to conducting a secondary analysis. The interview questions were not designed to probe for the different Implementation Roadmap steps. The results therefore need to be interpreted with caution; a participant not describing a step may reflect a lack of precision in the interview questions rather than an indication that the participant did not complete the step or lacks the knowledge or skills to do so. Conversely, a participant stating they completed a step does not necessarily mean it was actually completed (or completed optimally). In addition, implementation science continues to grow yearly, and it is possible that at the time of the interviews, participants may not have had access to the same implementation language to articulate the Implementation Roadmap steps. However, implementation science has not been well translated to practice-based settings [9, 10, 43], and so this challenge would likely remain if the interviews were conducted today. To mitigate this, we were liberal with our coding and coded according to participants’ descriptions, regardless of the specific terms used.
Next, our results may be influenced by social desirability bias, whereby participants shared information they perceived as socially acceptable rather than information that reflects their true reality [61]. For instance, some participants may have attempted to describe a more thorough implementation process than is actually used in practice. Brief or vague answers may be an indication of social desirability tendencies [61]; we were therefore attentive to this in our analysis, identifying where participants provided short answers without elaborating on when or how a step is actually performed, and highlighting this in our results (e.g., barriers assessment). However, social desirability was likely not an issue across all participants, as some did explicitly acknowledge their lack of awareness or completion of some steps.
Finally, it is important to acknowledge that at the time of analysis, the data were eight years old, and these results may not reflect implementation practice in maternal-newborn hospitals today. To mitigate this limitation, each member of our research team, many of whom are clinicians embedded in maternal-newborn care, was involved in interpreting the data. Based on our collective experience, and the knowledge that practice changes slowly [62], these findings would likely still ring true today. These results are being used to develop a survey to distribute to all Ontario maternal-newborn hospital units to learn what Implementation Roadmap steps teams are currently taking, their confidence in completing them, and their perceptions of their importance. The results we report here are informing the development of survey questions to probe identified gaps and to tailor the question wording to align with local language. The upcoming survey will complement this qualitative secondary analysis by providing updated data from a wider sample of hospitals, allowing us to better understand what gaps and needs remain.
Conducting a secondary analysis also offered several strengths. Given the current demands on our health system and its leaders, we may not have been able to enroll the same number of study participants under today’s conditions. Given the sufficient fit between the original dataset and our question, conducting a secondary analysis eliminated the participant burden that would have been required to collect new qualitative data from busy clinicians and administrators [24]. Another strength of our study was the application of a recent evidence-informed framework (the Implementation Roadmap [36]) that synthesizes theoretical and experiential knowledge in implementation science and practice, allowing us to interpret the data in a new light and identify future areas for research and practical support. Finally, our study makes a unique contribution to the literature by describing and comparing the implementation approaches of many maternal-newborn teams. With data on 22 sites (about one-quarter of birthing hospitals in the province), our sample provides insight into the implementation processes of diverse teams, highlighting commonalities and differences. These insights serve as potential areas to focus future implementation capacity-building efforts in maternal-newborn health services.