Context
In 2012, the Ontario Ministry of Health and Long-Term Care in Canada announced an ambitious policy aimed at improving healthcare delivery in the province: Quality-Based Procedures (QBPs). QBPs represent the first major change in Ontario's hospital funding mechanism since the introduction of global budgets in 1969 [28]. QBPs, a variant of activity-based funding [29], replace a small portion of each hospital's annual lump-sum global budget with a 'patient-based' funding approach. Under this QBP funding model, hospitals are paid a pre-set reimbursement rate for episodes of care based on diagnoses or procedures [10]. To guide the provision of care, each QBP has an implementation handbook that includes an evidence-based, best-practice clinical pathway.
Data collection and analysis
In 2016, we undertook a qualitative evaluation of the implementation of QBPs using an embedded case study design, involving document review and semi-structured interviews with 45 key informants identified mainly through purposeful sampling [10, 30]. Participants were involved in designing the QBP policy (n = 12, 'level one' participants), in enabling adoption of QBPs across hospitals (n = 11, 'level two' participants), and in implementing QBPs within hospitals (n = 22, 'level three' participants). For hospital implementers (level three), we selected five hospitals from the initial 71 hospitals implementing QBPs across Ontario. We used a purposeful, stratified sampling approach based on a survey of executives from 16 provincial agencies; these executives were well positioned to comment on hospitals' success, or lack thereof, with regard to QBPs. The five selected hospitals varied in size, geographic location, status as an academic versus community hospital and perceived relative success with QBPs [10]. Within these five hospitals, we used purposeful sampling to interview four hospital leadership groups (chief executive, financial/decision support, clinical and medical). Snowball sampling was pursued beyond these groups where appropriate.
Interviews were recorded, transcribed verbatim and coded inductively using Quirkos® qualitative data analysis software. Thematic analysis, incorporating framework analysis, was used to generate themes from the data. Additional details on QBPs and on the study's recruitment, data collection and analytical approaches are provided elsewhere [10, 30]. Ethics approval was received from the Women's College Research Institute Research Ethics Board (REB# 2016–0016-E).
Thematic analysis results
The results of the qualitative evaluation described above suggest that QBP implementation was suboptimal [10]. QBPs were designed with a 3-year phase-in, with the intent to fund 30% of care by Fiscal Year 2014/15. Yet, as of Fiscal Year 2016/17, only about 15% of care was funded by QBPs, suggesting challenges with implementation. Participant comments from all three levels corroborate these challenges, as demonstrated by the sample quotes below:
“I don’t know [how it is going]. I think some would say QBPs haven’t really lived up to their expectations. Others probably believe that it’s going really well. It all depends who you talk to” (Level (L)1, Participant (P)8).
“[We are] pushing back and saying slow down because this is tougher to manage than we thought, and it’s got all kinds of complications in the implementation … I think the execution needs to be improved for the whole QBP and health system funding reform process” (L2, P13).
“There has not been change management dollars behind change. Fundamentally, change actually costs to implement and then you get the benefits down the line” (L3, P23).
The thematic analysis revealed that 4 years into implementation, confusion and misunderstandings of the primary goal of QBPs and of the QBP funding mechanism persisted [10]. These differences in understanding of QBPs within and across stakeholder groups prompted the research team to revisit the coded interview data using a shared mental models (SMM) lens. Select codes were reviewed to identify shared or divergent mental models using the categories in Table 1. Examples of the codes we reviewed include (1) 'Purpose/goals of QBPs', (2) 'Mechanisms by which QBPs will achieve intended goal(s)' (including the sub-code 'Degree of shared understanding of how/why QBPs work') and (3) 'Evaluation of QBP implementation' (including the sub-code 'Degree of shared impression of implementation success/failure').
A Shared Mental Models lens on the implementation of Quality-Based Procedures in Ontario, Canada
We identified three key examples of divergent mental models in QBP implementation, each of which maps to a SMM type in Table 1. Below, we describe these divergent mental models and explain how they may have affected implementation.
The interviews revealed a lack of consistency and clarity over time in stakeholders' understanding of the goals and the means by which those goals would be achieved [10], suggesting weak policy-related SMMs. "A significant challenge in the implementation of QBPs", noted one participant, "is that there has been no clarity as to what the primary purpose is" (L1, P45). Some participants argued that QBPs primarily aimed "to engage clinicians" (L1, P4), "focus on quality" (L2, P12) or "reduce practice variation" (L1, P8), while others noted that "QBPs are really more about funding than they are about quality" (L3, P37). Reflecting on the policy as a whole, two participants involved in the implementation of QBPs in hospitals stated, "We don't really understand what QBPs are all about. Most of us don't even know what the right questions are to ask" (L3, P8) and "I don't think that everyone in the organisation has the same understanding of QBPs" (L3, P7). Another participant suggested that "we should have had a collective think about how to do it [QBPs], and that did come later, but it came after the fact" (L2, P12).
Differing perspectives on the definition and goal(s) of QBPs contributed to divergent views of who within hospitals should lead QBP implementation [10], suggesting weak stakeholder-related SMMs. Some participants, for example, argued that "the CFO [Chief Financial Officer] plays an essential role. That's where the leadership is for the most part for a QBP" (L1, P8), while others "made a deliberate decision for the implementation to be led at a clinical programme level, not by the finance team" (L3, P43). One participant recounted a conversation among senior leaders regarding who the executive sponsor for QBPs should be within their hospital: "A clinical VP [Vice President] was one of the nominees. We say, 'it should be you because this is about quality of care'. He says, 'No, QBPs are more about funding. It's health system funding reform. It's a financial thing. It should be led by our CFO.' And our CFO was sitting on the other side of the table saying 'No, this is about clinical quality improvement. Yes, it has a financial component, but this really has got to be owned by the clinical service areas'" (L3, P44).
Finally, there appeared to be varied levels of trust in both the expressed promise of QBPs and in those overseeing and leading QBP implementation. These divergent beliefs resulted in some stakeholders feeling cynical about QBPs and others feeling optimistic. For example, one participant attributed provider non-compliance with QBP pathways at his site to an "I don't believe in it, I don't have to do that" attitude (L3, P42), while another participant described the opposite sentiment: "They said 'we're going to implement the recommendations in these handbooks, because we trust that they're the right thing. There are smart people on these expert panels. They seem to be based on evidence. So, we're going to give it the benefit of the doubt'" (L1, P14). Leader behaviours influenced trust, as this participant also recounted: "They need to believe it too. Some of them come and sit on a committee and then they go off and bad mouth something and that's all it takes for people to say 'oh, man, the leaders don't really believe in this'" (L1, P8). A participant from an agency supporting QBP adoption noted that, "People are completely open to using QBPs … But they need to have the confidence that the decisions being made are practical and responsible" (L2, P20).
The examples above demonstrate the extent to which policy-related, stakeholder-related and belief-related mental models failed to converge across stakeholders. A participant involved in the design of QBPs articulated the challenge of congruence nicely: “It’s one thing to do this work at the provincial level, but unless those signals get transmitted clearly to the local level, then you’re going to have a breakdown in how it gets understood and what the response to that signal is” (L1, P4).