Introduction
Methods
Rapid review
Qualitative study setting
PM Intervention | Description |
---|---|
Guiding | |
Network administrative and clinical leadership model | Administrative leaders are embedded in host hospitals and are employees of, and therefore accountable to, both their host hospital and CCO. Each clinical program within CCO has a ‘clinical lead’ counterpart within each network. Clinical leads are paid by CCO for one day per week. |
Funding contracts | Specify expectations for patient volumes, data submission, and implementation of initiatives. Some funds may be withdrawn for non-compliance with these performance expectations/deliverables. |
Monitoring | |
Scorecard | Generated quarterly to grade performance for each network (green, yellow, red) relative to each indicator and its target, and to rank performance of each network relative to others (global ranking in cancer for overall performance versus ranking per indicator in renal) |
Annual Monitoring Report | Generated annually to monitor indicators retired from the scorecard (cancer only). |
Web-based access to performance data | Secure, web-based database and analytic tools offer historic, current, and projected data on indicators; updated monthly and allows users to generate reports on specific queries. |
Public reporting | The Cancer Quality Council of Ontario generates the Cancer System Quality Index, which is a web-based public reporting tool on cancer system performance across 30 indicators. Select renal indicators are also reported on the Ontario Renal Network website. |
Improving | |
Quarterly performance review reports | Issued quarterly in preparation for quarterly meetings (below) and includes scorecard indicators as well as additional indicators; if performance is below target or declining for any given indicator, requires commentary on contributing factors and improvement plans; space also provided to summarize successes |
Quarterly performance review meetings | Held between CCO leaders and network leaders following each quarter. Occurs via video-conference in cancer system and tele-conference in renal system. Performance results for the past quarter are discussed as well as data quality or reporting problems, success stories, challenges, and plans for improvement. |
Recognition certificates | Issued annually for each indicator for networks that met target, were the top performer, and/or were the most improved. |
Escalation process for poor or declining performance | Commences with an informal conversation with network leadership regarding performance, and may progress to a formal letter with a template for a required improvement action plan and, in rare cases, to the withdrawal of funds associated with the requirement, where relevant. |
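The scorecard mechanism described above grades each network on each indicator against a target (green, yellow, red) and ranks networks against one another. As a rough illustration only, that logic might be sketched as follows; the tolerance threshold and the network data are hypothetical, not CCO's actual methodology:

```python
# Illustrative sketch of scorecard grading and ranking.
# The 5% "yellow" tolerance and the example data are hypothetical;
# CCO's actual grading rules are indicator-specific.

def grade(performance: float, target: float, tolerance: float = 0.05) -> str:
    """Grade one network on one indicator relative to its target.

    green  = target met or exceeded
    yellow = within `tolerance` (proportion) below target
    red    = below target by more than `tolerance`
    """
    if performance >= target:
        return "green"
    if performance >= target * (1 - tolerance):
        return "yellow"
    return "red"

def rank_networks(results: dict[str, float]) -> list[str]:
    """Rank networks from best to worst performance on an indicator."""
    return sorted(results, key=results.get, reverse=True)

# Hypothetical quarterly results for one indicator (target = 0.90).
results = {"Network A": 0.93, "Network B": 0.87, "Network C": 0.78}
grades = {name: grade(p, 0.90) for name, p in results.items()}
ranking = rank_networks(results)
```

In the cancer system, per the table, a single global ranking summarizes overall performance, whereas the renal system ranks per indicator; the ranking function above would simply be applied per indicator or to an aggregate score accordingly.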
Secondary analysis of qualitative interviews
Results
Rapid review
Type | Description | References | In Our Data? |
---|---|---|---|
Unintended Consequences on Providers and Organizations | |||
I. Increased Work | |||
a. Increased administrative burden | Excessive time spent on administrative tasks (e.g., documentation, data collection and submission, justifying deviation from clinical reminders when not clinically relevant) | | ✓✓ |
II. Poor Design or Use of Performance Data | |||
a. Tunnel vision | Emphasis is placed on dimensions of performance that are measured or incentivized, while other unmeasured but important aspects are overlooked | | ✓✓ |
b. Measure fixation | Emphasis is placed on meeting the performance target rather than the associated objective | | ✓✓ |
c. Suboptimization | Focusing on one component of a system and making changes intended to improve that component while ignoring the effects on other components (e.g., pursuit of narrow local objectives at the expense of broader organizational or system objectives) | | ✓ |
d. Myopia | Excessive concentration on short-term targets without consideration for long-term consequences | | ✓ |
e. Quantification privileging | Fixation on data that can be quantified, causing qualitative aspects of healthcare to be missed | | ✓ |
f. Anachronism | Lag effect between data capture and data usage causes data to not help solve current problems | [5] | ✓ |
g. Insensitivity | Assessment does not capture the overall complexity of health performance, causing the wrong providers, units, or organizations to be penalized or rewarded (e.g., contextual factors not considered, risk adjustment not performed, good performance results in a disadvantage such as improved efficiency and cost savings resulting in a lower budget the following year) | | ✓✓ |
h. Misinterpretation | Incorrect inferences made about raw performance due to a lack of understanding of the measure and its underlying methodology or to a failure to account for the full range of possible influences on performance | | ✓ |
i. Complacency | Reduced ambition to improve caused by the perception that performance is satisfactory | | ✓ |
j. Fossilization | PM system excessively rigid to the point of organizational paralysis and reduced innovation (e.g., choosing not to adopt a new technology or procedure so that current performance is maintained) | | ✓ |
k. Systemic dysfunction | Performance priorities, indicators, measurement methodologies, interpretations of data, and/or resulting actions are misaligned or contradictory across programs and hierarchical levels within an organization or between PM schemes that co-exist in the broader healthcare system | [10] | ✓✓ |
l. Resource waste | Time and money are spent on PM without achieving its underlying objectives; time and money spent on unnecessary care | | ✓ |
III. Breaches of Trust & Increased Toxicity of the Work Environment | |||
a. Misrepresentation | Deliberate manipulation of data to appear a better performer (e.g., creative accounting, fraud, upcoding) | | ✓ |
b. Gaming | Deliberate manipulation of behavior to appear a better performer (e.g., cherry-picking patients, stopping the clock for wait time indicators) | | ✓ |
c. Bullying | Pressure for performance improvement involves shaming, intimidating, or coercing staff; PM system seen as punitive and oppressive | ||
d. Loss of professional ethos/morality | PM causes provider motivation to shift from providing the best care to providing incentivized care, thereby undermining providers’ intrinsic motivation | ||
e. Reduced learning and psychological safety | A work environment focused on blame emerges and generates distrust and fear that inhibits problem-solving, learning, and innovation | ||
f. Reduced autonomy, agency and/or self-regulation | PM reduces individual, organizational, or network autonomy, agency, and ability to self-regulate due to the PM system itself and/or due to how PM was implemented (i.e., imposed on providers, rather than designed and undertaken with or by them) | | ✓ |
g. Reduced morale | Loss of belief and confidence in their organization’s mission, goals, or work, or loss of belief and confidence in PM tools and processes | | ✓✓ |
h. Team and inter-professional conflict | Reduced cooperation and increased tension between teams and professional groups due to PM | ||
i. Increased perceived injustice (due to social comparisons) | Feelings of competition, resentment, and frustration between those individuals and groups who are affected by PM and those who are not (e.g., those not affected by PM do not receive the same attention and/or resources; those affected by PM operate under more scrutiny and pressure) | N/A | ✓ |
j. Toxic ambition | Constant pressure to improve even when performance meets or exceeds the target | N/A | ✓ |
IV. Exacerbation of Inequities | |||
a. Increased resource gap | Providers that treat poorer or underserved patients may have fewer resources to invest in improvement. As a result, they perform worse and then either do not benefit from incentives or experience penalties that further exacerbate existing resource gaps | | ✓ |
b. Reduced ability to recruit necessary staff | Staff are attracted to highly rated organizations compared to lower rated organizations thereby making it more difficult for lower rated organizations to recruit staff and improve performance | ||
c. Overcompensation | Incentive payments made are higher than required to meet performance targets, thereby reducing resources for other important types or aspects of care | [5] | |
V. Politicization of Performance Management | |||
a. Political grandstanding | PM is driven by interests of governments, political parties, the media and other stakeholders | [5] | |
b. Political diversions | PM is used as a distraction by governments under pressure | [5] | |
VI. Positive Unintended Consequences | |||
a. Improved morale | Feeling of recognition and increased confidence and pride in individual or organizational performance | | ✓ |
b. Motivated learning and development | PM spurs further education and training to support improvement | | ✓ |
c. New relationships and collaborative problem-solving | Professionals, organizations, or networks come together in new and inventive ways to cope with PM | | ✓✓ |
d. Improved capacity planning | Information collected through PM allows for better internal planning and external applications | N/A | ✓ |
Unintended Consequences on Patients and Patient Care | |||
I. Inappropriate or Sub-Optimal Care | |||
a. Clinical decisions driven by PM (rather than by evidence and clinical judgment) | PM generates pressure to diagnose and treat patients in particular ways, resulting in under-treatment, over-treatment, and/or harm to patients | | ✓ |
b. Improved documentation without improved care | Providers document care provided more effectively, but the care itself is not improved | | ✓ |
c. Less continuity of care | When PM incentivizes approaches to care that result in patients interacting with multiple providers rather than or in addition to their primary provider(s) (e.g., incentives for same-day appointments and after-hours care) | ||
II. Reduction in Patient-Centered Care | |||
a. Compromised patient education and treatment choice | Providers promote incentivized treatments over non-incentivized treatments to patients or fail to obtain informed consent before conducting a test or procedure | | ✓ |
b. Compromised patient autonomy | Providers exert pressure on patients who refuse incentivized care | | ✓ |
c. Compromised patient convenience | Causing inconveniences for patients for purposes relating to PM (e.g., requiring an additional appointment that would otherwise not be deemed necessary or bringing up a topic like end-of-life care at an inappropriate time and place) | | ✓ |
d. Compromised patient engagement | Reduction in patient engagement as a result of changes in treatments driven by PM | [65] | |
e. Disregard for the patient voice | Providers give less attention and priority to patient concerns and preferences compared to PM-related aspects of care | | ✓ |
f. Erosion of trust in care | Patients lose confidence in their healthcare providers after a poor performance assessment or after experiencing or witnessing manipulation driven by PM | ||
III. Exacerbation of Inequities | |||
a. Increased inequity in access to high quality care | Providers avoid high-risk or socially challenging patient sub-groups or choose patients who can maximize positive measurement (i.e., cherry-picking). When financial incentives for high performance are re-invested in improved services, patients of high-performing services benefit to a greater degree than patients of under-performing services | | ✓ |
b. Increased healthcare disparities | Increased health care disparities in the population based on sex, race, ethnicity, language, or economic status due to (1) differences in access to high quality care or (2) improvements spurred by PM being more useful to mainstream patients | | ✓ |
IV. Positive Unintended Consequences | |||
a. Beneficial spillover effects | PM contributes to improved performance in other clinical areas that are not performance managed | ||
b. Increased patient knowledge | PM increased patient education efforts | [61] | |
c. Increased patient motivation and engagement with care | Patients more involved and compliant with recommended care due to increased education and time spent | [61] |
d. Increased patient satisfaction with care | PM promoted more comprehensive care (e.g., addressing multiple issues per visit, including preventive care) | [61] | |
e. Enhanced patient-provider communication and relationships | PM increased patient-provider communication, resulting in positive psychological feelings among patients regarding their providers and their care |
Unintended consequences of performance management in cancer and renal care in Ontario
Negative unintended consequences on providers and organizations
Increased work
“We probably have more staff tied up in data submissions and data quality work than most of the rest of the hospital. It just seems like a really labour-intensive, very resource-intensive requirement” (G21, Network, Cancer and Renal)
The administrative burden described by participants was often linked to two contributing factors. The first was the number of required performance indicators, as this CCO representative acknowledged: “It’s the sheer number of indicators that we try to push forward. At some point, you’re just diluting the capacity that exists within the regions or in the hospitals to make change” (P71, CCO, Cancer, Administrator). The second was the inconsistencies in data systems and PM requirements across the multiple oversight bodies to whom networks are accountable, as this participant explained: “For me, the big thing is 100% the data burden. They forget that we have shared accountabilities. We’re not just accountable to CCO” (G06, Network, Renal). Many participants described conflicting data requirements between organizations and oversight bodies such as their own hospital, CCO, the Canadian Institute for Health Information, the Local Health Integration Network, Health Quality Ontario, and Accreditation Canada. As such, the administrative burden was spurred in part by the unintended consequence of ‘systemic dysfunction’, described further below.

“Our resources don’t increase, our staff don’t increase, but the demand on us increases exponentially from one quarter to the next” (G06, Network, Renal)
Poor design or use of performance data
Second, some network representatives argued that select indicators reflect service performance beyond their immediate control. CCO representatives acknowledged that some indicators are aimed at stimulating collaboration across programs within and across organizations.

“There’s a lot of unmeasured confounding in some of these measures. So, we think everybody should be able to get there, but, you know, everything falls within a distribution, and someone’s going to be an outlier, and it’s not necessarily their fault that they’re an outlier” (P36, Network, Clinical Lead)
“People are really focused on numbers and really focused on the methodology. They’re not necessarily focused on what’s ultimately best for the patient because they want to look good on the scorecard and rankings” (G14, CCO, Clinical Leads)
‘Tunnel vision’ occurs when emphasis is placed on dimensions of performance that are measured or incentivized, while other unmeasured but important aspects are overlooked. Participants described tunnel vision in two ways. In the first set of examples, participants described how indicators tend to focus on select fragments of the patient journey, and on processes like wait times rather than outcomes like patient survival, which one participant described as “losing the forest for the trees” (G11, Network, Cancer). Some of these indicator-focused examples were related to the unintended consequence of ‘quantification privileging’. In the second set of examples, participants focused more broadly on the healthcare system, explaining how PM in one part of the healthcare system (in this case, in cancer care) can exacerbate problems in other parts of the system that do not operate under the same PM requirements:

“Sometimes I feel if we just play the game we could be a perfect performer, but I’m not sure our patients would be any better off…we would just learn how to play the game of being a good performer” (G22, Network, Cancer)
These system-level examples of ‘tunnel vision’ seemed to contribute to the unintended consequence of ‘increased perceived injustice’, described in the next section.

“Sometimes the cancer program is seen as the have-more program and other disease states, for people that present to hospital, are maybe the have-less. For example, gastrointestinal does not have a provincial body that’s driving performance. Or rheumatology, for example. So, those patients, I’m hearing, are being bumped or less prioritized for surgery and things like that, because they’re not associated with, for example, a surgical wait time metric” (G18, Network, Cancer)
“We’re often collecting the same data slightly differently for all four organisations… The data tends to not take on the same meaning as it should if everybody was measuring it and looking at it together” (P61, Network)
‘Systemic dysfunction’ was viewed as contributing to the unintended consequence of ‘increased administrative burden’, described earlier.

“Sometimes your own hospital strategy or direction may be in conflict with what [CCO] is trying to do so you’re trying to always play this balancing game” (G24, Network, Renal)
“Remember, there’s financial cost and opportunity cost for the amount of time we spend on PM” (G20, Network, Cancer and Renal)
We identified very few examples of each of the remaining seven unintended consequences in this category: suboptimization, myopia, quantification privileging, anachronism, misinterpretation, complacency, and fossilization.

“It does seem disproportionate in terms of the amount of effort we put into measuring particular indicators that don’t necessarily have a return on patient experience or patient outcome” (G21, Network, Cancer and Renal)
Breaches of trust & increased toxicity of the work environment
In addition to ‘insensitivity’, ‘reduced morale’ was also often coupled with the unintended consequences of ‘increased administrative burden’ and ‘systemic dysfunction’.

“They don’t feel a strong motivation to work toward achieving the provincial benchmark, if they feel like they’re never going to be able to reach it anyway” (G03, CCO, Administrators)
“All of a sudden over 50% of their patients were being seen the same day they were referred, which is amazing, and it doesn’t make sense either…I think because they changed how they define the referral date as whatever was convenient to them. So, I think we have to be careful that we don’t push people so far that they start making up information so that we get off their backs” (P29, CCO, Cancer, Administrator)
“We’ve had that in wait times reporting, where certain cases are being reclassified to incorrect buckets to make it appear as though they’re being completed on time” (P61, CCO, Renal, Administrator)
Another ‘gaming’ behaviour identified in the data was intentionally prioritizing indicators that were more likely to lead to an increase in overall performance and ranking:

“My own hospital is absolutely gaming wait times…I have no doubt that there has been no actual improvement in wait times at this institution” (G14, CCO, Cancer, Clinical Leads)
The unintended consequences of ‘misrepresentation’ and ‘gaming’ were closely related to ‘measure fixation’.

“We sacrifice some indicators for others. For example, we have a look at all the indicators, and know that we’re not going to be able to make a very big dent on this indicator due to…the challenges of our region. So, then we focus on the indicators that we actually know we can make an impact on because we don’t want to be [ranked] 14th” (P60, Network)
We identified two unintended consequences in this category that are not described in the literature: (1) ‘increased perceived injustice’ and (2) ‘toxic ambition’. The literature describes ‘team and inter-professional conflict’ as an unintended consequence of PM due to workload, resource, and accountability issues [26, 27, 32, 36]. ‘Increased perceived injustice’ is similar, but in our data this occurred at the program or system level in response to social comparisons between groups affected by PM and those that are not, as this quote illustrates:

“[CCO] sets the priorities now whereas programs used to be able to create that strategy envisioned for themselves. Some of that is now being driven based on [CCO] priorities and I think that might take away from some of the innovation and creativity that might have happened at a program level before” (G25, Network, Renal)
‘Increased perceived injustice’ appeared to be exacerbated by ‘tunnel vision’ and ‘systemic dysfunction’.

“My colleagues in other programs sometimes feel that we are favoured or the spotlight is more on us than on them, and it creates an element of resentment, which is really not of our own doing. It’s just our requirement to report and to produce” (G11, Network, Cancer)
“We got lots of push back that we needed to increase [the target]. Whether that makes sense or not, I’m not sure. People are doing well and exceeding the target. Isn’t that good enough?” (P29, CCO, Cancer, Administrator)
We identified very few, if any, examples of the remaining three unintended consequences in this category: ‘reduced learning and psychological safety’, ‘loss of professional ethos/morality’, and ‘bullying’.

“Penalizing a high performing program that isn’t getting even better? The optics are terrible…Some of them say, why do you even give us a target? Why don’t you accept that we’re doing a darn good job and move on to some other initiative? I’m sympathetic with that. I think above a certain level, we should leave them alone and stop hammering them” (P53, CCO, Renal, Clinical Lead)
Exacerbation of inequities
Politicization of PM
Positive unintended consequences
“I’ll brag if we have a top ten one because it’s good for the team to hear that, and good for those who are working hard day in and day out for the patient population…They’re proud of being high performers” (G19, Network, Cancer)
PM also occasionally contributed to increased confidence and collective efficacy, as described by this network representative:

“The meetings happening quarterly are very helpful and give us a chance to relate to Cancer Care Ontario some of the work we’ve been doing that we’re proud of” (G21, Network, Cancer and Renal)
PM sometimes spurred ‘motivated learning and development’ beyond a reactive response to performance feedback or incentives, as this quote demonstrates:

“Our staff and our leaders said yeah, we can do something that actually improves things. You know, we don't have to be defeatist about this. We proved to ourselves that we could actually do this” (G16, Network, Cancer, Administrator).
‘Motivated learning and development’ often occurred together with ‘new relationships and collaborative problem-solving’. Network representatives frequently described how CCO’s PM approach spurred inter-network mentorship:

“We’re not waiting to be told how we did. We actively look at our data on an ongoing basis and pick out where we can improve and really focus resources on that” (G13, Network, Cancer)
“We really look [at the data] and reach out to other programs to see how they’re actually achieving some of their targets so we can collectively share our ideas and do better” (G26, Network, Renal)
We identified one positive unintended consequence not described in the literature: ‘improved capacity planning’. A few participants described the use of information collected for PM purposes to support internal planning and external applications:

“The ability to reach out to peer programs, better performing programs and gain nuggets of, wow, how have you done that, what can we pick up, what can we learn?” (G19, Network, Cancer)
“Although the measurement might be in order to incite improved performance…You can't create a strategic plan and get [Ministry] approval for capital initiatives in the absence of that information. So, I think it's leveraged in different ways than originally intended” (G06, Network, Renal)
Negative unintended consequences on patients and patient care
Inappropriate or sub-optimal care
Reduction in patient-centered care
We identified very few examples of ‘disregard for patient voice’, ‘compromised patient autonomy’, and ‘compromised patient education’, though in the few hypothetical examples shared, the three unintended consequences were intertwined:

“Even the patients are experiencing survey fatigue because we’ve got the patient experience survey that came from [CCO], but we also have our own organizational patient experience survey too…and they're not certain why they're getting surveys with similar questions” (G23, Network, Renal)
“If we’re supposed to be patient-centered, patient-focused, and we’re being mandated that this must be completed, we’re not truly being sensitive to the needs of the patient. Yes, this is very, very important, but, in the real world, when you’ve got that person in front of you, it may not be the time” (P69, Network, Cancer)
The two remaining unintended consequences in this category – ‘compromised patient engagement’ and ‘erosion of trust in care’ – were not evident in our data.

“Many of these targets involve patients. It could result, theoretically, in having pressure applied to patients, to do things that patients don’t want to do. That, to me, is a bridge too far” (P42, CCO, Renal)
Exacerbation of inequities
“I think that the regions trying to address performance issues for the majority of their populations, leaves out the marginalised populations – the homeless, the low-income, the ethnic populations. I think focusing on specific indicators, and moving those indicators closer to the target, can actually increase inequities in those populations because you’re focused on that indicator” (G03, CCO, Administrators)
Positive unintended consequences
Mitigating unintended negative consequences
“It’s not as though CCO comes up with these metrics and targets on their own. Provincial clinical leads and providers in our communities of practice are involved in developing them” (G15, Network, Cancer and Renal)
In addition to involvement in indicator selection and development, ongoing dialogue between CCO and network leaders ensures that issues regarding data quality, indicator sensitivity to change, or impact on staff morale are addressed. For example, participants described the removal of two indicators from the scorecard “at the request of the RVPs [Regional Vice Presidents]” because they lacked responsiveness to improvement efforts and were negatively impacting staff morale (P02, CCO, Administrator).

“Our stakeholder engagement is very strong. We go out to the groups, we take the initial data, we see if they have feedback and if it makes clinical sense, we rework it, if needed, then maybe eventually we get to setting a target but it’s not something we roll out right away. It’s often a year, if not more, of work and socialization” (G14, CCO, Clinical Leads, Cancer)