Introduction

Antimicrobials are prescribed for up to a third of hospital inpatients, often inappropriately [1–3], and more than two thirds of critically ill patients are on antimicrobials at any one time [4]. Because of our current understanding of antimicrobial resistance (i.e., that it results from Darwinian selection pressure exerted on bacteria and fungi by antimicrobials), coupled with the observation that rising antimicrobial resistance often coincides with increased antimicrobial consumption [5–7], efforts to curb unnecessary antimicrobial use have attracted increasing attention [8, 9••, 10]. Although it is widely accepted that antimicrobial use accelerates the development of resistance, the challenge with antimicrobial prescribing lies in the need to balance two conflicting goals: providing therapy adequate to treat documented or presumed infection, and minimizing antimicrobial use to avoid adverse drug events (e.g., Clostridium difficile infection and allergy), to limit the emergence of antimicrobial resistance, and to reduce costs.

The first use of the term “antimicrobial stewardship” probably dates back to a 1996 article in which McGowan and Gerding asserted that appropriate use of antimicrobials might avert, or even reverse, trends in antimicrobial resistance [11]. Since that time, antimicrobial stewardship programs (ASPs) have become increasingly prevalent in tertiary and quaternary care hospitals, although smaller, community-based hospitals are increasingly paying attention to antimicrobial use. However, hospitals and their ASPs struggle to identify appropriate measures of success for such programs. This article reviews the various measures available to ASPs and to the administrators who oversee them, to help guide antimicrobial stewards in assessing their programs and interventions.

Antimicrobial measures

Consumption measures are metrics that reflect an aggregate or average amount of antimicrobials consumed at the level of the patient, a hospital unit or service, or an entire institution. They are often reported with patient-days as the denominator (e.g., per 1,000 patient-days) to standardize for bed utilization, although mean daily dose and total quantity of therapy have also been reported [12••]. ASPs may draw on any of several sources for this information, including ordered antimicrobials, dispensed antimicrobials, and administered antimicrobials. The most accurate and relevant of these is administered drug [13•], but most institutions do not have the infrastructure or capability to capture it. Other hospitals choose to use purchased antimicrobials, relying on business data rather than clinical data. The downsides of this approach are obvious, but imperfect yet consistently acquired data are likely better than no data at all.
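As a simple illustration of the normalization step, the sketch below (Python, with hypothetical figures) expresses an aggregate consumption count per 1,000 patient-days; the same arithmetic applies whether the numerator is DDDs, DOTs, or grams.

```python
# Minimal sketch of normalizing aggregate consumption to bed utilization.
# The numbers are hypothetical and serve only to show the arithmetic.

def per_1000_patient_days(units_consumed: float, patient_days: float) -> float:
    """Express any consumption unit (DDDs, DOTs, grams) per 1,000 patient-days."""
    return units_consumed / patient_days * 1000

# Example: 5,740 DDDs dispensed over a month with 8,200 patient-days.
print(per_1000_patient_days(5740, 8200))  # 700.0 DDDs/1,000 patient-days
```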

Antimicrobial defined daily dose

The most widely accepted measure is the defined daily dose (DDD), a metric developed in the 1970s and subsequently refined and promoted by the World Health Organization Collaborating Centre for Drug Statistics Methodology. The DDD “is the assumed average maintenance dose per day for a drug used for its main indication in adults” [14]. In simple terms, a DDD is the amount of drug a typical patient might receive on any day for therapeutic purposes. The DDD was never intended as a metric for antimicrobial stewardship, and there are particular reasons why it probably should not be used to study the impact of stewardship interventions. For one, the DDD biases against combination therapy, even when that therapy might be narrower in spectrum (e.g., combining metronidazole and cefazolin for an intra-abdominal infection will result in double the DDDs of using piperacillin–tazobactam or meropenem). Because DDDs assume routine dosing, programs might be “penalized” for using appropriately higher doses when necessary, such as in patients with obesity or central nervous system infections. Conversely, reduced dosing for renal dysfunction will underestimate antimicrobial exposure [15]. Similarly, using DDDs in pediatrics yields largely uninterpretable data. Finally, because administered dosing often differs from the DDD standard for several drugs, it is difficult to infer days of therapy (DOT) from DDDs or to draw conclusions about the relative use of one antimicrobial compared with another [16].

However, an important advantage of DDDs is the relative ease with which hospital systems can report consumption in these units: most pharmacy departments have a mechanism to calculate the overall quantity of antimicrobials prescribed, dispensed, or consumed, so DDDs/1,000 patient-days are relatively easy to calculate if bed utilization data are also available. Additionally, institution-wide consumption can be benchmarked against similar institutions. The landmark guidelines on antimicrobial stewardship from the Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America advocated DDDs/1,000 patient-days as a universal metric for hospital-based ASPs [10].
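To make the counting logic concrete, the short sketch below (Python, with hypothetical reference values and regimens rather than actual WHO ATC/DDD index entries) shows why one day of two-drug therapy accrues twice the DDDs of one day of a single broad-spectrum agent.

```python
# Minimal sketch of why DDD counting penalizes combination therapy.
# The reference values and regimens below are hypothetical placeholders,
# not actual WHO ATC/DDD index entries or dosing recommendations.

DDD_REFERENCE_G = {
    "narrow_drug_A": 1.5,   # hypothetical reference amount (grams per day)
    "narrow_drug_B": 3.0,   # hypothetical reference amount (grams per day)
    "broad_drug_C": 14.0,   # hypothetical reference amount (grams per day)
}

def total_ddds(grams_administered: dict[str, float]) -> float:
    """Total DDDs = sum over drugs of grams administered / DDD reference."""
    return sum(g / DDD_REFERENCE_G[drug] for drug, g in grams_administered.items())

# One day of two-drug therapy at reference dosing counts as 2 DDDs ...
print(total_ddds({"narrow_drug_A": 1.5, "narrow_drug_B": 3.0}))   # 2.0
# ... while one day of a single broad-spectrum agent counts as 1 DDD.
print(total_ddds({"broad_drug_C": 14.0}))                          # 1.0
```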

Antimicrobial days of therapy

In contrast to the DDD, DOT offers more clinical relevance to the healthcare provider [17]. Whereas DDDs bear a dubious relationship to the treatment patients actually receive, DOTs do not. They are relatively intuitive and correlate with evidence from clinical trials in which antimicrobials are given for a fixed duration. Some scenarios do pose problems, however (e.g., drugs with a long half-life, especially in the setting of renal failure). This has led to the concept of “exposure days” rather than DOT, although the distinction appears not to be relevant from a practical point of view [18]. Most importantly, most hospitals, especially smaller ones, are unable to calculate DOTs accurately and easily. A limitation that DOT shares with DDD as a metric of antimicrobial stewardship impact is the incentive it creates to use broad-spectrum monotherapy (because a patient receiving two antimicrobials for 7 days contributes 14 DOTs). As mentioned above, DOT is broadly applicable to a pediatric population (including neonates), whereas most other measures (including DDD) are not.
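For illustration, the minimal sketch below (Python, with hypothetical administration records) shows the counting logic: each calendar day on which a patient receives any dose of a given agent contributes one DOT for that agent.

```python
# Minimal sketch of DOT counting from administration records (hypothetical data).
# Each calendar day on which any dose of a given agent is administered
# contributes one DOT for that agent.

from collections import defaultdict
from datetime import date

# (patient, drug, calendar day of an administered dose) -- hypothetical records
administrations = [
    ("pt1", "vancomycin", date(2024, 3, d)) for d in range(1, 8)
] + [
    ("pt1", "ceftriaxone", date(2024, 3, d)) for d in range(1, 8)
]

days_on_drug = defaultdict(set)
for patient, drug, day in administrations:
    days_on_drug[(patient, drug)].add(day)   # repeat doses on one day count once

total_dot = sum(len(days) for days in days_on_drug.values())
print(total_dot)   # 14 DOTs: 7 days for each of two concurrent agents
```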

Antimicrobial-free days

A measure that holds particular appeal is antimicrobial-free days. It avoids (or overcomes, depending on one’s perspective) the issues related to spectrum of therapy and monotherapy versus combination therapy, focusing simply on whether or not patients are receiving antimicrobials. It has mostly been used as a disease-specific consumption measure (specifically for ventilator-associated pneumonia) [19, 20], but it also holds appeal as a broader measure of ASP impact.

Grams of antimicrobial therapy

Using a different approach to the problems with DDDs, some investigators have become interested in the overall mass of antimicrobials consumed by patients [13•]. It is unclear what this offers beyond DOT or DDD, or whether the mass of antimicrobial administered bears a stronger relationship to antimicrobial resistance and other adverse effects of antimicrobial use than other commonly used metrics.

Antimicrobial cost of therapy

Perhaps the easiest metric for most hospitals to acquire is cost of therapy (COT) [21]. Many providers are reluctant to report cost because (in most healthcare jurisdictions) antimicrobial acquisition costs vary from institution to institution and over time, and are markedly affected by genericization of antimicrobials. However, because newer drugs tend to be broader in spectrum and more expensive, COT is one of the few readily available metrics that address spectrum of therapy in some manner. Additionally, because cost savings are an important outcome for ASPs, reductions in COT are useful for ASPs to report [22–25, 26••, 27]. Unfortunately, COT should not be used for benchmarking because of differences in purchasing agreements between institutions and variability in costs from country to country.

Antimicrobial prevalence

Because relatively sophisticated data systems are required to measure the indicators listed above, some healthcare settings need a nimbler yet still reliable way to measure consumption. One such method is point prevalence. One of the earliest examples was the European Prevalence of Infection in Intensive Care (EPIC) study, an international study of infection in intensive care later replicated in EPIC II [4, 28]. Increasingly, larger coordinated efforts are using point prevalence reports of antimicrobial exposure as a replicable measure of antimicrobial consumption in hospitals [29•, 30, 31]. Malcolm et al. have reported that point prevalence studies can successfully drive quality improvement in antimicrobial prescribing [29•].
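Part of the appeal is the simplicity of the calculation; a minimal sketch (Python, using an entirely hypothetical survey-day census) is shown below.

```python
# Minimal sketch of a point-prevalence calculation using a hypothetical
# survey-day census: the proportion of inpatients receiving at least one
# antimicrobial on the survey day.

survey_day_census = [
    {"patient": "pt1", "on_antimicrobial": True},
    {"patient": "pt2", "on_antimicrobial": False},
    {"patient": "pt3", "on_antimicrobial": True},
    {"patient": "pt4", "on_antimicrobial": False},
]

exposed = sum(p["on_antimicrobial"] for p in survey_day_census)
prevalence = exposed / len(survey_day_census)
print(f"{prevalence:.0%} of surveyed inpatients were on antimicrobials")  # 50%
```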

Effect of length of stay on aggregate consumption measures

Care in acute care hospitals globally has evolved toward higher acuity of inpatient illness coupled with increasingly short lengths of stay. As a result, patients admitted with infection commonly remain on antimicrobials until (and often after) discharge. This has the potential to increase aggregate inpatient antimicrobial consumption even if antimicrobial practice has not changed substantially [32••]. Additionally, because the course of therapy is censored at the time of discharge, the total duration of therapy patients have received is often unclear.

Disease-specific consumption measures

Aggregate data are the most commonly used metrics for ASPs because they are the most readily accessible. Their greatest limitation, however, is that they do not adequately account for clinical indication: reports using DDD or DOT do not correct for the prevalence of infection in a population. Accordingly, ASPs are increasingly interested in antimicrobial consumption for specific conditions [e.g., community-acquired pneumonia (CAP) or methicillin-resistant Staphylococcus aureus (MRSA) bacteremia], using either COT or DOT. Perhaps the most appropriate measure for disease-specific therapy is duration of therapy, or length of therapy (LOT) [33–36]. LOT differs from DOT in that the number of concurrent antimicrobials is largely irrelevant. Additionally, it accounts for dosing intervals longer than 1 day (e.g., patients receiving vancomycin every 48 h). Finally, LOT accounts for best practice by not penalizing programs for changing antimicrobials once susceptibilities are known; this is most important when programs examine ordered or dispensed drugs rather than administered drugs. Antimicrobial-free days, mentioned previously, are inversely related to LOT.
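To illustrate these distinctions, the sketch below (Python, with a hypothetical 10-day admission) contrasts DOT, LOT, and antimicrobial-free days for the same course of therapy.

```python
# Minimal sketch contrasting DOT, LOT, and antimicrobial-free days for a single
# hypothetical 10-day admission: two agents given daily on days 1-7, nothing after.

length_of_stay = 10
therapy_days = {                     # agent -> calendar days on which it was given
    "agent_A": set(range(1, 8)),
    "agent_B": set(range(1, 8)),
}

dot = sum(len(days) for days in therapy_days.values())   # 14: counts each agent
all_days = set().union(*therapy_days.values())
lot = max(all_days) - min(all_days) + 1                   # 7: first to last day of the
                                                          # course, so every-48-h dosing
                                                          # is not undercounted
antimicrobial_free_days = length_of_stay - lot            # 3: inverse of LOT

print(dot, lot, antimicrobial_free_days)   # 14 7 3
```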

Targeted antimicrobials

Some antimicrobials attract more attention than others. There are many potential reasons for this: they may be more expensive, broader in spectrum, or more toxic; they may have a stronger association with C. difficile; or resistance to them may be a greater problem than with other agents. Vancomycin and anti-pseudomonal (and other broad-spectrum) antimicrobials are commonly selected agents [37–46]. There is sufficient evidence that ASPs can achieve reductions in targeted antimicrobial use. However, it is not entirely clear that such approaches reduce overall antimicrobial use or unnecessary antimicrobial use (i.e., using an antimicrobial when none is needed), nor do they address whether an agent with an appropriate spectrum of activity is prescribed. This approach also lends itself to “squeezing the balloon,” shifting antimicrobial selection pressure to other, often cheaper, agents [47].

Appropriateness measures

Perhaps the most desired measure of antimicrobial use for determining whether an ASP is effective is antimicrobial “appropriateness.” Assessed comprehensively, this would state whether the right agent, with antimicrobial activity appropriate to the infection of concern, is being provided at the right dose, route, and schedule, for the right duration, accounting for allergies, drug interactions, and potential toxicities. In some ways, this is the ideal metric for consumption. Unfortunately, it appears impossible to measure in a reliable, valid, and widely accepted manner. Some authors have suggested concordance with treatment guidelines [43, 48, 49]. However, this assumes that antimicrobial guidelines are a gold standard of best practice. Most guidelines were developed without a particular eye to antimicrobial stewardship principles, and many lack the quality and rigor of development that would validate them as a measure of appropriateness [50, 51]. Further, assessing appropriateness is labor-intensive; point-prevalence methodology (e.g., assessing the appropriateness of all therapy on a particular day throughout a ward or institution) is one way to make such assessment more feasible.

Process measures

Although “appropriateness” could be considered a process measure, most process measures in antimicrobial stewardship relate to the various factors that should lead to better prescribing. These include (but are not limited to) documenting an indication for the antimicrobial, completing an antimicrobial order form, or mandating an infectious diseases consultation [38, 52]. As with all process measures, they may be important for ensuring that a quality improvement methodology is working as intended, but they do not truly reflect the quality of antimicrobial prescribing or the effect of prescribing on important outcomes.

Microbial measures

At the time of writing, there are no prospective randomized studies demonstrating that antimicrobial stewardship can reduce antimicrobial resistance, although a plethora of observational studies suggest that improving antimicrobial use can reduce resistance. Because reduced resistance is one of the primary justifications for ASPs, the ability to report on antimicrobial resistance accurately and reliably is important.

Antimicrobial resistance prevalence

Selected antimicrobial-resistant organisms

The initial institutional response to antimicrobial resistance relied primarily on principles of infection prevention and control. Patients were screened for common and/or worrisome antimicrobial-resistant organisms (AROs) such as MRSA and vancomycin-resistant enterococci. Accordingly, hospitals in some jurisdictions were expected to report cases or rates of hospital-acquired infection due to these AROs, with the understanding that new cases of these (primarily nosocomial) organisms represented a failure of infection prevention and control practices. Further, for organisms such as MRSA, the first wave of cases, beginning in the 1960s, was due to what eventually became referred to as hospital-acquired MRSA (HA-MRSA) [53]. Unfortunately, relatively little attention was paid to antimicrobial use as a driving factor for HA-MRSA. With the emergence of community-acquired MRSA (CA-MRSA), approaches to screening, diagnosis, and management of MRSA changed. Institutions continued to ignore any relationship between antimicrobial stewardship and MRSA, on the understanding that the epidemiology of CA-MRSA could not be altered by changes in antimicrobial use. Ironically, most guidance on responding to the emergence of CA-MRSA focused on using more, and broader-spectrum, antimicrobials [54]. The extreme of this approach is the correlation of C. difficile infection incidence with antimicrobial use [42, 43, 45, 55–60], usually reported in juxtaposition with a cluster or outbreak of C. difficile infection.

Despite these trends, some authors have advocated using the prevalence of AROs as an antimicrobial stewardship metric [17, 61]. It seems unlikely, though, that antimicrobial use alone will have a significant impact on the hospital prevalence of community-acquired organisms such as CA-MRSA; community prevalence of AROs is far more likely to modify hospital prevalence than are antimicrobial stewardship measures. Further, because screening approaches influence measured “prevalence,” apparent heterogeneity may reflect differences in screening strategies rather than real differences between populations.

McGowan has discussed in detail the need to identify microbial benefits of antimicrobial stewardship in his excellent review [26••]. He points out that there is a surprising paucity of literature on the beneficial effects of antimicrobial stewardship on antimicrobial resistance. However, in a recent Cochrane review (an update of an earlier review from 2005), Davey and colleagues showed that there is increasing evidence that antimicrobial stewardship interventions can influence antimicrobial resistance [62••]. In particular, interventions intended to decrease excessive prescribing were associated with a reduction in C. difficile infections and in colonization or infection with aminoglycoside- or cephalosporin-resistant Gram-negative bacteria, MRSA, and vancomycin-resistant Enterococcus faecalis [62••]. The problem with looking at resistance in one or a few species is the potential for “squeezing the balloon” [47]: ASPs may report a reduction in resistance to a certain (measured) class of antimicrobials when, in fact, resistance to another antimicrobial or class has increased because of shifted prescribing patterns.

Antibiograms

Recognizing that tracking selected AROs (and trying to influence their prevalence) is unlikely to bear a clear relationship to antimicrobial use, a more holistic approach might be to examine overall antimicrobial resistance among hospitalized patients. The representation of the susceptibilities of multiple organisms to multiple agents is called an antibiogram. Unfortunately, no aggregate measures have been proposed in the literature that summarize resistance to multiple antimicrobials across multiple organisms [63•]. The closest approximation is the Drug Resistance Index (DRI) proposed by Laxminarayan and Klugman [64]. However, the DRI was conceived for larger populations and was modeled on the consumer price index to reflect community-based antimicrobial consumption and resistance. Although it is conceivable (and likely) that antimicrobial use (and stewardship) will have an impact on in-hospital antimicrobial susceptibilities, there are very few data demonstrating this in a broad manner.
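For illustration only, the sketch below (Python, with hypothetical figures) computes a use-weighted aggregate resistance index in the spirit of the DRI, weighting each drug’s local resistance proportion by its share of total use; this is a simplification for intuition, not the published method.

```python
# Hedged sketch of a use-weighted aggregate resistance index in the spirit of
# the Drug Resistance Index (DRI): each drug's local resistance proportion is
# weighted by that drug's share of total use, yielding a single number that can
# be trended over time. Figures are hypothetical and the weighting scheme is a
# simplification, not the published method.

resistance_proportion = {"drug_A": 0.30, "drug_B": 0.10, "drug_C": 0.05}
use_share             = {"drug_A": 0.50, "drug_B": 0.30, "drug_C": 0.20}

index = sum(resistance_proportion[d] * use_share[d] for d in resistance_proportion)
print(round(index, 3))   # 0.19 -- higher values mean more reliance on agents to
                         # which local isolates are resistant
```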

Clinical outcomes

Perhaps surprisingly, clinical outcomes have been reported in a minority of studies evaluating antimicrobial stewardship interventions [26••]. The reasons are unclear, but likely relate to the data sources for most ASP interventions (administrative and pharmacy databases), in addition to a biased belief that “better” antimicrobial use must be better for the patient. At a minimum, clinical outcomes are useful balancing measures, to help ensure that patients are not harmed through efforts to better rationalize antimicrobial prescribing.

Mortality

Perhaps the most objective clinical outcome for ASPs is mortality. There are obvious problems with using mortality as an outcome, especially when most ASPs evaluate their interventions as before–after studies rather than randomized controlled trials. In particular, secular trends in healthcare mortality may result in lower “after” mortality (because of concurrent quality improvement initiatives) or higher “after” mortality (because of a trend toward limiting hospitalization to sicker and/or older patients). Other factors might affect mortality as well. Accordingly, mortality results should be interpreted with caution. Further, it is unclear whether most antimicrobial stewardship interventions should expect to reduce mortality as a measure of impact. More commonly, mortality is likely to be used as a balancing measure, assuring stakeholders that an ASP intervention does not lead to increased harm. Indeed, a limited number of studies have shown that increasing appropriate antimicrobial therapy can reduce mortality, while a sizable number have shown that efforts to reduce excessive prescribing do not result in excess mortality [62••].

To sharpen the relationship between antimicrobial use and mortality, organism-specific or syndrome-specific mortality can be used [65, 66]. For example, programs may examine efforts to improve the management of CAP and then track CAP-related mortality in addition to other metrics.

Length of stay

Length of stay is often an easy metric to obtain. It suffers from many of the same problems as mortality, especially secular trends in developed countries, where an emphasis has been placed on early discharge. Efforts to discontinue antimicrobial therapy or to transition from parenteral to oral therapy have, perhaps, the strongest relationship to length of stay [41, 67, 68••, 69]. Rather than hospital length of stay, some interventions use intensive care unit length of stay as a surrogate for clinical improvement [68••, 69].

Cure

There are various ways to measure cure of an infectious disease: clinical cure (whereby the patient is believed to be clinically well after effective treatment of the infection), microbiological cure (whereby cultures or other tests demonstrate that the pathogen is no longer present in a manner capable of causing disease), and other forms (e.g., radiographic cure). These are potentially useful metrics, but they are very difficult for programs to measure reliably and consistently. Further, it is unclear whether they add much beyond mortality and/or length of stay.

Conclusion

Antimicrobial stewardship is a young field, with much room for growth, and with tremendous opportunity to improve healthcare. However, as McGowan points out in his definitive 2011 review of the limitations of contemporary research in antimicrobial stewardship, the opportunities lie in focusing on outcomes [26••].