Introduction
It is difficult for modern physicians to appreciate the impact that the sudden availability of antibiotics had on the practice of medicine in the 1930s and 1940s [1]. Before antibiotics, physicians had few meaningful therapeutics with which to alter the course of their patients’ illnesses [2]. Then, suddenly, with the appearance of sulfanilamide in late 1936, followed by penicillin in 1942, cures came to be expected. As one eyewitness wrote, ‘The crossing of the historic watershed could be felt at the time. One day we could not save lives, or hardly any lives; on the very next day we could do so across a wide spectrum of diseases’ [3].
Indeed, the absolute reductions in mortality afforded by antibiotics are virtually unparalleled in the annals of medical pharmacotherapy. Conservative estimates of the absolute reductions in death mediated by antibiotic therapy include 25% for community-acquired pneumonia (CAP), 30% for nosocomial pneumonia, 75% for endocarditis, and 60% for meningeal or cerebral infections [4]. Even cellulitis, which is very rarely fatal in the modern era, carried an 11% mortality rate in the pre-antibiotic era [5], a rate similar to the mortality of myocardial infarction in the placebo arm of the Second International Study of Infarct Survival (ISIS-2) published in 1988 [6]. Furthermore, the absolute reduction in death from cellulitis mediated by antibiotics was more than 10% [5], as compared with a 3% reduction in death from myocardial infarction mediated by aspirin or streptokinase [6]. The ability to cure infections opened up entirely new fields in medicine, such as critical care medicine (for example, ventilators and central venous catheters), complex surgery, care of premature neonates, organ transplantation, and cancer chemotherapy.
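To put these absolute risk reductions on a common scale, they can be converted into the number of patients who must be treated to prevent one death (number needed to treat, NNT). The sketch below is our illustration, not an analysis from the cited sources, and uses rounded mortality figures based on the percentages cited above.

```python
# Illustrative arithmetic only: the NNT framing is ours, and the mortality
# figures are rounded from the percentages cited above, not exact trial data.

def number_needed_to_treat(control_mortality, treated_mortality):
    """NNT = 1 / absolute risk reduction."""
    return 1.0 / (control_mortality - treated_mortality)

# Cellulitis: ~11% pre-antibiotic mortality, >10% absolute reduction
# with antibiotics [5] -> roughly 10 patients treated per life saved.
print(f"Antibiotics for cellulitis: NNT ~ {number_needed_to_treat(0.11, 0.01):.0f}")

# Myocardial infarction: ~11% placebo-arm mortality and a ~3% absolute
# reduction with aspirin or streptokinase [6] -> roughly 33 per life saved.
print(f"Aspirin/streptokinase for MI: NNT ~ {number_needed_to_treat(0.11, 0.08):.0f}")
```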
Perhaps it is not surprising that the availability of such powerful weapons against disease quickly led to hubris. As early as 1948, one expert expressed ‘optimism’ that ‘bacterial diseases have been brought under control’ [7]. By 1962, a Nobel laureate pontificated that ‘one can think of the middle of the 20th century as the end of one of the most important social revolutions in history, the virtual elimination of the infectious diseases as a significant factor in social life’ [8]. The hubris continued through the 1980s [9], before the rise of antibiotic resistance began to bring us back to reality. During these decades of hubris, the medical community failed to understand that microbes have been waging war among themselves with antibiotics, and creating resistance mechanisms to defeat antibiotics, for more than two billion years [1,10]. We will never ‘defeat’ microbes with antibiotics. There is no ‘endgame’ - resistance is inevitable.
Nor is recognition of the threat of antibiotic resistance new, despite our failure to act effectively to confront the threat. Fifty years ago, a gathering of legends held a symposium focusing on the lack of new antibiotics that could deal with rising rates of resistant pathogens [11]. Indeed, as far back as 1945, Fleming himself, the discoverer of penicillin, warned the medical community that our abuses of penicillin (and, by extrapolation, subsequent antibiotics) would surely lead to an inexorable rise in resistance, which ultimately would prove fatal for our patients [12]. ‘In such cases’, he said, ‘the thoughtless person playing with penicillin is morally responsible for the death of the man who finally succumbs to infection with the penicillin-resistant organism. I hope this evil can be averted’ [12].
Sadly, it has not been, and we have not learned from our past. We expose our environment to more than 15 million kilograms of antibiotics every year in the US alone [13]. This staggering degree of environmental contamination has, predictably, led to an inexorable rise in resistance rates, even as our research and development (R&D) efforts to develop new antibiotics have waned. Most pharmaceutical companies have abandoned the discovery and development of new antibiotics [14-16]. As a result, over the last 30 years, there has been a 90% decline in new approvals of systemic antibiotics by the US Food and Drug Administration (FDA) [10,17]. If we want to reverse these trends and facilitate new approaches to overcoming resistance, we must first understand the forces responsible for them.
Scientific
More than 140 antibiotics have been developed for use in humans over the past 80 years [1]. Thus, we face considerable scientific barriers to discovering the next generation of antibiotics because the low-hanging fruit has been plucked. Using the same screening methodologies and the same chemical libraries tends to identify the same lead scaffolds over and over again [4,18,19]. The scientific complexity of discovery methodologies must therefore increase, resulting in increasingly risky, time-consuming, and expensive discovery programs just as economic and regulatory forces are converging to make antibiotics a poor vehicle for R&D investment. Furthermore, the ‘brain drain’ of expertise resulting from the systematic dismantling of antibacterial discovery programs at major pharmaceutical companies has exacerbated the difficulty of overcoming the scientific complexities of new discovery.
These scientific complexities are further compounded for the discovery of antibacterial agents targeting Gram-negative bacilli because of the unique biology of the Gram-negative cellular structure [19]. The lipid-rich membrane bilayer that envelops the cell wall creates unique physicochemical barriers to antibacterial penetration into the interior of the cells. Furthermore, porins and efflux pumps are ubiquitous among Gram-negative bacteria as a means to control nutrient and toxin influx and efflux, and thus serve as natural resistance mechanisms against many antibacterial agents. These factors likely account for the lack of development of any new antibacterial class for Gram-negative bacteria in more than 45 years (since nalidixic acid, the progenitor of the synthetic fluoroquinolones, was developed).
Economic
Multiple economic factors make antibiotics less attractive for R&D investment than other classes of drugs. For example, antibiotics are short-course therapies that cure their target diseases; companies can make more money selling drugs that are taken every day for the rest of a patient’s life (for example, for hypertension, cholesterol, diabetes mellitus, acid reflux, arthritis, dementia, and HIV). Also, prices for antibiotics are typically not competitive with those of other drugs that have a high impact on morbidity and mortality (for example, cancer therapeutics). These small market sizes are further diminished by the appropriate principles of antibiotic stewardship, which lead thought leaders to advise judicious use when new antibiotics become available, such that sales of new antibiotics typically underperform relative to expectations, particularly during the first years after market entry.
As a result of these and other market forces, a recent sophisticated study from the London School of Economics estimated that, at discovery, the net present value (NPV) of a new parenteral antibacterial agent was minus $50 million [20]. The NPV is a standard metric that companies use to prioritize investment strategies; it expresses in today’s dollars what a drug is expected to be worth over the ensuing decades. It is calculated by incorporating the cost of R&D, the time it will take to realize a return on investment, and future predicted revenues. By comparison, at discovery, the NPV for a new arthritis drug has been estimated to be positive $1 billion [14,16]. Given these economic realities, it is easy to understand why for-profit companies, which have a fiduciary responsibility to increase shareholder value, have increasingly shunted R&D money away from antibiotics and toward other drug types.
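To illustrate the mechanics of the calculation, the following is a minimal sketch of an NPV computation. All cash flows and the discount rate are invented for illustration and are not the inputs of the cited study [20]; the point is only that large, early R&D costs discounted against modest, delayed revenues can readily yield a negative value.

```python
# Minimal NPV sketch. All figures are illustrative assumptions,
# not inputs from the cited London School of Economics study [20].

def npv(cash_flows, discount_rate):
    """Sum yearly cash flows (year 0 first), each discounted to present value."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical antibiotic program: $100M per year of R&D for 10 years,
# then 15 years of modest post-approval revenues of $150M per year.
flows_in_millions = [-100] * 10 + [150] * 15
print(f"NPV at an 11% discount rate: {npv(flows_in_millions, 0.11):.0f} $M")
# Prints a negative value: early costs outweigh delayed revenues.
```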
Regulatory
For more than a decade, a shift in thinking at the FDA, particularly in the Office of Antimicrobials, has resulted in increasingly infeasible trial design requirements for the approval of new antibiotics for use in humans [4,17,21,22]. The reasons for this shift in thinking are complex, stemming in part from legitimate scientific and statistical concerns but driven to an irrational and dangerous extreme by the highly public and embarrassing post-marketing failure, due to toxicity, of the antibiotic telithromycin [22,23]. In the end, statistical concerns have come to so thoroughly dominate considerations regarding trial standards that clinical reality and feasibility have been sacrificed.
Clear clinical trial guidances for new antibiotics took many years to be released. When such guidances were released, they generally created trial conduct standards that were infeasible, nonsensical, or both [22,24]. Some experts even expressed doubt as to whether antibiotics were more effective than placebo for lethal infections, such as CAP [25,26]. Proposals were put forward to force future antibiotic studies to use a placebo-controlled superiority design for the treatment of CAP - the disease that Osler referred to in 1901 as ‘the Captain of the Men of Death’ [27]. Such proposals were given serious credence and discussion and were discredited only after extensive and expensive dialogue and effort that took more than a year [25].
Other specific examples of unreasonable and damaging elements of the new trial standards included a ban on administering any pre-study antibiotics to patients who were to be enrolled in antibiotic clinical trials; this eliminated the possibility of enrolling any patients who were seriously ill. At the same time, studies were required to administer multiple days of intravenous therapy in hospital for diseases such as pneumonia, urinary tract infections, and intra-abdominal infections; this eliminated the possibility of enrolling any patients who were not seriously ill. Thus, there were few patients left to enroll.
New requirements that patients be considered evaluable for efficacy only if an etiologic bacterium was identified resulted in a doubling or tripling of sample sizes for pneumonia studies. Non-inferiority margins shrank because of arbitrary mathematical manipulations that were used to ‘discount’ the best guess of antibiotic treatment effect sizes for various diseases, further driving up sample sizes [22,25].
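The quantitative relationship is worth making explicit: in the standard normal-approximation formula for a non-inferiority comparison of two proportions, the required sample size scales with the inverse square of the margin. The sketch below illustrates this; the response rate, power, and margins are illustrative assumptions, not figures taken from any of the trials under discussion.

```python
# Sketch of the standard normal-approximation sample-size formula for a
# non-inferiority trial comparing two proportions. Parameters here are
# illustrative assumptions, not figures from any specific antibiotic trial.

from math import ceil
from statistics import NormalDist

def per_arm_sample_size(p, margin, alpha=0.025, power=0.90):
    """Per-arm n, assuming both arms truly respond at rate p."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-sided significance
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * p * (1 - p) * (z_alpha + z_beta) ** 2 / margin ** 2)

# Shrinking the margin from 15% to 5% at an assumed 80% response rate:
for margin in (0.15, 0.10, 0.05):
    print(f"margin {margin:.0%}: ~{per_arm_sample_size(0.80, margin)} patients per arm")
# n scales with 1/margin**2, so halving the margin ~quadruples enrollment.
```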
The cumulative effect of this ‘lost decade’ of debate, discussion, and deliberation was a substantial exacerbation of the risk, cost, and time required to develop new antibiotics, just at the time when scientific challenges and other economic realities were having the same effect. The net result of these three converging forces, which fed off one another, was a marked decrease in the number of companies, and in the number and experience of scientific experts, working in this space.