
Methodologies for evaluating strategies to reduce diagnostic error: report from the research summit at the 7th International Diagnostic Error in Medicine Conference

  • Beau B. Bruce, Robert El-Kareh, John W. Ely, Michael H. Kanter, Goutham Rao, Gordon D. Schiff, Maarten J. ten Berg and Kathryn M. McDonald
From the journal Diagnosis

Abstract

In this article we review current evidence on strategies to evaluate diagnostic error solutions, discuss the methodological challenges that exist in investigating the value of these strategies in patient care, and provide recommendations for methods that can be applied in investigating potential solutions to diagnostic errors. These recommendations were developed iteratively by the authors based upon initial discussions held during the Research Summit of the 7th Annual Diagnostic Error in Medicine Conference in September 2014. The recommendations include the following elements for designing studies of diagnostic error solutions: (1) Select direct and indirect outcomes measures of importance to patients, while also practical for the particular solution; (2) Develop a clearly-stated logic model for the solution to be tested; (3) Use rapid, iterative prototyping in the early phases of solution testing; (4) Use cluster-randomized clinical trials where feasible; (5) Avoid simple pre-post designs, in favor of stepped wedge and interrupted time series; (6) Leverage best practices for patient safety research and engage experts from relevant domains; and (7) Consider sources of bias and design studies and their analyses to minimize selection and information bias and control for confounding. Areas of diagnostic error mitigation research identified for further attention include: role of competing diagnoses, understanding the impacts of organizational culture, timing of diagnosis, and sequencing of research studies. Future research will likely require novel clinical, health services, and qualitative research methods to address the age-old problem of arriving at an accurate diagnosis.

Introduction

Diagnostic errors lead to substantial patient morbidity and mortality, with estimated incidence rates between 10% and 20% [1]. They are also the most common reason for which medical malpractice claims are awarded [2]. The causes of diagnostic error are multifaceted and typically the result of a combination of cognitive errors and system failures [3–7]. More work is required to fully understand the burden and causes of diagnostic failures [8], and this research is intimately intertwined with developing effective strategies to reduce diagnostic error. Diagnostic error research is a relatively nascent science [9], and considerable methodological challenges have been identified in this complex area. Thus, most studies have focused on understanding the frequency and causes of diagnostic errors, with relatively few reports on strategies to develop solutions, although some have appeared [7, 8, 10–13].

In this article, we review current evidence on strategies to evaluate diagnostic error solutions, discuss the methodological challenges that exist in investigating the value of these strategies in patient care, and provide recommendations for methods that can be applied in investigating potential solutions to diagnostic errors.

Methods

A face-to-face consensus conference was held as part of a special Research Summit meeting on September 14, 2014 at the 7th Annual Diagnostic Error in Medicine Conference [14]. The Research Summit was by invitation only for experts in the field of diagnostic error or health services research methods, including both senior and junior scientists. The panel included 29 participants, who divided into three breakout groups, respectively focusing on (a) burden of, (b) causes of, and (c) solutions to diagnostic errors. Each group was tasked with developing methodological recommendations within their group’s domain. This paper summarizes recommendations from the third group: solutions to diagnostic errors. Recommendations were developed iteratively by the participants following the meeting and critiqued independently by members of the other two focus groups before submission for publication.

Results

Three major methodological themes emerged from discussions among the group focused on solutions to reduce diagnostic error: (1) appropriate outcomes or other measures that reflect reduction of diagnostic error, (2) processes for developing solutions to diagnostic error, and (3) best practices for evaluation of potential solutions.

Appropriate outcomes or other measures

Two main categories of outcomes arose in the discussion of what measurable targets are most useful in research studies of diagnostic error solutions: (1) direct outcomes measures of diagnostic accuracy, and (2) indirect measures of diagnostic accuracy, including process measures related to correct diagnosis. These measures, summarized in Table 1, were a natural outgrowth of attempts to identify processes and outcomes likely to respond to a given diagnostic error intervention, as well as practical to target. The focus on using measures to evaluate the merits of an intervention eliminated structure measures (e.g. availability of diagnostic decision support, organizational experience conducting root cause analysis of diagnostic error, around-the-clock availability of radiology to review images) from close consideration. However, structural factors may be influential contextual variables whose measurement could be valuable in research on the context-sensitivity of implementation of some interventions [15]. Likewise, the availability of electronic medical records and health information technology solutions, and the way they are used, is an important contextual variable that requires further research [16].

Table 1: Measures for solution-oriented diagnostic error research.

Direct outcomes measures
  • Wrong diagnosis
  • Missed diagnosis
  • Delayed diagnosis

Indirect measures
  • Adherence to diagnostic pathways
  • Discrepancy between prevalence of diagnosis and prevalence of disease
  • Unwanted variation in diagnosis
  • Costs of obtaining a diagnosis or the consequences of misdiagnosis
  • Potential benefits and risks of timely termination of diagnostic process

Direct outcomes measures include specific diagnostic error targets – wrong diagnosis, missed diagnosis, and delayed diagnosis. To date, most intervention studies have focused on these measures [7]. When evaluating diagnostic pathways or algorithms for which it can be determined whether patients with a given diagnosis, based on a reference standard, are correctly classified, sensitivity and specificity are the ideal measures, just as in well-conducted evaluations of a new diagnostic test.
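As a concrete illustration, sensitivity and specificity can be computed directly from a 2×2 classification of pathway results against the reference standard. The following minimal Python sketch uses hypothetical counts, not data from any study cited here.

```python
# Minimal sketch: sensitivity/specificity of a diagnostic pathway against a
# reference standard. All counts are hypothetical.

def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from a 2x2 classification table."""
    sensitivity = tp / (tp + fn)  # proportion of true cases the pathway catches
    specificity = tn / (tn + fp)  # proportion of non-cases correctly ruled out
    return sensitivity, specificity

# Hypothetical example: 500 patients classified by the pathway vs. reference.
sens, spec = sensitivity_specificity(tp=45, fp=30, fn=5, tn=420)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.90, 0.93
```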

As timing issues are inherent in the diagnostic process, studying the temporal aspects of diagnosis is important, but it also introduces considerable ambiguity into the classification of diagnostic error, and therefore into outcomes assessment. Further, early identification of some diseases cannot always be assumed to be better: patients may not want to know that they have a rare incurable genetic disease that will not affect them for years, yet for other diseases, such as acute ischemic stroke, earlier diagnosis allows effective thrombolytic treatment and even short delays may cause significant harm. What represents an inappropriate delay in diagnosis between these extremes (e.g. what is the appropriate tempo for establishing a diagnosis of lung cancer in a patient presenting with a cough?) is substantially less clear, particularly once one considers the difficulty of establishing a specific diagnosis from a non-specific symptom, the resources needed for faster diagnosis, and the effect that an earlier diagnosis might have on prognosis. These issues highlight the need for further research to clarify how best to use delayed diagnosis and timing issues as outcome measures.

Proposals and manuscripts on diagnostic error solutions research are often criticized because they frequently assess outcomes with surrogate measures logically related to improved patient outcomes (e.g. frequency of diabetic retinopathy diagnosis) instead of directly measuring the patient-oriented outcome (e.g. visual loss experienced or prevented). While careful measurement of such patient-oriented outcomes is ideal, it is often not practical for timing, statistical, and ethical reasons. Many catastrophic outcomes for the patient are relatively rare or require significant time to accumulate and thereby greatly increase the number of patients that must be studied. For example, significant visual loss may not occur for many years after retinopathy is first detected, and therefore not develop within the duration of a typical study. Additionally, the treatments administered or not during the period of diagnostic uncertainty may interfere with the development of patient-oriented outcomes. It is also ethically challenging or impossible to design a study that can detect a diagnostic error without attempting to influence the subsequent course of the disease. The group acknowledged the importance of patient-oriented outcomes, but also recognized the need to rely upon more easily measurable outcomes, and to extrapolate based on these outcomes and available evidence to clinically significant disease, quality of life, and cost-effectiveness.
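To make the sample-size problem concrete, the sketch below applies a standard two-proportion normal approximation to a hypothetical rare outcome (halving a 1% rate of severe visual loss). The rates, power, and significance level are illustrative assumptions, not figures from the text.

```python
# Rough sample-size sketch illustrating why rare patient-oriented outcomes
# inflate study size: two-proportion normal approximation with hypothetical
# rates (halving a 1% rate of severe visual loss).
from math import ceil
from scipy.stats import norm

p1, p2 = 0.01, 0.005           # hypothetical control and intervention rates
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)  # 1.96 for a two-sided test
z_b = norm.ppf(power)          # 0.84 for 80% power

n_per_arm = ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                 / (p1 - p2) ** 2)
print(n_per_arm)               # roughly 4700 patients per arm
```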

For the indirect measure concept, a focus on improved diagnosis may reveal diagnostic processes as targets for measurement, either in lieu of direct outcome measures or alongside them in some cases. One such measure of diagnostic performance is adherence to specific diagnostic pathways with respect to identification of some rare diseases. For example, if close adherence to a diagnostic pathway for hypertension will more reliably result in the discovery of a pheochromocytoma, then it would be appropriate to measure clinicians' use of the pathway as a study outcome. Many other candidates for process measures – those that relate to common failures or missed opportunities along the diagnostic pathway – exist [5, 17], but they are more useful for intervention research to the extent that robust evidence demonstrates a causal linkage between the process and a desired outcome. The group recommended development of a list of process targets, methods to measure them, and references supporting their linkage to outcomes.

Another indirect measure that an intervention could target is reducing the discrepancy between the prevalence of a diagnosis and the prevalence of the disease in a given population. For example, community-based surveys from a well-designed epidemiological study may reveal that roughly 10% of women in a specific community are victims of domestic violence. However, careful analysis of clinical records from emergency rooms, primary care physicians and other health care facilities may reveal a rate of diagnosis of domestic violence of only 2%. The estimated discrepancy in diagnosis is therefore 8%, and a reduction in this gap indirectly measures diagnostic accuracy (e.g. increased rate of diagnosis of domestic abuse in emergency rooms in a community at a level closer to epidemiologic expectation).
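The arithmetic of this gap measure is simple; the sketch below reproduces the domestic-violence example with the prevalences given in the text (the post-intervention rate is a hypothetical addition).

```python
# Sketch of the diagnosis-prevalence gap from the domestic-violence example;
# the first two numbers mirror the text, the third is hypothetical.

disease_prevalence = 0.10   # community survey estimate of true prevalence
diagnosis_rate = 0.02       # rate of the diagnosis in clinical records

gap = disease_prevalence - diagnosis_rate
print(f"diagnosis gap: {gap:.0%}")  # 8% of cases presumed undiagnosed

# An intervention's effect could be tracked as the fraction of the gap closed:
post_intervention_rate = 0.05       # hypothetical post-intervention rate
print(f"gap closed: {(post_intervention_rate - diagnosis_rate) / gap:.0%}")
```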

A third example of an indirect measure is reduction in unwarranted variation in diagnosis. For example, the rate of diagnosis of a particular condition may vary significantly among clinicians, hospitals, or regions, even when the underlying populations are similar [18, 19]. The same clinician may approach diagnosis differently for the same group of patients at different times, as revealed by second-review studies [1]. An effort to reduce diagnostic error could be based upon documented variation of these types and upon standardization of diagnostic assessments to reduce that variation. An example of this approach has been described for diabetic retinopathy [20].
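As an illustration of how documented variation might be quantified, the sketch below applies a chi-squared test of homogeneity to hypothetical diagnosis counts at three sites; a real analysis would also need to adjust for case-mix.

```python
# Sketch: testing for variation in diagnosis rates across sites with a
# chi-squared test of homogeneity. Counts are hypothetical, and the test
# ignores case-mix, which a real analysis would adjust for.
from scipy.stats import chi2_contingency

# rows = sites, columns = [diagnosed, not diagnosed] among similar patients
counts = [
    [30, 970],   # site A: 3.0% diagnosis rate
    [55, 945],   # site B: 5.5%
    [18, 982],   # site C: 1.8%
]
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")  # small p suggests real variation
```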

The group identified additional indirect targets, which may also be markers of diagnostic accuracy, and therefore be useful in identifying priority areas for development and assessment of diagnostic error reduction strategies. These included the costs of obtaining a diagnosis (i.e. what it costs to achieve a given level of certainty with respect to a diagnosis); the potential benefits and risks of timely termination of the diagnostic process; the impact on treatment; the downstream costs of a wrong or delayed diagnosis (time, money, discomfort, etc.) to patients, physicians, the medical system, and society; and patient safety and satisfaction (i.e. if the patient is safe and happy, does the label really matter?).

An example of a cost-related indirect outcome of diagnostic error could potentially be derived from the payment for medical services. Currently the Centers for Medicare & Medicaid Services (CMS) uses a risk adjustment model to determine capitation payments for Medicare Advantage plans [21]. The risk adjustments are based on diagnostic codes, so frequent diagnostic errors in common conditions can affect CMS payments to health plans. In theory, for a given condition, underdiagnosis would depress payments below the expected level, while overdiagnosis would inflate them. The idea of identifying suitable targets for reduction of diagnostic error based on CMS payment impact has not been explored.

It should be noted that solution-oriented studies for reducing diagnostic error could be designed to maximize positive externalities, including enriching measurement-oriented knowledge, such as understanding the consequences of a missed diagnosis or estimating the frequencies of conditions tied to common complaints. When a disease was missed and thus not treated, one can retrospectively follow its natural course in an ethical fashion. This method has been used to study the natural history of non-invasive breast cancer by determining the outcomes in patients in whom the diagnosis was originally missed by the pathologist and no additional treatment was given [22]. Solution-oriented studies could also quantify the links between presenting symptoms and final diagnoses, about which little is known for common complaints in primary care. For example, of 100 consecutive patients presenting to an emergency room with eye pain, how many will have a hordeolum, how many a corneal abrasion, and how many acute glaucoma? We know the initial diagnosis frequencies (which include diagnostic errors), but not the final diagnosis frequencies.

Processes for developing solutions

Logic models

A key aspect that the group identified as lacking from many studies on diagnostic error, as with other evaluations of patient safety practices, was a logic model or logical framework underlying the solution [23, 24]. A logic model should clearly define the problem to be addressed, the intervention to be evaluated, and most importantly, why the proposed intervention would be expected to work. Development of solutions within a logic model or logical framework is essential for determining whether failure of a given solution has occurred due to the underlying idea or the implementation of that idea.

In a logic model, the central element is the mechanism of action, preferably informed by theory, that explains why the solution is expected to reduce diagnostic error. For example, if the intervention includes just-in-time information presented to the clinician in an electronic medical record that highlights examination findings contradictory to the final diagnosis, a logic model would describe the cognitive biases that might be corrected and the clinician behavior change that might be provoked by this additional information. Further, the logic model could hypothesize counterproductive contextual pressures (e.g. “alert fatigue”, poorly designed presentation of the information) that could inactivate the expected mechanism of action. Clear logical frameworks can help teams identify appropriate intermediate measures during the design phase that can be used to troubleshoot interventions early during implementation. In addition, because most solutions to diagnostic error have been applied to very specific diagnostic situations, a clear statement of the logical frameworks that underlie specific interventions will likely help to classify solutions into taxonomies. Such classifications could enable generalizing across different diagnoses and medical specialties to identify generic solutions in some instances.

For example, the Society to Improve Diagnosis in Medicine has released a “Patient Toolkit for Diagnosis”, a structured series of questions and prompts that helps patients to record pertinent medical information before, during, and after a clinical encounter [25]. The use of the tool is hypothesized to reduce diagnostic errors, but what is the logical basis for this theory? One way to express the supporting logic model would be a table showing the logical connections between intervention elements (e.g. provision of the toolkit to patients, training nurses to discuss the toolkit with patients to uncover useful diagnostic clues, and physician training about best practices to incorporate patient-generated pre-appointment information into diagnostic workflow) and the intermediary effects hypothesized to lead to a reduction in diagnostic error (e.g. patients adhere to follow-up testing more often, more relevant patient information is recorded by nurses, and doctors generate broader differential diagnoses). For each intermediary effect, a scientific basis is included with ways to measure it. In this example, the evaluation team needs to ask a number of questions: Is there evidence that patients fill in forms before appointments? Do nurses seek out the information that patients have recorded in advance of the appointment? What do we know about the amount of information presented and the length of the differential list? Does a longer differential list increase the likelihood of making a correct diagnosis? Are patients who have engaged in documentation more likely to adhere to physician recommendations? The keys to this process are seeking evidence that supports or contradicts each logical connection and iteratively using previous evaluations to generate additional measurement options. Specifying the logic model and measuring the intermediary effects helps troubleshoot interventions when they do not work as hoped.
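One way such a logic-model table could be represented for tracking during evaluation is sketched below; the rows paraphrase the toolkit example above, and the specific measures are illustrative assumptions rather than validated instruments.

```python
# Illustrative sketch of a logic-model table for the Patient Toolkit example,
# linking each intervention element to a hypothesized intermediary effect and
# a candidate measure. Entries paraphrase the text; none are validated.

logic_model = [
    {
        "element": "provide toolkit to patients",
        "intermediary_effect": "patients record and share pertinent history",
        "measure": "proportion of patients completing the toolkit",
    },
    {
        "element": "train nurses to discuss the toolkit",
        "intermediary_effect": "more relevant patient information recorded",
        "measure": "chart audit of documented diagnostic clues",
    },
    {
        "element": "train physicians on pre-appointment information",
        "intermediary_effect": "broader differential diagnoses generated",
        "measure": "number of diagnoses in the documented differential",
    },
]

for row in logic_model:
    print(f"{row['element']} -> {row['intermediary_effect']} ({row['measure']})")
```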

Prototyping and pilot testing

The second recommendation of the group was to apply engineering principles, such as rapid, iterative prototyping and simulation, during the development of solutions for diagnostic error [26]. These approaches can prevent an ineffective solution from being carried into a real-world production environment, where it might meet dissatisfaction from busy clinicians, inconvenience patients, and deliver questionable efficacy, when more design and testing upfront would have revealed the need for alternative strategies. While these strategies are important in developing sustainable solutions, the group acknowledges that funding for them is currently limited because most funders favor traditional scientific approaches. The onus is on investigators to justify the advantage of a prototyping approach for diagnostic error solutions.

Best practices for evaluation of potential solutions

Study designs less likely to be useful

While post-diagnosis retrospective epidemiological studies can be important for suggesting solutions to diagnostic error, they are seriously limited by hindsight bias. This bias arises because the human body has a limited repertoire of generally non-specific “real” symptoms (e.g. pain, fever, cough) that are mixed with “red-herring” symptoms, which can stem from heightened sensitivity to normal physiology, unrelated chronic disease, simultaneous conditions, or even embellishment by patients seeking to impart gravity to the situation out of anxiety about the lack of a diagnosis or fear of a misdiagnosis. This expression of disease leads, in turn, to two problems: (1) the Bayesian nature of diagnosis (i.e. even if pneumonia were always characterized by fever and cough, what matters for diagnosis is the probability of pneumonia when a patient presents with fever and cough) and (2) the low signal-to-noise ratio that is a particular issue for rare diseases presenting with non-specific symptoms. For example, only a tiny proportion of patients complaining of diarrhea have celiac disease, and many have no disease at all; they simply fall into a tail of the distribution of bowel movements among normal individuals. Testing everyone with anti-gliadin antibodies to uncover the small proportion of patients with celiac disease has enormous costs and will generate numerous false positives. Yet, if patients and physicians have zero tolerance for missing a case of celiac disease, we may succumb to this faulty strategy [27].
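The Bayesian point can be made concrete with a short calculation: even a reasonably accurate test, applied where the disease is rare among patients with the presenting symptom, returns mostly false positives. All numbers below are hypothetical.

```python
# Worked Bayes example for the celiac-disease illustration: with low disease
# prevalence among patients with diarrhea, even a decent test yields mostly
# false positives. All numbers are hypothetical.
prevalence = 0.005     # assumed probability of celiac disease given diarrhea
sensitivity = 0.90     # assumed P(test positive | disease)
specificity = 0.90     # assumed P(test negative | no disease)

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)
print(f"P(disease | test positive) = {ppv:.1%}")  # about 4%: roughly 22
                                                  # false positives per true case
```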

Pre-post intervention designs, in which a single assessment of the outcome is made before and after the intervention, are also problematic because they are subject to secular trends and other confounding factors that can explain the observed difference between the outcome before and after the solution is applied [28].

Recommended study designs

The group unsurprisingly endorsed randomized clinical trials as the ideal design when resources allow and equipoise exists (i.e. genuine uncertainty about the benefit of the intervention). Given the complexity of most approaches to decrease diagnostic error and to avoid crossover effects, cluster-randomization across multiple sites will often be the most appropriate study design.
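A minimal sketch of the mechanics of cluster randomization follows: whole clinics, rather than individual patients, are assigned to arms, which avoids within-site crossover. The clinic labels and even split are hypothetical.

```python
# Minimal sketch of cluster randomization: clinics (not individual patients)
# are randomized to intervention or control, avoiding within-site crossover.
# Clinic names are hypothetical.
import random

clinics = ["A", "B", "C", "D", "E", "F", "G", "H"]
random.seed(42)            # fixed seed so the allocation is reproducible
random.shuffle(clinics)

arms = {"intervention": clinics[: len(clinics) // 2],
        "control": clinics[len(clinics) // 2:]}
print(arms)
```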

The next tier of designs includes the stepped wedge and interrupted time series. A stepped wedge design involves the sequential roll-out of an intervention to a group of participants (at the individual or cluster level) in random order over a number of time periods [29]. An interrupted time series design measures the outcome at several intervals before and after the intervention and analysis focuses on whether there is a clear “interruption,” with the trend changing at the time of the intervention from the baseline trend of the data [30]. Unlike a pre-post design, both of these designs permit secular trends to be disentangled from the effects of the intervention, and both designs are especially valuable when a randomized clinical trial is infeasible or unethical because of substantial data supporting the benefit of the intervention.
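For the interrupted time series, the analysis is often a segmented regression with terms for the baseline trend, an immediate level change, and a post-intervention trend change. The sketch below fits such a model to simulated monthly error rates; it assumes the statsmodels library and is illustrative only.

```python
# Sketch of a segmented-regression analysis for an interrupted time series,
# assuming statsmodels is available. `level` captures an immediate jump at
# the intervention and `trend_change` a change in slope; data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(24)                 # 12 months pre, 12 months post
post = (months >= 12).astype(float)    # indicator for intervention period

# simulate a monthly error rate with a secular downtrend plus a step drop
rate = 20 - 0.2 * months - 3.0 * post + rng.normal(0, 0.8, 24)

X = sm.add_constant(np.column_stack([
    months,                 # baseline secular trend
    post,                   # level change at the intervention
    post * (months - 12),   # trend change after the intervention
]))
fit = sm.OLS(rate, X).fit()
print(fit.params)  # [intercept, trend, level, trend_change]
```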

Future advances in methodology will likely involve the use of advanced modeling techniques and methods, such as g-estimation and dynamic simulation, to deal with time-varying confounding and to evaluate complex diagnostic pathways [31, 32]. These complex analyses will require the assistance of experts in these advanced methods. Indeed, the group recommends leveraging best practices (e.g. rigorous and established criteria for diagnostic test evaluation, cost-effectiveness, etc.) and engaging knowledge experts from other domains (epidemiologists, cognitive scientists, patients, health economists, etc.) where they are appropriate to the research question.

Unique challenges in addressing research biases in diagnostic error solutions research

The group acknowledged the unwanted role that various types of bias may play in studies of diagnostic error reduction strategies. Because there is currently inadequate consensus around definitions of diagnostic error, it is critical that, within the context of a study, all parties adjudicating whether an error has occurred work with the same unambiguous definition. Another bias particularly relevant to studies of diagnostic error solutions is the improvement in performance that generally occurs when people are observed, the so-called Hawthorne effect [33], which could cause problems if subjects are observed more closely during an intervention period than during a control period (or vice versa).

Although the group favors the experimental and quasi-experimental designs discussed above because they are less prone to confounding than observational designs, many diagnostic error studies still require observational designs, for hypothesis generation and for practical reasons. Confounding occurs when a (biased) association between the intervention (or other exposure) and the outcome arises from a common cause of the two. For example, if patients with a greater severity of illness are more likely to receive the intervention and are also more likely to be diagnosed with the disease by the reference standard, then they may have a worse outcome unrelated to the intervention (i.e. the association between the intervention and the worse outcome is confounded by severity of disease). Controlling for confounders is important to avoid bias, but the strength and even existence of most confounders is unknown. Although there is controversy regarding their ability to fully compensate for confounding in observational designs [34], propensity scores are one technique that can be used to adjust for confounding arising from the likelihood that the patient has the disease, the patient’s severity of disease, or the clinician’s threshold for diagnosis, all of which are likely to be important sources of bias in diagnostic error solution research [35].
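A minimal sketch of the propensity-score idea under the severity-of-illness scenario described above: treatment assignment depends on severity, a logistic model estimates each patient's propensity, and inverse-probability weights then balance severity across arms. The data are simulated and scikit-learn is assumed.

```python
# Sketch of propensity-score weighting under the assumption in the text that
# severity of illness confounds the intervention-outcome association. Data
# are simulated; scikit-learn's LogisticRegression estimates the scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
severity = rng.normal(0, 1, n)             # the confounder
p_treat = 1 / (1 + np.exp(-severity))      # sicker patients treated more often
treated = rng.binomial(1, p_treat)

# estimate propensity scores from the measured confounder
ps_model = LogisticRegression().fit(severity.reshape(-1, 1), treated)
scores = ps_model.predict_proba(severity.reshape(-1, 1))[:, 1]

# inverse-probability-of-treatment weights balance severity across arms
weights = np.where(treated == 1, 1 / scores, 1 / (1 - scores))
for arm in (1, 0):
    mask = treated == arm
    balanced = np.average(severity[mask], weights=weights[mask])
    print(f"arm={arm}: weighted mean severity = {balanced:.2f}")  # both near 0
```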

Discussion

The group made recommendations to address issues across the spectrum of research on diagnostic error solutions: from which outcomes are important to measure, to development and design, to implementation and execution, to analysis. The group recommends the following design elements for studies of diagnostic error solutions:

  1. Select direct and indirect outcomes measures of importance to patients, while also practical for the particular solution.

  2. Develop a clearly-stated logic model for the solution to be tested.

  3. Use rapid, iterative prototyping in the early phases of solution testing.

  4. Use cluster-randomized clinical trials where feasible.

  5. Avoid simple pre-post designs, in favor of stepped wedge and interrupted time series.

  6. Leverage best practices for patient safety research and engage experts from relevant domains.

  7. Consider sources of bias and design studies and their analyses to minimize selection and information bias and control for confounding.

While these recommendations should lead to more robust interventions to reduce diagnostic error, the group acknowledges that interventions successfully developed and deployed in a research environment may not have the same impact in routine clinical practice. Knowledge from the fields of quality improvement and implementation science should be leveraged to effectively translate solutions for reducing diagnostic error to a much broader clinical context.

Priority areas of diagnostic error mitigation research will likely receive more attention in the near future. These areas include competing diagnoses, understanding the role of organizational culture, timing of diagnosis, and sequencing of research studies. This research will likely require novel clinical, health services, and qualitative research methods to address the age-old problem of arriving at an accurate diagnosis.


Corresponding author: Beau B. Bruce, Emory University School of Medicine, Atlanta, GA, USA, E-mail:

Acknowledgments

We would like to acknowledge the suggestions and comments of Ashley N. D. Meyer that helped to improve our manuscript.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Employment or leadership: None declared.

  4. Honorarium: None declared.

  5. Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

References

1. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf 2013;22:ii21–7. doi:10.1136/bmjqs-2012-001615.

2. Chandra A, Nundy S, Seabury SA. The growth of physician medical malpractice payments: evidence from the national practitioner data bank. Health Aff (Millwood) 2005 [cited 2015 Jan 19]. Available from: http://content.healthaffairs.org/content/early/2005/05/31/hlthaff.w5.240. doi:10.1377/hlthaff.W5.240.

3. Thammasitboon S, Thammasitboon S, Singhal G. Diagnosing diagnostic error. Curr Probl Pediatr Adolesc Health Care 2013;43:227–31. doi:10.1016/j.cppeds.2013.07.002.

4. Croskerry P, Cosby KS, Schenkel SM, Wears R. Patient safety in emergency medicine. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins, 2009.

5. Schiff GD, Hasan O, Kim S, Abrams R, Cosby K, Lambert BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–7. doi:10.1001/archinternmed.2009.333.

6. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005;165:1493–9. doi:10.1001/archinte.165.13.1493.

7. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, Lonhart J, Schmidt E, Pineda N, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med 2013;158:381–9. doi:10.7326/0003-4819-158-5-201303051-00004.

8. Zwaan L, Schiff GD, Singh H. Advancing the research agenda for diagnostic error reduction. BMJ Qual Saf 2013;22:ii52–7. doi:10.1136/bmjqs-2012-001624.

9. Newman-Toker DE, Pronovost PJ. Diagnostic errors—the next frontier for patient safety. J Am Med Assoc 2009;301:1060–2. doi:10.1001/jama.2009.249.

10. Danforth KN, Smith AE, Loo RK, Jacobsen SJ, Mittman BS, Kanter MH. Electronic clinical surveillance to improve outpatient care: diverse applications within an integrated delivery system. EGEMS (Wash DC) 2014;2:1056. doi:10.13063/2327-9214.1056.

11. Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014;40:102–10. doi:10.1016/S1553-7250(14)40013-8.

12. Singh H, Graber ML, Kissam SM, Sorensen AV, Lenfestey NF, Tant EM, et al. System-related interventions to reduce diagnostic errors: a narrative review. BMJ Qual Saf 2012;21:160–70. doi:10.1136/bmjqs-2011-000150.

13. Graber ML, Kissam S, Payne VL, Meyer AND, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf 2012;21:535–57. doi:10.1136/bmjqs-2011-000149.

14. Diagnostic Error in Medicine 7th International Conference – Society to Improve Diagnosis in Medicine [Internet]. [cited 2015 Sep 21]. Available from: http://www.improvediagnosis.org/?page=DEM_2014.

15. Ovretveit JC, Shekelle PG, Dy SM, McDonald KM, Hempel S, Pronovost P, et al. How does context affect interventions to improve patient safety? An assessment of evidence from studies of five patient safety practices and proposals for research. BMJ Qual Saf 2011;20:604–10. doi:10.1136/bmjqs.2010.047035.

16. Hudspeth J, El-Kareh R, Schiff G. Use of an expedited review tool to screen for prior diagnostic error in emergency department patients. Appl Clin Inform 2015;6:619–28. doi:10.4338/ACI-2015-04-RA-0042.

17. Singh H, Daci K, Petersen LA, Collins C, Petersen NJ, Shethia A, et al. Missed opportunities to initiate endoscopic evaluation for colorectal cancer diagnosis. Am J Gastroenterol 2009;104:2543–54. doi:10.1038/ajg.2009.324.

18. Corley DA, Jensen CD, Marks AR, Zhao WK, Lee JK, Doubeni CA, et al. Adenoma detection rate and risk of colorectal cancer and death. N Engl J Med 2014;370:1298–306. doi:10.1056/NEJMoa1309086.

19. Thompson GC, Schuh S, Gravel J, Reid S, Fitzpatrick E, Turner T, et al. Variation in the diagnosis and management of appendicitis at Canadian pediatric hospitals. Acad Emerg Med 2015;22:811–22. doi:10.1111/acem.12709.

20. Hudson SM, Contreras R, Kanter MH, Munz SJ, Fong DS. Centralized reading center improves quality in a real-world setting. Ophthalmic Surg Lasers Imaging Retina 2015;46:624–9. doi:10.3928/23258160-20150610-05.

21. Centers for Medicare & Medicaid Services. Risk adjustment [Internet]. In: Medicare managed care manual. Baltimore, MD; 2014 [cited 2015 Sep 21]. Available from: https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/Internet-Only-Manuals-IOMs-Items/CMS019326.html.

22. Page DL, Dupont WD, Rogers LW, Landenberger M. Intraductal carcinoma of the breast: follow-up after biopsy only. Cancer 1982;49:751–8. doi:10.1002/1097-0142(19820215)49:4<751::AID-CNCR2820490426>3.0.CO;2-Y.

23. Foy R, Ovretveit J, Shekelle PG, Pronovost PJ, Taylor SL, Dy S, et al. The role of theory in research to develop and evaluate the implementation of patient safety practices. BMJ Qual Saf 2011;20:453–9. doi:10.1136/bmjqs.2010.047993.

24. Petersen D, Taylor EF, Peikes D. The logic model: the foundation to implement, study, and refine patient-centered medical home models [Internet]. Agency for Healthcare Research and Quality; 2013. Available from: https://pcmh.ahrq.gov/page/logic-model-foundation-implement-study-and-refine-patient-centered-medical-home-models.

25. Patient Toolkit – Society to Improve Diagnosis in Medicine [Internet]. [cited 2016 Feb 11]. Available from: http://www.improvediagnosis.org/?page=PatientToolkit.

26. Henriksen K, Brady J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013;22:ii1–5. doi:10.1136/bmjqs-2013-001827.

27. Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what’s the goal? Acad Med 2002;77:981–92. doi:10.1097/00001888-200210000-00009.

28. Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, et al. The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc 2006;13:16–23. doi:10.1197/jamia.M1749.

29. Brown CA, Lilford RJ. The stepped wedge trial design: a systematic review. BMC Med Res Methodol 2006;6:54. doi:10.1186/1471-2288-6-54.

30. Fretheim A, Zhang F, Ross-Degnan D, Oxman AD, Cheyne H, Foy R, et al. A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation. J Clin Epidemiol 2015;68:324–33. doi:10.1016/j.jclinepi.2014.10.003.

31. Robins JM, Blevins D, Ritter G, Wulfsohn M. G-estimation of the effect of prophylaxis therapy for Pneumocystis carinii pneumonia on the survival of AIDS patients. Epidemiology 1992;3:319–36. doi:10.1097/00001648-199207000-00007.

32. Marshall DA, Burgos-Liz L, IJzerman MJ, Crown W, Padula WV, Wong PK, et al. Selecting a dynamic simulation modeling method for health care delivery research – part 2: report of the ISPOR dynamic simulation modeling emerging good practices task force. Value Health 2015;18:147–60. doi:10.1016/j.jval.2015.01.006.

33. McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. J Clin Epidemiol 2014;67:267–77. doi:10.1016/j.jclinepi.2013.08.015.

34. Brooks JM, Ohsfeldt RL. Squeezing the balloon: propensity scores and unmeasured covariate balance. Health Serv Res 2013;48:1487–507. doi:10.1111/1475-6773.12020.

35. Joffe MM, Rosenbaum PR. Invited commentary: propensity scores. Am J Epidemiol 1999;150:327–33. doi:10.1093/oxfordjournals.aje.a010011.

Received: 2016-1-19
Accepted: 2016-2-16
Published Online: 2016-3-8
Published in Print: 2016-3-1

©2016 by De Gruyter
