
Using audit and feedback to increase clinician adherence to clinical practice guidelines in brain injury rehabilitation: A before and after study

  • Laura Jolliffe ,

    Roles Data curation, Formal analysis, Investigation, Supervision, Visualization, Writing – original draft, Writing – review & editing

    l.jolliffe@latrobe.edu.au

    Affiliations College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia; Alfred Health, Caulfield, Victoria, Australia

  • Jacqui Morarty,

    Roles Conceptualization, Funding acquisition, Resources, Writing – review & editing

    Affiliation Alfred Health, Caulfield, Victoria, Australia

  • Tammy Hoffmann,

    Roles Writing – original draft, Writing – review & editing

    Affiliation Centre for Research in Evidence-Based Practice, Bond University, Robina, Queensland, Australia

  • Maria Crotty,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliation Flinders University, Bedford Park, Adelaide, Australia

  • Peter Hunter,

    Roles Funding acquisition, Resources, Writing – review & editing

    Affiliation College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia

  • Ian D. Cameron,

    Roles Funding acquisition, Methodology, Writing – review & editing

    Affiliation John Walsh Centre for Rehabilitation Research, Kolling Institute, University of Sydney, Camperdown, Sydney, Australia

  • Xia Li,

    Roles Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Department of Mathematics and Statistics, La Trobe University, Bundoora, Victoria, Australia

  • Natasha A. Lannin

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    Current address: Department of Occupational Therapy, The Alfred, Prahran, Victoria, Australia

    Affiliations College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia; Alfred Health, Caulfield, Victoria, Australia

Abstract

Objective

This study evaluated whether frequent (fortnightly) audit and feedback cycles over a sustained period of time (>12 months) increased clinician adherence to recommended guidelines in acquired brain injury rehabilitation.

Design

A before and after study design.

Setting

A metropolitan inpatient brain injury rehabilitation unit.

Participants

Clinicians: medical, nursing, and allied health staff.

Interventions

Fortnightly cycles of audit and feedback for 14 months. Each fortnight, medical file and observational audits were completed against 114 clinical indicators.

Main outcome measure

Adherence to guideline indicators before and after the intervention, analysed using proportions, Mann-Whitney U tests, and chi-square analyses.

Results

Clinically and statistically significant improvements in median clinical indicator adherence were found immediately following the audit and feedback program, from 38.8% (95% CI 34.3 to 44.4) to 83.6% (95% CI 81.8 to 88.5). Three months after cessation of the intervention, median adherence had decreased from 82.3% to 76.6% (95% CI 72.7 to 83.3, p<0.01). Findings suggest that some individual indicators are more amenable to change through an audit and feedback program than others.

Conclusion

A fortnightly audit and feedback program increased clinicians’ adherence to guideline recommendations in an inpatient acquired brain injury rehabilitation setting. We propose future studies build on the evidence-based method used in the present study to determine effectiveness and develop an implementation toolkit for scale-up.

Introduction

Acquired brain injury is a leading cause of disability in adults [1], with a large proportion of patients requiring rehabilitation [2]. Consistent with other areas of health care, neurological rehabilitation has been observed to vary in quality between services [3, 4]. Clinical practice guidelines provide recommendations to help clinicians make evidence-informed decisions about the interventions they provide [5–7]. Despite the availability of such guidelines, auditing suggests that rehabilitation clinicians do not routinely provide care consistent with guideline recommendations [8]. Audit and feedback has been recommended as an intervention capable of increasing the uptake of evidence-based recommendations by clinicians [9–11].

A growing number of researchers are trialing audit and feedback interventions to promote the use of evidence in rehabilitation; however, outcomes for improving clinician adherence have been mixed. The use of implementation interventions in rehabilitation is undoubtedly a positive step forward; nevertheless, critical reflection on the effectiveness of different interventions is key. Specific to audit and feedback interventions, two systematic reviews have synthesised the evidence on effectiveness; these reviews suggest limited to modest improvements occur at best [12, 13]. The latest Cochrane systematic review concluded that audit and feedback generally produces small but potentially important improvements [12]. This is consistent with a second meta-analysis, which found modest improvements on quality outcomes [13]. These reviews [12, 13] suggest the need for clear definitions of goal behaviors, and triangulation of data collection, to improve the effect of audit and feedback interventions. They also suggested that the characteristics of the feedback component of future studies should be identified so as to build an understanding of the causal mechanisms underpinning audit and feedback as an intervention [12–14].

Prior audit and feedback interventions to increase adherence to guidelines in rehabilitation have been provided infrequently or at low ‘dose’. For example, to improve the implementation of transport training after stroke, McCluskey and colleagues [15] delivered a single audit and feedback cycle in their knowledge translation program, while Kristensen & Hounsgaard [16] provided four cycles over 15 months, and Vratsistas-Curto et al [17] provided four cycles over 4 years. What remains unknown is the effect of audit and feedback when it is provided at a higher dose (such as weekly or fortnightly). A further limitation of the rehabilitation studies to date is that none triangulated their audit information; triangulation, which involves gathering information from multiple sources, has so far been missing from rehabilitation audit and feedback studies.

Studies outside of rehabilitation also suggest that it is important to strategically plan the method of feedback delivery; for example, nurses reported feeling ‘exasperated’ and ‘angry’ when they received feedback they perceived as critical [18]. Few studies have reported the use of a theoretical underpinning to their feedback delivery [12, 13, 19]. In contrast, LaVigna and colleagues [20] deliberately adopted a ‘non-aversive approach’ when working with staff in quality improvement cycles, and developed a form of audit and feedback known as periodic service review [20, 21]. Periodic service review has its base in both total quality management [22] and organizational behavior management [23, 24], and differs from other auditing approaches used in prior rehabilitation studies, since it is undertaken at a high dose, uses positive support strategies during feedback, and actively involves staff in the process [21]. It remains unknown if this approach to audit and feedback would increase adherence to guidelines in rehabilitation, where prior audit and feedback studies have not.

Therefore, the aim of this study was to evaluate the impact of a prospective audit and feedback program on adherence to acquired brain injury rehabilitation guidelines. We sought to understand whether:

  1. frequent audit and feedback cycles (with positive behavioral support) increase clinician adherence to clinical practice guidelines in acquired brain injury rehabilitation;
  2. increases in adherence are maintained after cessation of the audit and feedback program; and
  3. changes in adherence differ across individual guideline indicators.

Method

Design

A before and after design with a 3-month follow-up was used to test the effect of a 14-month audit and feedback program in an inpatient rehabilitation setting. There were 8 assessments at baseline, 8 at the end of the intervention, and 20 at follow-up. The study design and flow is depicted in Fig 1. The administrative organization’s Human Research Ethics Committee approved this study prior to its commencement (Alfred Health Human Research Ethics Committee 355/14); a waiver of consent for participation was approved, meaning that all inpatients and all staff were involved for the duration of the study period.

Settings and participants

This study was conducted between September 2014 and March 2016 in a newly established 42-bed acquired brain injury rehabilitation unit in metropolitan Melbourne, Australia. All clinicians (inclusive of nursing, medical, and allied health staff) working on the unit were included in this study and, with the support of management, were expected to attend each fortnightly feedback session as part of their usual workplace meeting commitments. Staffing ratios within the unit are presented in Table 1. At the time of this study, other passive knowledge translation interventions (including the availability of guidelines on each ward, and posters of best-practice summaries) were also provided to clinicians.

Intervention

A 14-month audit and feedback program was developed. Audit criteria were developed by two authors (NL, LJ) a priori from recommendations with high-quality (Grading of Recommendations Assessment, Development and Evaluation (GRADE) level one) evidence cited in stroke and traumatic brain injury clinical practice guidelines [25, 26] as well as the organization’s model of care and practice standards [27]. The resultant 114 observable criteria were mapped to 16 overarching guideline indicator areas for ease of communication with staff regarding performance. These guideline indicator areas included: behavioral support plans, care plans, continuity of care, discharge planning, equipment use, family education, goal setting, medical issues management, medical records, minimally conscious care, patient safety, personal care regimes, post traumatic amnesia management, roles and responsibilities, therapy interventions, and ward rounds. The organization set the target for staff to adhere to a minimum of 75% of applicable guideline indicators per patient prior to commencing the study.

Our audit and feedback program was based on the periodic service review method developed by LaVigna et al [20]. Acknowledging that the clinical team is key to the delivery of evidence-based rehabilitation, we aimed to improve and then maintain the quality of the service using positive behavioral approaches to staff management [21]. We adopted a non-aversive approach to working with the staff during the feedback sessions, making the clinicians the leaders of the change solutions [21, 23, 24]. The audit-feedback cycles were regular and frequent throughout the study period. Each fortnight, a research assistant randomly selected two patients on the rehabilitation unit (one from each of the two medical teams) and completed a) a medical file audit; b) on-ward observations; c) clinical staff interviews across three disciplines (allied health, nursing and medical); d) a patient interview; and e) family/friend interviews. At the completion of both audits, descriptive statistics (proportion of criteria adhered to) were calculated and prepared for the clinician feedback meeting. Feedback sessions were offered twice within each fortnightly period to enable shift-working staff to attend. These 15-minute sessions provided the audit results to clinicians, and were delivered by the senior author (NL), an accepted member of staff. Following the feedback sessions, data were made available to all staff via a shared drive on the organization’s computer network. These audit-feedback cycles were repeated every two weeks for 14 months. The intervention is summarized in Table 2; please refer to Fig 2 for the flow of the fortnightly intervention and S1 Table for the Standards for Reporting Implementation Studies checklist.

Table 2. Intervention summary based on TIDieR, delivered by researchers.

https://doi.org/10.1371/journal.pone.0213525.t002

Audit data were triangulated, involving a medical file audit, interviews with clinical staff, and interviews with the patient and/or family. An example of an interview question for a clinical staff member is “Can you identify the patient’s primary rehabilitation goals consistent with the documented goals from the interdisciplinary family meeting?” If the clinician responded correctly, this item was deemed met and scored “yes” on the audit form. An example of a medical file audit indicator was “Does the patient receive 4.5–5 hours of therapy daily?” To score “yes” for this item, on-ward observation as well as a review of the patient’s therapy timetable was completed. An example of an interview question for the patient and/or family member is “Did someone provide you with a tour of the unit when you first arrived on the ward?” The responses to these interviews (yes or no) were recorded on the audit form. (The data dictionary of audit criteria is available from the authors on request.)
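The yes/no scoring described above, with not-applicable items excluded (as described later in the Data analysis section), can be sketched in a few lines. This is a minimal illustration only: the criterion names below are hypothetical and do not come from the study's 114-item data dictionary.

```python
# Hypothetical sketch of scoring one fortnightly audit record: each
# criterion is "yes", "no", or None (not applicable), and adherence is
# the proportion of "yes" among applicable criteria. Names are invented.
from typing import Optional

def adherence(audit: dict[str, Optional[str]]) -> float:
    """Percent of applicable criteria met ('yes') for one audited patient."""
    applicable = {k: v for k, v in audit.items() if v is not None}
    if not applicable:
        raise ValueError("no applicable criteria for this case")
    met = sum(1 for v in applicable.values() if v == "yes")
    return 100.0 * met / len(applicable)

record = {
    "goal_setting_documented": "yes",       # from medical file audit
    "daily_therapy_4_5_to_5_hours": "no",   # from observation + timetable
    "unit_tour_on_admission": "yes",        # from patient/family interview
    "minimally_conscious_care_plan": None,  # not applicable to this case
}
print(adherence(record))  # → 66.66666666666667 (2 of 3 applicable met)
```

The same exclusion rule (dropping `None` items before computing the proportion) mirrors how non-applicable indicators were removed from the analysis for a given audit period.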

A cessation period of three months then ensued, in which no auditing or feedback occurred. In March 2016, n = 20 randomly selected inpatient cases were audited (consistent with the main audit method) to investigate guideline adherence following intervention cessation.

Organizational context

The intervention was tailored to the organization, and designed to be multifaceted (to increase the likelihood of uptake) and frequent (to lower the fidelity gap). The core of the intervention (i.e. audit and feedback) was held consistent throughout the study (no adaptations); instead, the passive knowledge translation interventions (in particular, the education components) were tailored to address highlighted fidelity gaps each fortnight. For example, if auditing revealed low adherence to a guideline indicator, an evidence summary was created to increase staff awareness of the expected behavior. To understand the intervention dose delivered and dose received, we collected data on both number of staff employed (who would have received all passive knowledge translation components) and number of staff who attended the feedback sessions (referring to exposure to and uptake of the core intervention).

Our implementation intervention targeted behavior changes within both the individual (i.e., staff) and the organization. While the feedback was provided to staff, behavior change discussions held within feedback sessions took into consideration the context of the organization, the patient/family dyads, and the national healthcare system. With staff leading the behavior changes, they held in-depth knowledge of the processes that controlled adoption of the guidelines within their organization, maximizing effect [28]. Our implementation targets were individual clinicians who worked within the rehabilitation unit; however, buy-in and support from management was an obvious factor impacting on implementation effectiveness. The Director of Rehabilitation, Director of Nursing Services and the Service Manager were asked to communicate support for guideline implementation to staff during orientation, at staff meetings, and via email throughout the intervention period.

Outcome measures

The primary outcome was adherence to guideline indicators as measured by the audits. Consistent with the auditing which formed part of the intervention, this included triangulation of data from the medical file audits, unit-based observations, and patient, staff, and family interviews.

Data analysis

Each fortnight, dichotomous data were recorded in an Excel spreadsheet and later imported into SPSS V24 for analysis. The mean adherence from audit data of months 0–2 was calculated to represent ‘baseline’ adherence. Mean adherence audit data from months 13–15 were calculated to represent ‘end of intervention’ adherence for comparisons. Following intervention cessation (months 15–18), 20 randomly selected cases were audited (months 18–19) to calculate average (mean) adherence and assess whether adherence was maintained or reduced. Where an audit item was not applicable to the selected case (e.g., if the selected case was not minimally conscious, the minimally conscious care items were not applicable), those items were removed from the analysis for that period.

Medians (95% confidence intervals) and Mann-Whitney U analyses were used to describe comparisons across all data because the small sample size at each timepoint (n = 8, n = 8, n = 20 respectively) produced non-normally distributed data. Confidence intervals were calculated to highlight statistical significance where it existed, along with measures of variance around median differences (IQR). Chi-square analyses for individual guideline indicator items were conducted to compare adherence across comparison points (given the data were binary), with Fisher’s exact test statistic additionally reported due to the small sample size [29]. To describe the data, means (95% confidence intervals) and differences between means (95% confidence intervals) were also calculated and are presented in S2 Table. The Bonferroni correction was applied to adjust the alpha level for all tests since multiple comparisons were made (with tests run for 230 comparisons, the alpha level was lowered to 0.0002). Refer to Fig 2 for a diagrammatic representation of the analysis points.
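The study used SPSS, but the same comparisons can be sketched with standard statistical libraries. The sketch below, using made-up adherence values (not the study's data), shows a Mann-Whitney U test on per-case adherence proportions, a chi-square and Fisher's exact test on one hypothetical binary indicator, and the Bonferroni-adjusted alpha for 230 comparisons.

```python
# Illustrative reproduction of the analysis steps with hypothetical data.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency, fisher_exact

# Per-case adherence proportions (%) at baseline (n=8) and end of
# intervention (n=8) -- invented values for illustration only.
baseline = np.array([35.1, 40.2, 38.8, 33.9, 44.1, 37.0, 39.5, 41.3])
end_int  = np.array([81.9, 84.0, 83.6, 88.1, 82.5, 85.7, 80.9, 86.4])

# Mann-Whitney U test for the overall adherence comparison.
u_stat, p_overall = mannwhitneyu(baseline, end_int, alternative="two-sided")

# One hypothetical binary indicator: met / not-met counts at two time points.
table = np.array([[2, 6],   # baseline: 2 cases met, 6 not met
                  [7, 1]])  # end:      7 cases met, 1 not met
chi2, p_chi2, dof, _ = chi2_contingency(table)  # Yates-corrected for 2x2
_, p_fisher = fisher_exact(table)  # reported alongside chi-square for small n

# Bonferroni-adjusted alpha for 230 comparisons, as described in the paper.
alpha = 0.05 / 230
print(f"Mann-Whitney p={p_overall:.5f}, Fisher p={p_fisher:.4f}, "
      f"adjusted alpha={alpha:.5f}")
```

Note that with only eight cases per timepoint, exact methods (SciPy chooses the exact Mann-Whitney distribution when samples are small and untied) and Fisher's exact test are more defensible than asymptotic approximations, which is consistent with the paper's rationale.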

Following the quantitative analysis, a narrative synthesis was undertaken to integrate the findings from our study with recommendations for conducting audit and feedback projects drawn from previously conducted systematic reviews [12, 13]. Two authors (NL, LJ) extracted the factors contributing to the success of the audit and feedback program into categories highlighted by these previous systematic reviews. All authors then reviewed and refined the list of factors.

Results

During the study period, 58 clinical staff were employed, with strong representation at the fortnightly feedback sessions (mean attendance 67%, SD 8). Clinical profiles of the patients audited at each time point are presented in Table 3.

Table 3. Patient demographic characteristics of randomly selected patients included at each audit time point.

https://doi.org/10.1371/journal.pone.0213525.t003

The sustained audit and feedback program significantly increased clinicians’ adherence to guideline recommendations from a median of 38.8% (95% CI 34.3 to 44.4) at baseline to 83.6% (95% CI 81.8 to 88.5) at the end of the intervention. Table 4 shows median total adherence at each time point. Following cessation of the audit and feedback program, clinician adherence levels decreased by 7% (95% CI 0.51 to 14.0) from the end of the intervention to follow-up; however, adherence to guideline indicators was maintained above the organization’s goal of 75% adherence.

Table 4. Median (IQR) of clinical practice guideline indicator adherence across measurement points, median differences between timepoints (95% Confidence Interval) and significance of the between group difference.

https://doi.org/10.1371/journal.pone.0213525.t004

Adherence changes differed across guideline indicators: some indicators were more susceptible to change with the audit and feedback program than others. For example, indicators related to ‘goal setting’, ‘therapy’ and ‘roles and responsibilities’ increased significantly during the intervention period, but this increase was not sustained at follow-up. Conversely, adherence to most of the ‘ward round’ indicators did not improve during the intervention period. Refer to Table 5 (and S2 Table) for full indicator change results.

Table 5. Adherence to audited indicators (n = 114) at three audit time points and difference (chi-square) between time points.

https://doi.org/10.1371/journal.pone.0213525.t005

Discussion

Our sustained fortnightly audit and feedback program led to a significant increase in adherence to clinical practice guideline recommendations. Following the three-month cessation period during which no audit and feedback was provided, adherence to guideline recommendations decreased (but remained above the organization’s benchmark of ≥75% adherence). The positive results of our study contrast with those of other audit and feedback studies conducted in rehabilitation [15–17]. Our program had strong support from senior management and the organization, as well as external funding. This external context supported higher-frequency audit and feedback cycles, and our feedback was grounded in social cognitive modelling. The adherence improvements following the intervention were likely due to a combination of the following attributes of our program: a) a high level of managerial support, b) feedback delivered using a non-aversive and clinician-led approach, c) a high frequency of audit and feedback cycles, d) the sustained 14-month duration of the program, and e) a shared goal of working towards a target of ≥75% adherence. By describing these attributes, future studies can build on our program’s success.

We do acknowledge that when the audit and feedback program ceased, adherence rates decreased, although they did not return to baseline levels. This decrease was not unexpected, and while we did not investigate the reasons why, we anticipate that the loss of accountability (knowledge that auditing was not occurring), as well as no longer having formal opportunities to reflect on practice gaps, contributed to the lower rates of adherence. Interestingly, some audit indicators increased in adherence after the program ceased, which suggests that comprehensive processes developed and established during the study period carried over beyond the period of audit and feedback.

Our results support many findings from audit and feedback studies conducted outside of rehabilitation. Indicators that had high adherence at baseline in our study were also less likely to improve with regular audit and feedback [12, 13, 30, 31]; the benefits of audit and feedback programs are likely greatest when baseline performance is low. The use of positive support while delivering feedback (i.e., employing a ‘no blame’ ethos and highlighting discipline ‘achievements’) is also consistent with other studies [18, 32, 33], which suggest that feedback perceived as supportive rather than punitive is more likely to positively influence clinician behavior. Finally, our study provided feedback in both written and verbal formats, delivered by a respected internal senior member of staff. These characteristics are described in systematic reviews as effective strategies to increase audit and feedback effectiveness [12, 13]. Future studies testing audit and feedback interventions should continue to investigate models of providing feedback.

Setting targets (or goals) has been proposed as increasing the effectiveness of feedback; however, this remains uncertain [34, 35]. In contrast to Garner and colleagues [36], our results suggest that setting goals and developing action plans during feedback sessions was an effective strategy. With positive support, the facilitator guided clinician discussions towards solutions and encouraged the clinicians to create changes that might lead to increased guideline adherence in the following fortnight. The use of a cognitive model, in combination with high-frequency (i.e., fortnightly) and solution-focused feedback, is a novel addition to the evaluative studies in this field and is supported in theory by the work of Hysong [13] and Ivers [12, 31]. Fig 3 outlines the potential factors which may have contributed to the success of the audit and feedback program.

Fig 3. Factors that contribute to the success of the audit and feedback program as indicated by the present study.

https://doi.org/10.1371/journal.pone.0213525.g003

Organizational expectation of clinician participation is likely to have contributed to the high level of staff engagement achieved in the present study. Current behavior change models focus predominantly on individual-level or local change characteristics (e.g., the Behaviour Change Wheel [37] and Theoretical Domains Framework [38]). Research around behavior change interventions has explored staff motivation for, and perceptions of, audit and feedback at an individual level [18]. Less discussed is how organizational expectations drive behavior change in clinicians. The revisited Promoting Action on Research Implementation in Health Services (PARiHS) framework aptly encompasses the construct of environment and context, separating out micro (local) and meso (organizational) from macro (political, policy) levels [39]. In this framework, organizational systems and culture are a key consideration for behavior change. Given the organizational expectation of staff involvement in our study, as well as the intervention frequency (i.e., fortnightly) and paid staff time release for feedback, the strong contribution of organization and culture to our positive findings cannot be overlooked.

Study limitations

Like all pragmatic studies in the clinical setting, our study is not without limitations. Not all staff attended each fortnight’s feedback session. While this reflects the practical reality of a ward environment and the shiftwork nature of hospital staffing, it did mean that not all clinicians received regular feedback. This study sought to investigate the effectiveness of a sustained program, and so this was an accepted limitation within the design of the study. We also acknowledge that the use of only one site may limit the generalizability of the results, and limits our ability to predict whether scaling up will achieve similar rates of adoption and delivery across multiple organizations. Furthermore, contextual factors may have positively affected uptake at our study site (since it was newly established with newly employed staff) which may not directly translate to other sites. Our program also sought to improve adherence to n = 114 indicators of best-practice rehabilitation. While effective at the single site, scaling up our complex audit and feedback intervention may not be straightforward, and future programs may choose a smaller number of indicators to implement. Finally, this was a funded study, so sustainable infrastructure needs to be established to enable scaling up. We recommend that future studies include a controlled comparison, consider using both publicly and privately funded rehabilitation hospitals, and include a cost/benefit analysis alongside any evaluation of efficacy.

Conclusion

Our study demonstrated that a frequent and sustained audit and feedback program is an effective knowledge translation intervention to increase adherence to brain injury rehabilitation guidelines. Findings also highlighted that some guideline indicators are less likely to change with audit and feedback, suggesting that alternative knowledge translation strategies may be more appropriate to achieve behavior change for these items. Our program has the potential to inform both local and larger initiatives to improve the quality of rehabilitation received and, more significantly beyond rehabilitation, the field of implementation science and the knowledge base underpinning audit and feedback.

Supporting information

S1 Table. Standards for Reporting Implementation Studies: the StaRI checklist for completion.

https://doi.org/10.1371/journal.pone.0213525.s001

(DOCX)

S2 Table. Proportion (%) (95% CI) of clinical practice guideline indicator adherence (n = 114) across measurement points.

https://doi.org/10.1371/journal.pone.0213525.s002

(DOCX)

Acknowledgments

We thank Elizabeth O’Shannessy (project manager) and Alison Gehrig (occupational therapist) for their contributions to auditing. We acknowledge the PROCESS-ABI research group who provided their time in giving feedback on the study, including Professor Russell Gruen, Professor Lynne Turner Stokes, Dr Mithu Palit, Andrew Perta, Katrina Neave, and Professor Anne Holland.

References

  1. 1. Australian Institute of Health and Welfare. Disability in Australia: Acquired brain injury. Canberra: Australian Institute of Health and Welfare; 2007.
  2. 2. Welfare AIoHa. Admitted patient care 2015–16: Australian hospital statistics. Canberra, Australia: AIHW, 2017.
  3. 3. Green SE, Bosch M, McKenzie JE, O’Connor DA, Tavender EJ, Bragge P, et al. Improving the care of people with traumatic brain injury through the Neurotrauma Evidence Translation (NET) program: protocol for a program of research. Implementation Science. 2012;7(1):74. pmid:22866892
  4. 4. National Stroke Foundation. National Stroke Audit: Rehabilitation Services Report 2012. Melbourne, Australia: National Stroke Foundation; 2013.
  5. 5. Woolf SH, Grol R, Hutchinson A, Eccles M, Grimshaw J. Clinical guidelines: potential benefits, limitations, and harms of clinical guidelines. Bmj. 1999;318(7182):527–30. pmid:10024268; PubMed Central PMCID: PMC1114973.
  6. 6. Alonso-Coello P, Martínez García L, Carrasco JM, Solà I, Qureshi S, Burgers JS. The updating of clinical practice guidelines: insights from an international survey. Implementation Science. 2011;6(1):107. pmid:21914177
  7. 7. Quaglini S, Cavallini A, Gerzeli S, Micieli G. Economic benefit from clinical practice guideline compliance in stroke patient management. Health Policy. 69(3):305–15. pmid:15276310
  8. 8. Stroke Foundation. National Stroke Audit–Rehabilitation Services Report 2016. Melbourne, Australia2016.
  9. 9. Stroke Foundation. Clinical Guidelines for Stroke Management 2017. Melbourne Australia 2017.
  10. 10. Intercollegiate Stroke Working Party. National clinical guideline for stroke. London: Royal College of Physicians; 2012.
  11. 11. Hebert D, Lindsay MP, McInyre A, Kirton A, Rumney PG, Bagg S, et al. Canadian Stroke Best Practice Recommendations: Stroke rehabilitation practice guidelines, update 2015. Ottawa, Ontario Canada: Canadian Stroke Network; 2016.
  12. 12. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. The Cochrane database of systematic reviews. 2012;(6):Cd000259. Epub 2012/06/15. pmid:22696318.
  13. 13. Hysong SJ. Meta-analysis: audit and feedback features impact effectiveness on care quality. Med Care. 2009;47(3):356–63. Epub 2009/02/06. pmid:19194332; PubMed Central PMCID: PMCPMC4170834.
  14. Foy R, Eccles M, Jamtvedt G, Young J, Grimshaw J, Baker R. What do we know about how to do audit and feedback? Pitfalls in applying evidence from a systematic review. BMC Health Services Research. 2005;5(1):50. pmid:16011811
  15. McCluskey A, Ada L, Kelly PJ, Middleton S, Goodall S, Grimshaw JM, et al. A behavior change program to increase outings delivered during therapy to stroke survivors by community rehabilitation teams: The Out-and-About trial. Int J Stroke. 2016;11(4):425–37. Epub 2016/02/26. pmid:26906448.
  16. Kristensen H, Hounsgaard L. Evaluating the impact of audits and feedback as methods for implementation of evidence in stroke rehabilitation. British Journal of Occupational Therapy. 2014;77(5):251–9.
  17. Vratsistas-Curto A, McCluskey A, Schurr K. Use of audit, feedback and education increased guideline implementation in a multidisciplinary stroke unit. BMJ Open Quality. 2017;6(2). pmid:29450304
  18. Christina V, Baldwin K, Biron A, Emed J, Lepage K. Factors influencing the effectiveness of audit and feedback: nurses' perceptions. Journal of Nursing Management. 2016;24(8):1080–7. pmid:27306646
  19. Colquhoun HL, Brehaut JC, Sales A, Ivers N, Grimshaw J, Michie S, et al. A systematic review of the use of theory in randomized controlled trials of audit and feedback. Implementation Science. 2013;8(1):66. pmid:23759034
  20. LaVigna GW. The Periodic Service Review: A Total Quality Assurance System for Human Services and Education. Paul H Brookes Publishing Company; 1994.
  21. Lowe K, Jones E, Horwood S, Gray D, James W, Andrew J, et al. The evaluation of periodic service review (PSR) as a practice leadership tool in services for people with intellectual disabilities and challenging behaviour. Tizard Learning Disability Review. 2010;15(3):17–28.
  22. Mawhinney TC. Total quality management and organizational behavior management: an integration for continual improvement. Journal of Applied Behavior Analysis. 1992;25(3):524–43. pmid:16795783
  23. Deming EW. Out of the Crisis. Cambridge, MA: Center for Advanced Engineering Study, MIT; 1986.
  24. Sluyter G. Contemporary Issues in Administration. Washington DC: American Association on Mental Retardation; 2000.
  25. Stroke Foundation. Clinical Guidelines for Stroke Management 2010. Melbourne, Australia: Stroke Foundation; 2010.
  26. Acquired Brain Injury Knowledge Uptake Strategy (ABIKUS) guideline development group. Evidence based recommendations for rehabilitation of moderate to severe acquired brain injury. 2007.
  27. Hetts SW, Turk A, English JD, Dowd CF, Mocco J, Prestigiacomo C, et al. Stent-assisted coiling versus coiling alone in unruptured intracranial aneurysms in the matrix and platinum science trial: safety, efficacy, and mid-term outcomes. AJNR Am J Neuroradiol. 2014;35(4):698–705. pmid:24184523.
  28. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629. Epub 2004/12/15. pmid:15595944; PubMed Central PMCID: PMC2690184.
  29. Larntz K. Small-sample comparisons of exact levels for chi-squared goodness-of-fit statistics. Journal of the American Statistical Association. 1978;73(362):253–63.
  30. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2006;(2):CD000259. pmid:16625533
  31. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O'Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. Journal of General Internal Medicine. 2014;29(11):1534–41. pmid:24965281
  32. Larson EL, Patel SJ, Evans D, Saiman L. Feedback as a strategy to change behaviour: the devil is in the details. J Eval Clin Pract. 2013;19(2):230–4. Epub 2011/12/02. pmid:22128773; PubMed Central PMCID: PMC3303967.
  33. D'Lima DM, Moore J, Bottle A, Brett SJ, Arnold GM, Benn J. Developing effective feedback on quality of anaesthetic care: what are its most valuable characteristics from a clinical perspective? J Health Serv Res Policy. 2015;20(1 Suppl):26–34. Epub 2014/12/05. pmid:25472987.
  34. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation: a 35-year odyssey. Am Psychol. 2002;57(9):705–17. Epub 2002/09/20. pmid:12237980.
  35. Nasser M, Oxman AD, Paulsen E, Fedorowicz Z. Local consensus processes: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2007;(1).
  36. Gardner B, Whittington C, McAteer J, Eccles MP, Michie S. Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med. 2010;70(10):1618–25. Epub 2010/03/09. pmid:20207464.
  37. Michie S, van Stralen MM, West R. The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science. 2011;6(1):42. pmid:21513547
  38. Cane J, O'Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implementation Science. 2012;7(1):37. pmid:22530986
  39. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implementation Science. 2016;11(1):33. pmid:27013464
  40. Scottish Intercollegiate Guidelines Network (SIGN). Brain injury rehabilitation in adults: a national clinical guideline. Edinburgh: SIGN; 2013. Publication 130.
  41. Royal College of Physicians, Royal College of Nursing. Ward rounds in medicine: principles for best practice. London: RCP; 2012.
  42. National Clinical Guideline Centre (NICE). Stroke rehabilitation: Long-term rehabilitation after stroke. London: NICE; 2013. Clinical guideline no. 162.
  43. Scottish Intercollegiate Guidelines Network (SIGN). Management of patients with stroke: rehabilitation, prevention and management of complications, and discharge planning. A national clinical guideline. Edinburgh: SIGN; 2010. Publication 118.
  44. Stroke Foundation of New Zealand and New Zealand Guidelines Group. Clinical Guidelines for Stroke Management 2010. 2010.
  45. Bayley MT, Tate R, Douglas JM, et al. INCOG guidelines for cognitive rehabilitation following traumatic brain injury: methods and overview. Journal of Head Trauma Rehabilitation. 2014;29:290–306.