
Resuscitation

Volume 81, Issue 4, April 2010, Pages 453-456

Simulation and education
Written evaluation is not a predictor for skills performance in an Advanced Cardiovascular Life Support course

https://doi.org/10.1016/j.resuscitation.2009.12.018

Abstract

Objective

Successful completion of the American Heart Association (AHA) Advanced Cardiovascular Life Support (ACLS) course requires both a written evaluation of cognitive knowledge and a practical evaluation that tests psychomotor skills, cognitive knowledge, and affective behaviors such as leadership and team skills. The 2005 International Liaison Committee on Resuscitation (ILCOR) Consensus on Science with Treatment Recommendations noted that in Basic Life Support (BLS) there is little to no correlation between written and practical skills performance. The current study was conducted to determine whether a correlation exists between the written and practical evaluations in an ACLS course.

Methods

Thirty-four senior nursing students from four nursing programs participated in two separate ACLS classes, completing both the written and practical evaluations. Immediately following the courses, each participant served as team leader for a video-recorded simulated cardiac arrest event. A panel of expert ACLS instructors who had not taught in the courses reviewed each video and independently scored the team leaders’ performances.

Results

Spearman's rho correlation coefficient between the written test scores and practical skills performance was 0.194 (two-tailed p = 0.272).

Conclusion

The ACLS written evaluation was not a predictor of participant skills in managing a simulated cardiac arrest event immediately following an ACLS course. The single case simulations used in ACLS skills evaluation test a narrow portion of ACLS content while written evaluation tests can more practically test a broader spectrum of content. Both work in concert to define participant knowledge and neither should be used exclusively to determine participant competence.

Introduction

The American Heart Association (AHA) Advanced Cardiovascular Life Support (ACLS) course was created in 1973 and introduced as an AHA sponsored program in 1974.1, 2 The ACLS course has served as the model for other short-course format resuscitation courses.3 Since its inception, passing both a written evaluation and a practical skills assessment has been required for successful completion of the ACLS course. The validity of both the written and practical testing mechanisms currently in use has been demonstrated in previous studies.4, 5 There is, however, no published research that directly addresses whether student performance on written tests in advanced life support courses correlates with objectively measured performance of resuscitation as a whole or of its individual required subcomponents.

The current study proposed the following research question: is a written evaluation in an Advanced Cardiovascular Life Support course a predictor of students’ skills performance in a simulated cardiac arrest?

A single study of paediatric residents suggests that the correlation between written tests and objective performance measures may be poor: the mean score on the standardized Pediatric Advanced Life Support (PALS) written examination was 93.2% (±5.5%), yet none of the 28 third-year paediatric residents was able to successfully perform both basic and advanced airway maneuvers.6 Only 11% of the same trainees could adequately perform both vascular access procedures (intraosseous and femoral central line placement using the Seldinger technique). Unfortunately, correlation data between the written and performance-based tests were not published, as correlation was not the focus of the study.

An earlier study involving 45 paediatric residents described the residents as performing well on a written cognitive test, with 80% also able to achieve the end-point of four predefined resuscitation skills (bag-valve-mask ventilation, intubation, defibrillation, and intraosseous line placement).7 Again, this was not designed as a correlational study, and although learners achieved the end-point of each skill, they performed quite poorly at many of the essential subtasks. For instance, only 27% of residents verified that suction was functioning, and only 34% verified that the bag-valve-mask was available and functioning prior to intubation. The equivocal findings and lack of true correlations make it difficult to generalize from this paper to the research question we have undertaken.

A systematic review of the literature on the correlation between written tests and performance-based measures in basic life support (BLS) courses concluded that although the majority of papers (14 of 17) did suggest a correlation, much of the data for healthcare professionals was inconsistent and contradictory, requiring further research.8 The question of whether a written BLS test reflects BLS skills competence was judged “indeterminate.” Extrapolating the systematic review to advanced life support courses is not appropriate, as the knowledge and skills required in advanced life support differ from those in BLS, and the relative performances of students on the written and performance measures are likely to differ as well.

The literature from the broader medical education community, although not conclusive, does suggest a positive correlation between written tests and more objective measures of performance such as Objective Structured Clinical Examinations (OSCEs).9, 10, 11, 12, 13, 14 The correlations are, however, quite variable, and some published literature contradicts these positive correlations.15, 16 Of particular interest among these studies is that of Ram et al.,10 which suggested that a written test could correlate as well with performance in the clinical environment as an OSCE does.

The lack of evidence directly related to the research question, together with inconclusive data in closely related fields, emphasizes the need for well-conducted trials to guide the development of learner assessments within advanced resuscitation courses. Miller's classic pyramid outlines four successive levels of clinical assessment: Knows, Knows How, Shows, and Does.17 ‘Ideal’ evaluation of learning should theoretically focus on learners’ actions in the clinical environment (i.e. what one ‘does’) but is generally not feasible for most short courses. Performance-based tests, such as the resuscitation case scenario, assess the learner's ability to ‘show’ what they are capable of, while written tests may be limited to the lower levels, assessing what the learner ‘knows’ (knowledge) or ‘knows how’ to do (comprehension). Knowledge of a correlation between written and performance-based tests (or lack thereof) is valuable, if not imperative, to the development of a well-designed and practical assessment strategy for advanced resuscitation courses.


Materials and methods

A correlational study was conducted to compare ACLS written evaluation scores with practical skills performance in a simulated cardiac arrest scenario. Appropriate Institutional Review Board (IRB) approvals were obtained and subjects provided written informed consent for participation. Due to the need to match subject scores on the written evaluation with skills performance scores, subject anonymity was not possible. However, subject confidentiality was maintained.

Demographics of study subjects

Thirty-seven subjects initially enrolled in the study courses. The mean age of the subjects was 32.5 years. There were five males and 32 females in the study. Fifteen subjects were from bachelor's degree nursing programs and 22 from associate degree nursing programs. Just over half (51.4%) of the participants had prior health care experience, with a mean of 2.5 years of experience (range 6 months to 30 years). Only one subject had prior ACLS experience. Three subjects withdrew from the program prior to

Discussion

At both the combined class level and the individual class level for these two ACLS courses, there was no correlation between the written evaluation and the practical skills evaluation. Therefore, the ACLS written evaluation cannot be used as a predictor of performance in cardiac arrest practical skills. The results from this study are consistent with previously reported findings in resuscitation education examining written testing instruments and practical skills assessments.6, 7

Development of

Conclusions

In the two ACLS classes conducted for this study, the ACLS written evaluation did not correlate with participant skills in managing a simulated cardiac arrest event immediately following an ACLS course. The ACLS skills evaluation tests a narrow portion of ACLS content while the written evaluation tests a much broader spectrum of content. Both work in concert to define participant knowledge and neither should be used exclusively to determine participant competence.

The broader implications of

Conflict of interest statement

The authors have no relevant conflict of interest on this topic.

Acknowledgements

The authors wish to thank David Matics, Louis Robinson, and Katrina Craddock; the instructors of the Charleston Area Medical Center Health Education and Research Institute ACLS program; and the course participants for their assistance with this study. Partial funding for this study was supplied by the Charleston Area Medical Center Health Education and Research Institute Research Appropriate Committee.

References (22)

  • 2005 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations. Part 8: Interdisciplinary topics. Resuscitation (2005).

    A Spanish translated version of the abstract of this article appears as Appendix in the final online version at doi:10.1016/j.resuscitation.2009.12.018.
