
Surgery

Volume 135, Issue 1, January 2004, Pages 6-20

Special Section: Competency—When, Why, How?
Assessing competency in surgery: Where to begin?

https://doi.org/10.1016/S0039-6060(03)00154-5


What are the competencies?

In 1999, the ACGME endorsed 6 general competencies reflective of different skill sets that all doctors should possess.1 The 6 competencies are:

1. patient care
2. medical knowledge
3. practice-based learning and improvement
4. interpersonal and communication skills
5. professionalism
6. systems-based practice

A description of these competencies is given in Table I.

Several countries outside the United States have concurrently attempted to define the core competencies required of all physicians. Although the labels may differ, the underlying skill sets are broadly similar.

Why assess these competencies?

Ensuring that a resident is capable of handling all job-related tasks is the ultimate goal of a postgraduate education system. The ACGME has recognized the need to be accountable to society, patients, and the medical profession in this regard.1

As a first step toward this mandate, surgical training programs must clearly identify educational outcomes and objectives related to the general competencies. Through various assessment tools, programs must then determine how well their trainees have met these objectives.

Assessment and the general competencies: Is there a magic bullet?

Surgical performance requires proficiency in all of the general competencies, with special emphasis on technical skill. No single tool adequately assesses these multiple dimensions of competency. Furthermore, each individual assessment tool has its own specific limitations. For this reason, the use of multiple assessment tools by multiple observers over time is recommended to completely assess the broad range of competency-based educational objectives.6 For assessment of competency, there is no magic bullet.
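To make this recommendation concrete, the sketch below shows one simple way such a composite could be assembled; the tool names, the 1-to-5 rating scale, and the weights are hypothetical illustrations, not a scheme prescribed by the ACGME or by this article. Each tool's ratings are first averaged across observers and then combined with explicit weights:

# Illustrative sketch only: aggregating ratings from multiple assessment
# tools and multiple observers. Tool names, the 1-5 scale, and the weights
# are hypothetical assumptions, not taken from the article.

ratings = {                        # tool -> ratings from different observers (1-5)
    "ITER": [4, 5, 4],
    "oral_exam": [3, 4],
    "OSCE": [4, 4, 5, 3],
}
weights = {"ITER": 0.4, "oral_exam": 0.3, "OSCE": 0.3}   # hypothetical weights

def mean(values):
    return sum(values) / len(values)

# Average within each tool (across observers), then weight across tools.
tool_means = {tool: mean(scores) for tool, scores in ratings.items()}
composite = sum(weights[tool] * m for tool, m in tool_means.items())

print({tool: round(m, 2) for tool, m in tool_means.items()})  # {'ITER': 4.33, 'oral_exam': 3.5, 'OSCE': 4.0}
print(round(composite, 2))                                    # 3.98

A real program would also accumulate such ratings longitudinally, across rotations, rather than at a single point in time.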

Understanding educational jargon: What constitutes the ideal assessment tool?

When looking at different methods of assessment, 3 questions must be asked:

1. Are the results of the assessment reliable?
2. Are the results of the assessment valid?
3. Is the assessment practical?

The ideal assessment produces reliable, valid results and is practical. These concepts are explained in the following section, and the individual assessment tools are discussed using the same framework.

Are the results of the assessment reliable? Reliability describes the precision, consistency, and reproducibility of assessment scores.
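In classical test theory, the framework treated in the NCME instructional modules cited in the reference list, reliability translates directly into score precision through the standard error of measurement:

\mathrm{SEM} = s_x \sqrt{1 - r_{xx}}

where s_x is the standard deviation of the observed scores and r_{xx} is the reliability coefficient. As a worked illustration with hypothetical numbers: a rating instrument with a standard deviation of 10 points and a reliability of 0.80 has \mathrm{SEM} = 10\sqrt{0.20} \approx 4.5 points, so an observed score of 70 is best interpreted as roughly 70 \pm 9 at the 95% confidence level.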

Assessing the general competencies

Adopting the same framework used to describe the characteristics of the ideal assessment, this section reviews assessment tools currently in use by most surgical training programs, as well as those with which program directors should become familiar. These assessment tools have been grouped into 7 categories: (1) in-training evaluation reports; (2) written examinations; (3) oral examinations; (4) OSCE; (5) procedure and case logs; (6) 360-degree reviews and surveys; and (7)

Description

In-training evaluation reports (ITERs) are rating instruments frequently used by many training programs to assess a variety of observable skills performed in real clinical settings, either on an ongoing basis or as a snapshot of the resident's performance at one moment in time.11 ITERs generally take the form of global rating scales and require assessors to evaluate a variety of observable skill sets (ie, history taking, physical examination, procedural skills) based on general impressions of performance.

Assessing technical competence

As detailed in the introduction, any specific mention of technical competence is buried within the patient care competency domain defined by the ACGME. Few would dispute the tremendous importance of technical proficiency as one of the required attributes of a competent surgeon. Hence, this section is dedicated specifically to assessment tools used in the evaluation of technical skill. Such instruments are likely to become more valuable with increasing societal demands for programs to document the technical competence of their trainees.

Description

Borrowed from the success of the OSCE in assessing clinical competence,41 the OSATS was developed at the University of Toronto3 to measure the technical ability of practicing surgeons and trainees. Because of issues of practicality, patient safety, and the desire for a standardized process, most OSATS examinations are conducted outside the operating room using bench model simulators. The OSATS consists of a multistation, OSCE-like examination in which candidates' technical skills are assessed as they perform a series of standardized tasks.
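As a rough illustration of how a single OSATS-style station could be scored, the sketch below pairs a task-specific checklist with a global rating scale, the two-part structure described by Martin et al; the specific items, the equal weighting, and the pass mark are hypothetical assumptions for illustration only:

# Illustrative sketch of scoring one OSATS-style station. The checklist
# items, global-rating items, equal weighting, and pass mark below are
# hypothetical assumptions, not the published instrument.

checklist = {                      # task-specific checklist: done / not done
    "selects appropriate suture": True,
    "handles tissue atraumatically": True,
    "ties secure square knots": False,
}
global_ratings = {                 # global rating scale, each item scored 1-5
    "respect for tissue": 4,
    "time and motion": 3,
    "instrument handling": 4,
}

checklist_score = sum(checklist.values()) / len(checklist)               # fraction of items done
global_score = sum(global_ratings.values()) / (5 * len(global_ratings))  # fraction of maximum

station_score = 0.5 * checklist_score + 0.5 * global_score  # equal weighting (assumed)
PASS_MARK = 0.6                                             # hypothetical cut score

print(round(station_score, 2))                              # 0.7
print("pass" if station_score >= PASS_MARK else "fail")     # pass

A real examination would aggregate such scores across multiple stations and raters before any pass/fail decision is made.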

Suggestions for implementation of competency assessment in residency programs

Based on the preceding sections, there are a multitude of evaluation tools with varied strengths and weaknesses. Implementation of a system of assessment in any residency program must be graduated, starting with simple, practical tools and incorporating more elaborate instruments over time. The following suggestions provide a framework for the establishment of an assessment program.

What can be done right now?

1. Recognize that most training programs are currently using many assessment tools. For

Conclusion

Society, the medical profession, and bodies such as the ACGME are demanding that residency programs be accountable by documenting and verifying that their trainees are proficient in all of the competencies required of a physician. As a result, residency programs must implement assessment systems in addition to pursuing their longstanding goal of providing an optimal educational environment. Although the task of implementation seems formidable, most residency programs are currently using a variety of assessment tools that can serve as the foundation for such a system.

References (80)

  • D.J. Anastakis et al.

    Assessment of technical skills transfer from the bench training model to the human model

    Am J Surg

    (1999)
  • R.K. Reznick et al.

    Testing technical skill via innovative “bench station” examination

    Am J Surg

    (1997)
  • G. Ault et al.

    Exporting a technical skills evaluation technology to other sites

    Am J Surg

    (2001)
  • D.J. Scott et al.

    Laparoscopic training on bench models: better and more cost effective than operating room experience?

    J Am Coll Surg

    (2000)
  • B.A. Goff et al.

    Surgical skills assessment: a blinded examination of obstetrics and gynecology residents

    Am J Obstet Gynecol

    (2002)
  • R.S. Haluck et al.

    Are surgery training programs ready for virtual reality? A survey of program directors in general surgery

    J Am Coll Surg

    (2001)
  • L.J. van Ruijven et al.

    The accuracy of joint surface models constructed from data obtained with an electromagnetic tracking device

    J Biomech

    (2000)
  • V. Datta et al.

    The relationship between motion analysis and surgical technical assessments

    Am J Surg

    (2002)
  • V. Datta et al.

    Relationship between skill and outcome in the laboratory-based model

    Surgery

    (2002)
  • Accreditation Council for Graduate Medical Education (ACGME) General Competencies. Version 1.3. September 28, 1999

    (2000)
Societal Needs Working Group, CanMEDS 2000 Project. Skills for the new millennium

    Ann R Coll Phys Surg Can

    (1996)
  • J.A. Martin et al.

    Objective structured assessment of technical skill (OSATS) for surgical residents

    Br J Surg

    (1997)
  • K.R. Wanzel et al.

    Teaching the surgical craft: from selection to certification

    Curr Prob Surg

    (2002)
  • A. Darzi et al.

    Assessment of surgical competence

    Qual Health Care

    (2001)
  • L.M. Harvill

    An NCME instructional module on standard error of measurement

    Educ Measure Issues Pract

    (1991)
  • R.E. Traub et al.

    An NCME instructional module on understanding reliability

    Educ Measure Issues Pract

    (1991)
  • J.J. Norcini

    Standards and reliability in evaluation: when rules of thumb don't apply

    Acad Med

    (1999)
  • S. Messick

    Standards of validity and the validity of standards in performance assessment

    Educ Measure Issues Pract

    (1995)
  • J. Turnbull et al.

    Improving in-training evaluation programs

    J Gen Intern Med

    (1998)
  • J.D. Gray

    Global rating scales in residency education

    Acad Med

    (1996)
  • P.A. Cranton et al.

The reliability and validity of in-training evaluation reports in obstetrics and gynecology

    Proc Annu Conf Res Med Educ

    (1984)
  • J.D. Gray

    Primer on resident evaluation

    Ann R Coll Phys Surg Can

    (1996)
  • S. Phelan

    Evaluation of the non-cognitive professional traits of medical students

    Acad Med

    (1990)
  • G. Regehr et al.

Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination

    Acad Med

    (1998)
  • G.R. Norman et al.

    Pitfalls in the pursuit of objectivity: issues of validity, efficiency and acceptability

    Med Educ

    (1992)
  • C.P.M. van der Vleuten et al.

    Pitfalls in the pursuit of objectivity: issues of reliability

    Med Educ

    (1991)
  • D.D. Hunt

    Functional and dysfunctional characteristics of the prevailing model of clinical evaluation systems in North American medical schools

    Acad Med

    (1992)
  • J. Turnbull et al.

    Clinical work sampling: a new approach to the problem of in-training evaluation

    J Gen Intern Med

    (2000)
  • P.F. Stillman

    Positive effects of a clinical performance assessment program

    Acad Med

    (1991)
  • R. Gallagher et al.

    Toward a comprehensive methodology for resident evaluation
