J Wrist Surg 2017; 06(01): 046-053
DOI: 10.1055/s-0036-1587316
Scientific Article
Thieme Medical Publishers, 333 Seventh Avenue, New York, NY 10001, USA.

AO Distal Radius Fracture Classification: Global Perspective on Observer Agreement

Prakash Jayakumar
1   Department of General Surgery, OLVG, Amsterdam, The Netherlands
,
Teun Teunis
1   Department of General Surgery, OLVG, Amsterdam, The Netherlands
,
Beatriz Bravo Giménez
2   Orthopaedic Upper Extremity Service, Hospital Universitario Doce de Octubre-Universidad Complutense, Madrid, Spain
,
Frederik Verstreken
3   Department of Hand Surgery, Monica Hospital/Antwerp University Hospital, Edegem, Belgium
,
Livio Di Mascio
4   Department of Trauma and Orthopaedic Surgery, Barts and The Royal London Hospital, London, United Kingdom
,
Jesse B. Jupiter
1   Department of General Surgery, OLVG, Amsterdam, The Netherlands

Publication History

18 April 2016
30 June 2016
Publication Date: 08 August 2016 (online)

Abstract

Background The primary objective of this study was to test interobserver reliability when classifying fractures by AO types and groups among a large international group of surgeons. Secondarily, we assessed differences in inter- and intraobserver agreement of the AO classification in relation to geographic location, level of training, and subspecialty.

Methods A randomized set of radiographic and computed tomographic images from a consecutive series of 96 distal radius fractures (DRFs), treated between October 2010 and April 2013, was classified using an electronic web-based portal by an invited group of participants on two occasions.

Results Interobserver reliability was substantial when classifying AO type A fractures, but only fair for type B and moderate for type C fractures. No difference was observed by location, except an apparent difference between participants from India and Australia when classifying type B fractures. No statistically significant associations were found between interobserver agreement and level of training, and no differences emerged between subspecialties. Intraobserver reproducibility was substantial for fracture types and fair for fracture groups, with no differences by location, training level, or specialty.
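The verbal labels above (slight, fair, moderate, substantial) are the standard Landis and Koch interpretation of a kappa agreement coefficient. As an illustration of how such a multirater agreement statistic is computed, the sketch below implements Fleiss' kappa in plain Python and maps the result to the Landis-Koch categories; the rating matrix, rater count, and category names are hypothetical and not taken from the study's data.

```python
# Sketch: Fleiss' kappa for agreement among multiple raters, with the
# Landis & Koch verbal labels. Example data are hypothetical, not the
# study's ratings.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning subject i to category j.
    Assumes every subject is rated by the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_categories = len(counts[0])
    # Proportion of all assignments falling in each category.
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(n_categories)]
    # Observed agreement for each subject.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_subjects      # mean observed agreement
    p_e = sum(p * p for p in p_j)      # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

def landis_koch(kappa):
    """Verbal interpretation of kappa per Landis & Koch (1977)."""
    if kappa < 0:
        return "poor"
    for cutoff, label in [(0.20, "slight"), (0.40, "fair"),
                          (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= cutoff:
            return label
    return "almost perfect"

# Hypothetical example: 4 fractures, 5 raters, AO types A/B/C.
ratings = [
    [5, 0, 0],   # all five raters agree: type A
    [0, 4, 1],
    [1, 1, 3],
    [0, 0, 5],
]
k = fleiss_kappa(ratings)
print(f"kappa = {k:.2f} ({landis_koch(k)})")  # kappa = 0.57 (moderate)
```

In practice a study of this size would also report confidence intervals for kappa and compute it separately per fracture type, but the core calculation is the one shown.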

Conclusion Using large international groups of raters may yield better-defined estimates of the reliability and reproducibility of this classification, informing decisions about which system to use.

Level of Evidence Level III

Note

Prakash Jayakumar and Teun Teunis contributed equally to this work. This work was performed at the Orthopaedic Hand and Upper Extremity Service, Massachusetts General Hospital - Harvard Medical School.
