
How reliable is the manual correction of the autoscoring of a level IV sleep study (ApneaLink™) by an observer without experience in polysomnography?

  • Short Communication
  • Published:
Sleep and Breathing

Abstract

Objective

This study aimed to compare the manual correction of the automatic ApneaLink™ analysis between an observer skilled in the interpretation of sleep studies and a subject trained only in the scoring of the ApneaLink™ device.

Methods

Ninety-six subjects underwent ApneaLink™ recording and polysomnography (PSG) simultaneously in the sleep laboratory. Two observers, blinded to the PSG results, first performed the automatic scoring and then the manual correction of the ApneaLink™ recordings. The ApneaLink™ scorers were two physicians with different levels of training (scorer A: 20 years of experience in reading polysomnography plus 3 years of experience in the interpretation of ApneaLink™; scorer B: 1 year of experience in the analysis of ApneaLink™). Interobserver agreement was assessed with the intraclass correlation coefficient (ICC) and kappa statistics. The diagnostic accuracy of the manual analysis of the ApneaLink™ device was evaluated by the area under the receiver operating characteristic curve (AUC-ROC).
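The two agreement and accuracy statistics named above can be illustrated with a minimal sketch. This is not the authors' code, and the scorer labels and numbers below are hypothetical; it only shows how Cohen's kappa (chance-corrected agreement between two scorers' binary OSAS calls) and AUC-ROC (via the rank, i.e. Mann-Whitney, identity) are computed:

```python
# Illustrative sketch only -- not the authors' analysis code.
# All data below are hypothetical.

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length binary (0/1) label lists."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa = sum(a) / n                              # scorer A positive rate
    pb = sum(b) / n                              # scorer B positive rate
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)

def auc_roc(scores, labels):
    """AUC-ROC as the probability that a positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: OSAS present/absent calls by two scorers,
# and device AHI values scored against a PSG-based diagnosis.
scorer_a = [1, 1, 0, 1, 0, 0, 1, 0]
scorer_b = [1, 1, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(scorer_a, scorer_b))   # -> 0.75

ahi = [32.0, 18.5, 4.2, 25.1, 3.0, 9.8, 41.7, 6.1]
psg_osas = [1, 1, 0, 1, 0, 0, 1, 0]
print(auc_roc(ahi, psg_osas))             # -> 1.0
```

The ICC used for the continuous index comparison is a variance-components statistic and is usually computed with a dedicated statistics package rather than by hand.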

Results

Ninety patients were included (69 men; mean age, 49.6 years; median RDI, 13.9; median BMI, 29.3 kg/m²). The ICC between the manually corrected apnea/hypopnea index from ApneaLink™ and the respiratory disturbance index of the PSG was similar for each observer (scorer A, 0.902; 95% CI 0.80–0.95; vs. scorer B, 0.904; 95% CI 0.86–0.94; p = 0.9). The agreement between the observers on the presence or absence of obstructive sleep apnea syndrome (OSAS) was very good (kappa, 0.83; 95% CI 0.69–0.98). The AUC-ROC was similar between the observers (scorer A, 0.88; 95% CI 0.78–0.98; scorer B, 0.83; 95% CI 0.71–0.95; p = 0.5).

Conclusions

The non-expert observer showed very good agreement with the expert observer on the results of the manual correction of the ApneaLink™ autoscoring. Both observers had similar diagnostic accuracy in identifying subjects with OSAS when compared with PSG.



Acknowledgments

The authors wish to thank Ms. Jaquelina Mastantuono for revising the English text.

Author information

Corresponding author

Correspondence to Carlos Alberto Nigro.


Cite this article

Nigro, C.A., Malnis, S., Dibur, E. et al. How reliable is the manual correction of the autoscoring of a level IV sleep study (ApneaLink™) by an observer without experience in polysomnography? Sleep Breath 16, 275–279 (2012). https://doi.org/10.1007/s11325-011-0524-y

