Prediction Models and Their External Validation Studies for Mortality of Patients with Acute Kidney Injury: A Systematic Review

  • Tetsu Ohnuma,

    Affiliation Intensive Care Unit, Department of Anesthesiology, Saitama Medical Center, Jichi Medical University, Saitama, Japan

  • Shigehiko Uchino

    s.uchino@mac.com

    Affiliation Intensive Care Unit, Department of Anesthesiology, Jikei University School of Medicine, Tokyo, Japan

Abstract

Objectives

To systematically review AKI outcome prediction models and their external validation studies, to describe the discrepancy of reported accuracy between the results of internal and external validations, and to identify variables frequently included in the prediction models.

Methods

We searched the MEDLINE and Web of Science electronic databases (until January 2016). Studies were eligible if they derived a model to predict mortality of AKI patients or externally validated at least one of the prediction models, and presented the area under the receiver operating characteristic curve (AUROC) to assess model discrimination. Studies were excluded if they described only results of logistic regression without reporting a scoring system, or if a prediction model was generated from a specific cohort.

Results

A total of 2204 potentially relevant articles were found and screened, of which 12 articles reporting original prediction models for hospital mortality in AKI patients and nine articles assessing external validation were selected. Among the 21 studies for AKI prediction models and their external validation, 12 were single-center (57%), and only three included more than 1,000 patients (14%). The definition of AKI was not uniform and none used recently published consensus criteria for AKI. Although good performance was reported in their internal validation, most of the prediction models had poor discrimination with an AUROC below 0.7 in the external validation studies. There were 10 common non-renal variables that were reported in more than three prediction models: mechanical ventilation, age, gender, hypotension, liver failure, oliguria, sepsis/septic shock, low albumin, consciousness and low platelet count.

Conclusions

Information in this systematic review should be useful for future prediction model derivation by providing potential candidate predictors, and for future external validation by listing the published prediction models.

Introduction

Acute kidney injury (AKI) is a common complication among critically ill patients, and its associated mortality is high [1–4]. Reliable AKI-specific scoring systems are important to predict the outcome of AKI patients and to provide severity stratification for clinical studies. However, general severity scores for critically ill patients, e.g., the Acute Physiology and Chronic Health Evaluation (APACHE) [5–7], the Simplified Acute Physiology Score (SAPS) [8, 9], and the Mortality Probability Model [10], have shown inconsistent results regarding the accuracy of mortality prediction in AKI patients [11–13], partly because those scores were generated from data that included only a few AKI patients.

Over the past three decades, multiple AKI outcome prediction models, incorporating physiologic, laboratory, organ dysfunction and comorbidity variables, have been derived [14–20]. Even in the 21st century, five additional prediction models have been generated [12, 21–24]. Although internal validation of these prediction models has shown good accuracy, the results of external validation studies for the models have been unsatisfactory [11, 25, 26]. Currently, there is neither a consensus nor a guideline recommending which prediction model to apply in clinical practice.

The objectives of this study are to systematically review AKI outcome prediction models and their external validation studies, to describe the discrepancy in reported accuracy between internal and external validations, and to identify variables frequently included in the prediction models, which might be useful for future prediction model derivation.

Materials and Methods

Studies eligible for review

Studies published in the medical literature were eligible if they derived a model to predict mortality of AKI patients or externally validated at least one of the prediction models, and presented the area under the receiver operating characteristic curve (AUROC) [27] or the concordance index (c-statistic) to assess model discrimination. Studies were excluded if they described only results of logistic regression without reporting a scoring system, or if a prediction model was generated from a specific cohort. Unpublished conference abstracts were also excluded. This study followed the principles of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (S1 PRISMA Checklist) [28].
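For readers less familiar with the AUROC, the short sketch below shows how discrimination is typically quantified from predicted probabilities and observed outcomes. It is a generic illustration in Python with made-up values, not code or data from any of the reviewed studies.

```python
# Minimal sketch: computing the AUROC (c-statistic) used to assess model
# discrimination. The toy data below are illustrative only.
from sklearn.metrics import roc_auc_score

# Observed hospital mortality (1 = died, 0 = survived) and a model's
# predicted probabilities of death for the same patients.
observed = [1, 0, 1, 1, 0, 0, 1, 0]
predicted_prob = [0.82, 0.35, 0.66, 0.91, 0.40, 0.15, 0.58, 0.47]

auroc = roc_auc_score(observed, predicted_prob)
print(f"AUROC = {auroc:.2f}")  # 1.0 = perfect discrimination, 0.5 = chance
```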

Literature review and study selection

We searched the MEDLINE and Web of Science electronic databases (until January 2016). In the MEDLINE search, we used the terms “acute kidney injury” (MeSH Terms), “statistical model” (MeSH Terms), “predictive value of tests” (MeSH Terms) and “validation”. In Web of Science, we used the keywords “acute kidney injury”, “acute renal failure”, “model”, “prediction”, “predictor”, “validity”, and “validation”. References of all selected articles were searched to identify any eligible studies. The search was restricted to human subjects. Each article selected by the primary reviewer (TO) was assessed by the second reviewer (SU) to confirm eligibility.

Data extraction

A standardized data abstraction form was used to collect data on study characteristics and outcomes of interest. Data collected to describe the characteristics of articles reporting original outcome prediction models were the type of study, study period, number of centers, sample size, mean age, gender, region, population, renal replacement therapy (RRT) requirement, hospital mortality, AKI definition, exclusion criteria, follow-up and variables included in the prediction models. The following information was also collected for quality assessment of the prediction models: definition of predictors, whether indications for RRT were defined, handling of missing data, use of bootstrap resampling, multivariable analysis approach, events-per-variable ratio and internal validation cohort.

Data collected to describe the characteristics of articles for external validation were the type of study, study period, number of centers, sample size, mean age, hospital mortality, number of validated models and methods used to assess discrimination and calibration. AUROCs reported in both the original prediction models and the external validation studies were also collected.

Results

A total of 2204 potentially relevant articles were found and screened, of which 80 were retrieved for detailed evaluation (Fig 1). We excluded five articles that had no prediction models developed by multivariable regression analysis, six articles that had no discrimination results, seven articles that validated only general severity scores or had no external discrimination results, and 41 articles that assessed specific cohorts (cardiac surgery: 10, contrast-induced nephropathy: eight, others: 23). The 59 articles excluded from this study are listed in a supplement file (S1 File). Finally, 12 articles reporting original prediction models for hospital mortality in AKI patients [12, 14–24] and nine additional articles assessing external validation of the outcome prediction models [11, 25, 26, 29–34] were selected for analysis. Five of the 12 articles reporting original prediction models also assessed other models (14 articles in total for external validation).

Characteristics of the 12 articles reporting outcome prediction models for AKI are shown in Tables 1 and 2. The study sample size ranged from 126 to 1,122 patients and the hospital mortality ranged from 36% to 75%. Only five studies (Chertow 1998, Mehta, Lins 2004, Chertow 2006, Demirjian) included more than one center; the remaining seven were conducted in a single center. The definition of AKI was not uniform among the 12 articles and none used recently published consensus definitions for AKI. Quality assessment for these articles is shown in Table 3. How missing data were handled was described in only four articles, and all of these articles also used bootstrap resampling. Eight articles used multivariable logistic regression analysis, and the other four (Rasmussen, Schaefer, Liano and Lins 2000) used multivariable linear regression analysis. The events-per-variable ratio was more than 10 in all articles except the earliest (Rasmussen).

Table 1. Characteristics of articles reporting outcome prediction models for acute kidney injury.

https://doi.org/10.1371/journal.pone.0169341.t001

Table 2. AKI definitions, exclusion criteria and follow-up of articles reporting outcome prediction models for acute kidney injury.

https://doi.org/10.1371/journal.pone.0169341.t002

Table 3. Quality assessment for articles reporting outcome prediction models for acute kidney injury.

https://doi.org/10.1371/journal.pone.0169341.t003
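As a companion to the quality items in Table 3 (multivariable logistic regression, bootstrap resampling, events-per-variable ratio, internal validation), the sketch below illustrates one common way internal validation can be carried out: an optimism-corrected AUROC estimated by bootstrap. The predictors, data and procedure are hypothetical examples of the general technique, not the method used by any specific reviewed article.

```python
# Sketch of bootstrap internal validation for a logistic regression model.
# Hypothetical predictors; the reviewed models each used their own variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))  # e.g. standardized age, MAP, albumin, platelets
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))
print(f"events per variable = {y.sum() / X.shape[1]:.0f}")

model = LogisticRegression().fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):                       # bootstrap resamples
    idx = rng.integers(0, n, n)
    boot_model = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], boot_model.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, boot_model.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - np.mean(optimism)
print(f"apparent AUROC {apparent_auc:.2f}, optimism-corrected {corrected_auc:.2f}")
```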

Characteristics of the 14 external validation studies are shown in Table 4. The study sample size ranged from 197 to 17,326 patients and the hospital mortality ranged from 37% to 85%. Five studies were conducted in a single center. All studies evaluated discrimination with the AUROC and nine studies evaluated calibration with the Hosmer-Lemeshow test.

Table 4. Characteristics of external validation studies for acute kidney injury outcome prediction models.

https://doi.org/10.1371/journal.pone.0169341.t004
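For reference, the Hosmer-Lemeshow test used for calibration in nine of the validation studies compares observed and expected deaths across groups of predicted risk. The function below is a generic sketch of that computation under our own assumptions (deciles of risk, a chi-squared reference distribution with groups minus two degrees of freedom); it is not the code used in any of the reviewed studies.

```python
# Sketch of a Hosmer-Lemeshow calibration test: group patients into deciles
# of predicted risk and compare observed with expected deaths in each group.
import numpy as np
from scipy import stats

def hosmer_lemeshow(observed, predicted_prob, groups=10):
    observed = np.asarray(observed, dtype=float)
    predicted_prob = np.asarray(predicted_prob, dtype=float)
    order = np.argsort(predicted_prob)
    obs_sorted, pred_sorted = observed[order], predicted_prob[order]
    chi2 = 0.0
    for chunk_obs, chunk_pred in zip(np.array_split(obs_sorted, groups),
                                     np.array_split(pred_sorted, groups)):
        o, e, n = chunk_obs.sum(), chunk_pred.sum(), len(chunk_obs)
        chi2 += (o - e) ** 2 / (e * (1 - e / n))   # observed vs expected deaths
    p_value = stats.chi2.sf(chi2, df=groups - 2)   # df = groups - 2 by convention
    return chi2, p_value

# Usage: chi2, p = hosmer_lemeshow(observed_deaths, predicted_probabilities)
```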

AUROCs for hospital mortality reported in the original articles (internal validation) and in the external validation studies are shown in Fig 2. Seven recently published articles on AKI outcome prediction models reported AUROCs for internal validation, all of which were above 0.7. All prediction models were externally validated by one or more studies. AUROCs in the external validation studies for these scores were generally low (less than 0.7 in most studies). In addition, the seven prediction models that were validated both internally and externally invariably had lower AUROCs in external validation than in internal validation.

Fig 2. Area under the receiver operating characteristic curves (AUROC) for hospital mortality reported in the original articles and external validation studies.

Black horizontal bars: AUROC in original studies, gray columns: AUROC in external validation studies.

https://doi.org/10.1371/journal.pone.0169341.g002

Table 5 shows variables included in more than one prediction model and their odds ratios / p values. There were 10 common non-renal variables that were reported in more than three prediction models: mechanical ventilation, age, gender, hypotension, liver failure, oliguria, sepsis/septic shock, low albumin, consciousness and low platelet count. Renal variables (low creatinine and high urea) were often used in the same prediction models.

Table 5. Variables included in more than one prediction model and their odds ratios / p values.

https://doi.org/10.1371/journal.pone.0169341.t005

Discussion

Key findings

We have systematically reviewed AKI outcome prediction models and their external validation studies. We found 12 articles reporting original prediction models for hospital mortality in AKI patients and nine articles assessing external validation of the outcome prediction models. Although good performance was reported in their internal validation, most of the prediction models had poor discrimination with an AUROC below the threshold of 0.7 in their external validation studies. We also identified 10 common variables that were frequently included in the prediction models.

Relationship to previous studies

The establishment of a clinical prediction model encompasses three consecutive research phases, namely derivation, external validation and impact analysis [35]. In this study, we conducted a systematic review of the first two phases in AKI outcome prediction. Several systematic reviews of clinical prediction models and their external validation have been conducted in other medical conditions, and they consistently found methodological limitations [36–40]. Such limitations include case mix heterogeneity, small sample sizes, insufficient description of study design, and lack of external validation. We found the same limitations in the AKI outcome prediction studies. For example, all prediction models examined in this study were relatively old (data collected more than 10 years ago) and were conducted before consensus criteria for AKI were published [41–43]. Therefore, patients included in these prediction models were heterogeneous, with varied RRT requirement and mortality. We also found that more than half of the studies for AKI prediction models and their external validation were single-center (12/21, 57%), and most of them included fewer than 1,000 patients (19/22, 86%). Furthermore, the timing of data collection differed between the clinical prediction models and the external validation studies. Data collection can be done at admission, at AKI diagnosis, at the start of RRT, at nephrologist consultation, and so on. Demirjian’s model, for instance, collected variables at the start of RRT [24], while other models collected variables at nephrologist consultation [12, 23] or at AKI diagnosis [21, 23]. This timing is also important for external validation, because the AUROC for discrimination can change if variables are collected at a different moment in the new cohort. Considering the poor generalizability of currently available prediction models (AUROCs lower than 0.7 in most external validation studies), a large multicenter database using consensus AKI criteria will be needed both to derive and to validate AKI outcome prediction models.

Among the prediction models included in this systematic review, we found that Liano’s score [17] was the most frequently evaluated externally (11 studies). The AUROCs reported in external validations of Liano’s score ranged from 0.55 to 0.90, and four of them were above 0.7. The reason why Liano’s score showed high AUROCs in some external validation studies is unclear. It might be partly explained by the fact that the score contains several risk factors frequently used in the prediction models (mechanical ventilation, age, gender, hypotension, liver failure, oliguria, consciousness disturbance), although Dharan’s score also included nine variables and showed poor discrimination in one external validation study (Table 5).

Significance and implications

To derive an accurate prediction model, choosing appropriate candidate predictors is of great importance. Previous studies have shown that clinical intuition may not be suitable for identifying candidate predictors [44]. A better approach is to combine a systematic literature review of prognostic factors associated with the outcome of interest with the opinions of field experts [35]. We identified 10 common variables that were frequently included in the prediction models. These variables have also often been found to be related to mortality in more recent epidemiological studies using consensus AKI criteria [45–48]. We believe that our study results will be useful for future studies to derive accurate AKI outcome prediction models by including these variables in data collection.

Although both low creatinine and high urea concentrations are often included in the prediction models, we think that including them together as independent variables can be problematic (Table 5). Low serum creatinine is included in general severity scores as one of the independent variables [5]. Serum urea has been used as a marker of the timing of starting RRT in several studies, which showed that patients with higher urea at the start of RRT had worse outcomes than patients with lower urea [49]. High urea is also included in general severity scores [5]. However, serum creatinine and urea concentrations clearly have strong collinearity. In AKI patients, urea is almost always high when creatinine is high. Even if both variables are found to be independent predictors in multivariable analysis, it seems unlikely that including both in a prediction model will improve predictive ability [50].
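This collinearity argument can be illustrated numerically: when urea closely tracks creatinine, the pairwise correlation and the variance inflation factor (VIF) are both high, and the second variable adds little independent information. The sketch below uses hypothetical values generated purely for illustration, not data from any reviewed cohort.

```python
# Sketch: quantifying the creatinine-urea collinearity concern with a
# Pearson correlation and a variance inflation factor (hypothetical data).
import numpy as np

rng = np.random.default_rng(1)
creatinine = rng.lognormal(mean=0.8, sigma=0.4, size=300)   # mg/dL
urea = 10 * creatinine + rng.normal(scale=5, size=300)      # mg/dL, tracks creatinine

r = np.corrcoef(creatinine, urea)[0, 1]
vif = 1 / (1 - r ** 2)   # VIF when one predictor is regressed on the other
print(f"Pearson r = {r:.2f}, VIF = {vif:.1f}")  # VIF far above 5 signals strong collinearity
```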

Physicians are faced with the impractical situation of having to choose among many concurrent outcome prediction models for AKI. To overcome this issue, it is recommended that investigators with large data sets conduct external validation studies of multiple existing models at once, in order to determine which model is most useful [51]. We believe that our study results will also be useful for future studies by providing a list of the published outcome prediction models for AKI.
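As a sketch of such a head-to-head external validation, the example below loops over several scoring functions and reports an AUROC for each on the same cohort. The scoring functions and the tiny cohort are hypothetical placeholders, not the published models or any real data set.

```python
# Sketch: head-to-head external validation of several scores on one cohort.
# The scoring functions are hypothetical placeholders, not the published models.
from sklearn.metrics import roc_auc_score

def score_a(p):  # placeholder points-based score
    return 2 * p["mechanical_ventilation"] + (p["age"] >= 65) + p["oliguria"]

def score_b(p):  # placeholder score with different weights
    return p["mechanical_ventilation"] + 2 * p["hypotension"] + p["liver_failure"]

models = {"Score A": score_a, "Score B": score_b}

# cohort: list of dicts with predictor values and observed hospital mortality
cohort = [
    {"mechanical_ventilation": 1, "age": 72, "oliguria": 1, "hypotension": 1,
     "liver_failure": 0, "died": 1},
    {"mechanical_ventilation": 0, "age": 55, "oliguria": 0, "hypotension": 0,
     "liver_failure": 0, "died": 0},
    {"mechanical_ventilation": 1, "age": 80, "oliguria": 0, "hypotension": 1,
     "liver_failure": 1, "died": 1},
    {"mechanical_ventilation": 0, "age": 47, "oliguria": 1, "hypotension": 0,
     "liver_failure": 0, "died": 0},
]

observed = [p["died"] for p in cohort]
for name, score in models.items():
    auroc = roc_auc_score(observed, [score(p) for p in cohort])
    print(f"{name}: AUROC = {auroc:.2f}")
```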

Strengths and limitations

The strength of our study is that, to the best of our knowledge, this is the first systematic review of AKI outcome prediction models in the medical literature. We reviewed studies of both prediction models and their external validation, and provided potential candidate variables for future prediction models as well as a list of published prediction models for future external validation studies.

However, our study also has several limitations. First, recent studies suggest that AKI biomarkers might be useful to predict outcome and could be combined with physiological and laboratory variables to improve predictive ability [52, 53]. However, prediction models should include only variables that are available at the time when the model is intended to be used, and biomarkers are not yet widely used clinically [54]. Second, we excluded six studies because discrimination results were not available [55–60]. However, these studies were generally old, small, and of poor methodological quality. We believe that including these studies would not change our main findings. Finally, the AKI definitions used in both the prediction models and their external validation studies are outdated, and the included studies were relatively old (the most recently published study is from 2011 and its data were collected between 2003 and 2007). There is an urgent need for a mortality prediction model based on current definitions of AKI, and this systematic review can be considered a first step toward accomplishing this task.

Conclusions

Multiple outcome prediction models for AKI have been derived previously. These scores had good performance in their internal validation studies, whereas poor performance was reported in their external validation, suggesting that no accurate model is currently available. To generate accurate AKI prediction models, several recommendations can be provided: using a large multicenter database, applying consensus AKI criteria, and collecting variables frequently used in previous models (mechanical ventilation, age, gender, hypotension, liver failure, oliguria, sepsis/septic shock, low albumin, consciousness and low platelet count). Information in this systematic review should be useful both for future prediction model derivation by providing potential candidate predictors, and for future external validation by listing the published prediction models.

Author Contributions

  1. Conceptualization: SU.
  2. Data curation: TO.
  3. Formal analysis: TO.
  4. Investigation: TO.
  5. Methodology: SU.
  6. Project administration: SU.
  7. Resources: TO.
  8. Software: TO.
  9. Supervision: SU.
  10. Validation: TO.
  11. Visualization: TO.
  12. Writing – original draft: TO.
  13. Writing – review & editing: SU.

References

  1. Bagshaw SM, George C, Dinu I, Bellomo R. A multi-centre evaluation of the RIFLE criteria for early acute kidney injury in critically ill patients. Nephrol Dial Transplant 2008; 23: 1203–1210. pmid:17962378
  2. Uchino S, Kellum JA, Bellomo R, Doig GS, Morimatsu H, Morgera S et al. Acute renal failure in critically ill patients: a multinational, multicenter study. JAMA 2005; 294: 813–818. pmid:16106006
  3. Liangos O, Wald R, O'Bell JW, Price L, Pereira BJ, Jaber BL. Epidemiology and outcomes of acute renal failure in hospitalized patients: a national survey. Clin J Am Soc Nephrol 2006; 1: 43–51. pmid:17699189
  4. Nash K, Hafeez A, Hou S. Hospital-acquired renal insufficiency. Am J Kidney Dis 2002; 39: 930–936. pmid:11979336
  5. Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Crit Care Med 1985; 13: 818–829. pmid:3928249
  6. Knaus WA, Wagner DP, Draper EA, Zimmerman JE, Bergner M, Bastos PG, et al. The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults. Chest 1991; 100: 1619–1636. pmid:1959406
  7. Zimmerman JE, Kramer AA, McNair DS, Malila FM. Acute Physiology and Chronic Health Evaluation (APACHE) IV: hospital mortality assessment for today's critically ill patients. Crit Care Med 2006; 34: 1297–1310. pmid:16540951
  8. Le Gall JR, Lemeshow S, Saulnier F. A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study. JAMA 1993; 270: 2957–2963. pmid:8254858
  9. Moreno RP, Metnitz PG, Almeida E, Jordan B, Bauer P, Campos RA, et al. SAPS 3—From evaluation of the patient to evaluation of the intensive care unit. Part 2: Development of a prognostic model for hospital mortality at ICU admission. Intensive Care Med 2005; 31: 1345–1355. pmid:16132892
  10. Higgins TL, Teres D, Copes WS, Nathanson BH, Stark M, Kramer AA. Assessing contemporary intensive care unit outcome: an updated Mortality Probability Admission Model (MPM0-III). Crit Care Med 2007; 35: 827–835. pmid:17255863
  11. Douma CE, Redekop WK, van der Meulen JH, van Olden RW, Haeck J, Struijk DG et al. Predicting mortality in intensive care patients with acute renal failure treated with dialysis. J Am Soc Nephrol 1997; 8: 111–117. pmid:9013455
  12. Mehta RL, Pascual MT, Gruta CG, Zhuang S, Chertow GM. Refining predictive models in critically ill patients with acute renal failure. J Am Soc Nephrol 2002; 13: 1350–1357. pmid:11961023
  13. Costa e Silva VT, de Castro I, Liano F, Muriel A, Rodriguez-Palomares JR, Yu L. Performance of the third-generation models of severity scoring systems (APACHE IV, SAPS 3 and MPM-III) in acute kidney injury critically ill patients. Nephrol Dial Transplant 2011; 26: 3894–3901. pmid:21505093
  14. Rasmussen HH, Pitt EA, Ibels LS, McNeil DR. Prediction of outcome in acute renal failure by discriminant analysis of clinical variables. Arch Intern Med 1985; 145: 2015–2018. pmid:4062452
  15. Lohr JW, McFarlane MJ, Grantham JJ. A clinical index to predict survival in acute renal failure patients requiring dialysis. Am J Kidney Dis 1988; 11: 254–259. pmid:3344747
  16. Schaefer JH, Jochimsen F, Keller F, Wegscheider K, Distler A. Outcome prediction of acute renal failure in medical intensive care. Intensive Care Med 1991; 17: 19–24. pmid:1903797
  17. Liano F, Gallego A, Pascual J, García-Martín F, Teruel JL, Marcén R et al. Prognosis of acute tubular necrosis: an extended prospectively contrasted study. Nephron 1993; 63: 21–31. pmid:8446248
  18. Paganini EP, Halstenberg WK, Goormastic M. Risk modeling in acute renal failure requiring dialysis: the introduction of a new model. Clin Nephrol 1996; 46: 206–211. pmid:8879857
  19. Chertow GM, Lazarus JM, Paganini EP, Allgren RL, Lafayette RA, Sayegh MH. Predictors of mortality and the provision of dialysis in patients with acute tubular necrosis. The Auriculin Anaritide Acute Renal Failure Study Group. J Am Soc Nephrol 1998; 9: 692–698. pmid:9555672
  20. Lins RL, Elseviers M, Daelemans R, Zachée P, Gheuens E et al. Prognostic value of a new scoring system for hospital mortality in acute renal failure. Clin Nephrol 2000; 53: 10–17. pmid:10661477
  21. Lins RL, Elseviers MM, Daelemans R, Arnouts P, Billiouw JM, Couttenye M et al. Re-evaluation and modification of the Stuivenberg Hospital Acute Renal Failure (SHARF) scoring system for the prognosis of acute renal failure: an independent multicentre, prospective study. Nephrol Dial Transplant 2004; 19: 2282–2288. pmid:15266030
  22. Dharan KS, John GT, Antonisamy B, Kirubakaran MG, Jacob CK. Prediction of mortality in acute renal failure in the tropics. Ren Fail 2005; 27: 289–296. pmid:15957545
  23. Chertow GM, Soroko SH, Paganini EP, Cho KC, Himmelfarb J, Ikizler TA et al. Mortality after acute renal failure: models for prognostic stratification and risk adjustment. Kidney Int 2006; 70: 1120–1126. pmid:16850028
  24. Demirjian S, Chertow GM, Zhang JH, O'Connor TZ, Vitale J, Paganini EP et al. Model to predict mortality in critically ill adults with acute kidney injury. Clin J Am Soc Nephrol 2011; 6: 2114–2120. pmid:21896828
  25. Kolhe NV, Stevens PE, Crowe AV, Lipkin GW, Harrison DA. Case mix, outcome and activity for patients with severe acute kidney injury during the first 24 hours after admission to an adult, general critical care unit: application of predictive models from a secondary analysis of the ICNARC Case Mix Programme database. Crit Care 2008; 12 Suppl 1: S2.
  26. Uchino S, Bellomo R, Morimatsu H, Morgera S, Schetz M, Tan I et al. External validation of severity scoring systems for acute renal failure using a multinational database. Crit Care Med 2005; 33: 1961–1967. pmid:16148466
  27. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982; 143: 29–36. pmid:7063747
  28. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009; 151: 264–269. pmid:19622511
  29. Martin C, Saran R, Leavey S, Swartz R. Predicting the outcome of renal replacement therapy in severe acute renal failure. ASAIO J 2002; 48: 640–644. pmid:12455775
  30. d'Avila DO, Cendoroglo Neto M, dos Santos OF, Schor N, Poli de Figueiredo CE. Acute renal failure needing dialysis in the intensive care unit and prognostic scores. Ren Fail 2004; 26: 59–68. pmid:15083924
  31. Lima EQ, Dirce MT, Castro I, Yu L. Mortality risk factors and validation of severity scoring systems in critically ill patients with acute renal failure. Ren Fail 2005; 27: 547–556. pmid:16152992
  32. Lin YF, Ko WJ, Wu VC, Chen YS, Chen YM, Hu FC et al. A modified sequential organ failure assessment score to predict hospital mortality of postoperative acute renal failure patients requiring renal replacement therapy. Blood Purif 2008; 26: 547–554. pmid:19052448
  33. Costa e Silva VT, de Castro I, Liano F, Muriel A, Rodriguez-Palomares JR, Yu L. Sequential evaluation of prognostic models in the early diagnosis of acute kidney injury in the intensive care unit. Kidney Int 2009; 75: 982–986. pmid:19212423
  34. Ohnuma T, Uchino S, Toki N, Takeda K, Namba Y, Katayama S et al.; JSEPTIC (Japanese Society for Physicians and Trainees in Intensive Care) Clinical Trial Group. External validation for acute kidney injury severity scores: a multicenter retrospective study in 14 Japanese ICUs. Am J Nephrol 2015; 42: 57–64. pmid:26337793
  35. Labarère J, Renaud B, Fine MJ. How to derive and validate clinical prediction models for use in intensive care medicine. Intensive Care Med 2014; 40: 513–27. pmid:24570265
  36. Ettema RG, Peelen LM, Schuurmans MJ, Nierich AP, Kalkman CJ, Moons KG. Prediction models for prolonged intensive care unit stay after cardiac surgery: systematic review and validation study. Circulation 2010; 122: 682–9. pmid:20679549
  37. Wlodzimirow KA, Eslami S, Chamuleau RA, Nieuwoudt M, Abu-Hanna A. Prediction of poor outcome in patients with acute liver failure: systematic review of prediction models. PLoS One 2012; 7: e50952. pmid:23272081
  38. Jaja BN, Cusimano MD, Etminan N, Hanggi D, Hasan D, Ilodigwe D et al. Clinical prediction models for aneurysmal subarachnoid hemorrhage: a systematic review. Neurocrit Care 2013; 18: 143–53. pmid:23138544
  39. Warnell I, Chincholkar M, Eccles M. Predicting perioperative mortality after oesophagectomy: a systematic review of performance and methods of multivariate models. Br J Anaesth 2015; 114: 32–43. pmid:25231768
  40. Silver SA, Shah PM, Chertow GM, Harel S, Wald R, Harel Z. Risk prediction models for contrast induced nephropathy: systematic review. BMJ 2015; 351: h4395. pmid:26316642
  41. Bellomo R, Ronco C, Kellum JA, Mehta RL, Palevsky P; Acute Dialysis Quality Initiative workgroup. Acute renal failure—definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group. Crit Care 2004; 8: R204–12. pmid:15312219
  42. Mehta RL, Kellum JA, Shah SV, Molitoris BA, Ronco C, Warnock DG et al. Acute Kidney Injury Network: report of an initiative to improve outcomes in acute kidney injury. Crit Care 2007; 11: R31. pmid:17331245
  43. Kidney Disease: Improving Global Outcomes (KDIGO) Acute Kidney Injury Work Group. KDIGO Clinical Practice Guideline for Acute Kidney Injury. Kidney Int Suppl 2012; 2: 1–138.
  44. Randolph AG, Guyatt GH, Calvin JE, Doig G, Richardson WS. Understanding articles describing clinical prediction tools. Evidence Based Medicine in Critical Care Group. Crit Care Med 1998; 26: 1603–12. pmid:9751601
  45. Nisula S, Kaukonen KM, Vaara ST, Korhonen AM, Poukkanen M, Karlsson S, et al.; FINNAKI Study Group. Incidence, risk factors and 90-day mortality of patients with acute kidney injury in Finnish intensive care units: the FINNAKI study. Intensive Care Med 2013; 39: 420–8. pmid:23291734
  46. Bouchard J, Acharya A, Cerda J, Maccariello ER, Madarasu RC, Tolwani AJ, et al. A prospective international multicenter study of AKI in the intensive care unit. Clin J Am Soc Nephrol 2015; 10: 1324–31. pmid:26195505
  47. Xu X, Nie S, Liu Z, Chen C, Xu G, Zha Y, et al. Epidemiology and clinical correlates of AKI in Chinese hospitalized adults. Clin J Am Soc Nephrol 2015; 10: 1510–8. pmid:26231194
  48. Hoste EA, Bagshaw SM, Bellomo R, Cely CM, Colman R, Cruz DN, et al. Epidemiology of acute kidney injury in critically ill patients: the multinational AKI-EPI study. Intensive Care Med 2015; 41: 1411–23. pmid:26162677
  49. Gettings LG, Reynolds HN, Scalea T. Outcome in post-traumatic acute renal failure when continuous renal replacement therapy is applied early vs. late. Intensive Care Med 1999; 25: 805–13. pmid:10447537
  50. Uchino S. Outcome prediction for patients with acute kidney injury. Nephron Clin Pract 2008; 109: c217–23. pmid:18802370
  51. Collins GS, Moons KG. Comparing risk prediction models. BMJ 2012; 344: e3186. pmid:22628131
  52. McIlroy DR, Farkas D, Matto M, Lee HT. Neutrophil gelatinase-associated lipocalin combined with delta serum creatinine provides early risk stratification for adverse outcomes after cardiac surgery: a prospective observational study. Crit Care Med 2015; 43: 1043–52. pmid:25768681
  53. Pike F, Murugan R, Keener C, Palevsky PM, Vijayan A, Unruh M et al.; Biological Markers for Recovery of Kidney (BioMaRK) Study Investigators. Biomarker enhanced risk prediction for adverse outcomes in critically ill patients receiving RRT. Clin J Am Soc Nephrol 2015; 10: 1332–9. pmid:26048891
  54. Moons KG, Royston P, Vergouwe Y, Grobbee DE, Altman DG. Prognosis and prognostic research: what, why, and how? BMJ 2009; 338: b375. pmid:19237405
  55. Cioffi WG, Ashikaga T, Gamelli RL. Probability of surviving postoperative acute renal failure. Development of a prognostic index. Ann Surg 1984; 200: 205–11. pmid:6465976
  56. Lien J, Chan V. Risk factors influencing survival in acute renal failure treated by hemodialysis. Arch Intern Med 1985; 145: 2067–9. pmid:4062459
  57. Corwin HL, Teplick RS, Schreiber MJ, Fang LS, Bonventre JV, Coggins CH. Prediction of outcome in acute renal failure. Am J Nephrol 1987; 7: 8–12. pmid:3578381
  58. Barton IK, Hilton PJ, Taub NA, Warburton FG, Swan AV, Dwight J et al. Acute renal failure treated by haemofiltration: factors affecting outcome. Q J Med 1993; 86: 81–90. pmid:8464996
  59. Chertow GM, Christiansen CL, Cleary PD, Munro C, Lazarus JM. Prognostic stratification in critically ill patients with acute renal failure requiring dialysis. Arch Intern Med 1995; 155: 1505–11. pmid:7605152
  60. Radovic M, Ostric V, Djukanovic L. Validity of prediction scores in acute renal failure due to polytrauma. Ren Fail 1996; 18: 615–20. pmid:8875687