Background

Artificial intelligence (AI), broadly defined, is the replication of human logic, thought, and processing using machines. Continual advances in computer processors, algorithms, and software have increased computing power exponentially. The ability of artificial intelligence and machine learning systems to integrate data from multiple sources and apply continuously learning algorithms in real time makes AI appealing in multiple fields, including medicine. Regulatory changes and medical advances over the past two decades have driven massive increases in the amount of data generated during patient care, and the advent of electronic medical records has made these data increasingly accessible. In the field of organ transplantation specifically, access to field-defined data on organ offers, candidate characteristics, and long-term clinical outcomes has allowed the development of both retrospective risk adjustment models and prospective predictive analytics (Tables 1 and 2). Incorporation of data from electronic medical records, outpatient immunosuppression monitoring, and novel monitoring programs that utilize patients' personal technology has exponentially increased the data available for analysis. Although in its infancy, machine learning can integrate and process data from multiple inputs and apply evidence-based computational tools to guide clinical decision-making. The goal of this review is to highlight the areas of solid organ transplantation where artificial intelligence and machine learning have the potential to improve outcomes and access to transplant.

Pretransplant Evaluation of Donors and Recipients

Patients in the pretransplant period undergo extensive clinical, social, and financial evaluations for organ transplantation. Within this framework, current evaluation and listing criteria are based on “clinical judgment” and generalized heuristics. In the landmark paper by Merion et al. in 2005, the use of “extended criteria donors” (ECD), defined as donors over the age of 60, or over the age of 50 with at least two of the following: a history of hypertension, terminal creatinine > 1.5 mg/dL, or death from stroke, was found to be beneficial for older patients and those living in areas with longer waiting times [1]. However, in the intervening 15 years, the field has grown more complex: donors have more adverse characteristics, and recipients now carry greater burdens of comorbidity. Consequently, not all patients benefit from all kidneys, particularly marginal and higher-risk organs [2]. More sophisticated risk calculators that combine recipient, donor, and center-specific characteristics to provide each patient a personalized risk assessment could maximize utilization of recovered organs, decrease discard, and lower waitlist mortality [3•].

In 2009, the Kidney Donor Risk Index (KDRI) was introduced to quantify the risk of graft failure based upon clinical characteristics, adding precision to the prior ECD/standard donor dichotomy [4]. Utilizing ten donor-specific and four transplant-specific factors, Rao et al. provided a robust estimate of the relative risk of posttransplant graft failure compared to a reference donor. Currently, a variant of the KDRI called the Kidney Donor Profile Index (KDPI) is used to assess the “quality” of a kidney. By removing the transplant-specific factors from the KDRI and normalizing the resulting score to a percentile, the KDPI provides a metric to judge the “quality” of an offered kidney against the entire population of offered kidneys [5]. Organs with a KDPI ≥ 85%, also known as “high KDPI” organs, are associated with reduced 5-year survival and greater risk of graft failure compared to kidneys with a KDPI < 85%. While these data provide some increased clarity, there is a risk of adverse selection and labeling of organs, which may contribute to excess organ discard [6, 7].
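
To illustrate the KDRI-to-KDPI conversion, the following is a minimal sketch of percentile normalization, assuming a hypothetical reference population of donor KDRI values; the actual OPTN mapping table is recomputed from the full national donor pool, so the function, distribution, and numbers here are illustrative only.

```python
import numpy as np

def kdpi_from_kdri(kdri_offer, reference_kdri):
    """Map a donor's KDRI to a KDPI-style percentile: the percentage of
    reference donors with a KDRI at or below the offered donor's KDRI."""
    reference = np.sort(np.asarray(reference_kdri))
    rank = np.searchsorted(reference, kdri_offer, side="right")
    return 100.0 * rank / reference.size

# Hypothetical reference population (relative-risk scores skew right)
rng = np.random.default_rng(0)
reference = rng.lognormal(mean=0.0, sigma=0.35, size=10_000)
print(f"KDRI 1.6 -> KDPI ~{kdpi_from_kdri(1.6, reference):.0f}%")
```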

Recipient longevity after transplant has been modeled as the estimated posttransplant survival (EPTS) score. Utilizing a recipient’s length of time on dialysis, current diabetes status, history of prior organ transplants, and age, the EPTS score provides a percentile, with lower scores indicating candidates expected to gain more years of life from kidney transplant [8]. A recipient’s EPTS score has significant implications for organ offers, as the current allocation system assigns priority for the top 20% of kidneys (those with KDPI ≤ 20%) to patients with an EPTS of ≤ 20% [5]. While patients with higher EPTS scores may derive a more limited duration of benefit from kidney transplant, many will still experience improved quality of life and reduced burden of disease. One major limitation to effective evaluation of donor-recipient interactions is the set of metrics currently used to evaluate recipient and organ quality. For example, the EPTS model has a C-statistic of 0.697 and uses only four characteristics to evaluate potential recipients. Statistical models derived using machine learning show a higher 5-year concordance statistic than the published EPTS model (0.724 vs. 0.697) and have the advantage of integrating donor and recipient criteria in the same model [9]. Although numerically small, an increase in the concordance statistic can have a significant impact on the allocation of organs nationally. Additionally, this analysis demonstrated that machine learning approaches can effectively identify the differential impacts of clinical factors in various subpopulations of a clinical model.
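
For context, the concordance statistic measures how often a model assigns higher risk to the patient who actually fails first. The sketch below is a minimal, illustrative Python implementation of Harrell's C for right-censored survival data; it is not the code used in the cited analyses, and the function name and toy inputs are hypothetical.

```python
import numpy as np

def concordance_statistic(risk, time, event):
    """Harrell's C: among usable pairs (the earlier time is an observed
    event, not a censoring), the fraction in which the higher-risk
    patient fails first. Ties in risk count as half-concordant; pairs
    with identical times are skipped for simplicity."""
    concordant, usable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            a, b = (i, j) if time[i] < time[j] else (j, i)  # a exits first
            if not event[a]:
                continue  # earlier time was censored: pair is not usable
            usable += 1
            if risk[a] > risk[b]:
                concordant += 1.0
            elif risk[a] == risk[b]:
                concordant += 0.5
    return concordant / usable

# Toy example: higher risk scores should correspond to earlier graft failure
risk = np.array([0.9, 0.4, 0.7, 0.2])
time = np.array([1.0, 5.0, 2.0, 8.0])    # years to failure or censoring
event = np.array([True, True, False, False])
print(f"C = {concordance_statistic(risk, time, event):.2f}")
```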

Machine learning algorithms have been utilized to better define the benefit of specific offers for individual candidates, defined as the difference between survival on dialysis and survival with transplant, incorporating EPTS, KDPI, and waitlist characteristics. Bae et al. analyzed 120,000 patient records in the Scientific Registry of Transplant Recipients (SRTR) database and applied machine learning algorithms to identify patients likely to benefit from a specific donated kidney [3•]. The authors suggest that some patients, given their low risk of death on dialysis, have a higher probability of 5-year survival by waiting for a lower KDPI kidney rather than accepting a higher risk donor offer early (Fig. 1). Conversely, even high KDPI organs yield improved survival and measurable benefit for patients with a high risk of death on dialysis in regions with prolonged waiting times. Easy access to a more precise estimate of the benefit of transplant gives providers an opportunity to make a more nuanced assessment of patient- and organ-specific benefit and to assist patients in making informed choices amid the current severe shortage of transplantable organs. The practical utility of this tool is shown in Fig. 1, whereby a patient with an EPTS of 35 is offered two different organs, one with a KDPI of 50 and one with a KDPI of 90. The Hopkins transplant calculator combines EPTS and KDPI into a conferred survival advantage, expressed as an increase in percentage points. The model is available at http://www.transplantmodels.com/kdpi-epts.
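
The underlying accept-versus-wait tradeoff can be made concrete with a back-of-the-envelope calculation. The sketch below is a deliberately simplified illustration with hypothetical probabilities, not the Bae et al. model: declining means the patient must both survive on dialysis until the next suitable offer and then survive after that later transplant.

```python
def survival_if_decline(p_survive_wait, p_survive_later_tx):
    """5-year survival along the decline path: survive the dialysis wait
    AND the posttransplant period after a later, lower-KDPI offer."""
    return p_survive_wait * p_survive_later_tx

def benefit_of_accepting(p_accept_now, p_survive_wait, p_survive_later_tx):
    """Positive values favor accepting the current (e.g., high-KDPI) offer."""
    return p_accept_now - survival_if_decline(p_survive_wait, p_survive_later_tx)

# Hypothetical numbers. Low-risk candidate in a short-wait region: waiting wins.
print(f"{benefit_of_accepting(0.80, 0.95, 0.88):+.2f}")  # -0.04
# Same offer, but high risk of death on dialysis during a long wait: accept.
print(f"{benefit_of_accepting(0.80, 0.70, 0.88):+.2f}")  # +0.18
```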

Fig. 1

Estimated survival benefit of transplant for a patient with an expected posttransplant survival (EPTS) score of 35% who accepts a kidney with a kidney donor profile index (KDPI) of 50 (top panel) or 90 (bottom panel). Available at http://www.transplantmodels.com/kdpi-epts/.

Table 1 Multivariate statistical models in organ transplant
Table 2 Machine learning models in organ transplant

Procurement frozen section biopsies are frequently performed to evaluate pathologic changes in high KDPI kidneys, despite evidence that the current system of interpretation by local on-call pathologists provides inconsistent data and increases the rate of inappropriate organ discard [10]. Given the complexity of interpreting these biopsies and the inherent time limitations, the use of AI to interpret whole-slide multilevel images shows promise in early trials [11, 12]. Rapid evaluation and turnaround of kidney and liver biopsies using machine learning can improve the reliability of the data available at the time of organ offers [13, 14]. Machine learning has been shown to have sensitivity and specificity similar to expert opinion in identifying T-cell-mediated rejection (TCMR) and antibody-mediated rejection (AMR) [11]. Whole-slide imaging has also shown promise in standardizing the quantification of renal fibrosis [15]. Fast access to reliably interpreted kidney biopsy data not only has implications for organ acceptance but may eventually reduce diagnostic uncertainty after transplant through more accurate identification of rejection and recurrent disease.

Technology-enabled algorithms have also recently been introduced by UNOS to target offers more precisely to the centers most likely to use them. In an effort to increase surgeon satisfaction and decrease unwanted organ offers, UNOS has piloted the Organ Offer Explorer tool, whereby new organ offers are compared to organs the surgeon previously accepted, and only offers consistent with prior behavior are conveyed to the transplant program [16]. This technology-assisted decision tool requires data on the previous acceptance behavior of transplant centers and surgeons to prioritize organ offers and reduce burden, both of which are needed in light of recent kidney allocation reforms that substantially increase the complexity of organ placement [17]. Given this enhanced computational power, AI-enhanced scoring systems could extend KDPI and EPTS by incorporating additional factors such as anticipated cold ischemic time and pulsatile perfusion parameters into assessments of donor quality, and cardiac disease, frailty, and socioeconomic status into recipient scoring. These novel measures would enhance the precision of survival prediction tools.
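
The filtering idea behind such a tool can be sketched simply. The following is a hypothetical illustration of behavior-based offer screening, not the actual Organ Offer Explorer logic (which the source does not describe at this level of detail); the data class, threshold, and margin are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    kdpi: float                 # donor quality percentile
    cold_ischemia_hours: float  # anticipated cold ischemic time

def consistent_with_history(offer, accepted, kdpi_margin=5.0):
    """Convey an offer only if it resembles organs this center has
    previously accepted (here: KDPI within a margin of the historical
    maximum, and cold ischemia no worse than previously tolerated)."""
    max_kdpi = max(o.kdpi for o in accepted)
    max_cit = max(o.cold_ischemia_hours for o in accepted)
    return (offer.kdpi <= max_kdpi + kdpi_margin
            and offer.cold_ischemia_hours <= max_cit)

history = [Offer(40, 18), Offer(62, 24), Offer(71, 20)]
print(consistent_with_history(Offer(74, 22), history))  # True: near prior behavior
print(consistent_with_history(Offer(92, 30), history))  # False: screened out
```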

Other machine learning-assisted tools exist to help inform surgeons about novel situations and their impact on transplant. In light of the recent COVID-19 pandemic, a machine learning model was reported that identifies scenarios of benefit or harm from kidney transplant during the pandemic [18•]. This study highlights how machine learning technologies can address rapidly evolving clinical and social situations and provide “evidence-based” care without the benefit of extensive clinical trials [4]. In conclusion, in the pretransplant space, AI offers the potential to improve candidate evaluation, donor acceptance, and patient education.

Posttransplant Management

Following transplantation, recipients require an initial period of intensive monitoring followed by routine testing for the duration of their transplant. Identifying post-operative complications early, managing immunosuppression, and maintaining close follow-up are challenges that every transplant center faces. In the immediate post-operative period, complications such as infection, acute graft rejection, and delayed graft function are of significant concern. Luo et al. utilized machine learning algorithms to develop a predictive model that identifies patients at higher risk of severe pneumonia during the posttransplant hospitalization [19]. Delayed graft function (DGF), defined as the need for dialysis within 1 week of kidney transplantation, is associated with greater rates of rejection, higher costs, and impaired patient quality of life [12, 13]. The first predictive model utilizing patient characteristics to calculate the risk of delayed graft function was developed in 2010; it had a C-statistic of 0.704 and was generalizable to external populations [20, 21]. Newer analyses by Kawakita et al. compared several accepted approaches to building clinically oriented machine learning models (logistic regression (LR), elastic net, random forest, artificial neural network (ANN), and extreme gradient boosting (XGB)) head to head in predicting delayed graft function in kidney transplant recipients [22]. The resulting models used 30 variables, including 13 donor-related, eight recipient-related, and five transplant-related factors. The machine learning models had improved discrimination compared with the published regression model, as measured by a greater area under the receiver operating characteristic curve (0.742 vs. 0.703). Accurate identification of grafts at high risk of DGF may assist with future interventional studies to decrease ischemia-reperfusion injury, targeted use of pulsatile perfusion, and patient counseling at the time of organ offer.
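
A minimal sketch of such a head-to-head comparison is shown below, using synthetic data in place of registry variables and scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost; the AUC values from this toy setup bear no relation to the published 0.742 vs. 0.703 figures.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 30 donor/recipient/transplant variables,
# with DGF as the (imbalanced) binary outcome.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           weights=[0.75, 0.25], random_state=0)

models = {
    "LR":          make_pipeline(StandardScaler(),
                                 LogisticRegression(max_iter=1000)),
    "elastic net": make_pipeline(StandardScaler(),
                                 LogisticRegression(penalty="elasticnet",
                                                    solver="saga", l1_ratio=0.5,
                                                    max_iter=5000)),
    "RF":          RandomForestClassifier(n_estimators=300, random_state=0),
    "GBM (~XGB)":  GradientBoostingClassifier(random_state=0),
    "ANN":         make_pipeline(StandardScaler(),
                                 MLPClassifier(hidden_layer_sizes=(32, 16),
                                               max_iter=1000, random_state=0)),
}

# Compare discrimination by cross-validated area under the ROC curve.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:12s} AUC = {auc:.3f}")
```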

The choice of immunosuppression regimen after kidney and liver transplant varies widely [23, 24]. Prior national registry analyses have confirmed that this variation is primarily driven by center-specific protocols. However, immunosuppression regimens and patient characteristics have been shown to affect 3-year graft survival and complications from immunosuppression when comparing similar individuals on different regimens [25]. Thus, developing a personalized approach to immunosuppression may reduce posttransplant morbidity. For example, regimens with early steroid withdrawal after kidney transplant are associated with fewer complications in elderly populations. An interactive tool that predicts complications by immunosuppression regimen using both donor and recipient data is accessible through the CISTEM Immunosuppression Complication Risk Rejection Tool (www.CISTEM.wustl.edu). This site provides a visual graphic with the 3-year projected risk of complications from immunosuppression as well as 3-year graft failure rates.

In addition to their impact on graft failure, genetic differences among individuals, such as single nucleotide polymorphisms in CYP3A and the presence of the APOL1 risk allele, have clinically significant effects on posttransplant outcomes and tacrolimus dosing [26, 27]. A recent review analyzed publications that incorporated protein biomarkers and pharmacogenetic factors into machine learning models of tacrolimus bioavailability. These studies showed that artificial neural networks achieve superior AUC, sensitivity, and specificity compared with empiric weight- or race-based drug dosing strategies [28]. Because variations in tacrolimus trough concentrations have been associated with increased risk of acute graft loss, machine learning systems have the potential to improve the precision of immunosuppression dosing.
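
One commonly reported intra-patient variability metric is the coefficient of variation (CV) of successive trough levels. The sketch below is an illustrative calculation with invented trough values; thresholds for “high” variability differ across studies and are not specified here.

```python
import numpy as np

def tacrolimus_trough_cv(troughs_ng_ml):
    """Coefficient of variation (%) of tacrolimus trough levels:
    sample standard deviation divided by the mean."""
    levels = np.asarray(troughs_ng_ml, dtype=float)
    return 100.0 * levels.std(ddof=1) / levels.mean()

# Hypothetical troughs (ng/mL) for a stable versus an erratic patient
print(f"stable:  CV = {tacrolimus_trough_cv([7.2, 6.8, 7.5, 7.0, 6.9]):.1f}%")
print(f"erratic: CV = {tacrolimus_trough_cv([4.1, 9.8, 5.5, 11.2, 6.0]):.1f}%")
```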

Technology-enabled care with handheld devices such as cell phones provides a significant opportunity to improve adherence and transplant outcomes. McGillicuddy et al. demonstrated that 1-year tacrolimus trough variability was significantly reduced using a mobile medication monitoring application [29]. Several other ongoing clinical trials are also evaluating the efficacy of technology-enabled care in monitoring immunosuppression in kidney transplant [30, 31]. Ubiquitous access to technology in the current era provides an opportunity to develop patient-oriented interventions that improve clinical outcomes through patient empowerment and AI-driven feedback.

In light of the COVID-19 pandemic, technology has also provided an accessible means of delivering transplant care remotely. Transplant patients have been shown to have an overall positive disposition toward using technology in their care [32]. Easily obtainable metrics of frailty, when paired with patient-reported functional questionnaires, have been shown to be a potential screening tool for identifying functionally poor candidates awaiting kidney transplantation [33]. For example, easily measured mobility-based frailty assessments such as sit-to-stand, timed get-up-and-go, and steps per day can be incorporated into telehealth visits and can accurately predict poor transplant candidacy [34]. Telehealth systems and remote monitoring of biomarkers, including donor-derived cell-free DNA and gene expression profiling, offer a potentially feasible model for detecting allograft injury when incorporated into multidimensional assessments of graft function and survival [35].

Conclusions

Artificial intelligence and technology are becoming increasingly important adjuncts to clinical decision-making. Ubiquitous access to technology and the increasing amount of patient data readily available argue for rapid adoption of big data analytics into transplant care. Clinical decision-making at multiple phases of transplant care has been heavily influenced by center-specific algorithms that guide donor organ selection and recipient approval without reliable assessment of the predicted outcomes given specific donor and recipient characteristics. Multiple AI models now exist to help guide clinicians in providing data-driven, personalized care. As outlined above, in almost every aspect of transplant care there exist tools, calculators, and clinical models that provide important insight, yet many are not yet readily available at the point of care.

Many factors beyond KDPI and EPTS have been shown to influence long-term graft function, including biopsy results, perfusion pump parameters, and estimated cold ischemic time, and these should be integrated into organ offers. The technology exists to utilize these data within the transplant network to optimize allocation and guide clinical decision-making. We suggest that UNOS incorporate these data into DonorNet® to provide objective, patient-specific data at the time of organ offer. Similarly, patients and referring providers should have access to patient-specific graft and patient survival estimates to inform their decisions about specific offers. Without these data, we are simply providing our best guess.

Availability of Data and Material

All data were obtained from published works available through pubmed.gov.