
Open Access 08.03.2023 | REVIEW

Artificial Intelligence for Anterior Segment Diseases: A Review of Potential Developments and Clinical Applications

Authors: Zhe Xu, Jia Xu, Ce Shi, Wen Xu, Xiuming Jin, Wei Han, Kai Jin, Andrzej Grzybowski, Ke Yao

Published in: Ophthalmology and Therapy | Issue 3/2023

Abstract

Artificial intelligence (AI) technology is promising in the field of healthcare. With the developments of big data and image-based analysis, AI shows potential value in ophthalmology applications. Recently, machine learning and deep learning algorithms have made significant progress. Emerging evidence has demonstrated the capability of AI in the diagnosis and management of anterior segment diseases. In this review, we provide an overview of AI applications and potential future applications in anterior segment diseases, focusing on cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction.
Notes
Zhe Xu and Jia Xu contributed equally as co-first authors.
Key Summary Points
The application of artificial intelligence (AI) in anterior segment diseases is promising.
AI is capable of diagnosing and managing anterior segment diseases.
Cornea, refractive surgery, cataract, anterior chamber angle detection and refractive error prediction are the most common fields of AI applications.
AI methods still face potential challenges.

Introduction

With the development of artificial intelligence (AI) technology, it is proving to be promising in the field of healthcare, including radiology, pathology, microbiology, electronic medical records, and surgery [1–3]. The potential value of AI in ophthalmology is expanding, especially in areas relying on big data and image-based analysis [4, 5]. The cornea and the lens are the two most important refractive structures in the anterior segment [2]. Cataract and corneal opacity rank within the top five leading causes of blindness worldwide [6, 7]. Delayed recognition of anterior segment diseases may lead to severe complications and permanent vision loss. The diagnosis and treatment of anterior segment diseases often involve imaging analysis, including slit-lamp photography (SLP), anterior segment optical coherence tomography (AS-OCT), corneal tomography/topography (CT), ultrasound biomicroscopy (UBM), specular microscopy (SM), and in vivo confocal microscopy (IVCM) [2, 4]. Thus, AI algorithms based on anterior segment images are mostly committed to improving accuracy for disease screening and predicting possible outcomes after disease treatments, in combination with clinical data [2, 4, 5].
In this review, we detail the history and basic principles of AI algorithms in anterior segment diseases. In addition, we provide an updated overview of AI applications and discuss potential future directions in anterior segment diseases, encompassing cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction.

Methods

We performed the systematic review using the PubMed database, focusing on the most recent studies and clinical trials on anterior segment diseases. The following keywords were searched: “artificial intelligence,” “machine learning,” “deep learning,” “cornea,” “keratitis,” “keratoplasty,” “corneal sub-basal nerve,” “corneal epithelium,” “corneal endothelium,” “refractive surgery,” “keratoconus,” “cataract,” “cataract surgery,” “intraocular lens,” “paediatric cataract,” “cataract surgery training,” “cataract surgery monitoring,” “anterior chamber angle,” “refractive error,” and “anterior segment diseases.” Articles published in English were included and manually screened for further relevant studies. This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors. The literature search is illustrated in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart (Fig. 1).

History and Principle

McCarthy et al. first reported the concept of AI in 1955 [8]. In general, the core of an AI algorithm is to mimic human behaviors in the real world and make human-like decisions through software programs [9]. The first generation of AI algorithms relied on curation by medical experts and the formulation of robust decision rules [10]. During the past 60 years, AI algorithms have improved considerably. Recent AI algorithms can manage more complex interactions; machine learning (ML) and deep learning (DL) are the two most common subfields [5, 9].
ML algorithms fall into two categories: supervised and unsupervised. Supervised ML methods are trained on input–output pairs (termed “ground truth”), which contain inputs and the desired output labels manually marked by human experts [5, 9, 10]. The algorithm learns to produce the correct output for new, unseen cases. Unsupervised ML methods are trained on unlabeled data to find subclusters of the original data, which may reveal something previously unknown to experts. Common ML algorithms include logistic regression (LR), decision trees (DT), support vector machines (SVM), Gaussian mixture models (GMM), and random forests (RF).
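As a minimal illustration of the supervised workflow just described (not drawn from any study in this review), the following Python sketch trains an SVM on synthetic labeled data and evaluates it on held-out cases; the features and labels are stand-ins for expert-annotated clinical data.

```python
# Illustrative sketch only: supervised learning on labeled "ground truth" pairs,
# then prediction on unseen cases.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical tabular features (e.g., corneal metrics) with expert-assigned labels.
X = rng.normal(size=(200, 5))                   # 200 cases, 5 numeric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 0 = normal, 1 = disease (synthetic rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = SVC(kernel="rbf")     # support vector machine, one of the ML methods named above
model.fit(X_train, y_train)   # learn the input-output mapping from labeled examples
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```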
DL algorithms, the latest incarnation of artificial neural networks (ANN), can perform sophisticated multilevel data abstraction without manual feature engineering [9, 11]. A DL network consists of an input layer, an output layer, and several hidden layers in between. Each artificial neuron, loosely analogous to a cortical neuron, responds to elements of the input or preceding hidden layer, and a “weight” is assigned to each connection before the signal is passed to the next layer. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are further developments of DL methods.
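To make the layer-and-weight description concrete, here is a minimal CNN sketch in PyTorch; the layer sizes, 224 × 224 RGB input, and two-class output are illustrative assumptions, not any of the models discussed in this review.

```python
# Minimal CNN sketch: stacked convolutional (hidden) layers followed by an output layer.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learnable "weights" of the first hidden layer
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)       # output layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(4, 3, 224, 224))  # 4 RGB images -> 4 x 2 class scores
print(logits.shape)
```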

Cornea

Infectious Keratitis

Corneal blindness largely results from keratitis [6, 12]. Keratitis can worsen rapidly and even progress to corneal perforation without timely and correct diagnosis [12]. Early detection and appropriate treatment of keratitis can halt disease progression and lead to a better visual prognosis [12, 13]. However, the diagnosis of keratitis often requires a skilled ophthalmologist, and the current clinical positive rate (33–80%) is far from ideal [14].
AI has been developed to recognize causes and improve the diagnostic accuracy of keratitis. Saini et al. built an ANN classifier using 106 corneal ulcer subjects with laboratory evidence on smear or culture and a complete healing response to specific antibiotic or antifungal drugs [15]. Specificities of the ANN classifier for the bacterial and fungal categories were 76.5% and 100%, respectively. Using 2008 IVCM images, Lv et al. tested a DL ResNet system to diagnose fungal keratitis [16]. The accuracy, specificity, and sensitivity were 0.9364, 0.9889, and 0.8256, respectively. Also, with 1870 IVCM images, Liu et al. trained a novel CNN framework (AlexNet) for the automatic diagnosis of fungal keratitis (accuracy = 99.95%) [17]. Kuo et al. designed a DL framework with DenseNet architecture, relying on 288 slit-lamp images, to diagnose fungal keratitis (sensitivity of 71%, specificity of 68%) [18]. Hung et al. developed deep learning models for identifying bacterial keratitis and fungal keratitis, with the highest average accuracy of 80.0%, using slit-lamp images from 580 patients [19]. Ghosh et al. tested CNN models, with the highest area under the curve (AUC) of 0.86, for rapidly discriminating between fungal keratitis and bacterial keratitis using slit-lamp images from 194 patients [20]. Li et al. built a DL algorithm, DenseNet121, based on 6567 slit-lamp images [21]. DenseNet121 achieved an AUC of 0.998, a sensitivity of 97.7%, and a specificity of 98.2% in keratitis detection.
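Several of the studies above fine-tune a pretrained CNN such as DenseNet121 on slit-lamp photographs. The following hedged sketch shows that general transfer-learning recipe; the two-class labels, random tensors standing in for images, and training hyperparameters are assumptions for illustration, not the authors' published code.

```python
# Generic transfer-learning sketch: replace the classifier head of a pretrained
# DenseNet121 and run one illustrative training step.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # keratitis vs. normal (assumed labels)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a random batch standing in for slit-lamp images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```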
AI methods have also proven capable of distinguishing keratitis from other anterior segment diseases. Based on 1772 slit-lamp images, Li et al. combined Visionome with a DL framework for dense annotation of pathological features [22]. DL frameworks using ResNet and a faster region-based CNN detected anterior segment diseases such as keratitis, conjunctival hyperemia, and pterygium. Gu et al. showed that a novel DL network trained on 5325 slit-lamp images, containing a family of multitask and multilabel learning classifiers, could diagnose infectious keratitis, noninfectious keratitis, corneal dystrophy or degeneration, and corneal neoplasm (AUC = 0.910) [23]. Loo et al. also proposed a DL algorithm for segmentation of ocular structures and microbial keratitis biomarkers [24]. Trained on slit-lamp images from 133 eyes, the algorithm is promising for quantifying corneal physiology and pathology.

Keratoplasty

As reported by eye banks, the demand for corneal graft tissue is increasing, which represents a significant financial and public health burden [25]. AI methods may help cornea surgeons better decide on the possibilities for performing keratoplasty and related procedures. Yousefi et al. introduced a ML approach, using AS-OCT information, that was useful for identifying keratoconus and keratoconus suspects (3495 subjects) who may be at higher risk for future keratoplasty [26]. Hayashi et al. built a DL model (AUC = 0.964) to judge the need for rebubbling after Descemet membrane endothelial keratoplasty (DMEK) using AS-OCT images from 62 eyes.
AI models have also been tested for detecting and improving the therapeutic effects of surgical procedures during keratoplasty. Hayashi et al. built a deep neural network model, Visual Geometry Group-16, to predict successful big-bubble (SBB) formation during deep anterior lamellar keratoplasty (DALK) [27]. Based on AS-OCT and corneal biometric values of 46 patients, the AUC of this model reached 0.75. Using AS-OCT images from 1172 subjects, Treder et al. evaluated a CNN classifier (accuracy = 96%) to detect graft detachment after DMEK [28]. Heslinga et al. developed a DL model to locate and quantify graft detachment after DMEK using 1280 AS-OCT scans [29]. The Dice score for the horizontal projections of all B-scans with detachments was 0.896. Pan et al. used a DL framework for augmented reality (AR)-based surgical navigation to guide suturing in DALK [30]. The corneal contour tracking accuracy was 99.2% on average. Vigueras-Guillén et al. developed a DL method to estimate corneal endothelium parameters from SM images of 41 eyes after Descemet stripping automated endothelial keratoplasty (DSAEK) [31]. The proposed DL method obtained reliable and accurate estimations.
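The Dice scores quoted above measure the overlap between a predicted segmentation mask and the manual annotation. A minimal NumPy implementation of the metric (illustrative only) is shown below.

```python
# Dice = 2*|A∩B| / (|A|+|B|) for two binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Two toy 4x4 masks that overlap in 3 of 4 foreground pixels -> Dice = 0.75.
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]])
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0]])
print(round(dice_coefficient(pred, truth), 3))
```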

Corneal Subbasal Nerve Neuropathy

Diabetic peripheral neuropathy (DPN) is the most common complication of both type 1 and type 2 diabetes [32]. IVCM is capable of quantifying the corneal subbasal plexus, which reflects small nerve fiber damage and repair. Scarpa et al. used a CNN algorithm for the classification of IVCM images from 50 healthy subjects and 50 diabetic subjects with neuropathy [33]. This CNN algorithm provided a completely automated analysis for identifying clinically useful features of corneal nerves (accuracy = 96%). Focusing on corneal nerve fibers, Williams et al. applied a deep learning algorithm to IVCM images from 222 subjects, with a specificity of 87% and a sensitivity of 68% for the identification of DPN [34]. Based on IVCM images from 369 subjects, Preston et al. utilized a CNN algorithm for corneal subbasal plexus detection to classify DPN, with a highest F1 score of 0.91 [35].

Cornea Epithelium and Endothelium Parameters

The epithelium is the outermost layer of the cornea, critical for refractive status and wound healing [36]. The most common examination method used by ophthalmologists is slit-lamp microscopy combined with different illuminations and eye staining techniques. Noor et al. reported that AI methods could differentiate abnormal corneal epithelium tissues using 25 hyperspectral imaging (HSI) images without eye staining [37]. SVM and CNN algorithms were used to extract image features with an accuracy of 100%.
Fuchs endothelial dystrophy (FED) is characterized by thickening of Descemet’s membrane, guttae, and a progressive loss of endothelial cells [38]. Guttae are deposits of extracellular matrix in the corneal endothelium. Vigueras-Guillén et al. developed DL methods to estimate corneal endothelium parameters using 500 SM images with guttae [39]. The DL methods obtained lower mean absolute errors than commercial software.

Refractive Surgery

Early Keratoconus

Keratoconus (KC) is a noninflammatory corneal ectatic disorder; over 90% of reported cases are bilateral. Progressive corneal thinning leads to corneal protrusion and irregular corneal astigmatism, which induce poor visual performance in KC patients [40]. Early KC is a general concept covering early-stage KC, including subclinical KC, preclinical KC, KC suspects, and forme fruste KC (FFKC) [41]. Unlike KC, the visual performance of early KC patients is good, and there is no specific corneal topographic finding in early KC patients [42]. However, undetected early KC is known to be associated with iatrogenic keratectasia, which is the most severe and irreversible complication after laser in situ keratomileusis (LASIK) [43, 44]. Hence, discriminating early KC from normal eyes is an urgent task for ophthalmologists and ophthalmic science.
Discriminating early KC from normal eyes using a single anterior corneal topographic map is difficult; however, AI makes it possible to generate thousands of features from big data and thereby improve the discrimination of early KC. In 1997, Smolek et al. introduced a neural network (NN) algorithm to discriminate early KC eyes from KC eyes using corneal topography with a limited sample size [45]. Since then, numerous studies have used ML algorithms to detect early KC eyes from corneal topography. Accardo et al. used a NN method to discriminate early KC eyes from normal eyes, achieving a sensitivity of 94.1% and a specificity of 97.6% [46]. Saad et al. also used a NN method to detect early KC eyes, achieving a sensitivity of 63% [47].
After the Scheimpflug camera became widely applied in ophthalmology clinics, multiple AI-related studies used this system to detect early KC by collecting both anterior and posterior corneal surface information. Kovacs et al. [48] and Hidalgo et al. [49] both applied ML algorithms to detect early KC eyes using a Scheimpflug camera, achieving sensitivities of 92% and 79.1%, respectively. Lopes et al. [50] and Smadja et al. [51] performed similar studies to detect early KC eyes using RF and DT ML algorithms. Recently, Xie et al. used a DL algorithm to detect early KC eyes and achieved an accuracy of 95%, higher than that achieved by senior ophthalmologists [52]. Further, Xu et al. combined raw data of the entire cornea (anterior curvature, posterior curvature, anterior elevation, posterior elevation, and pachymetry) to build a ML model called KerNet [53]. KerNet was helpful for distinguishing clinically unaffected eyes of patients with asymmetric KC from normal eyes (AUC = 0.985). Chen et al. reported a CNN model that combined color-coded maps of the axis, thickness, and front and back elevation [54]. The CNN model reached an accuracy of 0.90 for recognizing healthy eyes and early-stage KC. The outcomes of these studies show great potential for the application of AI in detecting early KC eyes using a Scheimpflug camera, although the detection accuracy varies among studies; the limited information in the low-resolution images captured by the Scheimpflug camera is one possible reason.
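Models such as KerNet feed several co-registered corneal maps into a single CNN as separate input channels. The sketch below illustrates that general idea only; the map resolution, channel order, and network head are assumptions, not the published architecture.

```python
# Stacking several corneal maps (curvature, elevation, pachymetry) as CNN input channels.
import torch
import torch.nn as nn

n_maps = 5                                  # anterior/posterior curvature, anterior/posterior elevation, pachymetry
maps = torch.randn(1, n_maps, 141, 141)     # one eye, 5 co-registered maps on an assumed 141x141 grid

backbone = nn.Sequential(
    nn.Conv2d(n_maps, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                       # normal vs. early keratoconus (assumed labels)
)
print(backbone(maps).shape)                 # torch.Size([1, 2])
```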
In recent years, based on AI algorithms, several studies have tried to combine corneal information from multiple instruments to improve the detection accuracy of early KC. Hwang et al. [55] and Shi et al. [56] both combined Scheimpflug camera information with AS-OCT-derived corneal epithelial information to differentiate early KC eyes from normal eyes. The AUCs were 1.0 and 0.93, respectively. Perez-Rueda et al. combined biomechanical and corneal topographic information to detect early KC with an AUC of 0.951 [57]. These studies show that corneal information of different dimensions can improve the detection accuracy of early KC. AI will be a useful tool to detect early KC by generating numerous features from the cornea.

Surgery Outcomes in Refractive Surgery

Predicting the outcomes of laser refractive surgery is important during clinical work. AI methods may enhance the quality of refractive surgical results by reducing errors in nomogram planning. Based on big data (17,592 cases and 38 clinical parameters for each patient), Achiron et al. developed ML classifiers to support clinical decision-making and enable better individual risk assessment [58]. Cui et al. analyzed an ML technique for predicting small incision lenticule extraction (SMILE) nomograms with 1465 eyes [59]. Compared with the experienced surgeon, the ML model showed significantly higher efficacy and a smaller error in spherical equivalent (SE) correction. However, the ML model was less predictable for eyes with high myopia and astigmatism. Park et al. tested ML algorithms using 3034 eyes, described by four categorical features and 28 numerical features, accepted for SMILE surgery [60]. AdaBoost achieved the highest performance and accuracy for the sphere, cylinder, and astigmatism axis predictions.

Lens

Cataract is the leading cause of blindness worldwide [6]. Approximately 32 million cataract surgeries have been performed globally up to 2020. With the global aging trend, public eye care demand is growing fast [6]. The clinical consultation and surgery for cataract patients place a heavy burden on healthcare systems [61]. Thus, more precise diagnosis and treatment are urgently needed.
Age-related cataract typically manifests as cortical, nuclear sclerotic, and/or posterior subcapsular cataract. Slit-lamp microscopy and photography are mostly used to observe and capture cataract with diffuse, slit-beam, and retro-illumination techniques. In general, doctors diagnose and grade cataract according to subjective experience, based on the Lens Opacities Classification System III (LOCS III) and/or the Wisconsin cataract grading system, which might be inconsistent [62, 63]. Therefore, a unified and objective assessment of cataract is critical. AI technology based on big data is capable of this diagnostic and grading task. Using 37,638 slit-lamp photographs, Wu et al. built a ResNet model for capture-mode recognition (AUC > 99%), cataract diagnosis (AUC > 99%), and detection of referable cataracts (AUC > 91%) [64]. To grade nuclear cataracts, Gao et al. built a CNN model using 5378 slit-beam images (mean absolute error = 0.304) [65]. Similarly, Xu et al. built a support vector regression (SVR) model to grade nuclear cataracts using 5378 slit-beam images (mean absolute error = 0.336) [66]. Cheung et al. built an SVM model to grade nuclear cataracts using 5547 slit-beam images (AUC = 0.88–0.90) [67]. Keenan et al. developed a deep learning model called DeepLensNet to perform automated and quantitative classification of cataract severity for all three types: cortical, nuclear sclerotic, and posterior subcapsular cataract. Compared with clinical ophthalmologists, DeepLensNet performed significantly more accurately for cortical opacity (mean squared error = 13.1) and nuclear sclerosis (mean squared error = 0.23). For the least common type, posterior subcapsular cataract, the grading capability was similar between DeepLensNet (mean squared error = 16.6) and clinical ophthalmologists [68]. Capturing slit-lamp images of good quality requires a certain amount of training to reduce inter-examiner deviation, and the clinical grading labels used for the training set may introduce biases. Future work might focus on more objective grading algorithms covering cortical, nuclear, and posterior subcapsular cataract, avoiding such errors of subjectivity.
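Several of the grading studies above treat cataract severity as a continuous regression target evaluated by mean absolute error. The following hedged sketch shows that pattern with a generic ResNet-18 backbone; the backbone choice, grade values, and random tensors standing in for slit-beam photographs are illustrative assumptions.

```python
# Continuous cataract grading as image regression trained with an L1 (mean absolute error) loss.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)        # predict one continuous LOCS-style grade

criterion = nn.L1Loss()                              # mean absolute error, as reported in the studies
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)                 # stand-in for slit-beam photographs
grades = torch.tensor([[1.5], [3.0], [2.2], [4.8]])  # hypothetical expert grades

optimizer.zero_grad()
loss = criterion(model(images), grades)
loss.backward()
optimizer.step()
print(f"MAE on this batch: {loss.item():.3f}")
```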
AI models using fundus images to diagnose and grade cataract have also been developed. Xu et al. built a CNN model using 8030 fundus images to grade cataract (accuracy = 86.24%) [69]. Zhang et al. applied an SVM and CNN model using 1352 fundus images to grade cataract (accuracy = 94.75%) [70]. Xiong et al. utilized a DT model for cataract grading using 1355 fundus images (accuracy = 83.8%) [71]. Yang et al. developed an ensemble learning model for cataract grading using 1239 fundus images (accuracy = 84.5%) [72]. However, fundus images do not directly capture the lenticular opacity. Along the visual axis, any opacity of the cornea or vitreous, or a small pupil, can blur the fundus image, leading to inaccurate diagnosis of cataract.
Moreover, Scheimpflug tomography has been used to build an objective tool for nuclear cataract staging, called Pentacam Nucleus Staging (PNS). Based on the pixel intensity in the nucleus region, the PNS provides a severity value for nuclear sclerosis [73, 74]. Based on a deep learning algorithm, swept-source optical coherence tomography (SS-OCT) images have also been used to quantify cataract by lens pixel intensity. The sensitivity and specificity for detecting cortical, nuclear, and posterior subcapsular cataract were 94.4% and 94.7%, respectively [75]. However, each imaging modality has its own limitations, and all AI-based models and tools are built on the properties of the particular imaging device. Well-designed AI models may be further developed along the following criteria: objective grading, detection of all three cataract types, specificity for lens opacity, interpretability of results, robustness to corneal disease, and limited inter-operator variability [75].

IOL Prediction

Intraocular lens (IOL) formulas are developed to improve IOL selection for cataractous eyes. New IOL formulas apply big data and computational methodologies to achieve better prediction of the target refraction and to simplify the IOL selection process (Table 1).
Table 1
Artificial intelligence (AI)-based formulas for intraocular lens (IOL) power calculations

AI-based IOL formula | Principles | Metrics
Hill-RBF 3.0 | Neural network, regression | AL, K, ACD, CCT, WTW, LT, gender
Kane | Theoretical optics, regression | AL, K, ACD, CCT, LT, gender
Ladas Super Formula | Deep learning with various formulas | AL, K, ACD
Pearl-DGS | Machine learning, output linearization | AL, K, ACD, CCT, WTW, LT
Clarke | Bayesian additive regression trees | AL, K, ACD, CCT, LT
VRF-G | Theoretical optics, regression, ray-tracing | AL, K, ACD, LT, CCT, WTW, gender, Pre-R
FullMonte | Neural network | AL, K, STS width and perpendicular depth, cornea shape factor Q
Kamona | Machine learning | Mean K of anterior corneal surface, mean K of posterior corneal surface, AL, ACD, LT, WTW

AL axial length, K keratometry, ACD anterior chamber depth, CCT central corneal thickness, WTW white-to-white, LT lens thickness, STS sulcus-to-sulcus, Pre-R preoperative refraction
Using the method of pattern recognition, the Hill-radial basis function (Hill-RBF) formula was developed based on AI and regression analysis of a very large database of actual postsurgical refractive outcomes to predict IOL power [76]. The Hill-RBF can deal with undefined factors in IOL power calculations. However, it is still an empirical algorithm that relies on the type of data and eye characteristics [77]. Hill-RBF 2.0 improved on this with a larger database combined with expanded “in-bounds” biometry ranges [76]. Hill-RBF 3.0 added gender to the main metrics and further improved accuracy [78].
The Kane formula is a new IOL power formula that combines AI with theoretical optics; it includes axial length, keratometry, anterior chamber depth, lens thickness, central corneal thickness, A-constant, and gender for IOL power prediction. Connell et al. found the Kane formula to be more accurate than the Hill-RBF 2.0 and the Barrett Universal II for predicting actual postoperative refraction [79].
In 2015, the concept of a “super formula” was introduced with the integration of AI technology. This formula allowed a three-dimensional analysis framework, including IOL power, axial length, and corneal power, to be used to observe areas of similarity and disparity between IOL formulas [80]. The Ladas super formula consists of SRK/T, Hoffer Q, Holladay 1, Holladay with WK adjustment, and Haigis, combined with the help of a complex DL algorithm. Kane et al. compared the Ladas formula with the newer IOL formulas [77]. The study found that the Ladas super formula was less accurate than the Barrett Universal II but more accurate than the Hill-RBF.
Based on prediction of the theoretical internal lens position (TILP), the Pearl-DGS is a thick-lens IOL calculation formula using AI and linear algorithms [81]. The Pearl-DGS is currently an open-source tool for IOL power prediction. Clarke et al. reported a ML model (Bayesian additive regression trees) for IOL power calculation to optimize postoperative refractive outcomes, which had a median error close to the IOL manufacturer tolerance [82]. The VRF formula is a vergence-based thin-lens formula. Based on theoretical optics with regression and ray-tracing principles, the VRF-G formula is a modification of the VRF that includes eight metrics: axial length, keratometry, anterior chamber depth, horizontal corneal diameter, lens thickness, preoperative refraction, central corneal thickness, and gender [83]. The FullMonte formula was built on mathematical neural networks combined with a Markov chain Monte Carlo algorithm [84]. The Kamona method is based on ML algorithms using the mean K of the anterior corneal surface, mean K of the posterior corneal surface, axial length, anterior chamber depth, lens thickness, and white-to-white values [85].
Other AI-based IOL models have been tested to predict the IOL power. Li et al. developed a novel ML-based IOL power calculation method, the Nallasamy formula, based on a dataset of cataract patients [86]. They proved that the Nallasamy formula outperformed Barrett Universal II. Li et al. also developed a ML-prediction method to improve the performance of the ray-tracing IOL calculation, which showed more precise results in long eyes [87]. Sramka et al. evaluated the Support Vector Machine Regression model (SVM-RM) and the Multilayer Neural Network Ensemble model (MLNN-EM) for IOL power calculations, which achieved better results than the Barrett Universal II formula [88].
Li et al. incorporated a ML method for effective lens position (ELP) predictions, which is an important factor for IOL power formulas [89]. Brant et al. tested a ML algorithm to optimize the IOL inventory close to the target refractive status [90]. With the development of global databases and AI algorithms, more new IOL power calculators and models will achieve better IOL power prediction, especially for short eyes, long eyes, and post-refractive surgery eyes.
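As an illustration only, the sketch below trains a generic ML regressor on the biometric inputs listed in Table 1 (AL, K, ACD, LT, CCT, WTW). It is not any of the named formulas (Hill-RBF, Kane, Pearl-DGS, and so on); the synthetic biometry and the toy SRK-like relation used to generate training targets are assumptions made purely for demonstration.

```python
# Generic ML regression over IOL biometric features; data and target are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 500
# Synthetic biometry: AL, mean K, ACD, LT, CCT, WTW.
X = np.column_stack([
    rng.normal(23.5, 1.2, n),   # AL (mm)
    rng.normal(43.5, 1.5, n),   # K (D)
    rng.normal(3.1, 0.4, n),    # ACD (mm)
    rng.normal(4.5, 0.4, n),    # LT (mm)
    rng.normal(0.54, 0.03, n),  # CCT (mm)
    rng.normal(11.8, 0.4, n),   # WTW (mm)
])
# Toy SRK-like target (for illustration only): P = A - 2.5*AL - 0.9*K plus noise.
y = 118.4 - 2.5 * X[:, 0] - 0.9 * X[:, 1] + rng.normal(0, 0.3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE (D):", round(mean_absolute_error(y_test, model.predict(X_test)), 2))
```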

Pediatric Cataract

Although pediatric cataracts are relatively rare (1 per 3000), their clinical manifestations are quite inconsistent [91]. Pediatric cataract causes deprivation of visual stimuli, which is a major threat to visual development. Appropriate diagnosis and treatment help reduce deprivation amblyopia and blindness [92, 93].
Recent developments in AI have shown potential for the diagnosis and management of pediatric cataract using slit-lamp images. Liu et al. built an AI model that combined CNN and SVM methods for qualitative and quantitative pediatric cataract detection using 886 slit-lamp images [94]. This model was validated for pediatric cataract diagnosis in terms of classification (accuracy = 97.07%), area grading (accuracy = 89.02%), density (accuracy = 92.68%), and location (accuracy = 89.28%). Lin et al. created the Congenital Cataract-Cruiser (CC-Cruiser) to identify, stratify, and strategize treatment from images of pediatric cataract [95]. The accuracies of cataract diagnosis and treatment determination reached 87.4% and 70.8%, respectively. In addition, Long et al. developed the Congenital Cataract-Guardian (CC-Guardian) to accurately detect and address complications using internal and multiresource validation [96]. The CC-Guardian includes three functional modules: a prediction module, a follow-up scheduling module, and a clinical decision module. The CC-Guardian provided real medical benefits for the effective management of congenital cataract. Combining slit-lamp images and clinical information, Zhang et al. applied random forest (RF), naïve Bayes (NB), and association rule mining to build an AI model to predict postoperative complications in pediatric cataract patients, with average classification accuracies over 75% [97]. Future studies should include more clinical data and imaging results to improve the prognostic accuracy for pediatric cataract.

Cataract Surgery Training and Monitoring

Cataract surgery is the cornerstone operation for an eye surgeon to master. Video recording of cataract surgery is an effective way to collect surgical workflows, which are useful for surgical skill training and optimization. Combined with AI algorithms, this may extend to applications such as automatic report generation and real-time support. The Challenge on Automatic Tool Annotation for cataRACT Surgery (CATARACTS), in the context of a decision support algorithm, demonstrated that automatic annotation of cataract surgery is workable [98]. Yu et al. showed that a CNN-RNN algorithm supplied with instrument labels was accurate for identifying presegmented phases in cataract surgery videos [99]. Yeh et al. developed a CNN-RNN model that showed accurate predictions for routine steps of cataract surgery and estimated the likelihood of complex cataract surgeries requiring advanced surgical steps [100]. Yoo et al. trained a CNN-based smart speaker (accuracy = 93.5%) to recognize timeout speech and confirm surgical information, such as right side, left side, cataract, phacoemulsification, and IOL [101]. Further improvements might offer more help in cataract surgery education and monitoring.
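The CNN-RNN pattern referenced above typically pairs a per-frame CNN feature extractor with a recurrent network that labels the surgical phase of each frame. The following hedged sketch illustrates that structure; the feature size, number of phases, and clip length are assumptions, not any published model.

```python
# CNN-RNN sketch: per-frame CNN features feed an LSTM that scores the surgical phase of each frame.
import torch
import torch.nn as nn
from torchvision import models

class PhaseRecognizer(nn.Module):
    def __init__(self, n_phases: int = 10, hidden: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # keep the 512-d pooled features per frame
        self.cnn = backbone
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phases)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clip.shape                  # (batch, time, channels, height, width)
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)                       # one phase score vector per frame

clip = torch.randn(1, 8, 3, 224, 224)               # an 8-frame video clip
print(PhaseRecognizer()(clip).shape)                 # torch.Size([1, 8, 10])
```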

Primary Angle-Closure Glaucoma

Primary angle-closure glaucoma (PACG) accounts for 50% of global bilateral blindness due to glaucoma [102]. By 2040, the number of people with PACG will reach 32.04 million worldwide [103]. Objective and quantified assessments of anterior chamber depth (ACD) and anterior chamber angle (ACA) are important. To detect a shallow anterior chamber, DL methods have been developed using anterior segment photographs (AUC = 0.86) [104] and fundus photographs (AUC = 0.987) [105]. Based on images obtained using ultrasound biomicroscopy, automatic AI methods have been applied for ACA analysis [106–108]. However, ultrasound biomicroscopy only shows a cross-section of the localized ACA. The development of SS-OCT makes it possible to capture the 3D structure of the ACA. Combining AI algorithms with SS-OCT images, automatic classification for ACA evaluation is becoming more efficient for PACG diagnosis [109]. Using SS-OCT images, Pham et al. developed a deep convolutional neural network (DCNN) for discrimination of the scleral spur, iris, corneoscleral shell, and anterior chamber, with a Dice coefficient of 95.7% [110]. In addition, Liu et al. tested the reproducibility of a DL algorithm to recognize the scleral spur and anterior chamber angle [111]. The repeatability coefficients were 0.049–0.058 mm for structure detection. Randhawa et al. tested the generalizability of DL algorithms to detect gonioscopic angle closure in three independent patient populations, with AUCs of 0.894–0.992 [112]. Porporato et al. validated a DL algorithm for 360° angle assessment, with an AUC of 0.85 [113]. Li et al. proposed a DL method using SS-OCT images for classification of open angle, narrow angle, and angle closure (sensitivity = 0.989, specificity = 0.995) [114]. Similarly, Xu et al. tested a DL classifier for primary angle closure disease, the AUC of which reached 0.964 [115]. The combination of AI analysis and SS-OCT imaging might serve as a useful tool for the management of angle-closure glaucoma.

Uncorrected Refractive Error

Uncorrected refractive error is associated with visual impairment worldwide [6]. Timely prediction of refractive error, including severe myopia and hyperopia, is essential for reducing the risks of retinal diseases, glaucoma, and amblyopia [116, 117]. Varadarajan et al. evaluated a DL method to estimate refractive error from retinal fundus images; the mean absolute error of the spherical equivalent (SE) was 0.56–0.91 diopters [118]. Yoo et al. evaluated a DL model to estimate uncorrected refractive error using posterior segment OCT (PS-OCT) images, which yielded an AUC of 0.813 for high myopia detection [119]. Chun et al. tested a DL-based system for refractive error using photorefraction images captured by a smartphone, with an overall accuracy of 81.6% [120]. With the application of ultrawide-field fundus images, Yang et al. proved the feasibility of predicting refractive error in myopic patients with DL models [121]. Overall, for refractive error prediction, AI analysis of fundus and PS-OCT images showed a clear focus of attention on morphological changes of the macula and optic nerve head. However, the accuracy of exact refractive error prediction may need further improvement, taking more real-world factors into account.

Limitations and Future Applications

Combined with a considerable amount of relevant data and images, AI shows significant promise in clinical diagnosis and decision-making. However, the way AI algorithms elaborate underlying features and classify different diseases can remain a black box, and it is uncertain whether these internal representations agree with real-world findings [5]. The Standard Protocol Items: Recommendations for Interventional Trials-AI (SPIRIT-AI) and Consolidated Standards of Reporting Trials-AI (CONSORT-AI) Steering Groups presented the need for consensus-based reporting guidelines for AI-related interventions [122]. These guidelines should improve both the consistency of medical professionals and the effectiveness of regulatory authorities. Therefore, AI interventions still face potential challenges before being introduced into clinical practice.
Multimodal inputs, such as structural images and functional data, will be a future trend in AI development [123]. Multiple input types of clinical tests are closer to the real world. However, more inputs also require more training samples to avoid overfitting. Moreover, the weight of each input should be considered when performing the integration.
Up to now, most AI algorithms have been trained on samples far removed from clinical reality. AI algorithms are often not directly comparable because different databases are used. An accessible public dataset of multi-ethnicity patient cohorts is key to enhancing the generalizability of AI algorithms [122, 124]. Unified data may thus be beneficial for the optimization and comparison of different AI models.

Conclusions

The application of AI in anterior segment diseases is promising (Table 2). With advanced developments in AI algorithms, early diagnosis of debilitating anterior segment disorders and prediction of treatment outcomes, in the fields of cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction, will make great strides. Although AI interventions face potential challenges before their full application in clinical practice, AI technologies will make a significant impact on intelligent healthcare in the future.
Table 2
Summary of AI applications in anterior segment diseases

Cornea
- Infectious keratitis: recognizing the causes of infectious keratitis; distinguishing keratitis and improving diagnostic accuracy
- Keratoplasty: deciding on possibilities and methods to perform keratoplasty; detecting the surgical effect
- Corneal subbasal nerve neuropathy: classifying corneal subbasal plexus changes of diabetic peripheral neuropathy
- Cornea epithelium and endothelium: differentiating abnormal corneal epithelium; estimating corneal endothelium parameters with guttae

Refractive surgery
- Early keratoconus: discriminating early keratoconus from normal eyes
- Surgery outcomes: predicting refractive surgical results; supporting clinical decision-making

Lens
- Age-related cataract: cataract diagnosis; cataract grading
- IOL prediction: IOL formulas; IOL power prediction models
- Pediatric cataract: pediatric cataract diagnosis; strategizing treatment and management
- Training and monitoring: identifying cataract surgery steps; confirming surgical information

Primary angle-closure glaucoma
- Assessing anterior chamber depth
- Capturing 3D structure of anterior chamber angle

Uncorrected refractive error
- Predicting refractive error

Acknowledgements

Funding

No funding or sponsorship was received for this study or publication of this article.

Authorship

All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work, and have given their approval for this version to be published.

Author Contributions

Concept and design: Ke YAO and Andrzej Grzybowski; Literature search: Zhe XU, Jia XU, Ce SHI, Wen XU, Xiuming JIN, Wei HAN, and Kai JIN; Manuscript preparation: Zhe XU, Ce SHI, Andrzej Grzybowski, and Ke YAO.

Disclosures

All named authors confirm that they have no conflicts of interest to disclose.

Compliance with Ethics Guidelines

This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.

Data Availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
References
1. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol. 2019;28:73–81.
2. Rampat R, Deshmukh R, Chen X, et al. Artificial intelligence in cornea, refractive surgery, and cataract: basic principles, clinical applications, and future directions. Asia Pac J Ophthalmol (Phila). 2021;10:268–81.
3. Siddiqui AA, Ladas JG, Lee JK. Artificial intelligence in cornea, refractive, and cataract surgery. Curr Opin Ophthalmol. 2020;31:253–60.
4. Ting DSJ, Foo VH, Yang LWY, et al. Artificial intelligence for anterior segment diseases: emerging applications in ophthalmology. Br J Ophthalmol. 2021;105:158–68.
5. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103:167–75.
6. Flaxman SR, Bourne RRA, Resnikoff S, et al. Global causes of blindness and distance vision impairment 1990–2020: a systematic review and meta-analysis. Lancet Glob Health. 2017;5:e1221–34.
7. Ting DSJ, Ho CS, Deshmukh R, Said DG, Dua HS. Infectious keratitis: an update on epidemiology, causative microorganisms, risk factors, and antimicrobial resistance. Eye (Lond). 2021;35:1084–101.
8. McCarthy J, Minsky M, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 2006;27:12–4.
9. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2:719–31.
11. Hinton G. Deep learning-a technology with the potential to transform health care. JAMA. 2018;320:1101–2.
12. Austin A, Lietman T, Rose-Nussbaumer J. Update on the management of infectious keratitis. Ophthalmology. 2017;124:1678–89.
13. Lin A, Rhee MK, Akpek EK, et al. Bacterial keratitis preferred practice pattern®. Ophthalmology. 2019;126:P1–P55.
14. Ung L, Bispo PJM, Shanbhag SS, Gilmore MS, Chodosh J. The persistent dilemma of microbial keratitis: global burden, diagnosis, and antimicrobial resistance. Surv Ophthalmol. 2019;64:255–71.
15. Saini JS, Jain AK, Kumar S, Vikal S, Pankaj S, Singh S. Neural network approach to classify infective keratitis. Curr Eye Res. 2003;27:111–6.
16. Lv J, Zhang K, Chen Q, et al. Deep learning-based automated diagnosis of fungal keratitis with in vivo confocal microscopy images. Ann Transl Med. 2020;8:706.
17. Liu Z, Cao Y, Li Y, et al. Automatic diagnosis of fungal keratitis using data augmentation and image fusion with deep convolutional neural network. Comput Methods Progr Biomed. 2020;187:105019.
18.
19. Hung N, Shih AK, Lin C, et al. Using slit-lamp images for deep learning-based identification of bacterial and fungal keratitis: model development and validation with different convolutional neural networks. Diagnostics (Basel). 2021;11:1246.
20. Ghosh AK, Thammasudjarit R, Jongkhajornpong P, Attia J, Thakkinstian A. Deep learning for discrimination between fungal keratitis and bacterial keratitis: DeepKeratitis. Cornea. 2022;41:616–22.
21.
22. Li W, Yang Y, Zhang K, et al. Dense anatomical annotation of slit-lamp images improves the performance of deep learning for the diagnosis of ophthalmic disorders. Nat Biomed Eng. 2020;4:767–77.
23.
24. Loo J, Kriegel MF, Tuohy MM, et al. Open-source automatic segmentation of ocular structures and biomarkers of microbial keratitis on slit-lamp photography images using deep learning. IEEE J Biomed Health Inform. 2021;25:88–99.
25. Gain P, Jullienne R, He Z, et al. Global survey of corneal transplantation and eye banking. JAMA Ophthalmol. 2016;134:167–73.
26. Yousefi S, Takahashi H, Hayashi T, et al. Predicting the likelihood of need for future keratoplasty intervention using artificial intelligence. Ocul Surf. 2020;18:320–5.
27. Hayashi T, Masumoto H, Tabuchi H, et al. A deep learning approach for successful big-bubble formation prediction in deep anterior lamellar keratoplasty. Sci Rep. 2021;11:18559.
28. Treder M, Lauermann JL, Alnawaiseh M, Eter N. Using deep learning in automated detection of graft detachment in Descemet membrane endothelial keratoplasty: a pilot study. Cornea. 2019;38:157–61.
29. Heslinga FG, Alberti M, Pluim JPW, Cabrerizo J, Veta M. Quantifying graft detachment after Descemet's membrane endothelial keratoplasty with deep convolutional neural networks. Transl Vis Sci Technol. 2020;9:48.
30. Pan J, Liu W, Ge P, et al. Real-time segmentation and tracking of excised corneal contour by deep neural networks for DALK surgical navigation. Comput Methods Progr Biomed. 2020;197:105679.
31. Vigueras-Guillén JP, van Rooij J, Engel A, Lemij HG, van Vliet LJ, Vermeer KA. Deep learning for assessing the corneal endothelium from specular microscopy images up to 1 year after ultrathin-DSAEK surgery. Transl Vis Sci Technol. 2020;9:49.
32. Burgess J, Frank B, Marshall A, et al. Early detection of diabetic peripheral neuropathy: a focus on small nerve fibres. Diagnostics (Basel). 2021;11:165.
33. Scarpa F, Colonna A, Ruggeri A. Multiple-image deep learning analysis for neuropathy detection in corneal nerve images. Cornea. 2020;39:342–7.
34. Williams BM, Borroni D, Liu R, et al. An artificial intelligence-based deep learning algorithm for the diagnosis of diabetic neuropathy using corneal confocal microscopy: a development and validation study. Diabetologia. 2020;63:419–30.
35. Preston FG, Meng Y, Burgess J, et al. Artificial intelligence utilising corneal confocal microscopy for the diagnosis of peripheral neuropathy in diabetes mellitus and prediabetes. Diabetologia. 2022;65:457–66.
36. Wirostko B, Rafii M, Sullivan DA, Morelli J, Ding J. Novel therapy to treat corneal epithelial defects: a hypothesis with growth hormone. Ocul Surf. 2015;13:204–212.e201.
37. Noor SSM, Michael K, Marshall S, Ren J. Hyperspectral image enhancement and mixture deep-learning classification of corneal epithelium injuries. Sensors (Basel). 2017;17:2644.
39. Vigueras-Guillén JP, van Rooij J, van Dooren BTH, et al. DenseUNets with feedback non-local attention for the segmentation of specular microscopy images of the corneal endothelium with guttae. Sci Rep. 2022;12:14035.
40.
41. Santodomingo-Rubido J, Carracedo G, Suzaki A, Villa-Collar C, Vincent SJ, Wolffsohn JS. Keratoconus: an updated review. Cont Lens Anterior Eye. 2022;45:101559.
43. Chan C, Saad A, Randleman JB, et al. Analysis of cases and accuracy of 3 risk scoring systems in predicting ectasia after laser in situ keratomileusis. J Cataract Refract Surg. 2018;44:979–92.
44. Klein SR, Epstein RJ, Randleman JB, Stulting RD. Corneal ectasia after laser in situ keratomileusis in patients without apparent preoperative risk factors. Cornea. 2006;25:388–403.
45. Smolek MK, Klyce SD. Current keratoconus detection methods compared with a neural network approach. Invest Ophthalmol Vis Sci. 1997;38:2290–9.
46. Accardo PA, Pensiero S. Neural network-based system for early keratoconus detection from corneal topography. J Biomed Inform. 2002;35:151–9.
47. Saad A, Gatinel D. Evaluation of total and corneal wavefront high order aberrations for the detection of forme fruste keratoconus. Invest Ophthalmol Vis Sci. 2012;53:2978–92.
48. Kovács I, Miháltz K, Kránitz K, et al. Accuracy of machine learning classifiers using bilateral data from a Scheimpflug camera for identifying eyes with preclinical signs of keratoconus. J Cataract Refract Surg. 2016;42:275–83.
49. Ruiz Hidalgo I, Rodriguez P, Rozema JJ, et al. Evaluation of a machine-learning classifier for keratoconus detection based on Scheimpflug tomography. Cornea. 2016;35:827–32.
50. Lopes BT, Ramos IC, Salomão MQ, et al. Enhanced tomographic assessment to detect corneal ectasia based on artificial intelligence. Am J Ophthalmol. 2018;195:223–32.
51. Smadja D, Touboul D, Cohen A, et al. Detection of subclinical keratoconus using an automated decision tree classification. Am J Ophthalmol. 2013;156:237–246.e231.
52. Xie Y, Zhao L, Yang X, et al. Screening candidates for refractive surgery with corneal tomographic-based deep learning. JAMA Ophthalmol. 2020;138:519–26.
53. Xu Z, Feng R, Jin X, et al. Evaluation of artificial intelligence models for the detection of asymmetric keratoconus eyes using Scheimpflug tomography. Clin Exp Ophthalmol. 2022;50:714–23.
54. Chen X, Zhao J, Iselin KC, et al. Keratoconus detection of changes using deep learning of colour-coded maps. BMJ Open Ophthalmol. 2021;6:e000824.
55. Hwang ES, Perez-Straziota CE, Kim SW, Santhiago MR, Randleman JB. Distinguishing highly asymmetric keratoconus eyes using combined Scheimpflug and spectral-domain OCT analysis. Ophthalmology. 2018;125:1862–71.
56. Shi C, Wang M, Zhu T, et al. Machine learning helps improve diagnostic ability of subclinical keratoconus using Scheimpflug and OCT imaging modalities. Eye Vis (Lond). 2020;7:48.
57. Pérez-Rueda A, Jiménez-Rodríguez D, Castro-Luna G. Diagnosis of subclinical keratoconus with a combined model of biomechanical and topographic parameters. J Clin Med. 2021;10:2746.
58. Achiron A, Gur Z, Aviv U, et al. Predicting refractive surgery outcome: machine learning approach with big data. J Refract Surg. 2017;33:592–7.
59. Cui T, Wang Y, Ji S, et al. Applying machine learning techniques in nomogram prediction and analysis for SMILE treatment. Am J Ophthalmol. 2020;210:71–7.
60.
61. Wang W, Yan W, Fotis K, et al. Cataract surgical rate and socioeconomics: a global study. Invest Ophthalmol Vis Sci. 2016;57:5872–81.
62. Chylack LT Jr, Wolfe JK, Singer DM, et al. The Lens Opacities Classification System III. The Longitudinal Study of Cataract Study Group. Arch Ophthalmol. 1993;111:831–6.
63. Panchapakesan J, Cumming RG, Mitchell P. Reproducibility of the Wisconsin cataract grading system in the Blue Mountains Eye Study. Ophthalmic Epidemiol. 1997;4:119–26.
64. Wu X, Huang Y, Liu Z, et al. Universal artificial intelligence platform for collaborative management of cataracts. Br J Ophthalmol. 2019;103:1553–60.
65.
Zurück zum Zitat Gao X, Lin S, Wong TY. Automatic feature learning to grade nuclear cataracts based on deep learning. IEEE Trans Biomed Eng. 2015;62:2693–701.PubMedCrossRef Gao X, Lin S, Wong TY. Automatic feature learning to grade nuclear cataracts based on deep learning. IEEE Trans Biomed Eng. 2015;62:2693–701.PubMedCrossRef
66.
Zurück zum Zitat Xu Y, Gao X, Lin S, et al. Automatic grading of nuclear cataracts from slit-lamp lens images using group sparsity regression. Med Image Comput Comput Assist Interv. 2013;16:468–75.PubMed Xu Y, Gao X, Lin S, et al. Automatic grading of nuclear cataracts from slit-lamp lens images using group sparsity regression. Med Image Comput Comput Assist Interv. 2013;16:468–75.PubMed
67.
Zurück zum Zitat Cheung CY, Li H, Lamoureux EL, et al. Validity of a new computer-aided diagnosis imaging program to quantify nuclear cataract from slit-lamp photographs. Invest Ophthalmol Vis Sci. 2011;52:1314–9.PubMedCrossRef Cheung CY, Li H, Lamoureux EL, et al. Validity of a new computer-aided diagnosis imaging program to quantify nuclear cataract from slit-lamp photographs. Invest Ophthalmol Vis Sci. 2011;52:1314–9.PubMedCrossRef
68.
Zurück zum Zitat Keenan TDL, Chen Q, Agrón E, et al. DeepLensNet: deep learning automated diagnosis and quantitative classification of cataract type and severity. Ophthalmology. 2022;129:571–84.PubMedCrossRef Keenan TDL, Chen Q, Agrón E, et al. DeepLensNet: deep learning automated diagnosis and quantitative classification of cataract type and severity. Ophthalmology. 2022;129:571–84.PubMedCrossRef
69.
Zurück zum Zitat Xu X, Zhang L, Li J, Guan Y, Zhang L. A Hybrid global-local representation CNN model for automatic cataract grading. IEEE J Biomed Health Inform. 2020;24:556–67.PubMedCrossRef Xu X, Zhang L, Li J, Guan Y, Zhang L. A Hybrid global-local representation CNN model for automatic cataract grading. IEEE J Biomed Health Inform. 2020;24:556–67.PubMedCrossRef
70.
Zurück zum Zitat Zhang H, Niu K, Xiong Y, Yang W, He Z, Song H. Automatic cataract grading methods based on deep learning. Comput Methods Progr Biomed. 2019;182: 104978.CrossRef Zhang H, Niu K, Xiong Y, Yang W, He Z, Song H. Automatic cataract grading methods based on deep learning. Comput Methods Progr Biomed. 2019;182: 104978.CrossRef
71.
Zurück zum Zitat Xiong L, Li H, Xu L. An approach to evaluate blurriness in retinal images with vitreous opacity for cataract diagnosis. J Healthc Eng. 2017;2017:5645498.PubMedPubMedCentralCrossRef Xiong L, Li H, Xu L. An approach to evaluate blurriness in retinal images with vitreous opacity for cataract diagnosis. J Healthc Eng. 2017;2017:5645498.PubMedPubMedCentralCrossRef
72.
Zurück zum Zitat Yang JJ, Li J, Shen R, et al. Exploiting ensemble learning for automatic cataract detection and grading. Comput Methods Progr Biomed. 2016;124:45–57.CrossRef Yang JJ, Li J, Shen R, et al. Exploiting ensemble learning for automatic cataract detection and grading. Comput Methods Progr Biomed. 2016;124:45–57.CrossRef
73.
Zurück zum Zitat Grewal DS, Brar GS, Grewal SP. Correlation of nuclear cataract lens density using Scheimpflug images with Lens Opacities Classification System III and visual function. Ophthalmology. 2009;116:1436–43.PubMedCrossRef Grewal DS, Brar GS, Grewal SP. Correlation of nuclear cataract lens density using Scheimpflug images with Lens Opacities Classification System III and visual function. Ophthalmology. 2009;116:1436–43.PubMedCrossRef
74.
Zurück zum Zitat Lim SA, Hwang J, Hwang KY, Chung SH. Objective assessment of nuclear cataract: comparison of double-pass and Scheimpflug systems. J Cataract Refract Surg. 2014;40:716–21.PubMedCrossRef Lim SA, Hwang J, Hwang KY, Chung SH. Objective assessment of nuclear cataract: comparison of double-pass and Scheimpflug systems. J Cataract Refract Surg. 2014;40:716–21.PubMedCrossRef
75.
Zurück zum Zitat Zéboulon P, Panthier C, Rouger H, Bijon J, Ghazal W, Gatinel D. Development and validation of a pixel wise deep learning model to detect cataract on swept-source optical coherence tomography images. J Optom. 2022;15:43–39.CrossRef Zéboulon P, Panthier C, Rouger H, Bijon J, Ghazal W, Gatinel D. Development and validation of a pixel wise deep learning model to detect cataract on swept-source optical coherence tomography images. J Optom. 2022;15:43–39.CrossRef
77.
Zurück zum Zitat Kane JX, Van Heerden A, Atik A, Petsoglou C. Accuracy of 3 new methods for intraocular lens power selection. J Cataract Refract Surg. 2017;43:333–9.PubMedCrossRef Kane JX, Van Heerden A, Atik A, Petsoglou C. Accuracy of 3 new methods for intraocular lens power selection. J Cataract Refract Surg. 2017;43:333–9.PubMedCrossRef
78.
Zurück zum Zitat Tsessler M, Cohen S, Wang L, Koch DD, Zadok D, Abulafia A. Evaluating the prediction accuracy of the Hill-RBF 30 formula using a heteroscedastic statistical method. J Cataract Refract Surg. 2022;48:37–43.PubMedCrossRef Tsessler M, Cohen S, Wang L, Koch DD, Zadok D, Abulafia A. Evaluating the prediction accuracy of the Hill-RBF 30 formula using a heteroscedastic statistical method. J Cataract Refract Surg. 2022;48:37–43.PubMedCrossRef
79.
Zurück zum Zitat Connell BJ, Kane JX. Comparison of the Kane formula with existing formulas for intraocular lens power selection. BMJ Open Ophthalmol. 2019;4: e000251.PubMedPubMedCentralCrossRef Connell BJ, Kane JX. Comparison of the Kane formula with existing formulas for intraocular lens power selection. BMJ Open Ophthalmol. 2019;4: e000251.PubMedPubMedCentralCrossRef
80.
Zurück zum Zitat Ladas JG, Siddiqui AA, Devgan U, Jun AS. A 3-D “Super Surface” combining modern intraocular lens formulas to generate a “super formula” and maximize accuracy. JAMA Ophthalmol. 2015;133:1431–6.PubMedCrossRef Ladas JG, Siddiqui AA, Devgan U, Jun AS. A 3-D “Super Surface” combining modern intraocular lens formulas to generate a “super formula” and maximize accuracy. JAMA Ophthalmol. 2015;133:1431–6.PubMedCrossRef
81.
Zurück zum Zitat Debellemanière G, Dubois M, Gauvin M, et al. The PEARL-DGS formula: the development of an open-source machine learning-based thick IOL calculation formula. Am J Ophthalmol. 2021;232:58–69.PubMedCrossRef Debellemanière G, Dubois M, Gauvin M, et al. The PEARL-DGS formula: the development of an open-source machine learning-based thick IOL calculation formula. Am J Ophthalmol. 2021;232:58–69.PubMedCrossRef
82.
Zurück zum Zitat Clarke GP, Kapelner A. The Bayesian additive regression trees formula for safe machine learning-based intraocular lens predictions. Front Big Data. 2020;3: 572134.PubMedPubMedCentralCrossRef Clarke GP, Kapelner A. The Bayesian additive regression trees formula for safe machine learning-based intraocular lens predictions. Front Big Data. 2020;3: 572134.PubMedPubMedCentralCrossRef
83.
Zurück zum Zitat Hipólito-Fernandes D, Elisa Luís M, Gil P, et al. VRF-G, a new intraocular lens power calculation formula: a 13-formulas comparison study. Clin Ophthalmol. 2020;14:4395–402.PubMedPubMedCentralCrossRef Hipólito-Fernandes D, Elisa Luís M, Gil P, et al. VRF-G, a new intraocular lens power calculation formula: a 13-formulas comparison study. Clin Ophthalmol. 2020;14:4395–402.PubMedPubMedCentralCrossRef
84.
Zurück zum Zitat Kurochkin P, Weiss R, Chuck RS, Fay J, Yong C, Lee JK. A novel method of intraocular lens power selection in cataract surgery using a Markov Chain Monte Carlo Simulator. Investig Ophthalmol Vis Sci. 2015;56:2977–2977. Kurochkin P, Weiss R, Chuck RS, Fay J, Yong C, Lee JK. A novel method of intraocular lens power selection in cataract surgery using a Markov Chain Monte Carlo Simulator. Investig Ophthalmol Vis Sci. 2015;56:2977–2977.
85.
Zurück zum Zitat Carmona González D, Palomino BC. Accuracy of a new intraocular lens power calculation method based on artificial intelligence. Eye (Lond). 2021;35:517–22.PubMedCrossRef Carmona González D, Palomino BC. Accuracy of a new intraocular lens power calculation method based on artificial intelligence. Eye (Lond). 2021;35:517–22.PubMedCrossRef
86.
Zurück zum Zitat Li T, Stein J, Nallasamy N. Evaluation of the Nallasamy formula: a stacking ensemble machine learning method for refraction prediction in cataract surgery. Br J Ophthalmol. 2022; bjophthalmol-2021-320599. Epub ahead of print. Li T, Stein J, Nallasamy N. Evaluation of the Nallasamy formula: a stacking ensemble machine learning method for refraction prediction in cataract surgery. Br J Ophthalmol. 2022; bjophthalmol-2021-320599. Epub ahead of print.
87.
Zurück zum Zitat Li T, Reddy A, Stein JD, Nallasamy N. Ray tracing intraocular lens calculation performance improved by AI-powered postoperative lens position prediction. Br J Ophthalmol. 2021; bjophthalmol-2021-320283. Epub ahead of print. Li T, Reddy A, Stein JD, Nallasamy N. Ray tracing intraocular lens calculation performance improved by AI-powered postoperative lens position prediction. Br J Ophthalmol. 2021; bjophthalmol-2021-320283. Epub ahead of print.
88.
89.
Zurück zum Zitat Li T, Stein J, Nallasamy N. AI-powered effective lens position prediction improves the accuracy of existing lens formulas. Br J Ophthalmol. 2022;106:1222–6.PubMedCrossRef Li T, Stein J, Nallasamy N. AI-powered effective lens position prediction improves the accuracy of existing lens formulas. Br J Ophthalmol. 2022;106:1222–6.PubMedCrossRef
90.
Zurück zum Zitat Brant AR, Hinkle J, Shi S, et al. Artificial intelligence in global ophthalmology: using machine learning to improve cataract surgery outcomes at Ethiopian outreaches. J Cataract Refract Surg. 2021;47:6–10.PubMedCrossRef Brant AR, Hinkle J, Shi S, et al. Artificial intelligence in global ophthalmology: using machine learning to improve cataract surgery outcomes at Ethiopian outreaches. J Cataract Refract Surg. 2021;47:6–10.PubMedCrossRef
91.
Zurück zum Zitat Sheeladevi S, Lawrenson JG, Fielder AR, Suttle CM. Global prevalence of childhood cataract: a systematic review. Eye (Lond). 2016;30:1160–9.PubMedCrossRef Sheeladevi S, Lawrenson JG, Fielder AR, Suttle CM. Global prevalence of childhood cataract: a systematic review. Eye (Lond). 2016;30:1160–9.PubMedCrossRef
92.
Zurück zum Zitat Gilbert C, Foster A. Childhood blindness in the context of VISION 2020—the right to sight. Bull World Health Organ. 2001;79:227–32.PubMedPubMedCentral Gilbert C, Foster A. Childhood blindness in the context of VISION 2020—the right to sight. Bull World Health Organ. 2001;79:227–32.PubMedPubMedCentral
93.
Zurück zum Zitat Reid JE, Eaton E. Artificial intelligence for pediatric ophthalmology. Curr Opin Ophthalmol. 2019;30:337–46.PubMedCrossRef Reid JE, Eaton E. Artificial intelligence for pediatric ophthalmology. Curr Opin Ophthalmol. 2019;30:337–46.PubMedCrossRef
94.
Zurück zum Zitat Liu X, Jiang J, Zhang K, et al. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network. PLoS ONE. 2017;12: e0168606.PubMedPubMedCentralCrossRef Liu X, Jiang J, Zhang K, et al. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network. PLoS ONE. 2017;12: e0168606.PubMedPubMedCentralCrossRef
95.
Zurück zum Zitat Lin H, Li R, Liu Z, et al. Diagnostic efficacy and therapeutic decision-making capacity of an artificial intelligence platform for childhood cataracts in eye clinics: a multicentre randomized controlled trial. EClinicalMedicine. 2019;9:52–9.PubMedPubMedCentralCrossRef Lin H, Li R, Liu Z, et al. Diagnostic efficacy and therapeutic decision-making capacity of an artificial intelligence platform for childhood cataracts in eye clinics: a multicentre randomized controlled trial. EClinicalMedicine. 2019;9:52–9.PubMedPubMedCentralCrossRef
96.
Zurück zum Zitat Long E, Chen J, Wu X, et al. Artificial intelligence manages congenital cataract with individualized prediction and telehealth computing. NPJ Digit Med. 2020;3:112.PubMedPubMedCentralCrossRef Long E, Chen J, Wu X, et al. Artificial intelligence manages congenital cataract with individualized prediction and telehealth computing. NPJ Digit Med. 2020;3:112.PubMedPubMedCentralCrossRef
97.
Zurück zum Zitat Zhang K, Liu X, Jiang J, et al. Prediction of postoperative complications of pediatric cataract patients using data mining. J Transl Med. 2019;17:2.PubMedPubMedCentralCrossRef Zhang K, Liu X, Jiang J, et al. Prediction of postoperative complications of pediatric cataract patients using data mining. J Transl Med. 2019;17:2.PubMedPubMedCentralCrossRef
98.
Zurück zum Zitat Al Hajj H, Lamard M, Conze PH, et al. CATARACTS: Challenge on automatic tool annotation for cataRACT surgery. Med Image Anal. 2019;52:24–41.PubMedCrossRef Al Hajj H, Lamard M, Conze PH, et al. CATARACTS: Challenge on automatic tool annotation for cataRACT surgery. Med Image Anal. 2019;52:24–41.PubMedCrossRef
99.
Zurück zum Zitat Yu F, Silva Croso G, Kim TS, et al. Assessment of automated identification of phases in videos of cataract surgery using machine learning and deep learning techniques. JAMA Netw Open. 2019;2: e191860.PubMedPubMedCentralCrossRef Yu F, Silva Croso G, Kim TS, et al. Assessment of automated identification of phases in videos of cataract surgery using machine learning and deep learning techniques. JAMA Netw Open. 2019;2: e191860.PubMedPubMedCentralCrossRef
100.
Zurück zum Zitat Yeh HH, Jain AM, Fox O, Wang SY. PhacoTrainer: A multicenter study of deep learning for activity recognition in cataract surgical videos. Transl Vis Sci Technol. 2021;10:23.PubMedPubMedCentralCrossRef Yeh HH, Jain AM, Fox O, Wang SY. PhacoTrainer: A multicenter study of deep learning for activity recognition in cataract surgical videos. Transl Vis Sci Technol. 2021;10:23.PubMedPubMedCentralCrossRef
101.
Zurück zum Zitat Yoo TK, Oh E, Kim HK, et al. Deep learning-based smart speaker to confirm surgical sites for cataract surgeries: a pilot study. PLoS ONE. 2020;15: e0231322.PubMedPubMedCentralCrossRef Yoo TK, Oh E, Kim HK, et al. Deep learning-based smart speaker to confirm surgical sites for cataract surgeries: a pilot study. PLoS ONE. 2020;15: e0231322.PubMedPubMedCentralCrossRef
103.
Zurück zum Zitat Tham YC, Li X, Wong TY, Quigley HA, Aung T, Cheng CY. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014;121:2081–90.PubMedCrossRef Tham YC, Li X, Wong TY, Quigley HA, Aung T, Cheng CY. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014;121:2081–90.PubMedCrossRef
104.
Zurück zum Zitat Qian Z, Xie X, Yang J, et al. Detection of shallow anterior chamber depth from two-dimensional anterior segment photographs using deep learning. BMC Ophthalmol. 2021;21:341.PubMedPubMedCentralCrossRef Qian Z, Xie X, Yang J, et al. Detection of shallow anterior chamber depth from two-dimensional anterior segment photographs using deep learning. BMC Ophthalmol. 2021;21:341.PubMedPubMedCentralCrossRef
105.
Zurück zum Zitat Yoo TK, Ryu IH, Kim JK, Lee IS, Kim HK. A deep learning approach for detection of shallow anterior chamber depth based on the hidden features of fundus photographs. Comput Methods Progr Biomed. 2022;219: 106735.CrossRef Yoo TK, Ryu IH, Kim JK, Lee IS, Kim HK. A deep learning approach for detection of shallow anterior chamber depth based on the hidden features of fundus photographs. Comput Methods Progr Biomed. 2022;219: 106735.CrossRef
106.
Zurück zum Zitat Li W, Chen Q, Jiang Z, et al. Automatic anterior chamber angle measurement for ultrasound biomicroscopy using deep learning. J Glaucoma. 2020;29:81–5.PubMedCrossRef Li W, Chen Q, Jiang Z, et al. Automatic anterior chamber angle measurement for ultrasound biomicroscopy using deep learning. J Glaucoma. 2020;29:81–5.PubMedCrossRef
107.
Zurück zum Zitat Wang W, Wang L, Wang X, Zhou S, Lin S, Yang J. A deep learning system for automatic assessment of anterior chamber angle in ultrasound biomicroscopy images. Transl Vis Sci Technol. 2021;10:21.PubMedPubMedCentralCrossRef Wang W, Wang L, Wang X, Zhou S, Lin S, Yang J. A deep learning system for automatic assessment of anterior chamber angle in ultrasound biomicroscopy images. Transl Vis Sci Technol. 2021;10:21.PubMedPubMedCentralCrossRef
108.
Zurück zum Zitat Shi G, Jiang Z, Deng G, et al. Automatic classification of anterior chamber angle using ultrasound biomicroscopy and deep learning. Transl Vis Sci Technol. 2019;8:25.PubMedPubMedCentralCrossRef Shi G, Jiang Z, Deng G, et al. Automatic classification of anterior chamber angle using ultrasound biomicroscopy and deep learning. Transl Vis Sci Technol. 2019;8:25.PubMedPubMedCentralCrossRef
109.
Zurück zum Zitat Porporato N, Baskaran M, Husain R, Aung T. Recent advances in anterior chamber angle imaging. Eye (Lond). 2020;34:51–9.PubMedCrossRef Porporato N, Baskaran M, Husain R, Aung T. Recent advances in anterior chamber angle imaging. Eye (Lond). 2020;34:51–9.PubMedCrossRef
110.
Zurück zum Zitat Pham TH, Devalla SK, Ang A, et al. Deep learning algorithms to isolate and quantify the structures of the anterior segment in optical coherence tomography images. Br J Ophthalmol. 2021;105:1231–7.PubMedCrossRef Pham TH, Devalla SK, Ang A, et al. Deep learning algorithms to isolate and quantify the structures of the anterior segment in optical coherence tomography images. Br J Ophthalmol. 2021;105:1231–7.PubMedCrossRef
111.
Zurück zum Zitat Liu P, Higashita R, Guo PY, et al. Reproducibility of deep learning based scleral spur localisation and anterior chamber angle measurements from anterior segment optical coherence tomography images. Br J Ophthalmol. 2022;2021:319798 (Epub ahead of print). Liu P, Higashita R, Guo PY, et al. Reproducibility of deep learning based scleral spur localisation and anterior chamber angle measurements from anterior segment optical coherence tomography images. Br J Ophthalmol. 2022;2021:319798 (Epub ahead of print).
112.
Zurück zum Zitat Randhawa J, Chiang M, Porporato N, et al. Generalisability and performance of an OCT-based deep learning classifier for community-based and hospital-based detection of gonioscopic angle closure. Br J Ophthalmol. 2021;2021:319470 (Epub ahead of print). Randhawa J, Chiang M, Porporato N, et al. Generalisability and performance of an OCT-based deep learning classifier for community-based and hospital-based detection of gonioscopic angle closure. Br J Ophthalmol. 2021;2021:319470 (Epub ahead of print).
113.
Zurück zum Zitat Porporato N, Tun TA, Baskaran M, et al. Towards ‘automated gonioscopy': a deep learning algorithm for 360° angle assessment by swept-source optical coherence tomography. Br J Ophthalmol. 2022;106:1387–1392.PubMedCrossRef Porporato N, Tun TA, Baskaran M, et al. Towards ‘automated gonioscopy': a deep learning algorithm for 360° angle assessment by swept-source optical coherence tomography. Br J Ophthalmol. 2022;106:1387–1392.PubMedCrossRef
114.
Zurück zum Zitat Li W, Chen Q, Jiang C, Shi G, Deng G, Sun X. Automatic anterior chamber angle classification using deep learning system and anterior segment optical coherence tomography images. Transl Vis Sci Technol. 2021;10:19.PubMedPubMedCentralCrossRef Li W, Chen Q, Jiang C, Shi G, Deng G, Sun X. Automatic anterior chamber angle classification using deep learning system and anterior segment optical coherence tomography images. Transl Vis Sci Technol. 2021;10:19.PubMedPubMedCentralCrossRef
115.
Zurück zum Zitat Xu BY, Chiang M, Chaudhary S, Kulkarni S, Pardeshi AA, Varma R. Deep learning classifiers for automated detection of gonioscopic angle closure based on anterior segment OCT images. Am J Ophthalmol. 2019;208:273–80.PubMedPubMedCentralCrossRef Xu BY, Chiang M, Chaudhary S, Kulkarni S, Pardeshi AA, Varma R. Deep learning classifiers for automated detection of gonioscopic angle closure based on anterior segment OCT images. Am J Ophthalmol. 2019;208:273–80.PubMedPubMedCentralCrossRef
116.
Zurück zum Zitat Shen L, Melles RB, Metlapally R, et al. The association of refractive error with glaucoma in a multiethnic population. Ophthalmology. 2016;123:92–101.PubMedCrossRef Shen L, Melles RB, Metlapally R, et al. The association of refractive error with glaucoma in a multiethnic population. Ophthalmology. 2016;123:92–101.PubMedCrossRef
117.
Zurück zum Zitat Lavanya R, Kawasaki R, Tay WT, et al. Hyperopic refractive error and shorter axial length are associated with age-related macular degeneration: the Singapore Malay Eye Study. Invest Ophthalmol Vis Sci. 2010;51:6247–52.PubMedCrossRef Lavanya R, Kawasaki R, Tay WT, et al. Hyperopic refractive error and shorter axial length are associated with age-related macular degeneration: the Singapore Malay Eye Study. Invest Ophthalmol Vis Sci. 2010;51:6247–52.PubMedCrossRef
118.
Zurück zum Zitat Varadarajan AV, Poplin R, Blumer K, et al. Deep learning for predicting refractive error from retinal fundus images. Invest Ophthalmol Vis Sci. 2018;59:2861–8.PubMedCrossRef Varadarajan AV, Poplin R, Blumer K, et al. Deep learning for predicting refractive error from retinal fundus images. Invest Ophthalmol Vis Sci. 2018;59:2861–8.PubMedCrossRef
119.
Zurück zum Zitat Yoo TK, Ryu IH, Kim JK, Lee IS. Deep learning for predicting uncorrected refractive error using posterior segment optical coherence tomography images. Eye (Lond). 2022;36:1959–65.PubMedCrossRef Yoo TK, Ryu IH, Kim JK, Lee IS. Deep learning for predicting uncorrected refractive error using posterior segment optical coherence tomography images. Eye (Lond). 2022;36:1959–65.PubMedCrossRef
120.
Zurück zum Zitat Chun J, Kim Y, Shin KY, et al. Deep learning-based prediction of refractive error using photorefraction images captured by a smartphone: Model development and validation study. JMIR Med Inform. 2020;8: e16225.PubMedPubMedCentralCrossRef Chun J, Kim Y, Shin KY, et al. Deep learning-based prediction of refractive error using photorefraction images captured by a smartphone: Model development and validation study. JMIR Med Inform. 2020;8: e16225.PubMedPubMedCentralCrossRef
121.
Zurück zum Zitat Yang D, Li M, Li W, et al. Prediction of refractive error based on ultrawide field images with deep learning models in myopia patients. Front Med (Lausanne). 2022;9: 834281.PubMedCrossRef Yang D, Li M, Li W, et al. Prediction of refractive error based on ultrawide field images with deep learning models in myopia patients. Front Med (Lausanne). 2022;9: 834281.PubMedCrossRef
122.
Zurück zum Zitat Group C-AaS-AS. Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nat Med. 2019;25:1467–8.CrossRef Group C-AaS-AS. Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nat Med. 2019;25:1467–8.CrossRef
123.
Zurück zum Zitat Rahimy E. Deep learning applications in ophthalmology. Curr Opin Ophthalmol. 2018;29:254–60.PubMedCrossRef Rahimy E. Deep learning applications in ophthalmology. Curr Opin Ophthalmol. 2018;29:254–60.PubMedCrossRef
124.
Metadata
Title: Artificial Intelligence for Anterior Segment Diseases: A Review of Potential Developments and Clinical Applications
Authors: Zhe Xu, Jia Xu, Ce Shi, Wen Xu, Xiuming Jin, Wei Han, Kai Jin, Andrzej Grzybowski, Ke Yao
Publication date: 08.03.2023
Publisher: Springer Healthcare
Published in: Ophthalmology and Therapy, Issue 3/2023
Print ISSN: 2193-8245
Electronic ISSN: 2193-6528
DOI: https://doi.org/10.1007/s40123-023-00690-4
