Corneal Pachymetry by AS-OCT after Descemet's Membrane Endothelial Keratoplasty
Friso G. Heslinga, Ruben T. Lucassen, Myrthe A. van den Berg, Luuk van der Hoek, Josien P.W. Pluim, Javier Cabrerizo, Mark Alberti, Mitko Veta
Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
Ophthalmology Department, Rigshospitalet - Glostrup, Copenhagen, Denmark
Copenhagen Eye Foundation, Copenhagen, Denmark
* [email protected]

ABSTRACT
Corneal thickness (pachymetry) maps can be used to monitor restoration of corneal endothelial function, for example after Descemet's membrane endothelial keratoplasty (DMEK). Automated delineation of the corneal interfaces in anterior segment optical coherence tomography (AS-OCT) can be challenging for corneas that are irregularly shaped due to pathology, or as a consequence of surgery, leading to incorrect thickness measurements. In this research, deep learning is used to automatically delineate the corneal interfaces and measure corneal thickness with high accuracy in post-DMEK AS-OCT B-scans. Three different deep learning strategies were developed based on 960 B-scans from 68 patients. On an independent test set of 320 B-scans, corneal thickness could be measured with an error of 13.98 to 15.50 µm for the central 9 mm range, which is less than 3% of the average corneal thickness. The accurate thickness measurements were used to construct detailed pachymetry maps. Moreover, follow-up scans could be registered based on anatomical landmarks to obtain differential pachymetry maps. These maps may enable a more comprehensive understanding of the restoration of the endothelial function after DMEK, where thickness often varies throughout different regions of the cornea, and subsequently contribute to a standardized postoperative regime.

Introduction
Corneal thickness is a key biomarker for corneal disorders, including Fuchs' endothelial dystrophy, keratoconus, and keratitis. Measurements of the corneal thickness, called pachymetry, enable detection of thickness changes that are indicative of restoration of corneal endothelial function after surgical treatment. For visualization of the restoring cornea, anterior segment optical coherence tomography (AS-OCT) has become the preferred imaging modality due to its high resolution and reproducibility. While current OCT software works well for delineating the boundaries of healthy corneas, it often fails for corneas that are irregularly shaped due to pathology, or as a consequence of surgery (Figure 2). Manual correction of the delineation mistakes is time consuming and not practical for a clinical setting.

In recent years, automated image analysis using deep learning has shown promise for ophthalmic applications, including the analysis of AS-OCT images. Deep learning is a subset of machine learning techniques with models that contain many (typically millions of) trainable weights. These weights are iteratively updated with respect to a loss function that compares model predictions to ground truth labels. In contrast with classical machine learning techniques, no handcrafted features have to be selected, as the relevant features are automatically learned from the (image) data. Recent work already showed the potential of deep learning for corneal pachymetry by AS-OCT, specifically for keratoconus. We hypothesized that a similar approach could be used for corneal pachymetry in cases with an irregular inner corneal curvature and/or structures that look similar to the corneal boundaries, both of which can lead to delineation failures by standard AS-OCT software.

In this study we focus on OCT scans acquired after Descemet's membrane endothelial keratoplasty (DMEK). During DMEK, the diseased corneal endothelium and Descemet's membrane are replaced with a donor graft.
After placement of the graft, a gas bubble is injected into the anterior chamber to support graft attachment to the host cornea. Both the procedural gas bubble and donor graft can mimic the appearance of the corneal interface and result in incorrect delineation. We compare three different deep learning techniques that were developed or used for ophthalmology applications and shown to be highly effective. We validate our thickness measurements for the central 9 mm diameter (Figure 1), whereas previous work only did so for 3.1 mm. This is essential to assess corneal regeneration after DMEK surgery, which uses a graft of ∼8.5 mm.

Figure 1.
Single image (B-scan) from an AS-OCT scan, showing the cornea and the anterior chamber. This B-scan was cropped centrally and horizontally aligned as reported by Heslinga & Alberti. Manual delineations of the corneal interfaces are shown in red. Corneal thickness is measured as the distance between the anterior and posterior interface, perpendicular to the anterior interface. The blue lines illustrate a subset of these thickness measurements. For evaluation of the thickness measurements, we distinguish the central 3 mm, 6 mm, and 9 mm diameter with respect to the corneal apex.

In addition, we construct differential pachymetry maps by registering follow-up scans based on anatomical landmarks; this aligns the center of the cornea in subsequent images and visualizes thickness differences over time.

Results
AS-OCT data & annotations
The AS-OCT scans used in this study were collected as part of a randomized controlled trial conducted at the Department of Ophthalmology, Rigshospitalet – Glostrup, Denmark. The trial was designed to compare air and sulfur hexafluoride (SF6) in DMEK surgery in patients with Fuchs' endothelial dystrophy or pseudophakic bullous keratopathy. Repeat DMEK procedures and patients with prior keratoplasty were excluded. A total of 80 swept-source AS-OCT scans (CASIA2; Tomey Corp., Nagoya, Japan) from 68 participants were acquired either immediately after surgery, one week after surgery, or both. Each scan consists of 16 images (B-scans) acquired in a radial pattern, corresponding to 1280 B-scans in total.

AS-OCT scans were preprocessed as reported by Heslinga & Alberti. In brief, a deep learning-based localization model was applied to each B-scan to identify the scleral spur, a landmark in the anterior chamber of the eye. Per full AS-OCT scan, an ellipse was fitted through the scleral spur points of all 16 B-scans to ensure that the points lie in the same plane and to refine the point locations. B-scans were horizontally aligned and cropped based on the scleral spur locations, centering around the corneal apex (Figure 1). Final crop sizes were 960 by 384 pixels (width by height) with a pixel size of 15.0 µm.

For each B-scan, the anterior corneal interface was annotated inside a 12 mm diameter from the radial center. A diameter of 10 mm was used for the posterior interface. Partial DMEK graft detachments were excluded from the posterior interface annotations. The data set was randomly split on a participant level into a training set of 752 images, a validation set of 208 images, and a test set of 320 images. B-scans of the training and validation set were annotated by one of three observers under supervision of a cornea specialist. The test set was annotated by all three observers to assess inter-observer variability.

Thickness measurements
Three deep learning-based models were trained to locate the anterior and posterior corneal boundaries: (1) a patch-based convolutional neural network (CNN), (2) a U-Net based model, and (3) a CNN with dimension reduction. Details about the model architectures and training process are provided in the Methods section.

Corneal thickness was measured perpendicularly to the anterior interface (see Figure 1), similar to previous work. For each B-scan of the test set, we evaluated thickness for every pixel on the anterior interface inside a 3 mm, 6 mm, and 9 mm diameter. Corneal thickness estimates by the deep learning models were compared with all three sets of annotations (960 in total). Mean absolute errors (MAE) (shown in Table 1) are very similar for the three deep learning models and across the different diameters. The smallest error was found for the U-Net model for the 6 mm diameter (13.84 µm), while the largest error was found for the patch-based CNN for the 9 mm diameter (15.50 µm). Apart from the latter, all mean absolute errors are smaller than one pixel (15.0 µm). The small standard deviations shown in Table 1 indicate high repeatability over multiple training runs.

Figure 2. AS-OCT B-scans collected from patients after DMEK surgery. The green lines represent delineations of the (corneal) interfaces by the built-in software of the OCT system. These examples were selected to show the types of delineation errors encountered. In (a), (b), and (c) the delineation partly follows the DMEK graft (green arrows) instead of the posterior interface. Other types of mistakes are indicated by white arrows: (a) some of the posterior part of the cornea is missed; (b) the delineation does not follow the irregularly shaped interface in the center; (d) the system confuses the boundaries of the gas bubble used in DMEK with the posterior corneal interface.
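The per-diameter MAE evaluation described above can be sketched in a few lines of NumPy. The paper does not publish code, so the helper below is hypothetical; it assumes thickness profiles sampled along the anterior interface, with each sample's signed lateral distance from the apex known in mm:

```python
import numpy as np

def mae_within_diameter(pred, ref, x_mm, diameter_mm):
    """Mean absolute thickness error (µm) for anterior-interface points whose
    lateral position lies within the given diameter around the apex (x = 0)."""
    inside = np.abs(x_mm) <= diameter_mm / 2.0
    return float(np.mean(np.abs(pred[inside] - ref[inside])))

# Toy example: a constant 15 µm offset between prediction and annotation
x_mm = np.linspace(-4.5, 4.5, 601)      # central 9 mm at 15 µm per pixel
ref = np.full_like(x_mm, 550.0)
pred = ref + 15.0
print([mae_within_diameter(pred, ref, x_mm, d) for d in (3.0, 6.0, 9.0)])  # [15.0, 15.0, 15.0]
```

In the paper this comparison is repeated against all three annotation sets and averaged over five training runs to obtain the values in Table 1.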
Table 1.
Mean absolute error in µm of corneal thickness predictions on the test set. Comparisons represent deep learning models vs. annotations (left) and annotator vs. annotator (right). Mean ± SD of 5 training repetitions.

            Models vs. annotations                       Inter-observer comparison
Diameter    Patch-based CNN   U-Net   CNN with dim. red.   1 vs 2   1 vs 3   2 vs 3
3 mm        14.40             13.94   13.94
6 mm        14.80             13.84   14.17
9 mm        15.50             13.98   14.40

Outlier analysis
We further inspected the origin of the deviations in thickness measurements between the deep learning models and the manual annotations by investigating the cases with the largest deviations. Figure 3 shows two example outliers with annotations and delineations by the CNN with dimension reduction. In Figure 3a, remnant tissue from the host or donor prevents the graft from completely attaching at the right posterior side of the cornea. The tissue was correctly excluded from the annotation, but included in the delineation by the network.

Another example is presented in Figure 3b, where the enlarged region shows a shortfall in detail of the annotation compared to the network delineation. Note that the graft is not entirely attached at the right side of the posterior interface, which was correctly recognized by the network but mistakenly included in the annotation. (a) MAE: 31.03 µm; (b) MAE: 22.57 µm.
Figure 3.
Two examples of B-scans, including the annotations and delineations by the CNN with dimension reduction, with substantial deviations in predicted thickness. The rectangular areas are enlarged and displayed to the right of the B-scan. Vertical dashed lines indicate the 9 mm diameter. Note that the thickness was not evaluated outside of the 9 mm diameter.
Pachymetry mapping
Corneal thickness measurements from the 16 radial B-scans were combined to construct pachymetry maps, as shown in Figure 4. Thickness measurements were plotted on a polar coordinate axis with cubic interpolation between the cross-sections. The pachymetry map was divided into three circular regions with diameters of 3 mm, 6 mm, and 9 mm. The outer two rings were divided into octants in which the average thickness is displayed. The inner circle displays the average of the four quadrants, as well as the average apex thickness inside the central 1 mm diameter. Thickness values were mapped to corresponding colors of a discrete colormap. Similar to conventional pachymetry colormaps, a corneal thickness of 600 µm is displayed in green, thinner regions in red, and thicker regions in blue.
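The regional averaging used in these maps can be sketched as follows. This is not the authors' code: `octant_averages` is a hypothetical helper that assumes thickness samples on a polar grid and omits the cubic interpolation and color mapping:

```python
import numpy as np

def octant_averages(r, theta, thickness, r_in, r_out):
    """Mean thickness in each of the eight 45-degree octants of the ring
    r_in <= r < r_out. r is in mm, theta in radians, thickness in µm."""
    in_ring = (r >= r_in) & (r < r_out)
    octant = (np.mod(theta, 2 * np.pi) // (np.pi / 4)).astype(int)
    return np.array([thickness[in_ring & (octant == k)].mean() for k in range(8)])

# Synthetic example: a uniform 600 µm cornea sampled on a polar grid
rr, tt = np.meshgrid(np.linspace(0, 4.5, 50),
                     np.linspace(0, 2 * np.pi, 64, endpoint=False))
th = np.full_like(rr, 600.0)
print(octant_averages(rr.ravel(), tt.ravel(), th.ravel(), 1.5, 3.0))  # eight values of 600.0
```

The same averaging over the ring 3.0–4.5 mm would give the outer octants, and a differential map follows by subtracting two registered thickness grids before averaging.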
Figure 4.
Example of pachymetry maps from one participant in the test set. (a) Pachymetry map of the AS-OCT scan acquired immediately after DMEK; (b) pachymetry map of the AS-OCT scan acquired one week after DMEK; (c) differential pachymetry map of the difference in corneal thickness between (b) and (a).
Effect of training set size
The precise manual annotation of the corneal boundaries is a time-consuming process. For future projects it is useful to know whether a smaller set of annotated data can be used to obtain similar results. We therefore investigated the effect of the size of the annotated training set on the quality of the thickness measurements. Table 2 shows the thickness measurement errors on the test set B-scans when the deep learning models are trained with 100%, 50%, 25%, and 10% of the original training set. For the patch-based CNN and the CNN with dimension reduction we found only a marginal increase in the mean absolute error (of about 2 µm) for the whole central 9 mm range when trained with only 10% of the data. In contrast, the U-Net model performance decreased more substantially for the 9 mm range when trained with 10-50% of the data. Further inspection revealed that this was caused by substantial errors in the 6-9 mm range for a small portion of the B-scans.
Table 2.
Mean absolute error in µm of corneal thickness predictions on the test set for varying partitions of the training set. Mean ± standard deviation of 5 training repetitions.

            Patch-based CNN          U-Net                    CNN with dimension reduction
Training    3 mm    6 mm    9 mm     3 mm    6 mm    9 mm     3 mm    6 mm    9 mm
100%        14.40   14.80   15.50    13.94   13.84   13.98    13.94   14.17   14.40
50%
25%
10%

Discussion
This research shows the feasibility of automated corneal thickness measurements in post-DMEK AS-OCT scans using deep learning. While our data set contains many examples of irregularly shaped corneas and partly detached DMEK grafts, all models are able to measure corneal thickness with an average error of 13.98 to 15.50 µm for the central 9 mm range. In comparison with a typical central corneal thickness of 540 µm, this corresponds to an error of less than 3%. The quality of our thickness measurements is at least on par with manual annotations, as indicated by the inter-observer distance of 13.66 to 23.39 µm for the central 9 mm range. Based on the accurate thickness measurements, detailed pachymetry maps can be constructed. The preprocessing based on the scleral spur locations largely eliminates spatial translation in the coronal plane and sagittal rotation between follow-up scans. Our method does not correct for coronal rotation, but we expect the effect of this type of head tilt or eye rotation to be small.

Since follow-up scans can be registered, differential pachymetry maps can be constructed to monitor thickness changes. This may enable a more comprehensive understanding of the restoration of the endothelial function after DMEK, where thickness often varies throughout different regions of the cornea and the restoration of corneal thickness is associated with the success of the procedure. Typically, only the central corneal thickness (CCT) is reported, while this single parameter does not necessarily reflect restoration of the full cornea after DMEK. The DMEK graft is about 8.5 mm and partial graft detachment happens most frequently in the peripheral region. Detachment can sometimes be ambiguous, even in high-quality OCT imaging, as the graft can be close to the inner cornea yet not attached. Local changes in corneal thickness could then be indicative of corneal restoration and thus graft attachment.
The differential pachymetry maps presented in this research enable both qualitative and quantitative progression tracking within the central 9 mm range, covering the whole region of the DMEK graft.

The data for this study only included post-DMEK AS-OCT scans of patients with previous Fuchs' endothelial dystrophy or pseudophakic bullous keratopathy. The application of the deep learning models presented here to scans from other pathologies or taken in different centers requires further research.

Three deep learning methods were compared in this research. Despite substantial differences in the approaches, the results for the different models were all at least on par with manual annotations. This could indicate that delineating the corneal boundaries is a well-defined problem that is relatively easily solvable using deep learning. This finding is in line with other research aiming at delineating layers in ophthalmic OCT imaging. Based on the thickness results for the models trained with 100% of the training data, none of the models seems to outperform the others. However, construction of a pachymetry map with the patch-based CNN takes considerably longer because of the large number of patches involved and the extensive post-processing steps.

Results of the deep learning models could not directly be compared with the delineations by the built-in software of the OCT system. Nevertheless, 28% of the B-scans were found to contain delineation mistakes by the built-in software leading to a clinically relevant thickness inaccuracy. Moreover, 80% of the AS-OCT scans of the test set contained at least one B-scan with a thickness inaccuracy of clinical relevance. We observed that these errors often occurred at locations of high clinical relevance, such as an irregularly shaped corneal center, or where the DMEK graft detached.

Training the models with smaller partitions of the data provides insight into the value of adding extra annotated data.
Since no improvements were obtained by using 100% of the available training data compared to only 50%, it can be concluded that the performance has saturated, and additional data would not contribute much further improvement. An exception could be the addition of data from rare cases or examples that led to errors in the current test set (e.g. Figure 3a). For the U-Net model, training with 10% or 25% of the training data did result in an increase in the thickness error in the 6 to 9 mm region. Further inspection revealed that this was due to a small number of B-scans for which the segmentation failed. Interestingly, for the CNN with dimension reduction and the patch-based CNN, even training with 80 B-scans from 5 patients does not seem to reduce the performance of the thickness measurements substantially (MAE of 16.08 µm and 16.76 µm, respectively). These results indicate that future projects on delineation of corneal boundaries could already be developed with less annotated data and still obtain reasonable results.

The high resolution of AS-OCT combined with deep learning for automated image processing supports fast and accurate analysis of corneal thickness after DMEK. The (differential) pachymetry maps presented here enable tracking of local corneal thickness changes indicative of corneal restoration. As such, these tools can contribute to the ongoing research efforts towards further improvement of the DMEK procedure and management of the postoperative regime.

Methods
Models & Training
We implemented three different deep learning models based on recent successful applications related to segmentation or interface delineation in ophthalmic OCT imaging. The first model is based on the patch-based approach by Wang et al. for the identification of five different retinal interfaces in spectral domain OCT images. Using patches of 33 × 33 pixels and a relatively shallow network of 5 trainable layers, Wang et al. achieved localization accuracies of 89-98%. The second model is based on the U-Net architecture, which has become the de facto standard method for medical image segmentation. Multiple adaptations of U-Net were evaluated by dos Santos et al. to segment three corneal layers in images captured with a custom-built ultrahigh-resolution OCT system. Using cross-validation, a mean segmentation accuracy of 99.56% was achieved. The third model was also inspired by U-Net, but modified by Liefers et al. to reduce the dimensionality and output one-dimensional arrays with y-locations of three retinal layers in OCT images. For the localization of these retinal layers, the authors obtained a mean absolute difference between the predictions and annotations of 1.31 pixels.

For our application of corneal interface localization, both the patch-based approach and the CNN with dimension reduction allow for direct delineation of the corneal interfaces. In contrast, the U-Net approach is used to segment the cornea and requires a post-processing step to obtain the interface delineations from the segmented mask. Details about the model adaptations, implementations, and training procedures are described below for each model. Depending on the model requirements we also adapted the preprocessing and post-processing steps. All models were implemented in Keras with TensorFlow backend and optimized using Adam.

Patch-based CNN
The architecture of the patch-based model was similar to that of Wang et al., with two modifications to the convolution kernels (5 × … and 3 × …). Patches of 33 × 33 pixels were extracted for each x-coordinate where the interfaces had been annotated (center 12 mm for the anterior interface and center 10 mm for the posterior interface). Anterior and posterior patches were sampled using the respective annotations as center pixel locations. Similarly, for each x-coordinate within the central 12 mm, a non-interface patch was constructed for one random pixel not part of the interface annotations. All patches (1.70 million for the training set and 0.47 million for the validation set) were extracted prior to training to speed up the training process.

The model was optimized by minimizing the categorical cross-entropy between the pixel ground truth and model predictions. Online data augmentation was added by rotating the patches with a maximum of 30 degrees. Based on preliminary experiments, the model was trained for 20 epochs with a variable learning rate: 0.001 for epochs 1 to 12, 0.0001 for epochs 13 to 16, and 0.00001 for epochs 17 to 20.

For evaluations on the test set, patches were extracted for all pixels within the center 12 mm (width) and processed by the trained model, predicting either anterior interface, posterior interface, or background for the center pixel of the patch. Based on preliminary results on the validation set, the following post-processing steps were performed for the pixels identified as interface: (1) small connected regions (0 - 250 pixels) were removed; (2) the largest connected region was considered to be true; (3) other regions were considered to be part of the true prediction only when they were at the same height as the largest connected region; (4) per interface, y-values of positive pixels were averaged to obtain a single value per x-coordinate; (5) any remaining gaps were filled using linear interpolation of adjacent interface locations.

U-Net
As an alternative to directly delineating the interfaces, a U-Net was implemented to segment the whole cornea. The U-Net consisted of the standard 4 downsampling (and upsampling) segments, and we included batch normalization and residual layers to accelerate training. Binary masks of the cornea were created using the interface annotations. As a preprocessing step, the original images and masks were cropped to 800 by 256 pixels (width by height) and split into a superior and inferior half. This step was included to reduce the size of the input to the U-Net while doubling the number of training examples.

For optimization of the U-Net we experimented with different loss functions (Dice, binary cross-entropy, and weighted binary cross-entropy) and learning rate schedules. We found only minor differences in the results on the validation set and used binary cross-entropy for the final model. We trained for 30 epochs with an initial learning rate of 0.001 that was divided by two every 3 epochs. We also experimented with data augmentation (brightness adaptation and addition of Gaussian noise) but did not identify any improvements on the validation results.

For evaluations on the test set, the inferior and superior crops were processed by the trained U-Net and combined. From the predicted mask, the maximum and minimum y-values were used to reconstruct the anterior and posterior interface, respectively. In contrast to the patch-based model, the predicted y-values of the interfaces only consisted of integers.

CNN with dimension reduction
The architecture of the third model was designed by Liefers et al. to return a one-dimensional array of y-coordinates for a two-dimensional image as network input. The model consists of a downsampling and upsampling path to incorporate a large contextual region. While U-Net uses direct shortcut connections to provide local context, this architecture resorts to so-called funneling subnetworks between the downsampling and upsampling path to resolve the mismatch in activation map height. The original network architecture was designed for images of 512 by 512 pixels. To avoid unnecessary computations, we adapted the architecture to work for images of 512 by 256 pixels. The downsampling path and all funneling subnetworks therefore contain one less downsampling operation, and the upsampling path one less upsampling operation.

For data augmentation, images were randomly translated (≤ 10 pixels) or rotated (≤ 12 degrees) before the inferior and superior crops were made. We also added uniform noise (0 ≤ σ ≤ 1), and sigmoidal contrast changes with a gain between [4, 5]. For evaluations on the test set, the superior and inferior crops were processed and combined by averaging the central overlapping section.
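The noise and contrast augmentations described above can be illustrated with a small sketch. The gain range [4, 5] follows the text, but the sigmoid cutoff of 0.5, the assumption of images scaled to [0, 1], and the noise amplitude are illustrative choices rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng):
    """Random sigmoidal contrast change plus additive uniform noise.

    img is a float image in [0, 1]; cutoff and noise amplitude are
    illustrative assumptions, not the authors' exact settings.
    """
    gain = rng.uniform(4.0, 5.0)
    out = 1.0 / (1.0 + np.exp(gain * (0.5 - img)))   # sigmoidal contrast
    out += rng.uniform(-0.05, 0.05, size=img.shape)  # additive uniform noise
    return np.clip(out, 0.0, 1.0)

img = rng.random((256, 512))   # one superior/inferior crop
aug = augment(img, rng)
print(aug.shape)  # (256, 512)
```

Each crop would be augmented independently per epoch, after the random translation and rotation of the full B-scan.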
Thickness measurements & evaluation
Outputs of the deep learning models were y-values describing the height of the anterior and posterior interface for each x-coordinate within the central 9 mm. The posterior interface of the cornea generally contains more irregular shapes after DMEK. We therefore measured corneal thickness perpendicularly to the anterior interface (see Figure 1). First, the coefficient of proportionality was determined for the anterior interface. A 71-point moving average filter was used to reduce the effect of small deviations from the general curvature. We found that the proportionality coefficient was still affected by local irregularities after filtering with smaller filters, whereas larger filters introduced inaccuracies near the sides. The distance was then measured perpendicularly to a tangent with the corresponding coefficient of proportionality for every pixel on the anterior interface inside a 9 mm diameter.

Performance of the models was measured by comparing the thickness predictions with the thicknesses following from the three sets of manual annotations. The mean absolute error was then calculated for the 3 mm, 6 mm, and 9 mm diameters. To obtain a measure of variation, we trained all models five times from scratch using different random seeds. Mean absolute errors and sample standard deviations shown in Table 1 and Table 2 were based on these five repetitions.

For assessment of the interface delineation errors by the built-in software of the OCT system, all B-scans of the test set were semi-quantitatively assessed by a cornea specialist. With the built-in drawing tool, missed parts of the cornea or areas mistakenly classified as cornea were selected. This was done approximately for the central 9 mm diameter, although these B-scans were not centered or horizontally aligned. Sometimes the posterior delineation was not complete for the peripheral cornea. In such cases no extra misclassified area error was added; these regions were simply ignored.
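The perpendicular thickness computation described at the start of this section can be sketched in NumPy. This is a simplified version: the 71-point moving average mirrors the smoothing described above, but projecting the vertical interface distance onto the anterior normal is exact only when the two interfaces are locally parallel, and the helper names and toy interfaces are hypothetical:

```python
import numpy as np

def moving_average(y, k=71):
    """Centered moving average with edge padding, to smooth the local slope."""
    pad = k // 2
    ypad = np.pad(y, pad, mode="edge")
    return np.convolve(ypad, np.ones(k) / k, mode="valid")

def perpendicular_thickness(anterior, posterior, k=71):
    """Thickness perpendicular to the anterior interface, in pixels.

    The anterior slope comes from the k-point moving average; the vertical
    distance is projected onto the anterior normal (a simplification that is
    exact for locally parallel interfaces).
    """
    slope = np.gradient(moving_average(anterior, k))
    return (posterior - anterior) / np.sqrt(1.0 + slope**2)

x = np.arange(960, dtype=float)
anterior = 50.0 + 1.0 * x        # 45-degree slanted anterior interface
posterior = anterior + 10.0      # vertical offset of 10 pixels
t = perpendicular_thickness(anterior, posterior)
print(round(float(t[480]), 3))   # 10/sqrt(2) ≈ 7.071 at the center
```

Multiplying by the 15 µm pixel size converts the result to micrometers; masking by lateral distance from the apex then yields the 3, 6, and 9 mm evaluations.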
B-scans with a total incorrectly classified area of more than 0.1 mm² were considered to contain a clinically relevant inaccuracy. When this area was larger than 0.25 mm², the inaccuracy was considered severe.

Reduced training set
Partitions of the training set were made by randomly selecting all B-scans from a subset of the study participants. For 50% of the training set this equaled 24 participants (384 B-scans), for 25% 12 participants (192 B-scans), and for 10% 5 participants (80 B-scans). Partitions were randomly sampled for each of the five training repetitions, and the partitions were the same for each deep learning model. All other training parameters were the same as for the models trained with 100% of the training set.
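The participant-level sampling can be sketched as follows. The function is a hypothetical reconstruction (the paper does not publish code); it assumes a mapping from participant to B-scan identifiers and keeps all scans of each selected participant, so no participant's scans are split across partitions:

```python
import random

def participant_split(scans_by_participant, fraction, seed=0):
    """Select all B-scans from a random subset of participants.

    scans_by_participant maps participant -> list of B-scan identifiers;
    fraction is the share of participants to keep (e.g. 0.5, 0.25, 0.1).
    """
    rng = random.Random(seed)
    participants = sorted(scans_by_participant)
    n_keep = max(1, round(fraction * len(participants)))
    keep = rng.sample(participants, n_keep)
    return [scan for p in keep for scan in scans_by_participant[p]]

# Toy example: 47 training participants with 16 B-scans each (752 total)
data = {p: [f"p{p}_b{i}" for i in range(16)] for p in range(47)}
subset = participant_split(data, 0.25)
print(len(subset))  # 12 participants x 16 B-scans = 192
```

Repeating the call with different seeds gives the five random partitions per repetition, reused across the three models.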
References

Kopplin, L. J. et al. Relationship of Fuchs' endothelial corneal dystrophy severity to central corneal thickness. Arch. Ophthalmol., 433–439 (2012).
Patel, S. V., Hodge, D. O., Treichel, E. J., Spiegel, M. R. & Baratz, K. H. Predicting the prognosis of Fuchs endothelial corneal dystrophy by using Scheimpflug tomography. Ophthalmology, 315–323 (2020).
Ambrósio Jr, R., Alonso, R. S., Luz, A. & Coca Velarde, L. G. Corneal-thickness spatial profile and corneal-volume distribution: Tomographic indices to detect keratoconus. J. Cataract. Refract. Surg., 1851–1859 (2006).
Li, Y. et al. Keratoconus diagnosis with optical coherence tomography pachymetry mapping. Ophthalmology, 2159–2166 (2008).
Cook, C. & Langham, M. Corneal thickness in interstitial keratitis. Br. J. Ophthalmol., 301–304 (1953).
Wilhelmus, K. R., Sugar, J., Hyndiuk, R. A. & Stulting, R. D. Corneal thickness changes during herpes simplex virus disciform keratitis. Cornea, 154–157 (2006).
Lim, S. H. Clinical applications of anterior segment optical coherence tomography. J. Ophthalmol.
Wang, S. B., Cornish, E. E., Grigg, J. R. & McCluskey, P. J. Anterior segment optical coherence tomography and its clinical applications. Clin. Exp. Optom., 195–207 (2019).
Lecun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature, 436–444 (2015).
Ting, D. S. W. et al. Artificial intelligence and deep learning in ophthalmology. Br. J. Ophthalmol., 167–175 (2019).
Xu, B. Y., Chiang, M., Pardeshi, A. A., Moghimi, S. & Varma, R. Deep neural network for scleral spur detection in anterior segment OCT images: The Chinese American Eye Study. Transl. Vis. Sci. Technol., 18 (2020).
Fu, H. et al. A deep learning system for automated angle-closure detection in anterior segment optical coherence tomography images. Am. J. Ophthalmol., 37–45 (2019).
Treder, M., Lauermann, J. L., Alnawaiseh, M. & Eter, N. Using deep learning in automated detection of graft detachment in Descemet membrane endothelial keratoplasty: A pilot study. Cornea, 157–161 (2019).
Heslinga, F. G., Alberti, M., Pluim, J. P. W., Cabrerizo, J. & Veta, M. Quantifying graft detachment after Descemet's membrane endothelial keratoplasty with deep convolutional neural networks. Transl. Vis. Sci. Technol. (2020).
Dos Santos, V. A. et al. CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomed. Opt. Express, 622–641 (2019).
Melles, G. R. J., Ong, T. S., Ververs, B. & van der Wees, J. Descemet membrane endothelial keratoplasty (DMEK). Cornea, 987–990 (2006).
Alberti, M. Air versus SF6 for Descemet's membrane endothelial keratoplasty (DMEK). https://clinicaltrials.gov/ct2/show/NCT03407755 (accessed May 9, 2020).
Ang, M. et al. Anterior segment optical coherence tomography. Prog. Retin. Eye Res., 132–156 (2018).
Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv. (2015).
Li, Y., Shekhar, R. & Huang, D. Corneal pachymetry mapping with high-speed optical coherence tomography. Ophthalmology, 792–799 (2006).
Bourges, J. L. et al. Average 3-dimensional models for the comparison of Orbscan II and Pentacam pachymetry maps in normal corneas. Ophthalmology, 2064–2071 (2009).
Ma, R. et al. Distribution and trends in corneal thickness parameters in a large population-based multicenter study of young Chinese adults. Investig. Ophthalmol. Vis. Sci., 3366–3374 (2018).
Hashemi, H. et al. The distribution of corneal thickness in a 40- to 64-year-old population of Shahroud, Iran. Cornea, 1409–1413 (2011).
Vasiliauskaitė, I. et al. Descemet membrane endothelial keratoplasty: Ten-year graft survival and clinical outcomes. Am. J. Ophthalmol., 114–120 (2020).
Röck, T., Bramkamp, M., Bartz-Schmidt, K. U., Röck, D. & Yörük, E. Causes that influence the detachment rate after Descemet membrane endothelial keratoplasty. Graefes Arch. Clin. Exp. Ophthalmol., 2217–2222 (2015).
Bucher, F. et al. Spontaneous long-term course of persistent peripheral graft detachments after Descemet's membrane endothelial keratoplasty. Br. J. Ophthalmol., 768–772 (2015).
Deng, S. X., Sanchez, P. J. & Chen, L. Clinical outcomes of Descemet membrane endothelial keratoplasty using eye bank–prepared tissues. Am. J. Ophthalmol., 590–596 (2015).
Wang, Y. Z., Galles, D., Klein, M., Locke, D. G. & Birch, D. G. Application of a deep machine learning model for automatic measurement of EZ width in SD-OCT images of RP. Transl. Vis. Sci. Technol., 15 (2020).
Liefers, B., González-Gonzalo, C., van Ginneken, B. & Sánchez, C. I. Dense segmentation in selected dimensions: application to retinal optical coherence tomography. Proc. Int. Conf. Med. Imaging with Deep Learn. (2019).
Keras (2015). Software available from keras.io.
TensorFlow: Large-scale machine learning on heterogeneous systems (2015). Software available from tensorflow.org.
Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv:1412.6980 (2014).
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the Inception architecture for computer vision. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) (2016).
Acknowledgements
This research is financially supported by the TTW Perspectief program and Philips Research.
Author contributions statement
F.H., M.A. and M.V. designed the study. M.A. and J.C. provided the material and approved the study. R.L., M.B., L.H. and F.H. implemented the models, conducted the experiments, and analyzed the results. M.A., J.C., J.P. and M.V. reviewed the analysis. All authors reviewed the manuscript.
Competing interests
The author(s) declare no competing interests.
Additional information
Correspondence and requests for materials should be addressed to F.H.