A Thickness Sensitive Vessel Extraction Framework for Retinal and Conjunctival Vascular Tortuosity Analysis
Ashwin De Silva, Member, IEEE, Malsha V. Perera, Member, IEEE, Navodini Wijethilake, Member, IEEE, Saroj Jayasinghe, Nuwan D. Nanayakkara, Member, IEEE, and Anjula De Silva, Member, IEEE
Abstract — Systemic diseases such as diabetes, hypertension and atherosclerosis are among the leading causes of human mortality. Retinal and conjunctival vascular tortuosity has been suggested as a potential biomarker for such systemic diseases. Importantly, tortuosity has been observed to depend on the thickness of these vessels. Therefore, the selective calculation of tortuosity within specific vessel thicknesses is required, depending on the disease being analysed. In this paper, we propose a thickness sensitive vessel extraction framework that is primarily applicable to studies of retinal and conjunctival vascular tortuosity. The framework uses a Convolutional Neural Network based on the IterNet architecture to obtain probability maps of the entire vasculature. These maps are then processed by a multi-scale vessel enhancement technique that exploits both fine and coarse structural vascular details in order to extract vessels of specified thicknesses. We evaluated the proposed framework on four datasets, including DRIVE and SBVPI, and obtained Matthews correlation coefficient values greater than 0.71 for all datasets. In addition, the proposed framework was used to determine the association of diabetes with retinal and conjunctival vascular tortuosity. We observed that the retinal vascular tortuosity (Eccentricity based Tortuosity Index) of the diabetic group was significantly higher ( p < . ) than that of the non-diabetic group, and that the conjunctival vascular tortuosity (Total Curvature normalized by Arc Length) of the diabetic group was significantly lower ( p < . ) than that of the non-diabetic group. These observations agree with the literature, strengthening the suitability of the proposed framework.

Index Terms — Multi-scale Vessel Extraction, Convolutional Neural Networks, Retinal and Conjunctival Vascular Tortuosity, Diabetes
I. INTRODUCTION

It is well known that structural changes of the retinal vasculature are markers of diabetes, diabetic retinopathy, nephropathy, aging, genetic disorders and cardiovascular diseases [1], [2]. Clinical observations suggest that these diseases are linked with vascular tortuosity, which reflects the twisted and curved nature of blood vessels [2]. In particular, retinal fundus images (Fig. 1 (a)) are used to visualize the microvasculature non-invasively, and have been widely used to examine the association of vascular tortuosity with diabetes and diabetic retinopathy [1]. Beyond diabetes-related research, vascular tortuosity has been used in studies of cardiovascular diseases [3], sickle cell retinopathy [4] and central vein occlusion [5].

Apart from the retina, the bulbar conjunctiva that covers the sclera of the eye is also a densely vascularized membrane, and one that can be accessed far more conveniently than the retina. Bulbar conjunctival vessels are primarily derived from the ophthalmic artery and could be affected by the systemic diseases mentioned above [6]. However, to the best of our knowledge, only [7], [8] have explored the relationship between diabetes and the vascular tortuosity of the bulbar conjunctiva.

Despite the popularity of retinal fundus imaging, image acquisition is expensive and requires specialized equipment. Unlike the retina, the sclera can be imaged without such equipment. For example, the studies conducted by Iroshan et al. [7] and Sodi et al. [9] used images of the external eye (Fig. 1 (b)), acquired with a regular digital single-lens reflex (DSLR) camera, to visualize the bulbar conjunctival vasculature. Therefore, based on the hypothesis that bulbar conjunctival vascular tortuosity acts as a biomarker for systemic diseases, external eye images together with an accurate vessel segmentation algorithm could facilitate large scale patient screening.

Vessel segmentation is a critical step that has to be performed prior to calculating vascular tortuosity in both retinal fundus images and external eye images. Executing this task manually is highly laborious and impractical given the high volume of data produced by modern imaging systems. Therefore, numerous automated vessel segmentation methods have been developed to expedite this task. Traditionally, vessel segmentation from retinal fundus images and external eye images was performed using B-COSFIRE (Bar-Selective Combination of Shifted Filter Responses) filter based algorithms [10] and algorithms based on morphological operations [11]. Recently, methods based on Convolutional Neural Networks (CNNs), such as UNet [12], DUNET [13], IterNet [14], R2U-Net [15] and the lightweight attention UNet [16], have become increasingly popular for retinal vessel segmentation due to their improved performance. In contrast, only a few studies have utilized CNNs [17] for segmenting conjunctival vessels. To date, studies related to vascular tortuosity [7], [11], [18] have only used the traditional approaches described above. Since CNNs outperform these traditional methods in terms of vessel segmentation accuracy, it is reasonable to assume that vessels segmented using a CNN based method would yield more reliable tortuosity values.

In both the retina and the bulbar conjunctiva, the tortuous nature of a vessel varies with its thickness [19], [20]. The vessel thicknesses of interest can also depend on the disease condition being studied.

Fig. 1. (a) A retinal fundus image. (b) An image of the external eye.

A. De Silva, M. V. Perera, N. Wijethilake, N. D. Nanayakkara and A. C. De Silva are with the Department of Electronic and Telecommunication Engineering, University of Moratuwa, Sri Lanka (email: {ashwind, malshav, wijethilakemrn.20, nuwan, anjulads}@uom.lk). S. Jayasinghe (Professor of Medicine) is with the Department of Clinical Medicine, Faculty of Medicine, University of Colombo, Sri Lanka (email: [email protected]).
For example, according to [21], the difference between diabetic and non-diabetic retinal vascular tortuosity is more pronounced in relatively thick vessels. The multi-scale vessel enhancement methods proposed by Frangi et al. [22], Sato et al. [23] and Steger et al. [24] are popular image processing techniques for enhancing tubular structures such as vessels while remaining sensitive to structural thickness. To the best of our knowledge, only Owen et al. [3] have employed Steger et al.'s method [24] to measure vessel thicknesses, but their work does not focus on extracting vessels based on thickness. Moreover, the methods mentioned above do not involve CNNs, which clearly outperform traditional methods of vessel segmentation. While CNNs have the potential to perform such a task on their own, they would require multiple datasets with explicitly annotated vessels of specified thicknesses in order to train multiple models. Constructing multiple datasets and training multiple models for different thicknesses would be tedious and computationally expensive. Therefore, we speculate that a CNN trained on a single dataset with annotations of the entire vasculature, paired with a multi-scale vessel enhancement method, could boost the accuracy of extracting vessels of specified thicknesses while eliminating the need to train multiple models on multiple datasets. Nevertheless, no previous work has attempted to combine a CNN with a multi-scale vessel enhancement method.

In this work, we propose a novel framework that can extract vessels of specified thicknesses from retinal and external eye images. Such a framework would be highly beneficial for studies involving the computation of retinal and conjunctival vascular tortuosity. In order to extract retinal and conjunctival vessels, we use an IterNet followed by a multi-scale vessel enhancement method that exploits fine and coarse vascular structural details.
IterNet achieved state-of-the-art performance for retinal vessel segmentation at the time of this study, and we hypothesize that it would also yield good performance in conjunctival vessel segmentation. In the context of the proposed framework, IterNet is used to generate probability maps of the entire vasculature. The multi-scale vessel enhancement method that follows the IterNet ensures that only vessels of the specified thicknesses are extracted. Our framework therefore combines the power of CNNs and multi-scale vessel enhancement methods to automatically and accurately extract vessels of specified thicknesses without requiring multiple datasets or multiple trained models. In the case of conjunctival vessels, prior to vessel extraction, a U-Net is used to accurately segment the scleral region from the external eye images.

In addition, we applied the proposed framework to determine the association of diabetes with the tortuosity of relatively thick retinal and conjunctival vessels. Our findings agreed with the existing literature, fortifying the applicability of the proposed framework to studies of retinal and conjunctival vascular tortuosity.
II. METHODOLOGY
A detailed block diagram of the proposed framework, including results at intermediate steps, is illustrated in Fig. 4. In this section, we describe the segmentation of the scleral region (section II-A and step A in Fig. 4), the generation of vessel probability maps (section II-B and step B in Fig. 4), the extraction of vessels of specified thicknesses (section II-C and steps C to I in Fig. 4) and the calculation of tortuosity (section II-D and step J in Fig. 4).
A. Segmentation of the Scleral Region
Unlike retinal fundus images, which have an inherent circular region of interest, a pre-processing step has to be performed on external eye images to segment the scleral region before segmenting the conjunctival vessels. Accurate segmentation of the sclera from the external eye images is therefore critical for accurate segmentation of conjunctival vessels. In recent years, CNNs have outperformed traditional methods in semantic image segmentation tasks. U-Net [12] is a CNN architecture widely used in biomedical image segmentation. In this work, we use the U-Net architecture illustrated in Fig. 2 to segment the scleral region. This network is referred to as "ScleraUNet" from here onwards to avoid ambiguity.

Fig. 2. The network architecture of the ScleraUNet. All external eye images are reshaped to a fixed input size before being fed to the network. The number of filters in each convolutional layer is stated on top of the respective convolutional block, and below each transposed convolutional layer.

The ScleraUNet is trained using external eye images and their corresponding annotations of scleral regions. Binary cross-entropy loss is used as the loss function, together with the ADAM optimizer [25], to train the network. Random rotations, shifting, flipping, zooming and shearing are used to augment the training data. Details of this training process are described in section III-B.

During inference, the network outputs a binary mask of the scleral region. This mask is then used to isolate the scleral region from the external eye image; we refer to the result as the scleral image.
B. Generation of Vessel Probability Maps
Before extracting the vessels of specified thicknesses from retinal or scleral images, we first generate a vessel probability map that represents the entire underlying vasculature in detail. Each pixel of this map should represent the probability of that pixel belonging to a vessel in the given image. To generate these maps, we use the IterNet architecture (Fig. 3) proposed by Li et al. [14]. IterNet is based on U-Nets and currently holds state-of-the-art performance in retinal vessel segmentation. Hence, we hypothesize that an appropriately tuned IterNet would also yield a good segmentation of the conjunctival vessels.

IterNet contains a base U-Net followed by N mini U-Nets. The base U-Net architecture is similar to the U-Net described in section II-A. Each mini U-Net is a simplified, lightweight version of the base U-Net containing only 8 convolutional blocks. The base U-Net and each mini U-Net output a probability map in which each pixel represents the probability of belonging to a vessel. The base U-Net outputs a coarse probability map (P_0) of the underlying vasculature of a retinal or scleral image. The subsequent mini U-Nets act as a cascaded unit that refines P_0 and outputs probability maps P_1, P_2, ..., P_N. Since the final mini U-Net outputs the most refined probability map, P_N represents the underlying vasculature in the greatest detail and is taken as the final vessel probability map.

An IterNet model trained with retinal images and their corresponding vessel annotations is used as the retinal vessel probability map generator. Similarly, an IterNet model trained with scleral images and their vessel annotations is used as the conjunctival vessel probability map generator.

The total loss L of the network is computed as the weighted sum of the binary cross-entropy losses L_i (i = 0, 1, ..., N):

L = \sum_{i=0}^{N} \theta_i L_i \qquad (1)

where L_0 is the loss of the base U-Net output,
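As a concrete sketch, the weighted loss in Eq. (1) can be written in a few lines of numpy; the function names and the use of a mean per-pixel BCE are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Mean pixel-wise binary cross-entropy between a probability map p
    and a binary ground-truth map y."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def iternet_total_loss(outputs, target, theta):
    """Eq. (1): weighted sum of BCE losses over the base U-Net output
    (i = 0) and the N mini U-Net outputs (i = 1..N)."""
    return sum(t * bce(p, target) for t, p in zip(theta, outputs))
```

With theta = [1, 0.5] this weights the base U-Net loss fully and the mini U-Net loss by half, matching the weighted-sum form of Eq. (1).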
L_j (j = 1, 2, ..., N) is the loss of the j-th mini U-Net output, and θ_i is the weight of loss L_i. The ADAM optimizer is used to train the network. Random rotations, flipping, zooming, shifting, intensity changes and contrast changes are used to augment the training data. The training process of the IterNet is described in section III-C.

After the vessel probability maps are obtained, they are processed together with the original scleral/retinal images to extract the vessels of specified thicknesses, using a multi-scale vessel enhancement method.

C. Extraction of Vessels of Specified Thicknesses
To extract vessels of specified thicknesses, we use a series of steps that takes the original scleral/retinal image and its vessel probability map as input, and returns a binary image containing the vessels of the specified thicknesses. For this purpose, we employ a multi-scale vessel enhancement method based on the Frangi filter [22], which is specialized in enhancing and separating nearby elongated structures of multiple scales [26].

Fig. 3. The IterNet architecture; the base U-Net and mini U-Nets are connected by a series of connections. Black arrows represent the skip connections inside each U-Net. Green arrows indicate that the output of the first convolutional block of the base U-Net is sent to each of the mini U-Nets. Red arrows represent the passing of the concatenated output of the first convolutional blocks of the base U-Net and the first mini U-Net to the remaining mini U-Nets. Note that the convolutional blocks here are identical to the ones illustrated in Fig. 2.
1) Frangi Filter: The Frangi filter outputs a map with enhanced elongated structures (in this case, vessels) using the multi-scale second order local structure (Hessian) of a given image. The Hessian of a grayscale image I at scale σ for a pixel x = [x_1, x_2]^T, x ∈ Ω, where Ω is the set of all pixels in I, is represented by the 2 × 2 matrix H(x, σ) = {h_ij(x, σ)} defined as

h_{ij}(\mathbf{x}, \sigma) = \sigma^2 \, I(\mathbf{x}) * \frac{\partial^2 G(\mathbf{x}, \sigma)}{\partial x_i \, \partial x_j} \qquad (2)

where * denotes convolution and G(\mathbf{x}, \sigma) = (2\pi\sigma^2)^{-1} \exp\left(-\mathbf{x}^T\mathbf{x} / 2\sigma^2\right) is the bivariate Gaussian.

Let λ_{σ,1} and λ_{σ,2} be the eigenvalues of H(x, σ), with |λ_{σ,1}| ≤ |λ_{σ,2}|. The Frangi filter response of I at scale σ for the pixel x is given by the point-wise operator F : Ω × R → R defined as

\forall \mathbf{x} \in \Omega : \; F(I)(\mathbf{x}, \sigma) = \begin{cases} 0 & \lambda_{\sigma,2} > 0 \\ V(\mathbf{x}, \sigma) & \lambda_{\sigma,2} \leq 0 \end{cases} \qquad (3)

where

V(\mathbf{x}, \sigma) = \exp\left(-\frac{R_B^2}{2\beta^2}\right)\left(1 - \exp\left(-\frac{S^2}{2c^2}\right)\right) \qquad (4)

with R_B = |\lambda_{\sigma,1} / \lambda_{\sigma,2}| and S = \sqrt{\lambda_{\sigma,1}^2 + \lambda_{\sigma,2}^2}. R_B is the blobness measure and S is the second order structuredness. β and c are thresholds that control the sensitivity of the line filter to the measures R_B and S respectively. For this study, we set β = 0.5 and c = 0.5 × max_{x∈Ω} ||H(x, σ)||. In this paper, the Frangi filter response of I at scale σ is denoted F(I)(σ).

It is important to note that the scale σ is related to the vessel thickness w (in pixels) by σ = Cw (see Theorem 1 of the appendix), where

C = \frac{1}{2}\left[-W\!\left(-\frac{1}{2}\exp\!\left(-\frac{1}{2} + \log\alpha\right)\right)\right]^{-1/2} \qquad (5)

W is the Lambert W function and α (0 ≤ α < 1) is the surround size ratio of the second derivative of the Gaussian kernel. As α approaches 1, the Frangi filter response for vessels becomes sharper. With the relationship σ = Cw, it follows that F(I)(Cw) provides a map in which the vessels of thickness w in image I are enhanced.
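For illustration, a single-scale Frangi response following Eqs. (2)-(4) can be sketched with numpy/scipy for bright vessels on a dark background; the helper name, the boundary handling implicit in gaussian_filter, and the per-scale choice of c as half the maximum Hessian norm are assumptions in the spirit of the text, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_response(img, sigma, beta=0.5):
    """Single-scale Frangi vesselness (Eqs. (2)-(4)) for a 2-D float image
    with bright vessels on a dark background."""
    # Scale-normalized Hessian entries h_ij = sigma^2 * (I convolved with d2G/dxi dxj)
    hxx = sigma**2 * gaussian_filter(img, sigma, order=(0, 2))
    hxy = sigma**2 * gaussian_filter(img, sigma, order=(1, 1))
    hyy = sigma**2 * gaussian_filter(img, sigma, order=(2, 0))
    # Eigenvalues of the 2x2 symmetric Hessian, sorted so |l1| <= |l2|
    tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    mu = (hxx + hyy) / 2
    e1, e2 = mu + tmp, mu - tmp
    swap = np.abs(e1) > np.abs(e2)
    l1 = np.where(swap, e2, e1)
    l2 = np.where(swap, e1, e2)
    # Blobness R_B and second order structuredness S (Eq. (4))
    rb2 = (l1 / np.where(l2 == 0, 1e-10, l2)) ** 2
    s2 = l1 ** 2 + l2 ** 2
    c = 0.5 * np.sqrt(s2.max()) if s2.max() > 0 else 1.0  # half the max Hessian norm
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(l2 <= 0, v, 0.0)  # Eq. (3): suppress dark-ridge responses
```

A multi-scale map is then the pixel-wise maximum of frangi_response over the scales σ = Cw of interest.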
2) Computing Vessel Redness Maps: We use the green channel of the original scleral/retinal image to compute the vessel redness map, because the contrast of blood vessels is highest in the green channel. To further enhance the vessel contrast, Contrast Limited Adaptive Histogram Equalization (CLAHE) [27] is applied to the green channel image. This contrast enhanced image is then inverted so that the blood vessels appear as bright structures on a dark background. Let I_g be this inverted image.

We are now interested in enhancing vessels of a certain thickness w in I_g, for which we can use the Frangi filter described in section II-C.1. Applying the Frangi filter to I_g, together with a gamma correction (γ_r), yields the vessel redness map V_r(w):

\forall \mathbf{x} \in \Omega : \; V_r(\mathbf{x}, w) = F(I_g)(\mathbf{x}, Cw)^{\gamma_r} \qquad (6)

V_r(w) has a strong emphasis on the degree of vessel redness, but it only provides a coarse enhancement (Fig. 4 (f)) of vessels of a specified thickness and is often unable to preserve fine structural details. Therefore, we compute a separate vessel map with a strong emphasis on vessel structure.
3) Computing Vessel Structural Maps: The vessel structural map is derived from the final mini U-Net output P_N of the IterNet, owing to its detailed representation of the underlying vasculature. Applying the Frangi filter to P_N, together with a gamma correction (γ_s), yields the vessel structural map V_s(w):

\forall \mathbf{x} \in \Omega : \; V_s(\mathbf{x}, w) = F(P_N)(\mathbf{x}, Cw)^{\gamma_s} \qquad (7)

V_s(w) inherits the fine structural details of the vessels in P_N, providing a finer enhancement (Fig. 4 (e)) of vessels of the specified thickness w.
4) Combining Vessel Structural and Redness Maps: The coarse vessel redness map and the fine vessel structural map are combined into a combined vessel map V_c(w) as follows:

\forall \mathbf{x} \in \Omega : \; V_c(\mathbf{x}, w) = \left(V_r(\mathbf{x}, w) \times V_s(\mathbf{x}, w)\right)^{\gamma_c} \qquad (8)

where γ_c stands for a gamma correction. The multiplication ensures that only vessels of the specified thickness w that respond in both the structural and redness maps receive a high response. In this way, V_c(w) provides a map in which vessels of thickness w are enhanced using both coarse and fine structural details. A combined vessel map at a selected thickness w is illustrated in Fig. 4 (g).

To arrive at a map containing enhanced vessels of a set of specified thicknesses W = {w_i | i = 1, 2, ..., |W|}, we obtain V_c(w_i) for each w_i ∈ W and compute V_c(W) as follows:

\forall \mathbf{x} \in \Omega : \; V_c(\mathbf{x}, \mathcal{W}) = \max_{w_i \in \mathcal{W}} V_c(\mathbf{x}, w_i) \qquad (9)

where V_c(W) is a map with enhanced vessels of thicknesses W. We then binarize V_c(W) with a global histogram threshold computed using Otsu's method [28]. An example of V_c(W) and its binarized version are illustrated in Fig. 4 (h) and Fig. 4 (i) respectively.
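The combination in Eqs. (8)-(9) and the subsequent Otsu binarization can be sketched as follows; the function names are illustrative, and the histogram-based Otsu implementation is a generic one rather than the authors' code:

```python
import numpy as np

def combined_maps(vr_maps, vs_maps, gamma_c):
    """Eqs. (8)-(9): per-thickness combined maps V_c(w) = (V_r * V_s)^gamma_c,
    then a pixel-wise maximum over all thicknesses in W."""
    stack = [(vr * vs) ** gamma_c for vr, vs in zip(vr_maps, vs_maps)]
    return np.max(stack, axis=0)

def otsu_threshold(x, bins=256):
    """Global histogram threshold (Otsu): maximize between-class variance."""
    hist, edges = np.histogram(x.ravel(), bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = hist.cumsum()[:-1]                  # pixel count at or below each cut
    m0 = (hist * mids).cumsum()[:-1]         # intensity mass at or below each cut
    total, mass = hist.sum(), (hist * mids).sum()
    w1 = total - w0
    valid = (w0 > 0) & (w1 > 0)
    mu0 = m0 / np.maximum(w0, 1)             # class means on either side of the cut
    mu1 = (mass - m0) / np.maximum(w1, 1)
    between = np.where(valid, w0 * w1 * (mu0 - mu1) ** 2, 0)
    return mids[np.argmax(between)]
```

The binary map is then simply `combined_maps(...) > otsu_threshold(combined_maps(...))`.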
5) Post-processing: The binarized V_c(W) contains small white spots, holes and noisy fragments in addition to the desired vessels of interest. Morphological operations are used to remove the white spots and fill the holes. Then, to remove noisy fragments, we use the connected component based denoising method described below.

First, the set K of connected components of the binarized V_c(W) is found using an iterative flood-fill algorithm. We then compute the normalized difference d_k between the k-th component size N_k and the median component size M as follows:

d_k = \frac{N_k - M}{\max_{k \in \mathcal{K}} |N_k - M|} \qquad (10)
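The component selection based on d_k (with a selection threshold t, as described below) and the trace removal of Eq. (11) might be sketched with scipy's connected-component labelling; using scipy's default 4-connectivity is an assumption here:

```python
import numpy as np
from scipy import ndimage

def keep_large_components(binary, t):
    """Eq. (10): keep component k when its normalized size difference
    d_k = (N_k - M) / max_k |N_k - M| exceeds the selection threshold t."""
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary.astype(bool)
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    med = np.median(sizes)
    denom = np.max(np.abs(sizes - med))
    d = (sizes - med) / denom if denom > 0 else np.zeros_like(sizes)
    keep = np.flatnonzero(d > t) + 1          # component labels to retain
    return np.isin(labels, keep)

def remove_thick_traces(v_w, v_u):
    """Eq. (11): V(W) <- V(W) AND NOT V(U); suppress traces left by
    vessels thicker than the requested range."""
    return v_w & ~v_u
```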
Fig. 4. Block diagram of the proposed framework, together with a selected external eye image illustrating each step. The steps shown above each arrow are: A - segmenting the scleral region (not applicable for retinal images); B - vessel probability map generation; C - taking the green channel of the scleral/retinal image and applying CLAHE followed by inversion; D - computing vessel structural maps for all w_i ∈ W using the vessel probability map; E - computing vessel redness maps for all w_i ∈ W using I_g; F - combining vessel structural and redness maps into combined vessel maps for all w_i ∈ W; G - obtaining V_c(W); H - binarizing V_c(W) with Otsu's method; I - selecting the largest connected components and removing noise; J - skeletonizing the vessels followed by branching point removal.

The higher the d_k, the larger the k-th component. A threshold t is imposed to select the largest connected components (in this case, the vessels of interest) based on the d_k values. With this selection, we remove the noisy fragments and retain a binary map V(W) that contains the vessels of the specified thicknesses.

Another form of noise, inherent to the Frangi filter response, arises when the retinal/scleral image contains vessels thicker than the specified thicknesses W. Let W = {w_n, w_{n+1}, ..., w_m} and let w_l be the largest vessel thickness in the given retinal/scleral image. If w_m < w_l, traces of vessels of thicknesses U = {w_{m+1}, w_{m+2}, ..., w_l} appear in V(W). These traces are removed using the following Boolean operation:

\forall \mathbf{x} \in \Omega : \; V(\mathbf{x}, \mathcal{W}) \leftarrow V(\mathbf{x}, \mathcal{W}) \wedge \left(\sim V(\mathbf{x}, \mathcal{U})\right) \qquad (11)

D. Tortuosity Calculation
After extracting vessels from the retinal/scleral images, we first skeletonize the vessels to reduce their thickness to a single pixel. Next, branching points of the skeletonized vessels are detected and removed. In this context, a branching point is defined as a pixel surrounded by more than two 8-connected pixels. With the branching points removed, we obtain an image in which all the sub-vessels are separated from each other. The tortuosity values of these sub-vessels are then computed using all the tortuosity indices described in [18] and [29].
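Assuming the skeleton has already been computed (e.g. with a standard thinning routine such as skimage's skeletonize), the branching point removal described above can be sketched as a neighbour count over the 8-neighbourhood:

```python
import numpy as np
from scipy import ndimage

def split_at_branch_points(skeleton):
    """Remove branching points: skeleton pixels with more than two
    8-connected skeleton neighbours, so sub-vessels fall apart into
    separate connected components."""
    sk = skeleton.astype(bool)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0                                  # exclude the centre pixel
    neighbours = ndimage.convolve(sk.astype(int), kernel, mode="constant")
    return sk & (neighbours <= 2)
```

Each surviving component can then be passed to the tortuosity indices of [18], [29] as an individual sub-vessel.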
III. EXPERIMENTS AND RESULTS
A. Dataset Description
1) SBVPI Dataset: Sclera Blood Vessels, Periocular and Iris (SBVPI) is a publicly available dataset primarily intended for sclera and periocular recognition research [30], [31]. It consists of 1840 external eye images in 4 gaze directions (straight, left, right and up) belonging to 55 healthy subjects. Of these, the entire conjunctival vasculature is annotated in 128 images and the scleral region is annotated in 1000 images.
2) DRIVE Dataset: Digital Retinal Images for Vessel Extraction (DRIVE) [32] is a publicly available dataset with 40 retinal images belonging to 40 subjects (33 healthy subjects and 7 subjects with mild early diabetic retinopathy), along with corresponding annotations of the entire retinal vasculature.
3) REIDA Dataset: The Retinal and Eye Images for Diabetic Analysis (REIDA) dataset comprises 58 retinal fundus images and 60 external eye images collected from 32 volunteer subjects aged 40-70 years at the National Diabetes Center, Colombo, Sri Lanka. Of the subjects, 14 are diabetic and 18 are non-diabetic; subjects with fasting blood glucose values above a cut-off (in mg dL-1) are categorized as diabetic and the rest as non-diabetic. Retinal fundus images from both eyes of each subject were captured using a ZEISS VISUCAM 524 retinal camera. External eye images containing the superior and inferior bulbar conjunctiva of both eyes were captured using a Canon digital single-lens reflex (DSLR) camera with a 100 mm macro lens, under no special lighting conditions. We denote the retinal image collection as the REIDA Retina (REIDA-R) dataset and the external eye image collection as the REIDA External Eye (REIDA-EE) dataset. The acquisition procedure of the REIDA dataset was approved by the Ethics Review Committee, Faculty of Medicine, University of Colombo, Sri Lanka (Reference: EC-17-132).

The REIDA-EE dataset contains scleral annotations for 40 external eye images. In addition, we manually annotated vessels of specified thicknesses in 8 images from each of the SBVPI, DRIVE, REIDA-R and REIDA-EE datasets. These were used as the test sets to evaluate the performance of extracting vessels of specified thicknesses. Details of these vessel annotations are given in Table I. All annotations were verified by an expert.

We evaluated the proposed framework on the aforementioned datasets in terms of scleral segmentation, vessel probability map generation and extraction of vessels of specified thicknesses. In addition, we applied the framework to the REIDA dataset to determine the association of retinal and conjunctival vascular tortuosity with diabetes.

B. Segmentation of Sclera
The ScleraUNet was evaluated on SBVPI and REIDA-EE, with a separate model trained for each dataset. For SBVPI, 1229, 388 and 223 images were used for training, validation and testing respectively; for REIDA-EE, 25, 5 and 10 images were used. Both models were trained with a learning rate of 0.0001 on an NVIDIA Tesla K80 GPU.

To evaluate the performance of the ScleraUNet, we computed the accuracy (Acc) and the Dice Similarity Coefficient (DSC), defined as Acc = (TP + TN)/(TP + TN + FP + FN) and DSC = 2TP/(2TP + FP + FN), where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives respectively. For SBVPI we obtained Acc = 0.9850 and DSC = 0.9849, and for REIDA-EE we obtained Acc = 0.9779 and DSC = 0.9777.

Fig. 5. Scleral segmentation results for the REIDA-EE (Acc = 0.9779, DSC = 0.9777) and SBVPI (Acc = 0.9850, DSC = 0.9849) datasets.
These accuracy and Dice similarity scores indicate that the ScleraUNet performs sufficiently well for this task. A selected external eye image from each of SBVPI and REIDA-EE, together with the scleral ground truth and the scleral segmentation obtained from the ScleraUNet, is shown in Fig. 5.
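The two reported metrics can be computed directly from the confusion counts; this helper is a generic sketch, not the evaluation code used in the study:

```python
import numpy as np

def acc_dsc(pred, truth):
    """Pixel-wise accuracy and Dice similarity coefficient of binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    acc = (tp + tn) / (tp + tn + fp + fn)
    dsc = 2 * tp / max(2 * tp + fp + fn, 1)   # guard the empty-mask case
    return float(acc), float(dsc)
```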
C. Vessel Probability Map Generation
We configured IterNet to output retinal vessel probability maps as follows: the number of mini U-Nets was set to N = 1, with loss weight θ_0 = 1 for the base U-Net output and a reduced weight θ_1 for the mini U-Net output. We denote this IterNet configuration R-IterNet (R for retina). To train R-IterNet, we used retinal images and their corresponding vessel annotations from the DRIVE dataset; the network was trained on 29 images, validated on 3 images and tested on 5 images.

To output conjunctival vessel probability maps, the number of mini U-Nets was set to N = 2 with loss weights θ_0 = θ_1 = θ_2 = 1. We denote this configuration C-IterNet (C for conjunctiva). This network was trained on scleral images and their corresponding vessel annotations from the SBVPI dataset, using 108 images for training and 10 images each for validation and testing.

The number of mini U-Nets and the loss weights of both R-IterNet and C-IterNet were determined using a grid search over values of N and of θ_0 = θ_1 = θ_2 between 0 and 1. Both networks were trained at a learning rate of 0.0001 on an NVIDIA Tesla K80 GPU.

During evaluation, the generated probability maps were binarized at a threshold of 0.5 before computing the accuracy. The trained R-IterNet and C-IterNet models yielded testing accuracies of 96.74% and 95.65% respectively on their corresponding test sets.

For this study, we did not train or evaluate R-IterNet and C-IterNet on the REIDA datasets, owing to the unavailability of annotations of the entire vasculature. Based on the hypothesis that the retinal and scleral images of the REIDA dataset are characteristically similar to the images of the DRIVE and SBVPI datasets respectively, we used the R-IterNet (trained on DRIVE) and C-IterNet (trained on SBVPI) models to generate vessel probability maps for the REIDA images. Vessel probability maps of selected images from each dataset are shown in Fig. 6.
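The grid search over the number of mini U-Nets and the loss weights can be organized as a simple exhaustive search; the score callable below is hypothetical and stands in for training and validating one IterNet configuration:

```python
from itertools import product

def grid_search(score, n_values, theta_values):
    """Return the (N, theta) pair with the best validation score.
    `score(n, theta)` is a hypothetical callable that trains an IterNet
    variant with n mini U-Nets and shared loss weight theta, and returns
    its validation accuracy."""
    return max(product(n_values, theta_values), key=lambda c: score(*c))
```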
D. Extraction of Vessels of Specified Thicknesses
To evaluate the performance of extracting vessels of specified thicknesses, we used 8 images from each dataset (DRIVE, SBVPI, REIDA-R and REIDA-EE) together with their corresponding annotations, as mentioned in section III-A. The thicknesses (W) of the annotations in each dataset and the hyperparameters (γ_r, γ_s, γ_c and t) used for extracting these vessels are tabulated in Table I. Note that these thickness values are given for retinal/scleral images resized while preserving the aspect ratio, in order to avoid distortions in vessel structure.

TABLE I
HYPERPARAMETERS USED FOR EXTRACTING VESSELS OF SPECIFIED THICKNESSES IN EACH DATASET

Dataset | S.T.* (in pixels) | W
DRIVE | 7 to 12 | {7, 8, ..., 12}
SBVPI | 4 to 8 | {4, 5, 6, 7, 8}
REIDA-R | 7 to 12 | {7, 8, ..., 12}
REIDA-EE | 4 to 8 | {4, 5, 6, 7, 8}

*S.T. stands for the specified thicknesses of vessels that have to be extracted.

Values for all hyperparameters except W were set using a grid search over combinations of γ_r = γ_s = γ_c and t. As described in section II-C.1, since a value of α closer to 1 results in a sharper response, α was empirically set to 0.9. W was set according to the specified thicknesses (in pixels) of the vessels to be extracted; for example, to extract vessels of thicknesses ranging from 4 to 8 pixels, W was set to {4, 5, 6, 7, 8}. In Fig. 6, we illustrate selected resultant images from each dataset at intermediate steps of the proposed framework.

The extracted vessels contain only a small number of foreground pixels compared to background pixels, resulting in a significant class imbalance. Hence, in addition to the accuracy, we use the Matthews Correlation Coefficient (MCC) given in (12), which is widely used as an evaluation metric for class-imbalanced binary classification problems [33].
MCC = (TP/N − S × P) / √(P × S × (1 − S) × (1 − P))   (12)

where S = (TP + FN)/N, P = (TP + FP)/N and N = TP + FP + TN + FN. In Table II, we report the performance metrics obtained for extracting vessels of specified thicknesses with respect to each dataset. In the same table, we also present results for the following cases of computing Vc(W):

(i) Using only vessel redness maps (Vr): Vc(x, W) = max_{wi ∈ W} Vr(x, wi)
(ii) Using only vessel structural maps (Vs): Vc(x, W) = max_{wi ∈ W} Vs(x, wi)
(iii) Using the combined vessel maps (Vc): Vc(x, W) = max_{wi ∈ W} Vc(x, wi)

Out of the above three cases, it was observed that the highest accuracy score and
MCC values were obtained when the combined vessel maps were utilized (case (iii)) to compute Vc(W), rather than using only vessel structural maps (case (ii)) or only vessel redness maps (case (i)) individually. Therefore,

Fig. 6. Resultant images of selected samples from each dataset at different steps of the proposed framework, for SBVPI (MCC = 0.8060), REIDA-EE (MCC = 0.7432), DRIVE (MCC = 0.8147) and REIDA-R (MCC = 0.7481). Columns: original image; vessel probability map; green channel after CLAHE and inversion; extracted vessels of interest; ground truth of the vessels of interest; skeletonized vessels with branching points removed. Rows: SBVPI, REIDA-EE, DRIVE, REIDA-R. Note that we have not preserved the original aspect ratios of these images for the purpose of illustration.
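As a sanity check on the MCC form used in (12), the following sketch (with illustrative confusion-matrix counts, not values from our experiments) confirms that it agrees with the standard TP/FP/TN/FN definition of Matthew's Correlation Coefficient:

```python
import math

def mcc(tp, fp, tn, fn):
    """MCC in the S/P form of (12)."""
    n = tp + fp + tn + fn
    s = (tp + fn) / n          # actual-positive rate
    p = (tp + fp) / n          # predicted-positive rate
    return (tp / n - s * p) / math.sqrt(p * s * (1 - s) * (1 - p))

def mcc_standard(tp, fp, tn, fn):
    """Standard confusion-matrix definition of MCC."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

# Hypothetical counts for an imbalanced vessel map: few foreground pixels.
tp, fp, tn, fn = 900, 150, 8000, 200
print(round(mcc(tp, fp, tn, fn), 4))           # 0.8161
print(round(mcc_standard(tp, fp, tn, fn), 4))  # 0.8161, identical to the form above
```

The two forms are algebraically identical (multiplying numerator and denominator of (12) by N² recovers the standard form), which makes (12) convenient when per-pixel rates S and P are already available.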
TABLE II
PERFORMANCE EVALUATION OF EXTRACTING VESSELS OF SPECIFIED THICKNESSES

Dataset | Case (Vr, Vs, Vc) | Acc (mean ± s.d.) | MCC (mean ± s.d.)
(Acc and MCC are reported for the cases Vr, Vs and Vc on each of the DRIVE, SBVPI, REIDA-R and REIDA-EE datasets.)

the combination of vessel structural and vessel redness maps given by (9) is better suited for the accurate extraction of vessels of specified thicknesses.

Moreover, we provide qualitative results in Fig. 7, which illustrates several closeup visualizations of extracted vessels belonging to sets of thicknesses other than those mentioned in Table I.

E. Determination of the Association of Retinal and Conjunctival Vascular Tortuosity with Diabetes
As a potential application, we used our proposed method on the REIDA-R and REIDA-EE datasets to extract vessels of the specified thicknesses given in Table I, and attempted to determine the association of retinal and conjunctival vascular tortuosity with diabetes.

Vascular tortuosity values based on a total of 12 tortuosity indices described in [18], [29] were computed for the retinal and conjunctival vessels of both diabetic and non-diabetic subjects. When a subject had more than one retinal/external eye image, the mean tortuosity value of those images was attributed to that subject.

Since the sample sizes of the diabetic (n = 14) and non-diabetic (n = 18) groups were not equal and the tortuosity values of the two groups were not normally distributed, we used the unpaired Mann-Whitney test to compare vascular tortuosity values between the diabetic and non-diabetic groups.

This test yielded that the retinal vascular tortuosity calculated with the Eccentricity based Tortuosity Index (ETI) [18] of the diabetic group (median = . ) was significantly higher (U = 74, p = . ) than that of the non-diabetic group (median = . ), and that the conjunctival vascular tortuosity calculated with the Total Curvature normalised by Arc Length (TCAL) [34] of the diabetic group (median = . ) was significantly lower (U = 80, p = . ) than that of the non-diabetic group (median = . ). Fig. 8 (a) and (b) illustrate the comparison between the two groups in terms of ETI and TCAL values for retinal and conjunctival vessels, respectively. It was also observed that there are no significant differences between the diabetic and non-diabetic groups in terms of the other
considered tortuosity indices for both retinal and conjunctival vessels.

Fig. 7. Closeup visualizations of extracted vessels of specified thicknesses from DRIVE and SBVPI. We extracted retinal vessels of thicknesses 1-6 pixels (γr = 0, γs = 0. , γc = 0. , t = 0. ) and 7-12 pixels (γr = 0. , γs = 0. , γc = 0. , t = 0. ) from the DRIVE dataset. Similarly, conjunctival vessels of 1-3 pixels (γr = 0, γs = 0. , γc = 0. , t = 0. ) and 4-8 pixels (γr = 0. , γs = 0. , γc = 0. , t = 0. ) were extracted from the SBVPI dataset.
Fig. 8. Comparison between the diabetic and non-diabetic groups with respect to (a) retinal vascular tortuosity in terms of ETI and (b) conjunctival vascular tortuosity in terms of TCAL. “∗” denotes the statistical significance of the difference between the two groups as measured by the Mann-Whitney test. The horizontal bar in each box plot denotes the median of the respective group.
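The group comparisons above rest on the Mann-Whitney U statistic. The following minimal sketch (with hypothetical tortuosity values, not the study's ETI/TCAL measurements) shows how U is computed for two unpaired groups:

```python
def mann_whitney_u(a, b):
    """U statistic for group `a` versus group `b` (ties counted as 0.5)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical tortuosity values for illustration only.
diabetic     = [0.031, 0.028, 0.035, 0.027]
non_diabetic = [0.022, 0.025, 0.020, 0.024, 0.021]

u_ab = mann_whitney_u(diabetic, non_diabetic)
u_ba = mann_whitney_u(non_diabetic, diabetic)
print(u_ab, u_ba)  # the two U values always sum to len(a) * len(b)
```

In practice the associated p-value would be obtained from a statistics package, e.g. `scipy.stats.mannwhitneyu(x, y)`, rather than from a hand-rolled implementation.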
In addition to comparing the tortuosity values, we also analyzed the relationship between the tortuosities of retinal and conjunctival vessels. The correlation between retinal and conjunctival vascular tortuosity values for each tortuosity index was computed separately for the non-diabetic and diabetic groups using Spearman's rank correlation. However, there was no statistically significant correlation (p > . ) between retinal and conjunctival vascular tortuosity values measured in terms of any of the considered tortuosity indices, in either the non-diabetic or the diabetic group.

IV. DISCUSSION
In this study, the proposed framework comprised an IterNet based CNN that obtains probability maps of the entire retinal/conjunctival vasculature, which are then subjected to a series of post-processing steps based on a multi-scale vessel enhancement method that exploits both fine and coarse structural vascular details of these probability maps in order to extract vessels of specified thicknesses. In this way, we could incorporate the power of CNNs into the framework without requiring multiple datasets with explicitly annotated vessels of different thicknesses or training multiple models. The CNNs of this framework needed to be trained only once, while the vessel extraction steps could be conveniently configured to suit the nature of the vessels of interest. The proposed framework achieved
MCC values of . , . , . and . for SBVPI, DRIVE, REIDA-R and REIDA-EE, respectively, in extracting the vessels of the specified thicknesses given in Table I. This suggests that the framework attains sufficient vessel extraction performance for the selected sets of thicknesses. Since no previous study has introduced a similar framework that focuses on extracting retinal and conjunctival vessels of specified thicknesses, we did not compare the performance of the proposed framework with a previous study.

In the framework, we employed a U-Net (referred to as ScleraUNet) to perform the segmentation of the scleral region of the external eye images. Accurate scleral region segmentation contributes positively to the overall conjunctival vessel extraction performance of the framework. As evident from Section III-B, the ScleraUNet attained nearly perfect segmentation accuracy values for SBVPI (Acc = 0. ) and REIDA-EE (Acc = 0. ), thus demonstrating its suitability for this task.

Within the proposed framework, we used an IterNet architecture to generate the vessel probability maps for both retinal and conjunctival vessels. Since IterNet held the state-of-the-art performance in retinal vessel segmentation at the time of this study, we hypothesized that a similar, appropriately trained network could be employed to generate accurate conjunctival vessel probability maps. The testing segmentation accuracy of 95.65% achieved by the C-IterNet on the SBVPI dataset justified the validity of this hypothesis. A major advantage of the proposed framework is the fact that an IterNet has to be trained only once on a single dataset with annotations of the complete vasculature, rather than trained on multiple datasets with explicit vessel annotations of different thicknesses. Thereafter, the vessels of interest are extracted from the vessel probability maps generated by the trained IterNet, using a multi-scale vessel enhancement method that does not require any prior training.

The vessel extraction method of the proposed framework ensures that vessels of the desired sets of thicknesses are extracted using the combined vessel maps (Vc) computed from the vessel structural maps (Vs) and the vessel redness maps (Vr). Based on the results in Table II, the highest MCC values for both retinal and conjunctival vessels were obtained when we used Vc in the framework instead of Vs or Vr individually. In all the datasets considered in the evaluation, the vessels of interest had pronounced shades of red. Therefore, the combination of vessel redness maps with vessel structural maps provided a better enhancement of the vessels of specified thicknesses. However, in an application where vessel redness is less pronounced (for example, when trying to enhance thinner or faded vessels), γr should be set to a value closer to zero in order to obtain a better vessel enhancement, as illustrated in Fig. 7. Owing to the unavailability of manual annotations, however, we did not perform a quantitative evaluation for extracting vessel thicknesses less than those specified in Table I.

As a potential application, we used the proposed framework to determine the association of retinal and conjunctival vascular tortuosity with diabetes. From the Mann-Whitney tests performed on the retinal tortuosity values, as described in Section III-E, the tortuosity index ETI showed a significantly higher value (p < . ) for diabetic subjects. This is consistent with the previous study by Ramos et al. [21], in which the tortuosities were calculated with thick vessels. Moreover, the results from our study showed that conjunctival vascular tortuosity calculated with the TCAL index is significantly lower (p < .
) in diabetic subjects than that of non-diabetic subjects.This result also agrees with the findings of a study conductedby Owen et al. [8] which observed that the tortuosity ofconjunctival macro-vessels are lower in diabetic subjects.There were no significant correlation between retinal andconjunctival vascular tortuosities for neither diabetic nor non-diabetic subjects. We speculate that, even though this resultindicates that there is no apparent relationship between retinaland conjunctival vascular tortuosities, the lack of representa-tion power of the existing tortuosity indices may have affectedthe outcome of our correlation study. This may be due tothe fact that most of the widely used tortuosity indices inthe literature do not take into consideration, the differencesbetween the structural properties pertaining to these retinaland conjunctival vessels. Thus, existing indices might not besuitable for studies that aim to determine the relationshipbetween retinal and conjunctival vascular tortuosities. Hence,a novel tortuosity index that takes into account the differencesbetween the vessel structures would be required in order toperform a better comparison between retinal and conjunctivalvascular tortuosity values. V. C
ONCLUSION
In this paper, we proposed a novel framework that encompasses a CNN paired with a multi-scale vessel enhancement method to extract vessels of specified thicknesses from the retina and/or conjunctiva, which can be applied in studies related to vascular tortuosity. Since the framework achieved MCC values greater than 0.71 on the considered datasets in extracting the vessels of specified thicknesses, it can be concluded that the framework performs well in extracting both retinal and conjunctival vessels of different thicknesses. In addition, we applied the framework to determine the association of the tortuosity of relatively thick vessels in the retina and conjunctiva with diabetes, and found that the obtained tortuosity comparisons were in agreement with the existing literature, thus strengthening the applicability of the proposed framework in vascular tortuosity related studies.
VI. ACKNOWLEDGEMENTS
We express our sincere gratitude to Dr. Mahen A. Wijesuriya, Dr. Chamari L. Warnapura and the other staff members of the National Diabetes Centre, Rajagiriya, Sri Lanka for the immense support provided during the data collection procedure. We also thank Mr. Achintha Iroshan and Mr. Dulara de Zoysa for their support in refining this work.

APPENDIX
RELATIONSHIP BETWEEN WIDTH w AND SCALE σ

The second order derivative of a Gaussian kernel at scale σ creates a probe kernel, as illustrated in Fig. 9.

Fig. 9. The second order derivative of Gaussian probe kernel.
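The width–scale matching formalized in Theorem 1 below can be verified numerically. The following sketch (our illustrative code, not part of the original framework) computes σ from a chosen width w via the Lambert W relation and checks that the probe-kernel response at x = w/2 is α times the centre response, as required by condition (13):

```python
import math

def lambert_w(u, iters=50):
    """Principal-branch Lambert W via Newton's method (u >= 0)."""
    w = math.log1p(u)  # reasonable starting guess for u >= 0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - u) / (ew * (w + 1.0))
    return w

def gaussian_d2(x, sigma):
    """Second derivative of a Gaussian: the probe kernel of Fig. 9."""
    return ((x * x / sigma**2 - 1.0)
            * math.exp(-x * x / (2.0 * sigma**2))
            / (math.sqrt(2.0 * math.pi) * sigma**3))

def sigma_for_width(w, alpha):
    """Scale sigma matching vessel width w at surround ratio alpha (Theorem 1)."""
    u = 0.5 * math.exp(0.5 + math.log(alpha))
    return (w / 2.0) * (1.0 - 2.0 * lambert_w(u)) ** -0.5

w, alpha = 8.0, 0.9   # e.g. an 8-pixel vessel, with alpha = 0.9 as in the paper
sigma = sigma_for_width(w, alpha)
ratio = abs(gaussian_d2(w / 2.0, sigma)) / abs(gaussian_d2(0.0, sigma))
print(round(ratio, 6))  # recovers alpha, confirming condition (13)
```

The check passes for any 0 < α < 1, since the closed form follows exactly from the derivation in the Appendix.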
It is of interest to obtain an analytical relationship between the vessel thickness of interest w and σ when the vessel thickness is matched with the middle lobe width of the probe kernel at the surround size ratio α (0 ≤ α < 1). This analytical relationship is introduced in Theorem 1.

Theorem 1. The relationship between the vessel width w and the scale σ of the second derivative of the Gaussian kernel ∂²G(x, σ)/∂x², when

α |∂²G(0, σ)/∂x²| = |∂²G(w/2, σ)/∂x²|,  0 ≤ α < 1,   (13)

is given by the following equation:

σ = (w/2) [1 − 2W((1/2) exp(1/2 + log(α)))]^(−1/2)

where W is the Lambert W function.

Proof. Substituting G(x, σ) = (1/(√(2π) σ)) exp(−x²/(2σ²)) into (13), we arrive at

exp(−w²/(8σ²)) (−w²/(4σ²) + 1) = α.   (14)

Taking the natural logarithm of both sides of (14), we get

log(−2y + 1) = y + log(α)   (15)

where y = w²/(8σ²). By rearranging the terms of (15), we arrive at an equation of the type az + b log(z) + c = 0, where a = −1/2, b = −1, c = 1/2 + log(α) and z = −2y + 1 = −w²/(4σ²) + 1. The general solution of az + b log(z) + c = 0 is

z = (b/a) W((a/b) exp(−c/b))   (16)

where W is the Lambert W function. Substituting for a, b, c and z in (16) yields the following:

σ = (w/2) [1 − 2W((1/2) exp(1/2 + log(α)))]^(−1/2).   (17)

REFERENCES

[1] M. B. Sasongko, T. Y. Wong, T. T. Nguyen, C. Y. Cheung, J. E. Shaw, and J. J. Wang, “Retinal vascular tortuosity in persons with diabetes and diabetic retinopathy,”
Diabetologia, vol. 54, pp. 2409–2416, 2011.
[2] H. C. Han, “Twisted blood vessels: Symptoms, etiology and biomechanical mechanisms,” Journal of Vascular Research, vol. 49, pp. 185–197, 2012.
[3] C. G. Owen, A. R. Rudnicka, C. M. Nightingale, R. Mullen, S. A. Barman, N. Sattar, D. G. Cook, and P. H. Whincup, “Retinal arteriolar tortuosity and cardiovascular risk factors in a multi-ethnic population study of 10-year-old children; the child heart and health study in england (chase).” Arteriosclerosis, Thrombosis, and Vascular Biology, vol. 31 8, pp. 1933–8, 2011.
[4] M. M. Khansari, S. L. Garvey, S. Farzad, Y. Shi, and M. Shahidi, “Relationship between retinal vessel tortuosity and oxygenation in sickle cell retinopathy,” International Journal of Retina and Vitreous, vol. 5, no. 1, p. 47, 2019.
[5] S. Yasuda, S. Kachi, M. Kondo, S. Ueno, H. Kaneko, and H. Terasaki, “Significant correlation between retinal venous tortuosity and aqueous vascular endothelial growth factor concentration in eyes with central retinal vein occlusion,” PloS one, vol. 10, no. 7, 2015.
[6] T. Akagi, A. Uji, A. S. Huang, R. N. Weinreb, T. Yamada, M. Miyata, T. Kameda, H. O. Ikeda, and A. Tsujikawa, “Conjunctival and intrascleral vasculatures assessed using anterior segment optical coherence tomography angiography in normal eyes,” American Journal of Ophthalmology, vol. 196, pp. 1–9, 2018.
[7] K. A. Iroshan, A. D. N. D. Zoysa, C. L. Warnapura, M. A. Wijesuriya, S. Jayasinghe, N. D. Nanayakkara, and A. C. D. Silva, “Detection of diabetes by macrovascular tortuosity of superior bulbar conjunctiva,” in , July 2018, pp. 1–4.
[8] C. G. Owen, R. S. Newsom, A. R. Rudnicka, S. A. Barman, E. G. Woodward, and T. J. Ellis, “Diabetes and the tortuosity of vessels of the bulbar conjunctiva,” Ophthalmology, vol. 115, no. 6, pp. e27–e32, 2008.
[9] A. Sodi, C. Lenzetti, D. Bacherini, L. Finocchio, T. Verdina, I. Borg, F. Cipollini, F. U. Patwary, I. Tanini, C. Zoppetti et al., “Quantitative analysis of conjunctival and retinal vessels in fabry disease,” Journal of Ophthalmology, vol. 2019, 2019.
[10] G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov, “Trainable cosfire filters for vessel delineation with application to retinal images,” Medical Image Analysis, vol. 19, no. 6, pp. 46–57, 01 2015.
[11] K. BahadarKhan, A. A Khaliq, and M. Shahid, “A morphological hessian based approach for retinal blood vessels segmentation and denoising using region based otsu thresholding,”
PLOS ONE, vol. 11, no. 7, pp. 1–19, 07 2016.
[12] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, 2015, pp. 234–241.
[13] Q. Jin, Z. Meng, T. D. Pham, Q. Chen, L. Wei, and R. Su, “Dunet: A deformable network for retinal vessel segmentation,” Knowledge-Based Systems, vol. 178, pp. 149–162, 2019.
[14] L. Li, M. Verma, Y. Nakashima, H. Nagahara, and R. Kawasaki, “Iternet: Retinal image segmentation utilizing structural redundancy in vessel networks,” in The IEEE Winter Conference on Applications of Computer Vision, 2020, pp. 3656–3665.
[15] M. Z. Alom, C. Yakopcic, M. Hasan, T. M. Taha, and V. K. Asari, “Recurrent residual u-net for medical image segmentation,” Journal of Medical Imaging, vol. 6, no. 1, p. 014006, 2019.
[16] X. Li, Y. Jiang, M. Li, and S. Yin, “Lightweight attention convolutional neural network for retinal vessel segmentation,” IEEE Transactions on Industrial Informatics, 2020.
[17] W. Dong, H. Zhou, and D. Xu, “A new sclera segmentation and vessels extraction method for sclera recognition,” in , 2018, pp. 552–556.
[18] D. De Zoysa, A. Kondarage, C. Warnapura, M. Wijesuriya, S. Jayasinghe, N. Nanayakkara, and A. De Silva, “Eccentricity based quantification of retinal vascular tortuosity for early detection of diabetes and diabetic retinopathy,” in , Oct 2018, pp. 3280–3284.
[19] E. Trucco, H. Azegrouz, and B. Dhillon, “Modeling the tortuosity of retinal vessels: Does caliber play a role?” IEEE Transactions on Biomedical Engineering, vol. 57, pp. 2239–47, 09 2010.
[20] R. Sharma, V. Gurunadh, and S. Shankar, “Studies on conjunctival vessel morphology in diabetic patients: a short review,” Journal of Biomedical Research & Environmental Sciences, vol. 3, no. 7, pp. 016–019, 2017.
[21] L. Ramos, J. Novo, J. Rouco, S. Romeo, M. Álvarez, and M. Ortega, “Computational assessment of the retinal vascular tortuosity integrating domain-related information,” Scientific Reports, vol. 9, no. 1, pp. 1–12, 2019.
[22] R. Frangi, W. Niessen, K. Vincken, and M. Viergever, “Multiscale vessel enhancement filtering,” Med. Image Comput. Comput. Assist. Interv., vol. 1496, 02 2000.
[23] Y. Sato, S. Nakajima, H. Atsumi, T. Koller, G. Gerig, S. Yoshida, and R. Kikinis, “3d multi-scale line filter for segmentation and visualization of curvilinear structures in medical images,” in CVRMed-MRCAS’97. Springer, 1997, pp. 213–222.
[24] C. Steger, “An unbiased detector of curvilinear structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 2, pp. 113–125, 1998.
[25] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” International Conference on Learning Representations, 12 2014.
[26] K. Drechsler and C. O. Laura, “Comparison of vesselness functions for multiscale analysis of the liver vasculature,” in Proceedings of the 10th IEEE International Conference on Information Technology and Applications in Biomedicine. IEEE, 2010, pp. 1–5.
[27] A. M. Reza, “Realization of the contrast limited adaptive histogram equalization (clahe) for real-time image enhancement,” Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, vol. 38, no. 1, pp. 35–44, 2004.
[28] L. Jianzhuang, L. Wenqing, and T. Yupeng, “Automatic thresholding of gray-level pictures using two-dimension otsu method,” in China., 1991 International Conference on Circuits and Systems. IEEE, 1991, pp. 325–327.
[29] M. Abdalla, A. Hunter, and B. Al-Diri, “Quantifying retinal blood vessels’ tortuosity-review,” in , 2015, pp. 687–693.
[30] P. Rot, M. Vitek, K. Grm, Ž. Emeršič, P. Peer, and V. Štruc, “Deep sclera segmentation and recognition,” in Handbook of Vascular Biometrics. Springer, 2020, pp. 395–432.
[31] P. Rot, Ž. Emeršič, V. Struc, and P. Peer, “Deep multi-class eye segmentation for ocular biometrics,” in . IEEE, 2018, pp. 1–8.
[32] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, 2004.
[33] D. Chicco and G. Jurman, “The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation,” BMC Genomics, vol. 21, no. 1, p. 6, 2020.
[34] E. Grisan, M. Foracchia, and A. Ruggeri, “A novel method for the automatic evaluation of retinal vessel tortuosity,” in