Automated Optic Nerve Head Detection Based on Different Retinal Vasculature Segmentation Methods and Mathematical Morphology
Meysam Tavakoli, Mahdieh Nazar, Alireza Golestaneh, Faraz Kalantari
To appear in: 2017 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC). DOI: 10.1109/NSSMIC.2017.8532764
Abstract—Computer vision and image processing techniques provide important assistance to physicians and relieve their workload in different tasks. In particular, identifying objects of interest such as lesions and anatomical structures in an image is a challenging and iterative process that can be done successfully using computer vision and image processing approaches. Optic Nerve Head (ONH) detection is a crucial step in retinal image analysis algorithms: the detected ONH serves as a starting point for finding other retinal landmarks and lesions, and its diameter is used as a length reference for measuring objects in the retina. The objective of this study is to apply three retinal vessel segmentation methods, the Laplacian-of-Gaussian edge detector, the Canny edge detector, and the matched filter edge detector, for detection of the ONH either in normal fundus images or in the presence of retinal lesions (e.g. diabetic retinopathy). The steps of the segmentation are as follows: 1) Smoothing: suppress as much noise as possible without destroying the true edges; 2) Enhancement: apply a filter to enhance the quality of the edges in the image (sharpening); 3) Detection: determine which edge pixels should be discarded as noise and which should be retained by thresholding the edge strength and edge size; 4) Localization: determine the exact location of an edge by edge thinning or linking. To evaluate the accuracy of our proposed method, we compare its output with ground truth data collected by ophthalmologists on a test set of 120 retinal images. As shown in the results section, using Laplacian-of-Gaussian vessel segmentation, our automated algorithm finds 18 ONHs in the true location for the 20 color images in the CHASE-DB database and all images in the DRIVE database.
Using Canny vessel segmentation, our automated algorithm finds 16 ONHs in the true location for the 20 images in the CHASE-DB database and 32 out of 40 images in the DRIVE database. Lastly, using the matched filter for vessel segmentation, our algorithm finds 19 ONHs in the true location for the 20 images in the CHASE-DB database and all images in the DRIVE database.
Index Terms—Diabetic retinopathy, image processing, Optic Nerve Head, retinal blood vessel, Canny edge detector, Laplacian-of-Gaussian edge detector, matched filter
I. INTRODUCTION

M. Tavakoli is with the Dept. of Physics, Indiana University-Purdue University, Indianapolis, IN, USA. A. Golestaneh is with the Electrical Engineering Department, Arizona State University, Tempe, AZ, USA. M. Nazar is with the Biomedical Sciences Department, Shahid Beheshti Medical Sciences, Tehran, Iran. F. Kalantari is with the Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA.

Computer techniques are applied to provide physicians with assistance at any time and to relieve their workload and iterative tasks, such as identifying objects of interest like lesions and anatomical structures in the image [1]–[3]. The identification of the optic nerve head (ONH) or optic disk (OD) is important in retinal image analysis: to locate anatomical components in fundus images, for vessel tracking, as a reference length for measuring distances in retinal images, and for registering changes within the ONH region caused by diseases such as diabetic retinopathy (DR) or glaucoma. The ONH is a yellowish region in the color retinal image that usually covers about one seventh of the fundus image [4], [5]. The main characteristic of the ONH is rapid intensity change, due to the dark, thick blood vessels in the vicinity of the bright ONH. This intensity fluctuation is the characteristic of interest for ONH recognition. In other words, the ONH is usually the brightest component of the fundus, and therefore a collection of pixels with high grey-scale values identifies the location of the ONH [6], [7] (see Fig. 1). The ONH has three properties that aid its detection: (1) it appears as a bright disk nearly 1600 µm in diameter; (2) the large blood vessels enter it from above and below; (3) blood vessels diverge from the ONH [4], [8], [9]. Some advantages of detecting the ONH are: (1) its identification is critical for automatic detection of some anatomical structures and retinal lesions.
One of the elements to extract in this situation is the vessel tree, especially the large vessels located adjacent to the ONH [10], [11]. (2) On the other hand, detecting and masking the ONH can decrease the false-positive rate of lesions detected in the identification of diseases like DR [12], [14].

II. PREVIOUS WORK
There are several algorithms that detect the location of the ONH, its center, or its boundary [5], [10], [15]–[27]. However, identification of the ONH is difficult because of the discontinuity of its boundary where large vessels cross it, as well as considerable color or intensity changes in some parts of the retinal image caused by pathologies (such as exudates in the color image). An effective approach based on an active contour model was reported by Osareh et al., although it was complex and time consuming. First, the image was normalized using histogram specification, and then the ONH region was averaged over 25 color-normalized images to determine a gray-level template.

Fig. 1. Fundus image from MUMS-DB: (a) normal color image; (b) green channel.

Then the normalized correlation coefficient was applied to find the finest match between the template and all candidate pixels in the given image [16]. Another appropriate technique belongs to Abdel-Ghafar et al. [15]. In this study they used the circular Hough transform to detect the ONH, which has a roughly circular shape; the Hough transform makes it possible to find geometric shapes within an image. The retinal vessel network in the green-channel image was suppressed using the closing morphological operator. The Sobel operator and a simple threshold were then used to extract the edges in the image. Finally, the circular Hough transform was applied to the edge points, and the largest circle was consistently determined to correspond to the ONH. All of these studies detected the ONH based on its shape and color. On the other hand, some algorithms recognized the ONH by tracking vessels back to their origin [17], [18]. Also, Foracchia et al.
reported a new technique for detecting the ONH using a geometrical parametric model (retinal vessels originate from the ONH and their paths follow a similar directional pattern in all images) to describe the typical direction of retinal vessels as they converge on the ONH [19]. Sinthanayothin et al. identified the location of the ONH using the variance of intensity between the ONH and the adjacent blood vessels [4]. They first preprocessed the images with an adaptive local contrast enhancement method applied to the intensity component. Instead of applying the average variance in intensity, and assuming that bright-appearing retinopathies (e.g., exudates) are far from the ONH in size, Walter and Klein [20] estimated the ONH center as the center of the biggest, brightest connected object in the fundus image. They obtained a binary image containing all the bright regions by thresholding the intensity image. Moreover, Hoover and Goldbaum correctly identified the ONH location using a fuzzy convergence algorithm (which finds the strongest vessel-network convergence as the primary detection feature using a binary blood-vessel segmentation, the disc being located at the point of vessel convergence; the brightness of the optic disc was used as a secondary feature) [21]. Pourreza and Tavakoli [5] located the ONH using the Radon transform combined with overlapping sliding windows. They first chose the blue channel of the color retinal image and applied the proposed method to detect the location of the ONH. In another study, Tavakoli et al. tried to detect the ONH in fluorescein angiography retinal images [22].
They first used a preprocessing method and then located the center of the ONH using the Radon transform. The objective of this study is to apply three retinal vessel segmentation methods, 1) the Laplacian-of-Gaussian edge detector (using second-order spatial differentiation), 2) the Canny edge detector (estimating the gradient intensity), and 3) the matched filter edge detector, for detection of the ONH either in normal fundus images or in the presence of retinal lesions as in DR.

III. PROPOSED METHOD
The overall scheme of the methods used in this study is shown in Fig. 2.
Fig. 2. The overall scheme of the methods for detection of the ONH
A. Materials
To detect the ONH, three databases (one local and two publicly available databases) were used. The first, local, database was named the Mashhad University Medical Science Database (MUMS-DB). The MUMS-DB provided 100 retinal images, including 80 images with DR and 20 without DR. The fundus images were captured with a TOPCON (TRC-50EX) retinal camera at a 50° field of view (FOV) and mostly obtained from the posterior pole view, including the ONH and macula, with a resolution of … × … pixels [28], [29]. The second dataset was the DRIVE database, consisting of 40 images with an image resolution of … × … pixels, in which 33 cases did not show any sign of DR and 7 showed signs of early or mild DR, with a 45° FOV. This database is divided into two sets, testing and training, each containing 20 images [30]. The last database, the CHASE_DB1 dataset, includes 28 retinal images with an image resolution of … × … pixels, acquired from both the left and right eyes [31]–[33].

B. Preprocessing and Image Enhancement
The preprocessing step provides an image with the highest possible vessel-to-background contrast and also unifies the histogram of the images. Although retinal images have three components (R, G, B), the green channel has the best contrast between vessels and background, so the green channel is selected as the input image (I). First, we used mathematical morphology operators. Mathematical morphology has been widely used in image processing and pattern recognition. Morphological operations work with two parts: the first is the image to be processed, and the second is called the structuring element, also known as the kernel. Erosion is used to shrink the objects in the image with the structuring element. In contrast, dilation is used to grow the objects in the image.
Secondary operations that depend on erosion and dilation are the opening and closing operations. Opening, denoted f ∘ b, applies an erosion followed by a dilation, where b represents the structuring element. Closing, denoted f • b, applies a dilation followed by an erosion. Built from the opening and closing operations, the top-hat transform, one of the important morphological operators, is defined as the difference between the input image and its opening or closing. The white top-hat transform (WTH) and black top-hat transform (BTH) are defined by:

WTH(x, y) = f(x, y) − (f ∘ b)(x, y)
BTH(x, y) = (f • b)(x, y) − f(x, y)    (1)

In our preprocessing the basic idea is to increase the contrast between the vessels and the background regions of the image. WTH and BTH extract bright and dim image regions, respectively, corresponding to the structuring element used; using them is one way of enlarging contrast based on the top-hat transform. In fundus images, the background brightness is not uniform across the whole image. This background variation would lead to missed vessels or false vessel detections in the following steps. Moreover, in I the background is brighter than the details, whereas the vessels and other components are preferred to appear brighter than the background. To deal with this, I is inverted as I = 255 − I. Since we need a uniform background, we first applied WTH(x, y) to the image to decrease the intensity variations in the vessel background. This gave a high degree of differentiation between these features and the background.
The top-hat transformation was based on a disk structuring element whose diameter was found empirically as the best compromise between the features and the background; the disk diameter depends on the input image resolution. After the top-hat transformation, we used contrast stretching to change the contrast and brightness of the image. The result is a linear mapping of a subset of pixel values to the entire range of grays, from black to white, producing an image with much higher contrast [29]. The result of this first step is shown in Fig. 3.

Fig. 3. (a) Fundus image; (b) top-hat result and contrast stretching; (c) result of subtraction of top-hat and filtered top-hat image.
C. Procedure of ONH detection
The ONH can be described as a bright oval object placed against a darker background, with an ill-defined border because thick vessels come out of and go into it. Once the image quality has been improved by preprocessing and the candidate region of the ONH is obtained, the position of the ONH is identified using our model. Our approach addresses the image locally and regionally, where homogeneity of the ONH is more likely to occur. The algorithm is composed of 3 steps: generation of sub-images, vessel segmentation, and ONH detection. The ONH is extracted in local windows.
1) Multi-overlapping window:
In the proposed algorithm, each fundus image was first partitioned into overlapping windows. To find objects on the borders of sub-images, an overlapping pattern of sliding windows was defined. Our knowledge database was used to determine the size of each sub-image or sliding window; in this regard, the minimum and maximum sizes of the targeted object specify the window size (n). The window size (n) has a direct effect on the extraction accuracy. Another important parameter that affects the algorithm's accuracy is the window overlap [5]. Fig. 4 shows some sample sub-images in a retinal image.
Fig. 4. Simple example of window size and overlapping ratio.
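A multi-overlapping window pattern of the kind described above can be generated as in the sketch below. The fractional `overlap` parameterization is an assumption for illustration; the paper tunes the window size n and the step per database (Table I).

```python
def sliding_windows(shape, n, overlap=0.5):
    """Return (row, col) top-left corners of n-by-n overlapping windows
    covering an image of the given (height, width), assuming n fits in it.

    `overlap` is the fraction of the window side shared by neighbours;
    the step between windows is n * (1 - overlap)."""
    h, w = shape
    step = max(1, int(n * (1.0 - overlap)))
    rows = list(range(0, max(h - n, 0) + 1, step))
    cols = list(range(0, max(w - n, 0) + 1, step))
    # add a final window flush with each border so edges are covered
    if rows[-1] != h - n:
        rows.append(h - n)
    if cols[-1] != w - n:
        cols.append(w - n)
    return [(r, c) for r in rows for c in cols]
```

Objects lying on a sub-image border are then guaranteed to fall entirely inside at least one neighbouring window, which is the point of the overlapping pattern.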
2) Vessel Segmentation:
Here we applied three retinal vessel segmentation methods, 1) the Laplacian-of-Gaussian edge detector [34], 2) the Canny edge detector [36], [37], and 3) the matched filter edge detector [38], for detection of the ONH either in normal fundus images or in the presence of retinal lesions as in diabetic retinopathy (DR). In general, the steps of edge detection are as follows: (1) Smoothing: suppress as much noise as possible without destroying the true edges; (2) Enhancement: apply a filter to enhance the quality of the edges in the image (sharpening); (3) Detection: determine which edge pixels should be discarded as noise and which should be retained by thresholding the edge strength and edge size; (4) Localization: determine the exact location of an edge by edge thinning or linking. The Laplacian-of-Gaussian (LoG) edge detector uses second-order spatial differentiation:

∇²f = ∂²f/∂x² + ∂²f/∂y²    (2)

The Laplacian is usually combined with smoothing as a precursor to finding edges via zero-crossings. The 2-D Gaussian function

h(x, y) = e^(−(x² + y²)/(2σ²))    (3)

where σ is the standard deviation, blurs the image, with the degree of blurring determined by the value of σ. If an image is pre-smoothed by a Gaussian filter, we obtain the LoG operation, defined as

(∇²G_σ) ∗ I    (4)

where

∇²G_σ(x, y) = (1/(2πσ⁴)) ((x² + y²)/σ² − 2) e^(−(x² + y²)/(2σ²))

In Canny edge detection, we estimate the gradient magnitude, and use this estimate to determine the edge positions and directions.
f_x = ∂f/∂x = ∂/∂x (G_σ ∗ I) = (∂G_σ/∂x) ∗ I
f_y = ∂f/∂y = ∂/∂y (G_σ ∗ I) = (∂G_σ/∂y) ∗ I    (5)

where

∂G_σ/∂x = (−x/(2πσ⁴)) e^(−(x² + y²)/(2σ²))
∂G_σ/∂y = (−y/(2πσ⁴)) e^(−(x² + y²)/(2σ²))    (6)

The algorithm runs in 4 separate steps: (1) smooth the image with a Gaussian, which optimizes the trade-off between noise filtering and edge localization; (2) compute the gradient magnitude using approximations of the partial derivatives; (3) thin edges by applying non-maxima suppression to the gradient magnitude; and (4) detect edges by double thresholding. We can compute the magnitude and orientation of the gradient for each pixel from the two filtered images:

|∇f(x, y)| = √(f_x² + f_y²) = rate of change of f(x, y)
∠∇f(x, y) = tan⁻¹(f_y/f_x) = orientation of the gradient of f(x, y)

The matched filter has been widely used in the detection of blood vessels in digital images of the human retina. In this paper, the matched filter response for the detection of blood vessels is increased by choosing better filter parameters. The matched filter was first proposed to detect vessels in retinal images; it makes use of the prior knowledge that the cross-section of a vessel can be approximated by a Gaussian function. Therefore, a Gaussian-shaped filter can be used to match the vessels for detection. The matched filter is defined as

G(x, y) = (1/(√(2π) σ)) e^(−(x² + y²)/(2σ²)) − m    (7)

where m is chosen so that the kernel G(x, y) has zero mean.
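Two of the detectors above can be sketched with standard tools. The following illustration uses SciPy's `gaussian_laplace` for the LoG step with a simple zero-crossing test, and builds the zero-mean matched-filter kernel of Eq. (7); a Canny implementation is available in, e.g., scikit-image's `feature.canny`. All parameter values (σ, kernel size, threshold) are illustrative assumptions, and the matched filter is applied here as a single isotropic kernel, whereas a full implementation would take the maximum response over several orientations.

```python
import numpy as np
from scipy import ndimage

def log_edges(img, sigma=2.0, thresh=0.01):
    """LoG edges: zero-crossings of the Gaussian-smoothed Laplacian, Eq. (4),
    kept only where the response magnitude is significant."""
    log = ndimage.gaussian_laplace(img.astype(float), sigma)
    sign = log > 0
    zc = np.zeros_like(sign)
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical sign change
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal sign change
    return zc & (np.abs(log) > thresh * np.abs(log).max())

def matched_filter_kernel(sigma=2.0, size=15):
    """Zero-mean Gaussian kernel matching a vessel cross-section, Eq. (7)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    g = (1.0 / (np.sqrt(2 * np.pi) * sigma)) * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g - g.mean()                          # subtract m -> zero mean

def matched_filter_response(img, sigma=2.0):
    """Convolve the image with the matched-filter kernel (single orientation)."""
    return ndimage.convolve(img.astype(float), matched_filter_kernel(sigma))
```

Thresholding the matched-filter response (the Th parameter of Table I) then yields the binary vessel map used by the ONH detection step.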
3) ONH detection:
To detect the ONH, we compare all sub-images with the maximum density of vessels. To do this, we compare all sub-images that have a peak profile higher than a predefined threshold. In other words, we look for the ONH among those candidates that have the maximum entropy of vessels; in fact, once the vessels are segmented, we look for the maximum entropy of the thick vessels, checking the maximum entropy in each overlapping window. Once the candidates are selected, the last step is to compare them using the intensity variation: to pick the correct candidate containing the ONH, we look for the sub-image with the highest intensity, because, as mentioned in the introduction, the ONH appears as a bright disk nearly 1600 µm in diameter in the retinal images.

Fig. 5. (a) Input image from MUMS-DB; (b) result of vessel segmentation using matched filter; (c) final ONH detection.
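The candidate selection just described, filtering windows by segmented-vessel content and then ranking the survivors by brightness, can be sketched as below. This is a simplified stand-in: a plain vessel-pixel density takes the place of the vessel entropy measure, and the density threshold is an assumed value.

```python
import numpy as np

def pick_onh_window(green, vessel_mask, windows, n, density_thresh=0.05):
    """Among n-by-n windows whose vessel-pixel density exceeds a threshold,
    return the top-left corner of the one with the highest mean intensity
    (the ONH is the brightest vessel-dense region)."""
    best, best_intensity = None, -np.inf
    for r, c in windows:
        patch_mask = vessel_mask[r:r + n, c:c + n]
        if patch_mask.mean() < density_thresh:       # too few vessel pixels
            continue
        intensity = green[r:r + n, c:c + n].mean()   # brightness criterion
        if intensity > best_intensity:
            best, best_intensity = (r, c), intensity
    return best
```

The two-stage test (vessel content first, brightness second) is what keeps isolated bright lesions from winning outright, although, as noted in the discussion, lesions of similar size and intensity near dense vasculature can still be mistaken for the ONH.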
IV. EXPERIMENTAL RESULTS

To measure the efficiency of the current methods in ONH detection, and also to compare the results with other reported studies, it is necessary to compare all pixels of the final automated segmentation images with the manual segmentation or gold standard (GS) files. For the evaluation, we used the concepts of sensitivity (Se) and specificity (Sp). The results for the automated method compared to the GS were calculated for each image. The higher the sensitivity and specificity values, the better the procedure. These metrics are defined as:
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)    (8)

where TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
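Eq. (8) applied pixel-wise to a pair of binary masks is a one-liner each way; a small sketch:

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Pixel-wise Se = TP/(TP+FN) and Sp = TN/(TN+FP) for binary masks, Eq. (8)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # detected and in the gold standard
    tn = np.sum(~pred & ~truth)   # correctly left out
    fp = np.sum(pred & ~truth)    # detected but not in the gold standard
    fn = np.sum(~pred & truth)    # missed
    se = tp / (tp + fn) if tp + fn else 0.0
    sp = tn / (tn + fp) if tn + fp else 0.0
    return se, sp
```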
A. Training and Test Set for the Image Database
In this study, we used 48 images as a training set (for learning purposes): 20 images each from the MUMS-DB and DRIVE databases, and 8 images from the CHASE-DB database. The test set consisted of 120 fundus images, of which 80, 20, and 20 images came from MUMS-DB, DRIVE, and CHASE-DB, respectively. After fixing the parameters of our algorithm using the training set, the algorithm was tested on each image of the databases in the test set. Some results are shown in Fig. 5, Fig. 6, and Fig. 7 from MUMS-DB and CHASE-DB.
Fig. 6. (a) Input image from MUMS-DB; (b) result of vessel segmentation using matched filter; (c) final ONH detection.
Fig. 7. (a) Input image from CHASE-DB; (b) result of vessel segmentation using matched filter; (c) final ONH detection.
B. Comparing the Statistical Results of ONH Detection in the Three Databases
The sensitivity to the threshold was also characterized using Equation (8). A ROC curve is a plot of Se versus (1 − Sp), plotted here to show the effect of varying the threshold Th, which governs the presence or absence of sub-vessels in each sub-image, on our datasets. The parameters used for plotting the ROC are shown in Table I.
TABLE I. NUMBER OF PARAMETERS IN OUR ALGORITHM IN VESSEL SEGMENTATION FOR THE THREE DATABASES

Database   | No. of Images | Window Size (n) | Step | Th
MUMS-DB    | 100           | 62              | 5    | [0,5]
DRIVE      | 40            | 15              | 6    | [0,5]
CHASE_DB1  | 40            | 30              | 5    | [0,5]
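The threshold sweep behind the ROC curve can be sketched as follows; the threshold grid is an assumed illustration of the Th range in Table I, and `response`/`truth` stand for a filter-response map and its gold-standard vessel mask.

```python
import numpy as np

def roc_points(response, truth, thresholds):
    """Sweep a threshold Th over a filter-response map and collect
    (1 - Sp, Se) points for a ROC curve."""
    points = []
    for th in thresholds:
        pred = response >= th
        tp = np.sum(pred & truth)
        fn = np.sum(~pred & truth)
        fp = np.sum(pred & ~truth)
        tn = np.sum(~pred & ~truth)
        se = tp / (tp + fn) if tp + fn else 0.0
        sp = tn / (tn + fp) if tn + fp else 0.0
        points.append((1.0 - sp, se))
    return points
```

Plotting these points traces the curve from (1, 1) (everything accepted at the lowest Th) toward (0, 0) (everything rejected at the highest), with a good detector bowing toward the top-left corner.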
Fig. 8. (a) Input image from MUMS-DB; (b) result of vessel segmentation using matched filter; (c) final ONH detection (which is not correct).
Statistical information about the sensitivity and specificity measures was extracted. The higher the sensitivity and specificity values, the better the procedure. For all retinal images of the test set (120 images), our reader labeled the ONH on the images, and the results of this manual detection were saved for further analysis. Relative to the manual ONH detection, using Laplacian-of-Gaussian vessel segmentation our automated algorithm finds 90% of the ONHs (18 ONHs in the true location for 20 color images) in the CHASE-DB database and all images in the DRIVE database (100%). For the local database, MUMS-DB, the method detected the ONH correctly in 90% of the images (72 out of 80 images). With Canny vessel segmentation, our automated algorithm finds 15 ONHs in the true location for 20 color images in the CHASE-DB database (75%) and 16 out of 20 images in the DRIVE database (80%). For the local database, MUMS-DB, our method detected the ONH correctly in 70 out of 80 images (87.5%). Finally, using the matched filter for vessel segmentation, our algorithm found the ONH with an accuracy of 95% (19 ONHs in the true location for 20 color images) in the CHASE-DB database and all images in the DRIVE database (100%). For the local database, MUMS-DB, the method detected the ONH correctly in 93.75% of all fundus images (75 out of 80 images).
Fig. 9. (a) Input image from DRIVE; (b) result of vessel segmentation using Canny; (c) result of vessel segmentation using LoG; (d) result of vessel segmentation using matched filter; (e) final ONH detection.

Fig. 10. (a) Input image from DRIVE; (b) result of vessel segmentation using Canny; (c) result of vessel segmentation using LoG; (d) result of vessel segmentation using matched filter; (e) final ONH detection.
V. DISCUSSION AND CONCLUSION

A potential use of digital fundus image analysis is the ability to analyze large numbers of fundus images in a short period of time without tiredness. The identification of fundus landmark features such as the ONH, the fovea, and the retinal vessels as reference coordinates is a prerequisite before systems can perform more complex tasks such as identifying pathological entities. Reliable techniques exist for identifying these structures in retinal photographs [5], [28], [39]–[41]. Since fundus images are nowadays in digital format, it is possible to create a computer-based system that automatically detects abnormal lesions in fundus images [42], [43]. An automatic screening system would save the time of well-paid clinicians, letting eye clinics use their ophthalmologists for other important tasks. It could also make it possible to screen more people, and more often, since automatic screening would be less expensive than screening by humans. In this study, an automated algorithm was used for ONH detection without the intervention of any ophthalmologist. We evaluated three vessel segmentation methods for detection of the ONH, presenting the detection of the ONH by separately combining the Canny edge detector, the LoG edge detector, and the matched filter with multi-overlapping windows. The quality of the ONH detection depends on parameters such as the window size (n), the number of steps, the thresholding validation, etc. Moreover, the training set of images is completely independent, and the images were selected randomly. Our automated system was developed using color retinal images provided by the DRIVE and CHASE-DB public databases of color fundus images and one local database, MUMS-DB, consisting of normal and diabetic retinal images. DRIVE consists of 40 images, divided into a training and a test set, both containing 20 images. CHASE-DB consists of 28 fundus images from both left and right eyes.
The MUMS-DB consists of 100 fundus images, all obtained using a 45° non-mydriatic retinal camera. As reported in the results section, using Laplacian-of-Gaussian vessel segmentation our automated algorithm found 18 of the 20 ONHs in the true location in CHASE-DB, all images in DRIVE, and 72 of 80 images in MUMS-DB; with Canny vessel segmentation, 15 of 20 in CHASE-DB, 16 of 20 in DRIVE, and 70 of 80 in MUMS-DB; and with the matched filter, 19 of 20 in CHASE-DB, all images in DRIVE, and 75 of 80 in MUMS-DB. From the sensitivity and specificity viewpoint, comparing our ONH detection algorithm across the three different vessel segmentation methods, we achieved more than 90% sensitivity and 95% specificity for all color retinal images in the three databases, which is better than some reports concentrating on ONH detection. One limitation of the current methods is that when a lesion of nearly the same size and intensity as the ONH is present, the algorithm takes it for the ONH, as in Fig. 8. The reason is probably inaccurate vessel segmentation or preprocessing steps.
In future work we will address this problem. Although we did not work on ONH boundary detection and only used a template to mask the ONH, our results are acceptable in covering the ONH completely while leaving the other retinal areas unmasked. On the other hand, as noted, Sinthanayothin et al. [4] detected the ONH, but others have found that their algorithm often fails for fundus images with a large number of white lesions, light artifacts, or strongly visible choroidal vessels [34]. Others have exploited the Hough transform (a general technique for identifying the locations and orientations of certain types of shapes within a digital image [44]) to locate the ONH [45]. However, Hough spaces tend to be sensitive to the chosen image resolution [21]. ONH detection is also useful for characterizing some diseases such as glaucoma [46], [47]. The goal of this work was to develop algorithms for detecting different abnormal vascular lesions related to DR. The results show that all three segmentation methods gave acceptable results in ONH detection; among them, the matched filter worked better than the others. Our algorithm also has some important characteristics in the detection of vascular structure in retinal images: robustness to noise, acceptable performance in the detection of both thick and thin vessels through the combined methods and multi-overlapping windows, and, last but not least, the simplicity of the whole method in comparison with the other methods mentioned in this paper.