Device for ECG prediction based on retinal vasculature analysis
Pranjal Rai [email protected]
Sachin Jangir [email protected]
Tanvir Singh Bal [email protected]
Pulsatile changes in retinal vascular geometry over the cardiac cycle have clinical implications for the diagnosis of ocular and systemic vascular diseases, including ischemia, coronary heart disease, and diabetes mellitus and its complications. Analysis of these pulsatile changes over the cardiac cycle is therefore a potential non-invasive assessment for the presence of ocular and systemic vascular diseases. The cardiac rhythm influences these pulsatile changes in the retina, which result from the change in blood volumetric flow entering the ophthalmic vascular system, under a certain level of intraocular pressure, during the peak systolic and diastolic phases of the cardiac cycle. An example of a pulsatile change observable in the retina is the spontaneous venous pulsation (SVP), which is present in approximately 90 percent of patients. It is caused by the variation in the pressure gradient between the intraocular retinal veins and the retrolaminar portion of the central retinal vein, and is visible as rhythmic changes in the diameter of one or more veins near or on the optic nerve head. In addition to SVP, pulsation of veins outside the optic disk (OD), such as the serpentine movement of principal arteries, the pulsatile motion of small arterioles, and movement of the optic nerve head, are other features that can be visualized with dynamic fundoscopy.

Assessment of fundus images generally requires ophthalmologic expertise. However, the availability of an expert is not always guaranteed, and even when an expert is available, the assessment is performed manually. Thus, there is a need for an automated device that can analyze the fundus images and keep track of the pulsations. Such a device can be synchronized with the electrocardiogram (ECG) and can be programmed to assess the presence of various ocular and systemic vascular diseases.
In this project, we proposed and built a portable embedded-system device that automatically captures retinal images and analyzes them to estimate vessel diameters. The device is hand-held and attaches to the ophthalmoscope as an extension. Using the device, the time variation of the vessel diameters can be estimated, which can further be used to predict the ECG and the presence of various other systemic vascular diseases.
The device integrates with an ophthalmoscope and acts as an extension to it. Its functions are to capture and sample the retinal images, process them for vessel diameter estimation, and output the time variation of those estimates, which are then used for ECG prediction.
A Raspberry Pi was used as the processing unit of the device; its function is to initiate image capture and process the images for vessel diameter estimation. The main sensing element of the device is the camera: the Raspberry Pi camera module provides a Sony IMX219 8-megapixel sensor that supports 1080p 30, 720p 60, and VGA 90 video modes. The device also needs an input-output tool, and a 3.5 inch LCD screen was chosen as pertinent for the device. Apart from these main parts, various secondary parts such as a battery, connecting wires, and screws were used. Details of the components are summarised in Table 1.

All the components mentioned above work hand-in-hand. The ophthalmoscope focuses on the retina, and the Raspberry Pi camera captures the retina's focussed image. The images are passed on to the Raspberry Pi, which is powered by a battery. The Raspberry Pi processes the images and tracks the time variations in the vessel diameters. The processed output is passed on to the user via the LCD screen. The device can also be used synchronized with the ECG to visualize retinal pulsations. For this, a portable ECG machine continuously captures the ECG and passes it to the Raspberry Pi. The processor uses those measurements to detect peaks and trigger image capture at the specified peaks of the ECG. The images are then processed to obtain the time variation of vessel diameters, which can be used to study the pulsations. The block diagram is shown in Figure 1.

Table 1: Components of the device
Component                      Status
Ophthalmoscope                 Present in the lab
Raspberry Pi                   Issued from the lab
Raspberry Pi camera module     Issued from the lab
3.5 inch LCD screen            Rs. 3000
3.4V 4000mAh Li-ion battery    Rs. 600
Misc. (wires, screws, etc.)    Rs. 200

Figure 1: Block diagram for the device

All the components will function only when there is harmony in their physical arrangement, and that is where modeling the device comes into the picture. The 3D model of the device was made using the Autodesk Inventor software. The model was designed to provide space for every component and for the connections between the components. Ports like USB, HDMI, and micro USB were incorporated into the model. A cylindrical protrusion was designed to attach to the eye-piece of the ophthalmoscope and thus provide a pseudo-continuum between the ophthalmoscope and the device. The stability of the model was also taken into consideration, and a support was designed to give the device robustness when connected to an ophthalmoscope. The 3D model and the internal breakout are shown in Figures 2, 3 and 4.

Figure 2: 3D model of the device
Figure 3: Device integrated with an ophthalmoscope
Figure 4: Internal breakout of the device
The Raspberry Pi triggers image capture when input is given through the LCD screen. The captured image is processed to estimate vessel diameters. The state-of-the-art algorithms for diameter estimation generally use Convolutional Neural Networks, trained on datasets such as DRIVE, STARE, and REVIEW. These datasets offer high-quality retinal images, but the device needs to work with images captured through an ophthalmoscope, which are of much lower quality. Such supervised algorithms offer minimal accuracy when used on images captured through an ophthalmoscope, because the distribution of the testing images is entirely different from that of the training images. Thus, an unsupervised approach is used in this project. Most unsupervised algorithms have a segmentation step before the diameter estimation step. The segmentation step separates the blood vessels from the background, and this segmented vessel map is further used to estimate the diameters. Estimating the diameters can be done in various ways. One way is to track the vessel edges and, for each point on a vessel edge, evaluate the shortest distance to the opposite parallel edge. The other approach involves finding vessel centrelines and clustering the pixels perpendicular to the centreline based on their intensities; the vessels offer high contrast with the background and thus correspond to the low-intensity cluster. We used the latter approach. The processing thus involves two steps:

1. Vessel segmentation and centreline extraction
2. Interpolation normal to the centrelines and clustering
The segmentation step is crucial to locate the vessels in the image. Most segmentation algorithms are supervised in nature and use architectures like RetinaNet trained on the DRIVE and STARE datasets. Again, these algorithms are not suitable for images captured through an ophthalmoscope because of the low field of view and low quality of ophthalmoscopic images. Various unsupervised algorithms for segmentation, based on match filtering, global thresholding, morphological operations, etc., can be found in the literature. The unsupervised algorithms are generally found to attain similar accuracies on the DRIVE and STARE datasets. We have chosen a global thresholding based algorithm for segmentation.

Figure 5: Vessel segmentation: a) Original image from the REVIEW dataset, b) Green channel of the image, c) Green channel enhanced using CLAHE, d) Result of background subtraction, e) Result of global thresholding, f) Segmented image after noise removal

As most studies have shown that the green channel offers the most contrast with the background, the green channel of the image was selected, and Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied to further enhance the image. To obtain a field-of-view (FOV) mask for the image, thresholding with a low threshold was used. To counter the boundary effects of processing, the FOV mask was morphologically eroded using an elliptic structuring element. The CLAHE was applied using a grid of size 9x9 and a contrast clip limit of 3.0. Background subtraction was then carried out to extract the features from the background. The background was estimated by applying a small median filter (size = 5) and a large Gaussian filter (size = 55) to the enhanced image. The enhanced image was then subtracted from the estimated background, and the negative pixels were truncated to 0. Global thresholding was then applied with a low threshold value (≈ ). The result was a segmented binary map of the retinal vessels. The results of the various steps on an image can be seen in Figure 5.

The segmented vessel map was used to estimate an upper limit for the vessel diameters, using the distance transform. The distance transform maps each white pixel of a binary image to its least distance from the black pixels. Twice the maximum value computed by the distance transform corresponds to the maximum vessel diameter. This estimate is crucial for the subsequent steps, as explained further below. Once the segmented vessel map is obtained, the centreline can be obtained using skeletonization. We used the Zhang-Suen thinning algorithm to obtain the centrelines from the segmented vessel map.
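The segmentation and maximum-diameter steps above can be sketched in a few lines. This is a minimal illustration, not the device code: it assumes SciPy is available, omits the CLAHE enhancement and FOV masking (in OpenCV, CLAHE would be `cv2.createCLAHE(clipLimit=3.0, tileGridSize=(9, 9))`), approximates the size-55 Gaussian by a sigma of 55/6, and uses function names of our own.

```python
import numpy as np
from scipy import ndimage

def segment_vessels(rgb, thresh=20):
    """Unsupervised vessel segmentation via background subtraction and a
    global threshold (CLAHE enhancement omitted for brevity)."""
    green = rgb[..., 1].astype(float)        # green channel: best vessel contrast
    # Estimate the slowly varying background: small median filter followed by
    # a large Gaussian filter, as described in the text.
    background = ndimage.gaussian_filter(
        ndimage.median_filter(green, size=5), sigma=55 / 6.0)
    # Vessels are darker than the background; truncate negative pixels to 0.
    features = np.clip(background - green, 0, None)
    return features > thresh                 # binary vessel map

def max_vessel_diameter(mask):
    """Upper bound on vessel diameter: twice the maximum of the distance
    transform of the segmented vessel map."""
    return 2.0 * ndimage.distance_transform_edt(mask).max()
```

On a synthetic image with a dark vertical stripe on a bright background, `segment_vessels` recovers the stripe and `max_vessel_diameter` returns an upper bound on its width, mirroring how the estimate is used to size the normals later.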
A morphological closing operation with a linear structuring element applied at every 15° was used to fill broken centrelines. The resulting centreline map contains bifurcation points, and removing these points was an essential step because calculating normals at these points is ambiguous. The bifurcation points were detected using template kernels, as shown in Figure 6, applied at every 15°. The detected bifurcation points were removed, and the resulting image was the centreline map. Due to the removal of bifurcation points, some noise was expected to be induced in the image and had to be removed: lines shorter than 25 px were discarded. The vessel centrelines were randomly colored for better visualization. The results at each step are shown in Figure 7.

Figure 6: Template kernels to detect bifurcation points

Figure 7: Centreline extraction: a) Segmented vessel map, b) Skeletonization using the Zhang-Suen thinning algorithm, c) Filled broken centrelines, d) Detected bifurcation points in green, e) Removing bifurcation points and noise, f) Resulting vessel centrelines

To select a particular vessel centreline from the centreline map, input from the user was used. The user was prompted to click on the screen, and the centreline closest to the clicked point was selected. The points on the centreline were stored, and normals were to be drawn at each of these points. To draw a normal at a point, we need the slope at that point, and to calculate the slope, a window of 6 pixels centered at that point was used. The maximum-diameter estimate computed earlier from the distance transform was the most crucial element in drawing the normals: if the estimated maximum diameter is d, then the normals drawn were of length 1.5 × d. This ensured that no part of the vessel along the normal was missed. Points along each normal were sampled at unit spacing, and the intensity value at each of these points was computed using bilinear interpolation. Thus, we can plot the intensity values along any of the normals; a typical such variation is shown in Figure 8, c).
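The slope, normal, and bilinear-interpolation steps can be sketched as follows. This is a simplified illustration with a function name of our own; the tangent is estimated from the 6-pixel window mentioned above, and boundary handling is omitted (the whole normal is assumed to lie inside the image).

```python
import numpy as np

def normal_profile(image, points, idx, length, window=6):
    """Intensity profile along the normal to a centreline at points[idx].

    points: (N, 2) float array of ordered (row, col) centreline coordinates.
    length: length of the normal (the text uses 1.5 x the maximum diameter).
    """
    # Estimate the tangent from a small window of centreline points around idx.
    lo = max(0, idx - window // 2)
    hi = min(len(points) - 1, idx + window // 2)
    tangent = points[hi] - points[lo]
    tangent = tangent / np.linalg.norm(tangent)
    normal = np.array([-tangent[1], tangent[0]])      # 90-degree rotation
    ts = np.linspace(-length / 2.0, length / 2.0, int(np.ceil(length)) + 1)
    profile = []
    for t in ts:
        r, c = points[idx] + t * normal
        r0, c0 = int(np.floor(r)), int(np.floor(c))
        dr, dc = r - r0, c - c0
        # Bilinear interpolation between the four surrounding pixels.
        profile.append(image[r0, c0] * (1 - dr) * (1 - dc)
                       + image[r0, c0 + 1] * (1 - dr) * dc
                       + image[r0 + 1, c0] * dr * (1 - dc)
                       + image[r0 + 1, c0 + 1] * dr * dc)
    return np.array(profile)
```

For a vertical vessel, the normal is horizontal, and the sampled profile dips at the vessel and returns to the background intensity at its ends, which is exactly the valley shape shown in Figure 8, c).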
All such normal variations can be stacked together to obtain a 3D plot of the intensity variations aligned along a common center, as shown in Figure 8, d). One can observe from the plot that the vessel corresponds to the 3D valley. Another way to visualize these stacked, aligned intensity variations is as an image, shown in Figure 9, a); the vascular pixels are of lower intensity in such an image. To obtain the diameters, the pixels in this image can be clustered together, and the lowest-intensity cluster will correspond to the vessel. Thus, K-Means clustering was used to form 3 clusters in the image, and the resulting clustered image is shown in Figure 9, b). To separate the vascular pixels, thresholding was performed about the lowest-intensity cluster, and the resulting thresholded image is shown in Figure 9, c).

The resulting image presents a profile of the vessel aligned horizontally. However, it was noisy and had breaks. The central light reflex (CLR) in vessels was also a hurdle: in images with CLR, the centreline is considerably bright and thus usually gets clustered as non-vascular. To make the algorithm immune to CLR, broken vertical lines in the profile were filled, filling in the false-negative pixels. Further, the vessel is supposed to be symmetric about the centreline. To exploit this symmetry, the image was flipped about the horizontal axis, and a logical AND operation was performed to find the common vascular region between the flipped and the original images. The resulting image is shown in Figure 9, d). To further smoothen the profile, horizontal points separated by a distance of less than 20 pixels were joined, and the resulting final estimated vessel profile is shown in Figure 9, e). To compute the diameters, the number of white pixels along each vertical line (corresponding to the earlier normals) was counted. The computed diameters are shown in Figure 8, e).

Figure 8: Diameter estimation: a) Selected vessel in white, b) Pixels interpolated normal to the selected vessel using the maximum diameter estimated earlier from the distance transform, c) Intensity variations along the normal to a particular single pixel on the selected vessel centreline, d) Intensity variations along the points on the selected centreline, e) Estimated diameters

Figure 9: Diameter estimation: a) Intensity variations along the normals visualized as an image, b) Result of clustering the intensity variation image into 3 clusters using K-Means clustering, c) Result of thresholding about the lowest-intensity cluster, d) Exploiting symmetry about the centreline using a logical AND operation between the image and its mirror image about the centreline, e) Filling broken horizontal lines to smoothen the vessel profile
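The clustering, symmetry, and pixel-counting steps can be sketched as follows. This is a simplified stand-in, not the device code: a tiny deterministic 1-D k-means on the intensities replaces a full K-Means library call, the gap-filling steps are omitted, and the function names are our own.

```python
import numpy as np

def lowest_intensity_cluster(profile_img, k=3, iters=25):
    """Cluster pixel intensities into k groups and keep the lowest-intensity
    cluster, which corresponds to the vessel (tiny 1-D k-means)."""
    vals = profile_img.ravel().astype(float)
    centers = np.percentile(vals, np.linspace(5, 95, k))   # deterministic init
    for _ in range(iters):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vals[labels == j].mean()
    return (labels == np.argmin(centers)).reshape(profile_img.shape)

def vessel_diameters(vessel_mask):
    """Exploit symmetry about the centreline row (logical AND with the
    vertically flipped mask), then count vessel pixels along each normal."""
    sym = vessel_mask & np.flipud(vessel_mask)
    return sym.sum(axis=0)
```

The symmetry AND is what removes asymmetric artifacts: any cluster of falsely detected pixels on one side of the centreline has no mirror partner and is discarded, while the true vessel band, being symmetric, survives.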
The global thresholding based segmentation algorithm was tested on the DRIVE dataset, and the diameter estimation algorithm on the CLRIS section of the REVIEW dataset. To test the segmentation algorithm, 20 manually annotated images were taken from the dataset. Predictions were made for each image and compared with the ground truth to find the numbers of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). The metrics used to evaluate the predictions were pixel accuracy, sensitivity and specificity. The algorithm achieved an average pixel accuracy of 94.22% with 69.20% sensitivity and 96.63% specificity. Detailed results are shown in Table 2.
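These pixel-wise metrics follow directly from the four counts; a minimal sketch (function name ours):

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel accuracy, sensitivity and specificity from binary vessel maps."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)     # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)   # background pixels correctly rejected
    fp = np.sum(pred & ~truth)    # background wrongly marked as vessel
    fn = np.sum(~pred & truth)    # vessel pixels missed
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```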
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

Sensitivity = TP / (TP + FN)    (2)

Specificity = TN / (TN + FP)    (3)

For the diameter estimation algorithm, the CLRIS section of the REVIEW dataset was used. Predictions were made for various annotated vessel segments, and the standard-deviation error (σ_error) was the metric used to judge the algorithm. For a given vessel profile i, let χ_i = ω_i − ψ_i, where ω_i is the estimated width and ψ_i is the corresponding ground truth. With n_p the total number of points, the standard deviation of the width differences is given by

σ_error = sqrt( (1 / (n_p − 1)) Σ_{i=1}^{n_p} (χ_i − µ_error)² )    (4)

where

µ_error = (1 / n_p) Σ_{i=1}^{n_p} χ_i    (5)

The average σ_error achieved by the algorithm is 0.40, which is within the range 0.2 - 1.2 achieved by the algorithms in the literature. Detailed results are shown in Table 3.

A 2-second long video of fundoscopy, captured through an ophthalmoscope, was sampled at 30 frames per second to obtain 60 images. Each image was individually processed using the above algorithms, and the diameters of a particular artery and a particular vein were tracked. The results are shown in Figure 10. The blue curve is the estimate from the algorithm; the orange curve is its smoothened version, obtained with a Savitzky-Golay filter (window size 5) followed by a low-pass filter with a cutoff frequency of 2 Hz. The resulting curve clearly shows the sinusoidal variations corresponding to the SVP, as per expectations.
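Given such a smoothened diameter time-series, the pulse rate can be recovered from the spacing of its local extrema. A minimal sketch, assuming a 30 fps series and with a function name of our own:

```python
import numpy as np

def heart_rate(diameters, fps=30.0):
    """Pulse rate (bpm) from a diameter time-series: find local maxima and
    minima, average the consecutive extremum spacing, and double it to get
    the period (smoothing is assumed to have been done upstream)."""
    d = np.asarray(diameters, float)
    interior = np.arange(1, len(d) - 1)
    maxima = interior[(d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])]
    minima = interior[(d[1:-1] < d[:-2]) & (d[1:-1] < d[2:])]
    extrema = np.sort(np.concatenate([maxima, minima]))
    half_period = np.diff(extrema).mean() / fps   # seconds, max-to-min spacing
    return 60.0 / (2.0 * half_period)             # beats per minute
```

For a clean sinusoidal diameter variation at 1.2 Hz sampled at 30 fps, this returns 72 bpm, matching the period of the input.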
Due to the stretchy nature of the vasculature underlying the SVP, when the arteriole diameter increases, the venule diameter is expected to decrease. Thus, the arteriole and venule diameter-variation curves are expected to be 180° out of phase, which is apparent in the estimated curves.

Figure 10: Time variation of vessel diameters. Blue: original data; Orange: smoothened data

Figure 11: Estimating heart-rate by computing local maxima and minima

Table 2: Performance of the segmentation algorithm on the DRIVE dataset
Image     Accuracy  Sensitivity  Specificity
Image-1   95.45%    73.25%       97.24%
Image-2   95.01%    65.36%       97.96%
Image-3   88.13%    74.71%       89.08%
Image-4   94.37%    68.61%       97.75%
Image-5   94.32%    54.47%       98.55%
Image-6   95.15%    66.75%       97.75%
Image-7   94.79%    67.40%       97.43%
Image-8   94.56%    65.58%       97.70%
Image-9   94.81%    69.22%       97.16%
Image-10  95.77%    62.89%       98.57%
Image-11  93.50%    76.17%       94.61%
Image-12  95.72%    70.89%       97.93%
Image-13  95.55%    71.74%       97.65%
Image-14  87.41%    72.51%       89.03%
Image-15  95.20%    71.17%       97.48%
Image-16  94.14%    66.59%       97.51%
Image-17  94.31%    69.32%       96.70%
Image-18  95.14%    72.87%       97.24%
Image-19  95.16%    71.15%       97.42%
Image-20  95.91%    73.27%       97.77%
Average   94.22%    69.20%       96.63%

To estimate the heart rate from the resulting pulsation plot, the local maxima and minima of the smoothened curve were first computed. The average time-separation between each consecutive minimum and maximum was then computed, as shown in Figure 11. The estimated time-period of the pulse is twice the computed time-separation. The average time-separation for the vein came out to be , and for the artery, . The heart-rate/pulse-rate is the number of periodic cycles occurring in a minute (60 s); thus, the heart-rate can be obtained by dividing 60 by the estimated time-period. The heart-rate for the vein came out to be , and for the artery, . Thus, the average heart-rate was found to be .

CONCLUSIONS
The proposed device, attached to an ophthalmoscope, fulfills the need for fundoscopy with in-situ vasculature analysis. The results discussed above are in harmony with the predictions, and the accuracy achieved by the algorithms is comparable to that of the state of the art. The efficiency of the algorithms is a topic of future work. Complete automation of the device is also currently a bottleneck, because the automatic alignment is susceptible to noise; the pulsations get lost in the noise, and no meaningful data is extractable. This problem can be addressed by employing commercial registration programs like i2k-Retina-align, but these programs cannot process the images in real time. Thus, completely automated estimation in real time with high efficiency is currently being worked on and will be part of a future project.

Table 3: Performance of the diameter estimation algorithm on the REVIEW/CLRIS dataset
Image    Segment     µ_mean (px)  σ_mean (px)  µ_error (px)  σ_error (px)
Image-1  Segment-1   14.54        1.21         -1.74
Image-1  Segment-2   14.00        0.89         -2.22
Image-1 Segment-1 17.08 0.79 -1.89
Image-2 Segment-1 11.62 0.96 0.03
Image-2 Segment-2 18.20 0.68 -2.58
Image-2 Segment-3 17.79 0.70 -2.97
Image-2 Segment-4 17.18 1.66 -2.64
Image-2 Segment-5 14.70 1.16 -1.84
Image-2 Segment-6 12.57 1.99 1.48
Image-2 Segment-7 13.67 1.15 1.79
Image-2 Segment-8 12.00 1.41 0.28
Image-2 Segment-9 12.20 0.84 -1.11
Image-2 Segment-10 10.5 0.93 0.49
Image-2 Segment-11 11.09 2.81 -2.26
Image-2 Segment-12 14.10 1.66 0.73
Image-2 Segment-13 14.44 0.70 -1.90
Image-2 Segment-14 11.32 1.42 -0.48
Image-2 Segment-15 8.29 0.96 0.07
Image-2 Segment-16 17.43 4.59 4.07
Image-2 Segment-17 8.31 2.46 0.56
Image-2 Segment-18 7.00 1.75 -0.76
Average