Pratul P. Srinivasan
Duke University
Publications
Featured research published by Pratul P. Srinivasan.
Biomedical Optics Express | 2014
Pratul P. Srinivasan; Leo A. Kim; Priyatham S. Mettu; Scott W. Cousins; Grant M. Comer; Joseph A. Izatt; Sina Farsiu
We present a novel fully automated algorithm for the detection of retinal diseases via optical coherence tomography (OCT) imaging. Our algorithm utilizes multiscale histograms of oriented gradient descriptors as feature vectors of a support vector machine based classifier. The spectral domain OCT data sets used for cross-validation consisted of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 patients with dry age-related macular degeneration (AMD), and 15 patients with diabetic macular edema (DME). Our classifier correctly identified 100% of cases with AMD, 100% of cases with DME, and 86.67% of cases from normal subjects. This algorithm is a potentially impactful tool for the remote diagnosis of ophthalmic diseases.
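A compact sketch of the pipeline described above, multiscale HOG descriptors feeding a linear support vector machine evaluated by cross-validation, is given below. The HOG parameters, pyramid depth, and function names are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: multiscale HOG features + linear SVM for OCT classification.
import numpy as np
from skimage.feature import hog
from skimage.transform import pyramid_gaussian
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def multiscale_hog(bscan, n_scales=3):
    """Concatenate HOG descriptors computed on a Gaussian pyramid of one B-scan."""
    feats = []
    for img in pyramid_gaussian(bscan, max_layer=n_scales - 1):
        feats.append(hog(img, orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.concatenate(feats)

def classify(scans, labels):
    """scans: list of equally sized, preprocessed OCT B-scans;
    labels: per-scan classes such as 'normal', 'AMD', 'DME'."""
    X = np.stack([multiscale_hog(s) for s in scans])
    clf = SVC(kernel="linear")                      # linear SVM on HOG features
    return cross_val_score(clf, X, labels, cv=5)    # cross-validated accuracy
```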
Biomedical Optics Express | 2014
Pratul P. Srinivasan; Stephanie J. Heflin; Joseph A. Izatt; Vadim Y. Arshavsky; Sina Farsiu
Accurate quantification of retinal layer thicknesses in mice as seen on optical coherence tomography (OCT) is crucial for the study of numerous ocular and neurological diseases. However, manual segmentation is time-consuming and subjective. Previous attempts to automate this process were limited to high-quality scans from mice with no missing layers or visible pathology. This paper presents an automatic approach for segmenting retinal layers in spectral domain OCT images using sparsity based denoising, support vector machines, graph theory, and dynamic programming (S-GTDP). Results show that this method accurately segments all present retinal layer boundaries, which can range from seven to ten, in wild-type and rhodopsin knockout mice when compared against manual segmentation, and that it performs more accurately than the commercial automated Diver segmentation software.
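To illustrate the dynamic-programming step in this kind of boundary tracing, the toy function below picks one row per A-scan (column) that minimizes a gradient-based cost while limiting the jump between neighboring columns. The cost term and smoothness limit are assumptions for illustration and are not S-GTDP's exact graph construction.

```python
# Toy dynamic-programming boundary trace on a denoised B-scan.
import numpy as np

def trace_boundary(bscan, max_jump=2):
    """Return one boundary row index per column of a 2D B-scan."""
    # Dark-to-bright vertical transitions get low cost (assumed layer edge).
    grad = -np.diff(bscan.astype(float), axis=0)
    cost = grad - grad.min()
    rows, cols = cost.shape

    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev

    # Backtrack from the cheapest endpoint in the last column.
    boundary = np.empty(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary
```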
Investigative Ophthalmology & Visual Science | 2013
Joo Yong Lee; Stephanie J. Chiu; Pratul P. Srinivasan; Joseph A. Izatt; Cynthia A. Toth; Sina Farsiu; Glenn J. Jaffe
PURPOSE To determine whether a novel automatic segmentation program, the Duke Optical Coherence Tomography Retinal Analysis Program (DOCTRAP), can be applied to spectral-domain optical coherence tomography (SD-OCT) images obtained from different commercially available SD-OCT systems in eyes with diabetic macular edema (DME). METHODS A novel segmentation framework was used to segment the retina, inner retinal pigment epithelium, and Bruch's membrane on images from eyes with DME acquired by one of two SD-OCT systems, Spectralis or Cirrus high definition (HD)-OCT. Thickness data obtained by the DOCTRAP software were compared with those produced by Spectralis and Cirrus. Measurement agreement and its dependence were assessed using intraclass correlation (ICC). RESULTS A total of 40 SD-OCT scans from 20 subjects for each machine were included in the analysis. Spectralis: the mean thickness in the 1-mm central area determined by DOCTRAP and Spectralis was 463.8 ± 107.5 μm and 467.0 ± 108.1 μm, respectively (ICC, 0.999). There was also a high level of agreement in surrounding areas (out to 3 mm). Cirrus: the mean thickness in the 1-mm central area was 440.8 ± 183.4 μm and 442.7 ± 182.4 μm by DOCTRAP and Cirrus, respectively (ICC, 0.999). The thickness agreement in surrounding areas (out to 3 mm) was more variable due to Cirrus segmentation errors in one subject (ICC, 0.734-0.999). After manual correction of the errors, there was a high level of thickness agreement in surrounding areas (ICC, 0.997-1.000). CONCLUSIONS The DOCTRAP software may be useful for comparing retinal thicknesses in eyes with DME across OCT platforms.
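The agreement analysis above reduces to an intraclass correlation between paired thickness measurements (one value per eye from each method). The sketch below computes the ICC(2,1) absolute-agreement form from the two-way ANOVA sums of squares; the paper does not state which ICC variant was used, so treat the choice, and the toy input values, as assumptions.

```python
# Hedged sketch of an intraclass correlation (ICC(2,1)) between two methods.
import numpy as np

def icc_2_1(x):
    """x: (n_subjects, k_raters) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # methods
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Toy example: central-subfield thickness (micrometers) from two methods.
pairs = np.array([[463., 467.], [440., 443.], [510., 512.],
                  [380., 379.], [455., 458.]])
print(icc_2_1(pairs))
```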
computer vision and pattern recognition | 2015
Michael W. Tao; Pratul P. Srinivasan; Jitendra Malik; Szymon Rusinkiewicz; Ravi Ramamoorthi
Light-field cameras are now used in consumer and industrial applications. Recent papers and products have demonstrated practical depth recovery algorithms from a passive single-shot capture. However, current light-field capture devices have narrow baselines and constrained spatial resolution; therefore, the accuracy of depth recovery is limited, requiring heavy regularization and producing planar depths that do not resemble the actual geometry. Using shading information is essential to improve the shape estimation. We develop an improved technique for local shape estimation from defocus and correspondence cues, and show how shading can be used to further refine the depth. Light-field cameras are able to capture both spatial and angular data, suitable for refocusing. By locally refocusing each spatial pixel to its respective estimated depth, we produce an all-in-focus image where all viewpoints converge onto a point in the scene. Therefore, the angular pixels have angular coherence, which exhibits three properties: photo consistency, depth consistency, and shading consistency. We propose a new framework that uses angular coherence to optimize depth and shading. The optimization framework estimates both general lighting in natural scenes and shading to improve depth regularization. Our method outperforms current state-of-the-art light-field depth estimation algorithms in multiple scenarios, including real images.
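The two local cues described above can be illustrated on a 4D light field L[u, v, y, x]: shear (refocus) to a candidate depth, then score defocus by the spatial contrast of the refocused image and correspondence by the variance across angular samples. The array layout and the nearest-neighbor shift below are simplifying assumptions, not the paper's implementation.

```python
# Hedged sketch of defocus and correspondence cue responses at one depth.
import numpy as np

def refocus(L, alpha):
    """Shift-and-align refocusing of L[u, v, y, x] to relative depth alpha."""
    U, V, H, W = L.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    shifted = np.empty_like(L, dtype=float)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            shifted[u, v] = np.roll(L[u, v], (dy, dx), axis=(0, 1))
    return shifted  # angular stack aligned to depth alpha

def depth_cues(L, alpha):
    stack = refocus(L, alpha)
    refocused = stack.mean(axis=(0, 1))
    # Defocus cue: high local contrast where the scene is in focus at alpha.
    gy, gx = np.gradient(refocused)
    defocus = np.hypot(gy, gx)
    # Correspondence cue: low angular variance where alpha matches true depth.
    correspondence = stack.var(axis=(0, 1))
    return defocus, correspondence
```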
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017
Michael W. Tao; Pratul P. Srinivasan; Sunil Hadap; Szymon Rusinkiewicz; Jitendra Malik; Ravi Ramamoorthi
Light-field cameras are quickly becoming commodity items, with consumer and industrial applications. They capture many nearby views simultaneously using a single image with a micro-lens array, thereby providing a wealth of cues for depth recovery: defocus, correspondence, and shading. In particular, apart from conventional image shading, one can refocus images after acquisition and shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. We present a principled algorithm for dense depth estimation that combines defocus and correspondence metrics. We then extend our analysis to the additional cue of shading, using it to refine fine details in the shape. By exploiting an all-in-focus image, in which pixels are expected to exhibit angular coherence, we define an optimization framework that integrates photo consistency, depth consistency, and shading consistency. We show that combining all three sources of information (defocus, correspondence, and shading) outperforms state-of-the-art light-field depth estimation algorithms in multiple scenarios.
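Before regularization, the per-pixel depth estimates from the two local cues have to be merged. A minimal way to do this is shown below: each cue votes for the depth label where its response peaks, and the cue whose peak is more distinctive wins. The peak-to-mean confidence measure is an illustrative assumption, not the paper's formulation.

```python
# Hedged sketch: confidence-weighted winner-take-all merge of two depth cues.
import numpy as np

def merge_cues(defocus, correspondence):
    """defocus, correspondence: (n_depths, H, W) responses from a depth sweep.
    Defocus response peaks at the correct depth; correspondence (angular
    variance) is lowest at the correct depth."""
    d_depth = defocus.argmax(axis=0)
    c_depth = correspondence.argmin(axis=0)
    # Peak distinctiveness as a crude per-pixel confidence.
    d_conf = defocus.max(axis=0) / (defocus.mean(axis=0) + 1e-8)
    c_conf = correspondence.mean(axis=0) / (correspondence.min(axis=0) + 1e-8)
    return np.where(d_conf >= c_conf, d_depth, c_depth)
```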
Journal of Neuro-ophthalmology | 2015
Brian E. Goldhagen; M. Tariq Bhatti; Pratul P. Srinivasan; Stephanie J. Chiu; Sina Farsiu; Mays A. El-Dairi
Background: To apply automated spectral domain optical coherence tomography (SD-OCT) segmentation to eyes with resolving papilledema. Methods: Ninety-four patients with idiopathic intracranial hypertension seen at the Duke Eye Center neuro-ophthalmology clinic between November 2010 and October 2011 were reviewed. Excluded were eyes with papilledema with Frisén grade >2, other optic neuropathies or retinopathies, and those that did not have SD-OCT imaging. The remaining 43 patients were split into 2 groups: non-atrophic papilledema and atrophic papilledema. Automated SD-OCT segmentation was performed on patients with non-atrophic papilledema and age-matched controls for each of the 9 regions of the Early Treatment Diabetic Retinopathy Study map. Bonferroni correction was used for multiple comparisons. All SD-OCT scans were reviewed for retinal structural abnormalities. Results: Total macular thickness was significantly reduced within the fovea and inner macular ring in non-atrophic papilledema vs control eyes (266 vs 276 μm, P = 0.04; 333 vs 344 μm, P < 0.01; n = 26 non-atrophic papilledema, 30 controls). SD-OCT demonstrated thinning within the fovea, inner macular ring, and outer macular ring of the outer plexiform layer plus nuclear layer in non-atrophic papilledema vs control (124 vs 131 μm, P < 0.01; 112 vs 118 μm, P = 0.03; 95 vs 100 μm, P = 0.03). Retinal structural changes were seen in 21/33 eyes with atrophic papilledema vs none of the eyes with non-atrophic papilledema or controls. Conclusions: SD-OCT shows qualitative and quantitative changes in the macula of eyes with resolved papilledema.
international conference on computer vision | 2015
Pratul P. Srinivasan; Michael W. Tao; Ren Ng; Ravi Ramamoorthi
2D spatial image windows are used for comparing pixel values in computer vision applications such as correspondence for optical flow and 3D reconstruction, bilateral filtering, and image segmentation. However, pixel window comparisons can suffer from varying defocus blur and perspective at different depths, and can also lead to a loss of precision. In this paper, we leverage the recent use of light-field cameras to propose alternative oriented light-field windows that enable more robust and accurate pixel comparisons. For Lambertian surfaces focused to the correct depth, the 2D distribution of angular rays from a pixel remains consistent. We build on this idea to develop an oriented 4D light-field window that accounts for shearing (depth), translation (matching), and windowing. Our main application is to scene flow, a generalization of optical flow to the 3D vector field describing the motion of each point in the scene. We show significant benefits of oriented light-field windows over standard 2D spatial windows. We also demonstrate additional applications of oriented light-field windows for bilateral filtering and image segmentation.
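The oriented light-field window idea can be sketched directly: around a spatial location, gather the angular samples after shearing the light field to a candidate depth, so that Lambertian points in focus yield consistent angular rays, and then compare two such 4D windows (sum of squared differences here). The window size, shear convention, and nearest-neighbor sampling are assumptions made to keep the example short.

```python
# Hedged sketch of an oriented (sheared) 4D light-field window comparison.
import numpy as np

def oriented_window(L, y, x, alpha, half=2):
    """Extract a sheared 4D window from L[u, v, y, x] centered at (y, x)."""
    U, V, H, W = L.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    win = np.zeros((U, V, 2 * half + 1, 2 * half + 1))
    for u in range(U):
        for v in range(V):
            # Shear: sample this view at a depth-dependent spatial offset.
            yy = int(round(y + alpha * (u - cu)))
            xx = int(round(x + alpha * (v - cv)))
            ys = np.clip(np.arange(yy - half, yy + half + 1), 0, H - 1)
            xs = np.clip(np.arange(xx - half, xx + half + 1), 0, W - 1)
            win[u, v] = L[u, v][np.ix_(ys, xs)]
    return win

def window_distance(w1, w2):
    """Sum of squared differences between two oriented light-field windows."""
    return float(np.sum((w1 - w2) ** 2))
```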
computer vision and pattern recognition | 2017
Pratul P. Srinivasan; Ren Ng; Ravi Ramamoorthi
We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple analytical methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically-blurred light fields.
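The blur formation model described above can be written schematically: the observed light field is the time-average of the sharp light field warped by the camera pose at each instant. In the sketch below the warp (a uniform spatial shift per pose) is only a stand-in; the paper derives the exact 4D transformation induced by 3D camera motion, which this sketch does not reproduce.

```python
# Schematic forward model for a motion-blurred 4D light field L[u, v, y, x].
import numpy as np

def warp(L, pose):
    """Stand-in warp: shift every sub-aperture view by the pose's (dy, dx)."""
    dy, dx = pose
    return np.roll(L, (dy, dx), axis=(2, 3))

def motion_blur(L_sharp, camera_path):
    """Average the warped light field over the sampled camera path."""
    blurred = np.zeros_like(L_sharp, dtype=float)
    for pose in camera_path:
        blurred += warp(L_sharp, pose)
    return blurred / len(camera_path)

# Example path: integer-pixel samples of a horizontal in-plane translation.
path = [(0, dx) for dx in range(-3, 4)]
```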
international conference on computer graphics and interactive techniques | 2017
Steven A. Cholewiak; Gordon D. Love; Pratul P. Srinivasan; Ren Ng; Martin S. Banks
Computer-graphics engineers and vision scientists want to generate images that reproduce realistic depth-dependent blur. Current rendering algorithms take into account scene geometry, aperture size, and focal distance, and they produce photorealistic imagery as with a high-quality camera. But to create immersive experiences, rendering algorithms should aim instead for perceptual realism. In so doing, they should take into account the significant optical aberrations of the human eye. We developed a method that, by incorporating some of those aberrations, yields displayed images that produce retinal images much closer to the ones that occur in natural viewing. In particular, we create displayed images taking the eye's chromatic aberration into account. This produces different chromatic effects in the retinal image for objects farther or nearer than current focus. We call the method ChromaBlur. We conducted two experiments that illustrate the benefits of ChromaBlur. One showed that accommodation (eye focusing) is driven quite effectively when ChromaBlur is used and that accommodation is not driven at all when conventional methods are used. The second showed that perceived depth and realism are greater with imagery created by ChromaBlur than with imagery created conventionally. ChromaBlur can be coupled with focus-adjustable lenses and gaze tracking to reproduce the natural relationship between accommodation and blur in HMDs and other immersive devices. It may thereby minimize the adverse effects of vergence-accommodation conflicts.
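The core rendering idea, depth-dependent blur that differs per color channel according to the eye's longitudinal chromatic aberration, can be sketched as below. The LCA offsets, pupil size, pixel scale, and Gaussian blur kernel are rough assumptions and not the paper's calibrated optical model.

```python
# Hedged sketch: per-channel, depth-dependent blur driven by assumed LCA.
import numpy as np
from scipy.ndimage import gaussian_filter

LCA_OFFSET_D = {"r": +0.4, "g": 0.0, "b": -0.7}   # approx. diopters vs. green
PUPIL_M = 0.004                                    # assumed 4 mm pupil

def chroma_blur(rgb, object_dist_m, focus_dist_m, px_per_rad=6000.0):
    """rgb: (H, W, 3) image of a fronto-parallel plane at object_dist_m."""
    out = np.empty_like(rgb, dtype=float)
    for i, ch in enumerate("rgb"):
        # Defocus in diopters for this channel, including the LCA offset.
        defocus_d = (1.0 / focus_dist_m - 1.0 / object_dist_m) + LCA_OFFSET_D[ch]
        # Blur-circle diameter in radians ~ pupil diameter * |defocus|.
        blur_rad = PUPIL_M * abs(defocus_d)
        sigma_px = 0.5 * blur_rad * px_per_rad     # crude radius-to-sigma map
        out[..., i] = gaussian_filter(rgb[..., i].astype(float), sigma_px)
    return out
```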
international conference on computer vision | 2017
Pratul P. Srinivasan; Tongzhou Wang; Ashwin Sreelal; Ravi Ramamoorthi; Ren Ng