Adithya Kumar Pediredla
Rice University
Publications
Featured research published by Adithya Kumar Pediredla.
International Conference on Computer Vision | 2015
Ryuichi Tadano; Adithya Kumar Pediredla; Ashok Veeraraghavan
Time-of-flight (ToF) cameras use a temporally modulated light source and measure the correlation between the reflected light and a sensor modulation pattern in order to infer scene depth. In this paper, we show that such correlational sensors can also be used to selectively accept or reject light rays from certain scene depths. The basic idea is to carefully select illumination and sensor modulation patterns such that the correlation is non-zero only in the selected depth range; light reflected from objects outside this range therefore does not affect the correlational measurements. We demonstrate a prototype depth-selective camera and highlight two potential applications: imaging through scattering media and virtual blue screening. Depth selectivity can be used to reject back-scattering and reflections from media in front of the subjects of interest, thereby significantly enhancing the ability to image through scattering media, which is critical for applications such as car navigation in fog and rain. Similarly, depth selectivity can serve as a virtual blue screen in cinematography by rejecting light reflected from the background while selectively retaining light contributions from the foreground subject.
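The depth-gating idea can be illustrated numerically. The sketch below is a toy simulation under assumed parameters (a pseudo-random binary code and 100 ps sampling, not the authors' actual modulation design): the sensor demodulates with copies of the illumination code shifted only into the accepted delay window, so returns from depths outside that window correlate to roughly zero.

```python
import numpy as np

c = 3e8                 # speed of light, m/s
dt = 1e-10              # 100 ps simulation time step
N = 20000
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=N)     # pseudo-random illumination code

def measurement(depth_m, z_min, z_max):
    """Correlate the return from depth_m against a sensor pattern that
    accepts only round-trip delays corresponding to [z_min, z_max]."""
    shift = int(round(2 * depth_m / c / dt))   # round-trip delay, in samples
    ret = np.roll(code, shift)                 # reflected light (unit albedo)
    s0 = int(round(2 * z_min / c / dt))
    s1 = int(round(2 * z_max / c / dt))
    sensor = sum(np.roll(code, s) for s in range(s0, s1 + 1))
    return float(ret @ sensor) / N             # normalized correlation

in_band = measurement(1.5, 1.0, 2.0)    # object inside the 1-2 m gate
out_band = measurement(4.0, 1.0, 2.0)   # object outside the gate
```

For the in-band depth the code's autocorrelation peak survives (a value near 1), while the out-of-band return averages to near zero; this is the selectivity used to suppress backscatter and background light.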
IEEE Global Conference on Signal and Information Processing | 2015
J. R. Harish Kumar; Adithya Kumar Pediredla; Chandra Sekhar Seelamantula
Glaucoma is currently one of the major causes of vision loss worldwide. Although lost vision cannot be recovered, preventive steps such as clinical diagnosis and appropriate treatment can minimize visual impairment due to glaucoma. Clinical diagnosis requires manual examination and outlining of the optic disc and cup, which is subjective and time consuming. In this paper, we propose a methodology for automatic segmentation and outlining of the optic disc using an active disc formulation. The method uses a disc template as a prototype, and the initialization is automated using a matched filtering technique. We choose the disc to be isotropic and allow for translation, isotropic scaling, and optimization of the corresponding parameters. The active disc evolves towards the boundary of the optic disc by minimizing a local energy function. We report validations on three publicly available databases, MESSIDOR, DRIONS-DB, and Drishti-GS, containing 1200, 110, and 101 retinal fundus images, respectively. The corresponding F-scores of 0.8456, 0.8380, and 0.9077 demonstrate the robustness of the proposed algorithm and its high agreement with expert segmentations.
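The matched-filtering initialization can be sketched as template correlation. The toy below is my construction (a synthetic image with made-up disc position and radius, not the paper's pipeline): it locates a bright disc by circularly cross-correlating the image with a zero-mean disc template via the FFT.

```python
import numpy as np

def disc(shape, center, radius):
    """Binary disc mask of the given radius centered at center."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return ((yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2).astype(float)

# synthetic "fundus": dim background plus a bright optic-disc-like blob
img = 0.1 * np.ones((128, 128))
img += disc(img.shape, (40, 90), 12)

tmpl = disc((128, 128), (64, 64), 12)
tmpl -= tmpl.mean()                     # zero-mean template ignores background level

# circular cross-correlation via FFT; ifftshift puts the template center at (0, 0)
score = np.fft.ifft2(np.fft.fft2(img) *
                     np.conj(np.fft.fft2(np.fft.ifftshift(tmpl)))).real
cy, cx = np.unravel_index(np.argmax(score), score.shape)   # estimated disc center
```

In the paper this estimate only initializes the active disc, whose translation and scale are then refined by minimizing the local energy function.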
International Conference on 3D Vision | 2015
Suren Jayasuriya; Adithya Kumar Pediredla; Sriram Sivaramakrishnan; Alyosha Molnar; Ashok Veeraraghavan
A variety of techniques, such as light field, structured illumination, and time-of-flight (ToF) imaging, are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its own limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with ToF imaging advantages, such as high depth resolution and coded signal processing, to resolve multipath interference. We show applications including synthesizing virtual apertures for ToF imaging, improved depth mapping through partial and scattering occluders, and single-frequency ToF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
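The single-frequency phase-unwrapping application rests on the wrapped relation between ToF phase and depth. A minimal sketch, with the modulation frequency chosen arbitrarily at 50 MHz:

```python
import numpy as np

c = 3e8
f = 50e6                                   # modulation frequency (assumed)

def depth_to_phase(d):
    """The measured ToF phase wraps modulo 2*pi."""
    return (4 * np.pi * f * d / c) % (2 * np.pi)

def phase_to_depth(phi):
    return c * phi / (4 * np.pi * f)

ambiguity = c / (2 * f)   # depths differing by this alias to the same phase (3 m here)
```

Depths separated by c/(2f) are indistinguishable from the phase alone; depth fields resolve this ambiguity by combining the angular samples of the light field with the ToF measurements.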
International Conference on Computational Photography | 2017
Adithya Kumar Pediredla; Mauro Buttafava; Alberto Tosi; Oliver Cossairt; Ashok Veeraraghavan
Can we reconstruct the entire internal shape of a room if all we can directly observe is a small portion of one internal wall, presumably through a window in the room? While conventional wisdom may indicate that this is not possible, motivated by recent work on ‘looking around corners’, we show that one can exploit light echoes to reconstruct the internal shape of hidden rooms. Existing techniques for looking around the corner using transient images model the hidden volume using voxels and try to explain the captured transient response as the sum of the transient responses obtained from individual voxels. Such a technique inherently suffers from challenges with regard to low signal-to-background ratios (SBR) and has difficulty scaling to larger volumes. In contrast, in this paper, we argue for using a plane-based model for the hidden surfaces. We demonstrate that such a plane-based model results in much higher SBR while simultaneously being amenable to larger spatial scales. We build an experimental prototype composed of a pulsed laser source and a single-photon avalanche detector (SPAD) that achieves a time resolution of about 30 ps, and demonstrate high-fidelity reconstructions both of individual planes in a hidden volume and of entire polygonal rooms composed of multiple planar walls.
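The transient (light-echo) response of a plane is easy to simulate, which gives a feel for why a plane-based model is so constrained. The toy below uses invented parameters (a co-located pulsed source and SPAD, a 2 m × 2 m hidden plane 1 m away, 30 ps bins as in the prototype) and histograms three-bounce travel times; the earliest return pins down the plane's distance.

```python
import numpy as np

c = 3e8
d = 1.0                                      # hidden plane distance (assumed)
rng = np.random.default_rng(1)

# sample points on a 2 m x 2 m planar patch parallel to the visible wall;
# laser spot and SPAD spot are co-located at the origin on the wall
pts = rng.uniform(-1, 1, size=(20000, 2))
dist = np.sqrt(pts[:, 0] ** 2 + pts[:, 1] ** 2 + d ** 2)
times = 2 * dist / c                         # out-and-back travel time

bins = np.arange(0, 30e-9, 30e-12)           # 30 ps bins, matching the SPAD resolution
hist, _ = np.histogram(times, bins=bins)
t_first = bins[np.nonzero(hist)[0][0]]       # earliest light echo, ~2*d/c
```

Fitting a few plane parameters to such onset times and echo shapes is far better conditioned than explaining the same histogram with thousands of voxel unknowns, which is the SBR argument made above.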
International Conference on Acoustics, Speech, and Signal Processing | 2017
Adithya Kumar Pediredla; Nathan Matsuda; Oliver Cossairt; Ashok Veeraraghavan
Light scattering from diffuse rough surfaces was long assumed to destroy the geometric and photometric information about hidden (non-line-of-sight) objects, making ‘looking around the corner’ (LATC) and ‘non-line-of-sight’ (NLOS) imaging impractical. Recent work pioneered by Kirmani et al. [1] and Velten et al. [2] demonstrated that transient (time-of-flight) information carried by these scattered third-bounce photons can be exploited to solve LATC and NLOS imaging. In this paper, we quantify the geometric and photometric reconstruction limits of LATC and NLOS imaging for the first time using a classical linear-systems approach. The relationship between the albedos of the voxels in a hidden volume and the third-bounce measurements at the sensor is a linear system determined by the geometry and the illumination source. We study this linear system and employ empirical techniques to find the limits of the information contained in the third-bounce photons as a function of various system parameters.
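The linear system described here can be written down directly. In this hedged toy (one laser spot, one SPAD spot, and eight hidden voxels on a line; all positions invented), each voxel deposits its albedo, scaled by the radiometric falloff, into the time bin fixed by its three-bounce path length, and least squares inverts the resulting matrix:

```python
import numpy as np

c = 3e8
dt = 30e-12
nbins = 400

vox = np.stack([np.linspace(-0.5, 0.5, 8),   # 8 hidden voxels on a line...
                np.full(8, 1.0)], axis=1)    # ...1 m behind the visible wall
laser = np.array([-0.3, 0.0])                # laser spot on the wall
spad = np.array([0.1, 0.0])                  # SPAD spot on the wall

# transport matrix: A[time_bin, voxel] = radiometric falloff of the third bounce
A = np.zeros((nbins, vox.shape[0]))
for n, p in enumerate(vox):
    r1 = np.linalg.norm(p - laser)           # laser spot -> voxel
    r2 = np.linalg.norm(p - spad)            # voxel -> SPAD spot
    A[int((r1 + r2) / c / dt), n] += 1.0 / (r1 ** 2 * r2 ** 2)

albedo = np.array([0.0, 1.0, 0.0, 0.5, 0.0, 0.0, 1.0, 0.0])
y = A @ albedo                               # measured transient
rec, *_ = np.linalg.lstsq(A, y, rcond=None)  # recovered albedos
```

The conditioning of A — how distinguishable the voxels' time bins and falloffs are — is precisely what sets the reconstruction limits studied in the paper; with a symmetric laser/SPAD pair, for example, mirror-image voxels would collapse into the same bin and the system would lose rank.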
Optics Express | 2017
Fengqiang Li; Huaijin Chen; Adithya Kumar Pediredla; Chia-Kai Yeh; Kuan He; Ashok Veeraraghavan; Oliver Cossairt
Three-dimensional imaging using time-of-flight (ToF) sensors is rapidly gaining widespread adoption in many applications due to their cost effectiveness, simplicity, and compact size. However, the current generation of ToF cameras suffers from low spatial resolution due to physical fabrication limitations. In this paper, we propose CS-ToF, an imaging architecture that achieves high-spatial-resolution ToF imaging via optical multiplexing and compressive sensing. Our approach is based on the observation that, while depth is non-linearly related to ToF pixel measurements, a phasor representation of the captured images results in a linear image formation model. We utilize this property to develop a CS-based technique for recovering high-resolution 3D images. Based on the proposed architecture, we developed a prototype 1-megapixel compressive ToF camera that achieves as much as a 4× improvement in spatial resolution and a 3× improvement for natural scenes. We believe that the proposed CS-ToF architecture provides a simple and low-cost solution for improving the spatial resolution of ToF and related sensors.
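The phasor observation can be shown in a few lines. In this sketch (invented toy numbers: four scene points, a random 4×4 multiplexing matrix standing in for the optical multiplexer, 30 MHz modulation), depth enters the phasor nonlinearly, but the multiplexed measurement is linear in the phasor, so the inversion is a linear solve (a CS solver in the underdetermined case):

```python
import numpy as np

c = 3e8
f = 30e6                                   # modulation frequency (assumed)

depth = np.array([1.0, 2.0, 1.5, 2.2])     # scene depths, m
amp = np.array([1.0, 0.8, 0.6, 0.9])       # scene amplitudes
phi = 4 * np.pi * f * depth / c
x = amp * np.exp(1j * phi)                 # phasor image: the linear quantity

rng = np.random.default_rng(0)
Phi = rng.normal(size=(4, 4))              # toy optical multiplexing matrix
y = Phi @ x                                # multiplexed sensor measurements

x_rec = np.linalg.solve(Phi, y)            # linear inversion of the phasors
depth_rec = np.angle(x_rec) * c / (4 * np.pi * f)   # depth extracted afterwards
```

Had we tried to multiplex the depths themselves, the arctangent relating phase to raw correlation samples would have made the forward model nonlinear; working in the phasor domain is what keeps the recovery a standard linear/CS problem.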
International Conference on Image Processing | 2016
Sagar Honnungar; Jason Holloway; Adithya Kumar Pediredla; Ashok Veeraraghavan; Kaushik Mitra
Time-of-flight (ToF) imaging is an active method that utilizes a temporally modulated light source and a correlation-based (or lock-in) imager that computes the round-trip travel time from source to scene and back. Much like conventional imaging systems, ToF cameras suffer from a trade-off between depth of field (DOF) and light throughput: larger apertures allow for more light collection but result in a lower DOF. This trade-off is especially crucial in ToF systems because they require active illumination and have limited power, which limits performance in long-range imaging or in strong ambient illumination (such as outdoors). Motivated by recent work on extended-depth-of-field imaging for photography, we propose a focal-sweep-based image acquisition methodology to increase the depth of field and eliminate defocus blur. Our approach allows for a simple inversion algorithm to recover all-in-focus images. We validate our technique through simulation and experimental results, demonstrating a proof-of-concept focal-sweep time-of-flight acquisition system and showing results for a real scene.
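The depth invariance that makes a single inversion possible can be demonstrated with a 1-D toy (my construction: Gaussian defocus kernels whose width grows linearly with focus error, in normalized units). Sweeping the focus through the scene during one exposure makes the integrated blur kernel nearly identical for near and far points, while a fixed focus leaves them wildly different:

```python
import numpy as np

x = np.arange(-32, 33)
sweep = np.linspace(0.0, 1.0, 41)          # focus settings visited in one exposure

def gauss(s):
    """Normalized Gaussian defocus kernel of width s (clamped near zero)."""
    s = max(s, 1e-3)
    g = np.exp(-x ** 2 / (2 * s ** 2))
    return g / g.sum()

def psf_fixed(depth, focus):
    return gauss(5 * abs(depth - focus))   # blur width grows with focus error

def psf_swept(depth):
    return sum(psf_fixed(depth, f) for f in sweep) / sweep.size

# compare the nearest vs farthest scene points (normalized depths 0 and 1)
fixed_gap = np.abs(psf_fixed(0.0, 0.0) - psf_fixed(1.0, 0.0)).sum()
swept_gap = np.abs(psf_swept(0.0) - psf_swept(1.0)).sum()
```

Because the swept kernel barely depends on depth, a single deconvolution with it recovers an all-in-focus image, which is the simple inversion claimed above.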
Journal of Biomedical Optics | 2016
Adithya Kumar Pediredla; Shizheng Zhang; Ben Avants; Fan Ye; Shin Nagayama; Ziying Chen; Caleb Kemere; Jacob T. Robinson; Ashok Veeraraghavan
In most biological tissues, light scattering due to small differences in refractive index limits the depth of optical imaging systems. Two-photon microscopy (2PM), which significantly reduces the scattering of the excitation light, has emerged as the most common method to image deep within scattering biological tissue. This technique, however, requires high-power pulsed lasers that are both expensive and difficult to integrate into compact portable systems. Using a combination of theoretical and experimental techniques, we show that if the excitation path length can be minimized, selective plane illumination microscopy (SPIM) can image nearly as deep as 2PM without the need for a high-power pulsed laser. Compared to other single-photon imaging techniques such as epifluorescence and confocal microscopy, SPIM can image more than twice as deep in scattering media (∼10 times the mean scattering length). These results suggest that SPIM has the potential to provide deep imaging in scattering media in situations in which 2PM systems would be too large or costly.
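The headline depth ratio follows from a ballistic-attenuation argument. The numbers below are illustrative, not from the paper (a 100 µm mean scattering length and a detection floor set at e⁻¹⁰ so that SPIM's reach matches the reported ∼10 scattering lengths); the structural point is the factor of two in path length:

```python
import numpy as np

ls = 100e-6               # mean scattering length (assumed)
floor = np.exp(-10.0)     # smallest detectable ballistic fraction (illustrative)

def max_depth(path_factor):
    """Depth z at which exp(-path_factor * z / ls) falls to the floor."""
    return -ls * np.log(floor) / path_factor

z_epi = max_depth(2.0)    # epifluorescence/confocal: excitation AND detection cross z
z_spim = max_depth(1.0)   # SPIM with minimized excitation path: only detection crosses z
```

Since epifluorescence and confocal pay the scattering penalty twice per photon (in and out) while side-illuminated SPIM pays it essentially once, the reachable depth doubles for the same detection floor.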
International Conference on Image Processing | 2016
Ryuichi Tadano; Adithya Kumar Pediredla; Kaushik Mitra; Ashok Veeraraghavan
ACS Photonics | 2016
Mehbuba Tanzid; Nathaniel J. Hogan; Ali Sobhani; Hossein Robatjazi; Adithya Kumar Pediredla; Adam Samaniego; Ashok Veeraraghavan; Naomi J. Halas