Kartik Venkataraman
Micron Technology
Publication
Featured research published by Kartik Venkataraman.
International Conference on Computer Graphics and Interactive Techniques | 2013
Kartik Venkataraman; Dan Lelescu; Jacques Duparre; Andrew Kenneth John McMahon; Gabriel Molina; Priyam Chatterjee; Robert H. Mullis; Shree K. Nayar
We present PiCam (Pelican Imaging Camera-Array), an ultra-thin, high-performance monolithic camera array that captures light fields and synthesizes high-resolution images along with a range image (scene depth) through integrated parallax detection and superresolution. The camera is passive, supports both stills and video, is low-light capable, and is small enough to be included in the next generation of mobile devices, including smartphones. Prior works [Rander et al. 1997; Yang et al. 2002; Zhang and Chen 2004; Tanida et al. 2001; Tanida et al. 2003; Duparre et al. 2004] in camera arrays have explored multiple facets of light field capture, including viewpoint synthesis, synthetic refocus, range-image computation, high-speed video, and the micro-optical aspects of system miniaturization. However, none of these have addressed the modifications needed to achieve the strict form factor and image quality required to make array cameras practical for mobile devices. In our approach, we customize many aspects of the camera array, including lenses, pixels, sensors, and software algorithms, to achieve imaging performance and form factor comparable to existing mobile phone cameras. Our contributions to the post-processing of images from camera arrays include a cost function for parallax detection that integrates across multiple color channels, and a regularized image restoration (superresolution) process that takes into account all the system degradations and adapts to a range of practical imaging conditions. The registration uncertainty from the parallax detection process is integrated into a Maximum-a-Posteriori formulation that synthesizes an estimate of the high-resolution image and scene depth. We conclude with examples of our array's capabilities, such as post-capture (still) refocus, video refocus, view synthesis to demonstrate motion parallax, and 3D range images, and briefly address future work.
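The multi-channel parallax cost mentioned in the abstract can be illustrated with a minimal sketch: a candidate disparity is scored by a sum of absolute differences between a reference view and a shifted alternate view, accumulated over all color channels. All names, shapes, and values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def parallax_cost(ref, alt, disparity, axis=1):
    """Sum-of-absolute-differences cost for one candidate disparity,
    integrated across all color channels (a toy stand-in for the
    multi-channel cost function described in the abstract)."""
    shifted = np.roll(alt, disparity, axis=axis)
    return np.abs(ref - shifted).sum(axis=-1)  # integrate over channels

# Toy example: two 8x8 3-channel views related by a 2-pixel horizontal shift.
rng = np.random.default_rng(0)
ref = rng.random((8, 8, 3))
alt = np.roll(ref, -2, axis=1)  # camera offset induces a 2-px parallax

# Evaluate the mean cost for each candidate disparity and pick the minimum.
costs = {d: parallax_cost(ref, alt, d).mean() for d in range(4)}
best = min(costs, key=costs.get)
```

In this noise-free toy, the cost reaches zero exactly at the true 2-pixel disparity; the real system additionally handles occlusion, noise, and sub-pixel shifts.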
IEEE Transactions on Electron Devices | 2009
Junqing Chen; Kartik Venkataraman; Dmitry Bakin; Brian Rodricks; Robert Gravelle; Pravin Rao; Yongshen Ni
A digital camera is a complex system including a lens, a sensor (physics and circuits), and a digital image processor, where each component is a sophisticated system on its own. Since prototyping a digital camera is very expensive, it is highly desirable to be able to explore the system design tradeoffs and preview the system output ahead of time. An empirical digital imaging system simulation that aims to achieve this goal is presented. It traces the photons reflected by the objects in a scene through the optics and color filter array, converts photons into electrons while accounting for noise introduced by the system, quantizes the accumulated voltage to digital counts with an analog-to-digital converter, and generates a Bayer raw image just as a real camera does. The simulated images are validated against real system outputs and show a close resemblance to images captured under similar conditions at all illumination levels.
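The photons-to-digital-counts chain described above can be sketched for a single pixel: quantum conversion, Poisson shot noise, additive read noise, full-well clipping, and ADC quantization. The parameter values here are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

def simulate_pixel(photons, qe=0.5, read_noise_e=2.0,
                   full_well_e=10000, bit_depth=10, rng=None):
    """Toy single-pixel sensor model (illustrative parameters):
    photons -> electrons (quantum efficiency + Poisson shot noise),
    plus Gaussian read noise, clipped at full well, then quantized
    by an ideal ADC into digital numbers (DN)."""
    rng = rng or np.random.default_rng(0)
    electrons = rng.poisson(photons * qe)                # shot noise
    electrons = electrons + rng.normal(0.0, read_noise_e)  # read noise
    electrons = np.clip(electrons, 0, full_well_e)         # full-well limit
    dn = np.round(electrons / full_well_e * (2**bit_depth - 1))
    return int(dn)
```

A full simulator would apply this per pixel behind a Bayer color filter array, with the optics modeled upstream as a wavelength-dependent blur.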
Electronic Imaging | 2005
Brian G. Rodricks; Kartik Venkataraman
The new generation of digital still cameras (DSCs) is capable of capturing raw data that makes it possible to measure the fundamental metrics of the camera. Although CCDs are used in a majority of DSCs, the number of cameras with CMOS-based sensors is increasing. Using first principles, the performance of comparable CCD- and CMOS-based DSCs is measured. The performance metrics measured are electronic noise, signal-to-noise ratio, linearity, dynamic range, spatial resolution, and sensitivity. The dark noise and dark current are measured as a function of exposure time and ISO speed. The signal response and signal-to-noise response are measured as a function of intensity and ISO speed. The spatial resolution is measured in terms of the modulation transfer function (MTF) using both raw and rendered data. The spectral sensitivity is measured in terms of camera constants. Subjective image quality is also measured using scenes that exhibit limiting performance.
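Two of the measurements above are easy to sketch: temporal noise estimated from a pair of identical-exposure flat-field frames (differencing cancels fixed-pattern noise), and an idealized SNR-versus-signal curve for a shot-noise-limited sensor with additive read noise. Names and parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def temporal_noise(frame_a, frame_b):
    """Temporal noise from two identical-exposure frames: differencing
    removes fixed-pattern noise; sqrt(2) corrects for summing two
    independent noise realizations."""
    diff = frame_a.astype(float) - frame_b.astype(float)
    return diff.std() / np.sqrt(2)

def snr_curve(mean_signals_e, read_noise_e=3.0):
    """Idealized SNR for a sensor dominated by shot noise plus
    additive read noise, with signals in electrons."""
    s = np.asarray(mean_signals_e, dtype=float)
    return s / np.sqrt(s + read_noise_e**2)
```

At high signal the curve approaches the shot-noise limit sqrt(s); at low signal the read-noise floor dominates, which is where the CCD/CMOS comparison in the paper is most revealing.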
Rundbrief der GI-Fachgruppe 5.10 Informationssystem-Architekturen | 2014
Jacques Duparre; Kartik Venkataraman
We present the optical aspects of an ultra-thin, high-performance monolithic camera array that captures light fields and synthesizes high-resolution images along with a range image (scene depth) through integrated parallax detection and superresolution. The camera is passive, supports both stills and video, is low-light capable, and is small enough to be included in the next generation of mobile devices, including smartphones.
Proceedings of SPIE | 2009
Dan Lelescu; Kartik Venkataraman; Rob Mullis; Pravin Rao; Cheng Lu; Junqing Chen; Brian Keelan
We describe a solution for image restoration in a computational camera known as an extended depth of field (EDOF) system. The specially designed optics produce point spread functions that are roughly invariant with object distance over a range. However, this invariance involves a trade-off with the peak sharpness of the lens. The lens blur is a function of lens field height, and the imaging sensor introduces signal-dependent noise. In this context, the principal contributions of this paper are: a) the modeling of the EDOF focus recovery problem; and b) an adaptive EDOF focus recovery approach that operates in signal-dependent noise. The focus recovery solution adapts to the complexities of an EDOF imaging system and performs joint deblurring and noise suppression. It also adapts to imaging conditions by accounting for the state of the sensor (e.g., low-light conditions).
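The joint deblurring and noise suppression described above can be illustrated with the classical frequency-domain Wiener filter, a much simpler stand-in for the paper's adaptive restoration: a noise-to-signal ratio (NSR) term regularizes the inversion of the blur. All names and values are illustrative assumptions.

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Wiener deconvolution: inverts the blur while the NSR term
    suppresses noise amplification at frequencies where the optical
    transfer function is weak (a simplified, non-adaptive analogue
    of the restoration described in the abstract)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Demo: blur a random image with a 3x3 box PSF, then restore it.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(psf, s=img.shape)))
restored = wiener_deblur(blurred, psf, nsr=1e-8)  # tiny NSR: noise-free demo
```

With real, signal-dependent sensor noise the NSR must vary with local signal level, which is the crux of the adaptive approach the paper develops.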
Electronic Imaging | 2006
Brian G. Rodricks; Kartik Venkataraman; Peter B. Catrysse; Brian A. Wandell
Precise simulation of digital camera architectures requires an accurate description of how the radiance image is transformed by the optics and sampled by the image sensor array. Both for diffraction-limited imaging and for all practical lenses, the width of the optical point-spread function differs at each wavelength. These differences are relatively small compared to coarse pixel sizes (6μm-8μm). But as pixel size decreases, to say 1.5μm-3μm, wavelength-dependent point-spread functions have a significant impact on the sensor response. We provide a theoretical treatment of how the interaction of spatial and wavelength properties influences the response of high-resolution color imagers. We then describe a model of these factors and an experimental evaluation of the model's computational accuracy.
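The wavelength dependence can be sketched by building a separate sampled PSF per color channel: a wider PSF spreads the same energy over more pixels, which matters once the pixel pitch shrinks below the PSF width. The Gaussian shape, widths, and pitch below are made-up illustrative values, not the paper's data.

```python
import numpy as np

def gaussian_psf(sigma_um, pixel_um, size=9):
    """Sampled 2D Gaussian PSF (toy optics model) for a given
    width sigma (um) on a grid with the given pixel pitch (um),
    normalized to unit energy."""
    ax = (np.arange(size) - size // 2) * pixel_um
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_um**2))
    return psf / psf.sum()

# Illustrative per-wavelength widths on a 1.75 um pixel pitch:
# a wider PSF at longer wavelengths spreads energy across more pixels.
psfs = {w: gaussian_psf(s, pixel_um=1.75)
        for w, s in [("450nm", 1.0), ("550nm", 1.2), ("650nm", 1.5)]}
```

Convolving each channel of a scene with its own PSF before CFA sampling captures the effect the paper analyzes; at a 7 μm pitch the same PSFs would be nearly delta functions, which is why coarse pixels hide the wavelength dependence.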
Archive | 2009
Kartik Venkataraman; Amandeep S. Jabbi; Robert H. Mullis
Archive | 2013
Florian Ciurea; Kartik Venkataraman; Gabriel Molina; Dan Lelescu
Archive | 2010
Dan Lelescu; Gabriel Molina; Kartik Venkataraman
Archive | 2014
Kartik Venkataraman; Semyon Nisenzon; Priyam Chatterjee; Gabriel Molina