
Publication


Featured research published by Sanjeev J. Koppal.


International Conference on Computer Vision | 2005

Structured light in scattering media

Srinivasa G. Narasimhan; Shree K. Nayar; Bo Sun; Sanjeev J. Koppal

Virtually all structured light methods assume that the scene and the sources are immersed in pure air and that light is neither scattered nor absorbed. Recently, however, structured lighting has found growing application in underwater and aerial imaging, where scattering effects cannot be ignored. In this paper, we present a comprehensive analysis of two representative methods - light stripe range scanning and photometric stereo - in the presence of scattering. For both methods, we derive physical models for the appearances of a surface immersed in a scattering medium. Based on these models, we present results on (a) the condition for object detectability in light striping and (b) the number of sources required for photometric stereo. In both cases, we demonstrate that while traditional methods fail when scattering is significant, our methods accurately recover the scene (depths, normals, albedos) as well as the properties of the medium. These results are in turn used to restore the appearances of scenes as if they were captured in clear air. Although we have focused on light striping and photometric stereo, our approach can also be extended to other methods such as grid coding, gated and active polarization imaging.
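
The photometric stereo half of this analysis builds on the classical clear-air Lambertian model, in which each pixel's brightness is the dot product of the light direction with the albedo-scaled surface normal. Below is a minimal sketch of that baseline solve; it is the traditional method the paper shows failing under significant scattering, and the light directions and images are synthetic placeholders.

```python
import numpy as np

# Clear-air Lambertian photometric stereo: I = L @ (albedo * normal).
# This is only the traditional baseline; the paper derives models that
# additionally account for a scattering medium.

def photometric_stereo(I, L):
    """I: (k, h, w) images under k distant lights; L: (k, 3) light dirs."""
    k, h, w = I.shape
    b, *_ = np.linalg.lstsq(L, I.reshape(k, -1), rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(b, axis=0)
    normals = b / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Tiny synthetic check: a flat surface with normal (0, 0, 1).
L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
n_true = np.array([0.0, 0.0, 1.0])
I = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))
normals, albedo = photometric_stereo(I, L)
assert np.allclose(normals[:, 0, 0], n_true, atol=1e-6)
```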


IEEE Computer Graphics and Applications | 2011

A Viewer-Centric Editor for 3D Movies

Sanjeev J. Koppal; Charles Lawrence Zitnick; Michael F. Cohen; Sing Bing Kang; Bryan Ressler; Alex Colburn

A proposed mathematical framework is the basis for a viewer-centric digital editor for 3D movies that is driven by the audience's perception of the scene. The editing tool allows both shot planning and after-the-fact digital manipulation of the perceived scene shape.
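
Viewer-centric editing rests on stereoscopic viewing geometry: the depth at which the audience perceives a point depends on its on-screen disparity, the viewer's interocular distance, and the viewing distance. The sketch below uses the standard similar-triangles relation Zp = eV / (e - d); this is an illustrative model of perceived depth, not necessarily the paper's exact framework.

```python
import numpy as np

# Standard stereoscopic viewing geometry (an assumption here, not the
# paper's specific framework): a point shown with on-screen disparity d
# is perceived at depth Zp = e * V / (e - d), where e is the viewer's
# interocular distance and V the viewing distance to the screen.

def perceived_depth(d, interocular=0.065, viewing_dist=2.0):
    """d: on-screen disparity in meters (positive = uncrossed)."""
    d = np.asarray(d, dtype=float)
    return interocular * viewing_dist / (interocular - d)

# Zero disparity sits on the screen plane; disparity approaching the
# interocular distance pushes the percept toward infinity.
print(perceived_depth(0.0))   # 2.0 (on the screen)
print(perceived_depth(0.03))  # ~3.71, perceived behind the screen
```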


Computer Vision and Pattern Recognition | 2012

A low-power structured light sensor for outdoor scene reconstruction and dominant material identification

Christoph Mertz; Sanjeev J. Koppal; Solomon Sia; Srinivasa G. Narasimhan

We introduce a compact structured light device that utilizes a commercially available MEMS-mirror-enabled hand-held laser projector. Without complex re-engineering, we show how to exploit the projector's high-speed MEMS mirror motion and laser light sources to suppress ambient illumination, enabling low-cost and low-power reconstruction of outdoor scenes in sunlight. We discuss how the line striping acts as a kind of “light probe”, creating distinctive patterns of light scattered by different types of materials. We investigate visual features that can be computed from these patterns and can reliably identify the dominant material characteristic of a scene, i.e., where most of the objects consist of either diffuse (wood), translucent (wax), reflective (metal) or transparent (glass) materials.
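
At its core, line striping recovers depth by intersecting the camera ray through each lit pixel with the known laser plane. A minimal sketch of that triangulation, with illustrative calibration values:

```python
import numpy as np

# Minimal light-stripe triangulation (the geometric core of line
# striping; the intrinsics and plane below are illustrative placeholders).
# The laser plane n . X = c intersects the camera ray X = t * r through a
# lit pixel, giving the 3D point directly.

def triangulate_stripe(pixel, K, plane_n, plane_c):
    """pixel: (u, v); K: 3x3 intrinsics; plane: n . X = c, camera frame."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = plane_c / (plane_n @ ray)   # solve n . (t * ray) = c for t
    return t * ray                  # 3D point in camera coordinates

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
plane_n = np.array([0.0, np.sin(0.3), np.cos(0.3)])  # tilted laser plane
plane_c = 0.5
print(triangulate_stripe((320, 200), K, plane_n, plane_c))
```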


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Toward Wide-Angle Microvision Sensors

Sanjeev J. Koppal; Ioannis Gkioulekas; Travis Young; Hyunsung Park; Kenneth B. Crozier; Geoffrey Louis Barrows; Todd E. Zickler

Achieving computer vision on microscale devices is a challenge. On these platforms, the power and mass constraints are severe enough for even the most common computations (matrix manipulations, convolution, etc.) to be difficult. This paper proposes and analyzes a class of miniature vision sensors that can help overcome these constraints. These sensors reduce power requirements through template-based optical convolution, and they enable a wide field-of-view within a small form through a refractive optical design. We describe the tradeoffs between the field-of-view, volume, and mass of these sensors and we provide analytic tools to navigate the design space. We demonstrate milliscale prototypes for computer vision tasks such as locating edges, tracking targets, and detecting faces. Finally, we utilize photolithographic fabrication tools to further miniaturize the optical designs and demonstrate fiducial detection onboard a small autonomous air vehicle.
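
The sensors perform the template matching optically, before digitization. The sketch below only emulates the underlying operation, an image-template cross-correlation, digitally via the FFT; the image and template are synthetic placeholders.

```python
import numpy as np

# Digital stand-in for template-based optical convolution: the sensor's
# optics would produce this correlation map directly on the focal plane.

def template_response(image, template):
    """Full 2D cross-correlation of image with template via FFT."""
    h = np.flip(template)  # correlation = convolution with flipped kernel
    H, W = image.shape[0] + h.shape[0] - 1, image.shape[1] + h.shape[1] - 1
    return np.fft.irfft2(np.fft.rfft2(image, (H, W)) *
                         np.fft.rfft2(h, (H, W)), (H, W))

img = np.zeros((32, 32)); img[10:14, 20:24] = 1.0  # a bright blob
tmpl = np.ones((4, 4))                              # matching template
resp = template_response(img, tmpl)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # peak at blob location offset by (template size - 1)
```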


Computer Vision and Pattern Recognition | 2011

Wide-angle micro sensors for vision on a tight budget

Sanjeev J. Koppal; Ioannis Gkioulekas; Todd E. Zickler; Geoffrey Louis Barrows

Achieving computer vision on micro-scale devices is a challenge. On these platforms, the power and mass constraints are severe enough for even the most common computations (matrix manipulations, convolution, etc.) to be difficult. This paper proposes and analyzes a class of miniature vision sensors that can help overcome these constraints. These sensors reduce power requirements through template-based optical convolution, and they enable a wide field-of-view within a small form through a novel optical design. We describe the trade-offs between the field of view, volume, and mass of these sensors and we provide analytic tools to navigate the design space. We also demonstrate milli-scale prototypes for computer vision tasks such as locating edges, tracking targets, and detecting faces.


Computer Vision and Pattern Recognition | 2015

Privacy preserving optics for miniature vision sensors

Francesco Pittaluga; Sanjeev J. Koppal

The next wave of micro and nano devices will create a world with trillions of small networked cameras, leading to increased concerns about privacy and security. Most privacy-preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy-preserving optics that filter or block sensitive information directly from the incident light field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving a high-quality field of view and resolution within the constraints of mass and volume. Our privacy-preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy-preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.), our theory also applies to smaller devices.
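
A schematic of the pre-capture idea: if the optics apply a strong defocus-style blur to the incident light, the sensor never records identifiable detail, yet coarse tasks such as blob detection or people counting remain possible. The scene, blur width, and threshold below are illustrative placeholders, not the paper's specific optical designs.

```python
import numpy as np

# Gaussian low-pass standing in for a privacy-preserving optic: the blur
# happens "before capture", so fine detail is never measured.

def optical_blur(scene, sigma=4.0):
    h, w = scene.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lowpass = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(np.fft.fft2(scene) * lowpass).real

scene = np.zeros((64, 64)); scene[20:40, 25:35] = 1.0  # a "person"
sensed = optical_blur(scene)         # what the sensor actually measures
blob = sensed > 0.5 * sensed.max()   # blob detection / people counting
print(blob.sum(), "pixels in the detected blob")
```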


IEEE Transactions on Image Processing | 2015

Generalized Assorted Camera Arrays: Robust Cross-Channel Registration and Applications

Jason Holloway; Kaushik Mitra; Sanjeev J. Koppal; Ashok Veeraraghavan

One popular technique for multimodal imaging is generalized assorted pixels (GAP), where an assorted pixel array on the image sensor allows for multimodal capture. Unfortunately, GAP is limited in its applicability because it requires multimodal filters that are compatible with semiconductor fabrication processes, and it results in a fixed multimodal imaging configuration. In this paper, we advocate for generalized assorted camera (GAC) arrays for multimodal imaging, i.e., a camera array with filters of different characteristics placed in front of each camera aperture. GAC provides three distinct advantages over GAP: ease of implementation; flexible, application-dependent imaging, since the filters are external and can be changed; and depth information that can be used to enable novel applications (e.g., post-capture refocusing). The primary challenge in GAC arrays is that, since the different modalities are obtained from different viewpoints, accurate and efficient cross-channel registration is needed. Traditional approaches such as sum-of-squared differences, sum-of-absolute differences, and mutual information all result in multimodal registration errors. Here, we propose a robust cross-channel matching cost function, based on aligning normalized gradients, which allows us to compute cross-channel subpixel correspondences for scenes exhibiting nontrivial geometry. We highlight the promise of GAC arrays with our cross-channel normalized gradient cost for several applications such as low-light imaging, post-capture refocusing, skin perfusion imaging using color + near infrared, and hyperspectral imaging.
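
The key ingredient is the cost function: comparing unit-normalized gradient fields makes the match insensitive to channel-dependent gain, which defeats plain SSD or SAD across modalities. A minimal sketch of such a cost follows; the paper's exact formulation may differ in details.

```python
import numpy as np

# Cross-channel matching cost based on aligning normalized gradients:
# gradient *direction and structure* survive modality changes even when
# absolute intensities do not.

def normalized_gradients(patch, eps=1e-6):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2) + eps
    return gx / mag, gy / mag

def cross_channel_cost(patch_a, patch_b):
    """Sum of squared differences between normalized gradient fields."""
    ax, ay = normalized_gradients(patch_a)
    bx, by = normalized_gradients(patch_b)
    return np.sum((ax - bx) ** 2 + (ay - by) ** 2)

# An edge looks the same after a channel-dependent gain: cost stays ~0.
edge = np.tile(np.linspace(0, 1, 16), (16, 1))
print(cross_channel_cost(edge, 5.0 * edge))      # ~0: same structure
print(cross_channel_cost(edge, np.rot90(edge)))  # large: different
```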


European Conference on Computer Vision | 2016

Focal Flow: Measuring Distance and Velocity with Defocus and Differential Motion

Emma Alexander; Qi Guo; Sanjeev J. Koppal; Steven J. Gortler; Todd E. Zickler

We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does so using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor’s blur kernels are Gaussian. We analyze the inherent sensitivity of the ideal focal flow sensor, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.
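
Operationally, a per-pixel linear constraint of this kind can be solved like Lucas-Kanade optical flow: stack derivative regressors over a window and solve a least-squares system. The regressor set below (Ix, Iy, the radial term x·Ix + y·Iy, and the Laplacian) is an assumption for illustration; the paper derives the exact constraint and the mapping from the recovered coefficients to depth and velocity.

```python
import numpy as np

# Schematic windowed least-squares solve for a linear constraint on
# image derivatives, in the spirit of focal flow. The regressors are an
# illustrative assumption, not the paper's exact equation.

def focal_flow_coeffs(I0, I1):
    It = I1 - I0                              # temporal derivative
    Iy, Ix = np.gradient(I0)                  # spatial derivatives
    lap = np.gradient(Ix, axis=1) + np.gradient(Iy, axis=0)  # Laplacian
    h, w = I0.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xx = xx - w / 2.0
    yy = yy - h / 2.0
    A = np.stack([Ix, Iy, xx * Ix + yy * Iy, lap], axis=-1).reshape(-1, 4)
    coeffs, *_ = np.linalg.lstsq(A, -It.reshape(-1), rcond=None)
    return coeffs  # functions of depth and velocity in the paper's model

# Demo: a smoothed random texture shifted by one pixel along x.
rng = np.random.default_rng(0)
I0 = rng.random((64, 64))
for _ in range(5):                            # crude smoothing
    I0 = (I0 + np.roll(I0, 1, 0) + np.roll(I0, 1, 1)) / 3.0
I1 = np.roll(I0, 1, axis=1)
print(focal_flow_coeffs(I0, I1))  # first coefficient comes out close to 1
```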


International Conference on Computer Vision | 2007

Novel Depth Cues from Uncalibrated Near-field Lighting

Sanjeev J. Koppal; Srinivasa G. Narasimhan

We present the first method to compute depth cues from images taken solely under uncalibrated near point lighting. A stationary scene is illuminated by a point source that is moved approximately along a line or in a plane. We observe the brightness profile at each pixel and demonstrate how to obtain three novel cues: plane-scene intersections, depth ordering and mirror symmetries. These cues are defined with respect to the line/plane in which the light source moves, and not the camera viewpoint. Plane-Scene Intersections are detected by finding those scene points that are closest to the light source path at some time instant. Depth Ordering for scenes with homogeneous BRDFs is obtained by sorting pixels according to their shortest distances from a plane containing the light source path. Mirror Symmetry pairs for scenes with homogeneous BRDFs are detected by reflecting scene points across a plane in which the light source moves. We show analytic results for Lambertian objects and provide empirical evidence for a variety of other BRDFs.
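
All three cues derive from the per-pixel brightness profiles over time: for a homogeneous BRDF, a pixel's brightness peaks when the moving source passes closest to its scene point, and by inverse-square falloff, points nearer the source path peak brighter. A minimal sketch with synthetic profiles:

```python
import numpy as np

# Brightness-profile cues (synthetic illustration): the peak time of each
# profile says when the source was closest to that scene point, and the
# peak value orders points by distance from the source path.

def profile_cues(profiles):
    """profiles: (T, n_pixels) brightness over T source positions."""
    peak_time = profiles.argmax(axis=0)    # when the source was closest
    peak_value = profiles.max(axis=0)
    depth_order = np.argsort(-peak_value)  # nearer-to-path pixels first
    return peak_time, depth_order

# Two synthetic pixels: distances 1.0 and 2.0 from the source path,
# with closest approach at t = 3 and t = 7 respectively.
t = np.arange(10.0)
d = lambda t0, r: np.sqrt(r ** 2 + (t - t0) ** 2)  # source-point distance
profiles = np.stack([1.0 / d(3, 1.0) ** 2, 1.0 / d(7, 2.0) ** 2], axis=1)
peak_time, order = profile_cues(profiles)
print(peak_time, order)  # [3 7] [0 1]: pixel 0 is closer to the path
```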


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017

Pre-Capture Privacy for Small Vision Sensors

Francesco Pittaluga; Sanjeev J. Koppal

The next wave of micro and nano devices will create a world with trillions of small networked cameras, leading to increased concerns about privacy and security. Most privacy-preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy-preserving optics that filter or block sensitive information directly from the incident light field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving a high-quality field of view and resolution within the constraints of mass and volume. Our privacy-preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy-preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.), our theory also applies to smaller devices.
