Publication


Featured research published by Manoj Aggarwal.


International Conference on Computer Vision | 2001

Split aperture imaging for high dynamic range

Manoj Aggarwal; Narendra Ahuja

Most imaging sensors have limited dynamic range and hence are sensitive to only a part of the illumination range present in a natural scene. The dynamic range can be improved by acquiring multiple images of the same scene under different exposure settings and then combining them. In this paper, we describe a camera design for simultaneously acquiring multiple images. The cross-section of the incoming beam from a scene point is partitioned into as many parts as the required number of images. This is done by splitting the aperture into multiple parts and directing the beam exiting from each in a different direction using an assembly of mirrors. A sensor is placed in the path of each beam and exposure of each sensor is controlled either by appropriately setting its exposure parameter, or by splitting the incoming beam unevenly. The resulting multiple exposure images are used to construct a high dynamic range image. We have implemented a video-rate camera based on this design and the results obtained are presented.
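Once the differently exposed images are available, they can be merged into a single radiance map. A minimal sketch of such a merge, assuming a linear sensor response and known exposure ratios; the hat-shaped weighting is a common generic choice, not necessarily the weighting used in the paper:

```python
import numpy as np

def fuse_exposures(images, exposures):
    """Combine differently exposed images of the same scene into one
    radiance map: divide each image by its exposure to get radiance
    estimates, then average them with weights that favor mid-range
    (well-exposed) pixel values."""
    images = [np.asarray(im, dtype=float) for im in images]
    radiance_sum = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for im, t in zip(images, exposures):
        # Hat weighting: trust pixels near mid-scale, distrust values
        # close to the noise floor or to saturation.
        w = 1.0 - np.abs(im / 255.0 - 0.5) * 2.0
        radiance_sum += w * (im / t)
        weight_sum += w
    return radiance_sum / np.maximum(weight_sum, 1e-6)

# Two synthetic exposures of the same scene (true radiance = 100),
# captured with relative exposures 1.0 and 0.25:
scene = np.full((2, 2), 100.0)
long_ = np.clip(scene * 1.0, 0, 255)
short = np.clip(scene * 0.25, 0, 255)
hdr = fuse_exposures([long_, short], [1.0, 0.25])
```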


Computer Vision and Pattern Recognition | 2005

Real-time wide area multi-camera stereo tracking

Tao Zhao; Manoj Aggarwal; Rakesh Kumar; Harpreet S. Sawhney

We present a fully integrated real-time system to track humans with a network of stereo sensors over a wide area. The processing includes single-camera tracking and multi-camera fusion. Each camera detects and tracks humans in its own view, and a multi-camera fusion module combines all the local tracks of the same human into a global track. We propose stereo segmentation and tracking techniques to handle multiple humans moving in groups in cluttered environments. We have developed a ground-based fusion method for camera handoff using a space-time constraint. We show results and performance evaluation on very challenging data from a 12-camera system.


International Conference on Computer Vision | 2001

On cosine-fourth and vignetting effects in real lenses

Manoj Aggarwal; Hong Hua; Narendra Ahuja

This paper has been prompted by observations of disparities between the observed fall-off in irradiance for off-axis points and that accounted for by the cosine-fourth and vignetting effects. A closer examination of the image formation process for real lenses revealed that even in the absence of vignetting a point light source does not uniformly illuminate the aperture, an effect known as pupil aberration. For example, we found the variation for a 16 mm lens to be as large as 31% for a field angle of 10°. In this paper, we critically evaluate the roles of the cosine-fourth and vignetting effects and demonstrate the significance of pupil aberration in the fall-off in irradiance away from the image center. The pupil aberration effect depends strongly on the aperture size and shape, and this dependence has been demonstrated through two sets of experiments with three real lenses. Pupil aberration is thus a third important cause of the fall in irradiance away from the image center, in addition to the familiar cosine-fourth and vignetting effects, and must be taken into account in applications that rely heavily on photometric variation, such as shape from shading and mosaicing.
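For scale, the cosine-fourth law alone predicts only a modest irradiance drop at a 10° field angle, far short of the 31% variation the paper reports, which is what points to an additional cause:

```python
import math

def cos4_falloff(field_angle_deg):
    """Relative irradiance drop predicted by the cosine-fourth law
    for an off-axis point at the given field angle."""
    theta = math.radians(field_angle_deg)
    return 1.0 - math.cos(theta) ** 4

# At a 10-degree field angle the law predicts only about a 6% drop,
# versus the ~31% variation measured for the 16 mm lens in the paper.
drop = cos4_falloff(10.0)
```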


International Conference on Image Processing | 2000

Efficient Huffman decoding

Manoj Aggarwal; Ajai Narayan

Huffman (1952) codes are widely used in image and video compression. We propose a decoding scheme for Huffman codes which requires only a few computations per codeword, independent of the number of codewords n, the height of the Huffman tree h, or the length of a codeword. The memory requirement for the proposed scheme depends on the Huffman tree; for sparse Huffman trees (JPEG, H.263, MPEG), it is O(n).
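For context, the classic flat-lookup-table decoder also achieves a constant number of operations per codeword, but at O(2^h) memory; the paper's contribution is keeping memory near O(n) for sparse trees. A sketch of the flat-table variant (not the paper's scheme):

```python
def build_decode_table(codes):
    """Build a flat lookup table indexed by the next h input bits,
    where h is the longest codeword length. Each entry stores the
    decoded symbol and how many bits its codeword consumes, so every
    symbol is decoded with a single table lookup.

    `codes` maps symbol -> codeword bit string, e.g. {'a': '0', ...}.
    """
    h = max(len(c) for c in codes.values())
    table = [None] * (1 << h)
    for sym, code in codes.items():
        # Every h-bit pattern starting with this codeword decodes to it.
        prefix = int(code, 2) << (h - len(code))
        for tail in range(1 << (h - len(code))):
            table[prefix | tail] = (sym, len(code))
    return table, h

def decode(bits, table, h):
    """Decode a bit string with one table lookup per codeword."""
    out, pos = [], 0
    padded = bits + '0' * h          # pad so the last lookup is in range
    while pos < len(bits):
        sym, length = table[int(padded[pos:pos + h], 2)]
        out.append(sym)
        pos += length
    return out

# Example prefix code: a=0, b=10, c=11
table, h = build_decode_table({'a': '0', 'b': '10', 'c': '11'})
msg = decode('01011', table, h)      # 0 | 10 | 11
```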


International Conference on Computer Vision | 2001

High dynamic range panoramic imaging

Manoj Aggarwal; Narendra Ahuja

Most imaging sensors have a limited dynamic range and hence can satisfactorily respond to only a part of the illumination levels present in a scene. This is particularly disadvantageous for omnidirectional and panoramic cameras, since larger fields of view span larger brightness ranges. We propose a simple modification to existing high resolution omnidirectional/panoramic cameras in which the process of increasing the dynamic range is coupled with the process of increasing the field of view. This is achieved by placing a graded transparency (mask) in front of the sensor, which allows every scene point to be imaged under multiple exposure settings as the camera pans, a process anyway required to capture large fields of view at high resolution. The sequence of images is then mosaiced to construct a high resolution, high dynamic range panoramic/omnidirectional image. Our method is robust to alignment errors between the mask and the sensor grid and does not require the mask to be placed on the sensing surface. We have designed a panoramic camera with the proposed modifications and discuss various theoretical and practical issues encountered in obtaining a robust design. We show an example of a high resolution, high dynamic range panoramic image obtained from the camera we designed.
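The graded mask gives each scene point a different effective exposure in every frame as the camera pans. A minimal sketch of that idea, assuming a hypothetical linear mask profile and already-registered samples; the recovery step is illustrative, not the paper's design:

```python
import numpy as np

# Transmittance of a hypothetical linear graded mask, one value per
# sensor column; panning by one column per frame lets a scene point
# be imaged through every transmittance level in turn.
mask = np.linspace(0.1, 1.0, 4)

# A scene point of radiance 200 observed through each mask level,
# with an 8-bit sensor that saturates at 255:
observations = np.clip(200.0 * mask, 0.0, 255.0)

# Recover radiance from the unsaturated samples by undoing the
# attenuation and averaging:
valid = observations < 255.0
estimate = np.mean(observations[valid] / mask[valid])
```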


International Journal of Computer Vision | 2002

A Pupil-Centric Model of Image Formation

Manoj Aggarwal; Narendra Ahuja

This paper has been prompted by observations of some anomalies in the performance of the standard imaging models (pin-hole, thin-lens and Gaussian thick-lens), in the context of composing omnifocus images and estimating depth maps from a sequence of images. A closer examination of the models revealed that they assume a position of the aperture that conflicts with the designs of many available lenses. We have shown in this paper that the imaging geometry and photometric properties of an image are significantly influenced by the position of the aperture. This is confirmed by the discrepancies between observed mappings and those predicted by the models. We have therefore concluded that the current imaging models do not adequately represent practical imaging systems. We have proposed a pupil-centric model of image formation, which overcomes these deficiencies, and have given the associated mappings. The impact of this model on some common imaging scenarios is described, along with experimental verification of the better performance of the model on three real lenses.


International Conference on Computer Vision | 2001

A new imaging model

Manoj Aggarwal; Narendra Ahuja

This paper has been prompted by observations of some anomalies in the performance of the standard imaging models (pin-hole, thin-lens and Gaussian thick-lens), in the context of composing omnifocus images and estimating depth maps from a sequence of images. A closer examination of the models revealed that they assume a position of the aperture that conflicts with the designs of many available lenses. We have shown in this paper that the imaging geometry and photometric properties of an image are significantly influenced by the position of the aperture. This is confirmed by the discrepancies between observed mappings and those predicted by the models. We have therefore concluded that the current imaging models do not adequately represent practical imaging systems. We have proposed a new imaging model which overcomes these deficiencies and have given the associated mappings. The impact of this model on some common imaging scenarios is described, along with experimental verification of the better performance of the model on three real lenses.


International Conference on Pattern Recognition | 2000

On generating seamless mosaics with large depth of field

Manoj Aggarwal; Narendra Ahuja

Imaging cameras have only a finite depth of field, and only those objects within that depth range are simultaneously in focus. The depth of field of a camera can be improved by mosaicing a sequence of images taken under different focal settings. In conventional mosaicing schemes, a focus measure is computed for every scene point across the image sequence and the point is selected from the image where the focus measure is highest. We have, however, proved in this paper that the focus measure is not the highest in the best focussed frame for a certain class of scene points. The incorrect selection of image frames for these points causes visual artifacts to appear in the resulting mosaic. We have also proposed a method to isolate such scene points, and an algorithm to compose large depth of field mosaics without the undesirable artifacts.
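The conventional selection scheme the paper analyzes can be sketched as follows. The Laplacian-based focus measure here is a common generic choice for illustration, not one prescribed by the paper:

```python
import numpy as np

def focus_measure(img):
    """Per-pixel focus measure: squared response of a discrete
    Laplacian, which is large where the image is sharply in focus."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return lap ** 2

def compose_mosaic(stack):
    """Conventional composition: for every pixel, take the value from
    the frame in which the focus measure is highest."""
    stack = np.asarray(stack, dtype=float)
    measures = np.stack([focus_measure(f) for f in stack])
    best = np.argmax(measures, axis=0)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# A sharp frame containing a step edge and a defocused (flat) frame:
sharp = np.zeros((4, 4))
sharp[:, 2:] = 100.0
blurred = np.full((4, 4), 50.0)
composite = compose_mosaic([blurred, sharp])   # selects the sharp frame
```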


Machine Vision and Applications | 2008

Toward a sentient environment: real-time wide area multiple human tracking with identities

Tao Zhao; Manoj Aggarwal; Thomas Germano; Ian Roth; Alexandar Knowles; Rakesh Kumar; Harpreet S. Sawhney; Supun Samarasekera

In this paper, we present a fully integrated real-time computer vision system that can detect and track multiple humans in a wide area using a network of stereo cameras. Continuous human identities are achieved by fusing video tracking with different kinds of biometric devices. The system also provides immersive visualization, which enables users to conveniently navigate through space and time and query useful events. The key innovations include stereo-based multi-object detection and tracking, a unified approach for fusing multiple sensors of different modalities, and visualization and user interface design.


International Conference on Pattern Recognition | 2000

Estimating sensor orientation in cameras

Manoj Aggarwal; Narendra Ahuja

For most imaging cameras, it is desirable that the sensor plane be perpendicular to the optical axis. Such an orientation ensures that the imaging configuration is perspective and planar scene objects perpendicular to the optical axis can be focussed in their entirety. In this paper, we present an image processing method to estimate and subsequently correct the sensor tilt with precision. We propose to measure the tilt by measuring the variation of defocus in an image of a planar calibration chart placed perpendicular to the optical axis. We show that the proposed defocusing based method is inherently more accurate than geometry based techniques which estimate tilt by measuring the deviation in the geometry of a scene pattern. We analyze the sensitivity of the tilt estimates to errors in the experimental setup and show that the proposed technique is quite robust to errors even as large as 1 degree in the orientation of the calibration chart.
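The paper's method estimates tilt from how defocus varies across an image of a fronto-parallel chart. As a hypothetical illustration of that principle (not the paper's actual estimator), one can fit a plane to a measured blur map:

```python
import numpy as np

def estimate_defocus_gradient(defocus_map):
    """Least-squares fit of a plane to a per-region defocus (blur) map.
    With a fronto-parallel chart, a perfectly aligned sensor gives
    constant defocus; a tilted sensor makes defocus vary linearly
    across the image, so the fitted slopes reveal the tilt direction
    and relative magnitude."""
    rows, cols = np.indices(defocus_map.shape)
    A = np.column_stack([rows.ravel(), cols.ravel(),
                         np.ones(defocus_map.size)])
    coeffs, *_ = np.linalg.lstsq(A, defocus_map.ravel(), rcond=None)
    return coeffs[:2]   # defocus slope per pixel, along rows and columns

# Synthetic blur map whose defocus grows along the row axis, as it
# would for a sensor tilted about the horizontal axis:
blur_map = 0.5 * np.indices((8, 8))[0] + 1.0
slopes = estimate_defocus_gradient(blur_map)
```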

Collaboration


An overview of Manoj Aggarwal's collaborations.

Top Co-Authors

Tao Zhao

University of Southern California
