
Publications


Featured research published by Manu Parmar.


Electronic Imaging | 2008

Sensor Calibration and Simulation

Joyce E. Farrell; Michael Okincha; Manu Parmar

We describe a method for simulating the response of an image sensor to a broad array of test targets. The method uses a modest set of sensor calibration measurements to define the sensor parameters; these parameters are used by an integrated suite of Matlab software routines that simulate the sensor and create output images. We compare the simulations of specific targets to measured data for several imaging sensors with very different imaging properties. The simulation captures the essential features of the images created by these different sensors. Finally, we show that by specifying the sensor properties, the simulations can predict sensor performance for natural scenes that are difficult to measure with a laboratory apparatus, such as scenes with high dynamic range or low light levels.
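The calibrate-then-simulate pipeline described above can be reduced to a minimal noise-and-quantization model. The sketch below is a generic sensor model in NumPy; every parameter value (read noise, full well, bit depth) is an illustrative stand-in, not a calibrated value from the paper.

```python
import numpy as np

def simulate_sensor(photon_mean, read_noise_sd=2.0, full_well=10000,
                    bit_depth=10, rng=None):
    """Simulate raw sensor output for a mean photoelectron image.

    photon_mean : 2-D array of expected photoelectrons per pixel.
    Applies shot noise, additive read noise, full-well clipping, and
    ADC quantization: the basic stages a calibration-driven sensor
    simulator must model. All default parameter values are
    illustrative, not measurements of any real sensor.
    """
    rng = rng or np.random.default_rng(0)
    e = rng.poisson(photon_mean).astype(float)              # photon shot noise
    e += rng.normal(0.0, read_noise_sd, photon_mean.shape)  # read noise
    e = np.clip(e, 0.0, full_well)                          # full-well limit
    max_dn = 2 ** bit_depth - 1
    dn = np.round(e / full_well * max_dn)                   # quantize to DN
    return np.clip(dn, 0, max_dn)

# A flat field of 100 expected photons per pixel.
out = simulate_sensor(np.full((64, 64), 100.0))
```

A real simulator would add calibrated spectral sensitivities, optics, and fixed-pattern noise; the point here is only the order of the noise and quantization stages.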


International Conference on Image Processing | 2008

Spatio-spectral reconstruction of the multispectral datacube using sparse recovery

Manu Parmar; Steven Lansel; Brian A. Wandell

Multispectral scene information is useful for radiometric graphics, material identification and imaging systems simulation. The multispectral scene can be described as a datacube, which is a 3D representation of energy at multiple wavelength samples at each scene spatial location. Typically, multispectral scene data are acquired using costly methods that either employ tunable filters or light sources to capture multiple narrow-bands of the spectrum at each spatial point. In this paper, we present new computational methods that estimate the datacube from measurements with a conventional digital camera. Existing methods reconstruct spectra at single locations independently of their neighbors. In contrast, we present a method that jointly recovers the spatio-spectral datacube by exploiting the data sparsity in a transform representation.
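Joint recovery via transform sparsity is typically posed as finding a sparse coefficient vector consistent with the camera measurements. As a minimal illustration of the sparse-recovery machinery (not the paper's algorithm), the sketch below recovers a 2-sparse coefficient vector from underdetermined measurements with orthogonal matching pursuit, using a small hand-built dictionary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solve of A @ x ~ y."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Overcomplete 8 x 12 dictionary: the standard basis plus four
# paired atoms, all unit norm (a toy stand-in for a transform basis).
eye = np.eye(8)
pairs = np.stack([(eye[i] + eye[i + 1]) / np.sqrt(2) for i in (0, 2, 4, 6)],
                 axis=1)
A = np.hstack([eye, pairs])

x_true = np.zeros(12)
x_true[3], x_true[5] = 2.0, -1.5     # a 2-sparse coefficient vector
y = A @ x_true                       # 8 underdetermined measurements
x_hat = omp(A, y, k=2)
```

The paper's method works on the full spatio-spectral datacube rather than independent vectors, but the underlying principle (few measurements plus sparsity in a transform) is the same.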


Electronic Imaging | 2008

A database of high dynamic range visible and near-infrared multispectral images

Manu Parmar; Francisco Imai; Sung Ho Park; Joyce E. Farrell

Simulation of the imaging pipeline is an important tool for the design and evaluation of imaging systems. One of the most important requirements for an accurate simulation tool is the availability of high-quality source scenes. The dynamic range of images depends on multiple elements in the imaging pipeline, including the sensor, digital signal processor, and display device. High dynamic range (HDR) scene spectral information is critical for an accurate analysis of the effect of these elements on the dynamic range of the displayed image. Also, typical digital imaging sensors are sensitive well beyond the visible range of wavelengths. Spectral information with support across the sensitivity range of the imaging sensor is required for the analysis and design of imaging pipeline elements that are affected by IR energy. Although HDR scene data with visible and infrared content are available from remote sensing resources, such imagery representing more conventional everyday scenes is scarce. In this paper, we address both of these issues and present a method to generate a database of HDR images that represent radiance fields in the visible and near-IR range of the spectrum. The proposed method uses only conventional consumer-grade equipment and is very cost-effective.
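The standard way to build an HDR radiance map from consumer-grade equipment is to bracket exposures and merge them, discarding saturated samples. A minimal sketch of such a merge, assuming linear (RAW-like) pixel values in [0, 1] and known exposure times; the specific weighting is illustrative, not the paper's.

```python
import numpy as np

def merge_exposures(images, times, saturation=0.95):
    """Merge linear exposures of the same scene into an HDR radiance map.

    images : list of arrays in [0, 1], captured at the given exposure
    times. Each pixel's radiance estimate is the average of value/time
    over the exposures in which that pixel is not saturated.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, times):
        w = (img < saturation).astype(float)   # trust only unsaturated pixels
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)

# Two pixels with true radiances 0.2 and 5.0; the bright pixel
# saturates in the long exposure, so only the short one is used there.
radiance = np.array([0.2, 5.0])
times = [0.1, 1.0]
imgs = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_exposures(imgs, times)
```

Real pipelines add radiometric calibration and per-exposure noise weighting; the saturation mask above is the essential ingredient that extends dynamic range.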


Proceedings of SPIE | 2012

An LED-based lighting system for acquiring multispectral scenes

Manu Parmar; Steven Lansel; Joyce E. Farrell

The availability of multispectral scene data makes it possible to simulate a complete imaging pipeline for digital cameras, beginning with a physically accurate radiometric description of the original scene, followed by optical transformations to irradiance signals, models for sensor transduction, and image processing for display. Scenes with animate subjects (e.g., humans and pets) are of particular interest to consumer camera manufacturers because of their ubiquity in common images and the importance of maintaining colorimetric fidelity for skin. Typical multispectral acquisition methods rely on multiple acquisitions of a scene under a number of different optical filters or illuminants. Such schemes require long acquisition times and are best suited for static scenes. In scenes where animate subjects are present, movement leads to registration problems, and methods with shorter acquisition times are needed. To address this need, we developed a multispectral imaging system that captures multiple acquisitions during a rapid sequence of differently colored LED lights. In this paper, we describe the design of the LED-based lighting system and report results of our experiments capturing scenes with human subjects.
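With one capture per LED, reflectance estimation reduces to a linear system: each flash k yields a measurement m_k = sum over wavelengths of sensor(w) * led_k(w) * r(w). A toy least-squares version with made-up spectra over five wavelength samples (none of these numbers come from the paper):

```python
import numpy as np

# Toy spectral model over five wavelength samples; all numbers are
# fabricated for illustration.
rng = np.random.default_rng(0)
sensor = np.ones(5)                           # flat monochrome sensitivity
leds = rng.uniform(0.1, 1.0, size=(6, 5))     # six LED emission spectra

# One capture per LED flash: m_k = sum_w sensor[w] * leds[k, w] * r[w]
M = leds * sensor                             # 6 x 5 system matrix
r_true = np.array([0.2, 0.4, 0.6, 0.5, 0.3])  # surface reflectance
m = M @ r_true                                # per-flash measurements

# With at least as many flashes as wavelength samples, least squares
# recovers the reflectance exactly in this noiseless toy setting.
r_hat, *_ = np.linalg.lstsq(M, m, rcond=None)
```

In practice the system is regularized (fewer flashes than wavelength samples, plus noise), but the speed advantage is visible even here: six quick flashes replace a slow filter-wheel sweep.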


Asilomar Conference on Signals, Systems and Computers | 2009

A case for denoising before demosaicking color filter array data

Sung Hee Park; Hyung-Suk Kim; Steven Lansel; Manu Parmar; Brian A. Wandell

Denoising algorithms are well developed for grayscale and color images, but not for color filter array (CFA) data. Consequently, the common color imaging pipeline demosaics CFA data before denoising. In this paper, we explore the noise-related properties of the imaging pipeline that demosaics CFA data before denoising. We then propose and explore a way to transform CFA data into a form that is amenable to existing grayscale and color denoising schemes. Since CFA data contain only a third as many samples as demosaicked data, we can expect to reduce processing time and power requirements to about a third of their current levels.
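One generic way to make CFA data amenable to existing grayscale denoisers is to split the mosaic into per-color subsampled planes, denoise each plane, and reassemble. This rearrangement is a common baseline, not necessarily the transform proposed in the paper. A sketch for an RGGB Bayer pattern:

```python
import numpy as np

def cfa_to_planes(cfa):
    """Split a Bayer (RGGB) mosaic into four half-resolution planes.

    Each plane samples a single filter color on a regular grid, so it
    behaves like a smooth grayscale image that standard denoisers can
    process directly.
    """
    return {
        "R":  cfa[0::2, 0::2],
        "G1": cfa[0::2, 1::2],
        "G2": cfa[1::2, 0::2],
        "B":  cfa[1::2, 1::2],
    }

def planes_to_cfa(planes):
    """Reassemble the mosaic (e.g., after per-plane denoising)."""
    h, w = planes["R"].shape
    cfa = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
    cfa[0::2, 0::2] = planes["R"]
    cfa[0::2, 1::2] = planes["G1"]
    cfa[1::2, 0::2] = planes["G2"]
    cfa[1::2, 1::2] = planes["B"]
    return cfa

mosaic = np.arange(16.0).reshape(4, 4)            # toy 4x4 RGGB mosaic
rebuilt = planes_to_cfa(cfa_to_planes(mosaic))    # lossless round trip
```

The third-of-the-data argument is visible directly: the denoiser touches the four quarter-size planes (one sample per pixel) instead of three full-resolution demosaicked channels.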


Proceedings of SPIE | 2010

Using visible SNR (vSNR) to compare the image quality of pixel binning and digital resizing

Joyce E. Farrell; Mike Okincha; Manu Parmar; Brian A. Wandell

We introduce a new metric, the visible signal-to-noise ratio (vSNR), to analyze how pixel-binning and resizing methods influence noise visibility in uniform areas of an image. The vSNR is the inverse of the standard deviation of the S-CIELAB representation of a uniform field; its units are 1/ΔE. The vSNR metric can be used in simulations to predict how imaging system components affect noise visibility. We use simulations to evaluate two image rendering methods: pixel binning and digital resizing. We show that vSNR increases with scene luminance, pixel size, and viewing distance, and decreases with read noise. Under low-illumination conditions and for pixels with relatively high read noise, images generated with the binning method have less visible noise (higher vSNR) than resized images, but noticeably lower spatial resolution. The binning method also reduces demands on the ADC rate and channel throughput. When comparing binning and resizing, there is an image-quality tradeoff between noise and blur; depending on the application, users may prefer one error over the other.
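The metric itself is simple once the S-CIELAB representation is in hand: vSNR is one over the standard deviation of a uniform patch's ΔE values. The sketch below assumes the S-CIELAB stage has already produced a per-pixel ΔE map, and uses a box average as a crude stand-in for the viewing-distance-dependent spatial filtering (the real filters are the S-CIELAB opponent-channel kernels).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def vsnr_uniform(delta_e):
    """vSNR of a nominally uniform patch: the inverse of the standard
    deviation of its per-pixel Delta E values (units: 1/Delta E)."""
    return 1.0 / np.std(delta_e)

def spatial_pool(img, k):
    """Crude stand-in for S-CIELAB's viewing-distance-dependent
    filtering: a k x k box average (larger k ~ greater distance)."""
    return sliding_window_view(img, (k, k)).mean(axis=(-1, -2))

rng = np.random.default_rng(0)
patch = rng.normal(0.0, 2.0, (32, 32))   # noisy uniform field (Delta E map)
```

Pooling lowers the standard deviation of the noise, so `vsnr_uniform(spatial_pool(patch, 4))` exceeds `vsnr_uniform(patch)`, mirroring the reported trend that vSNR increases with viewing distance.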


Proceedings of SPIE | 2009

Interleaved imaging: an imaging system design inspired by rod-cone vision

Manu Parmar; Brian A. Wandell

Under low illumination conditions, such as moonlight, there simply are not enough photons present to create a high quality color image with integration times that avoid camera-shake. Consequently, conventional imagers are designed for daylight conditions and modeled on human cone vision. Here, we propose a novel sensor design that parallels the human retina and extends sensor performance to span daylight and moonlight conditions. Specifically, we describe an interleaved imaging architecture comprising two collections of pixels. One set of pixels is monochromatic and high sensitivity; a second, interleaved set of pixels is trichromatic and lower sensitivity. The sensor implementation requires new image processing techniques that allow for graceful transitions between different operating conditions. We describe these techniques and simulate the performance of this sensor under a range of conditions. We show that the proposed system is capable of producing high quality images spanning photopic, mesopic and near scotopic conditions.
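A simplistic way to see how the two pixel populations can combine (a stand-in for the paper's processing, which also handles transitions between operating conditions): take luminance from the clean, high-sensitivity monochrome samples and chromaticity from the noisy, low-sensitivity color samples, assuming both are co-registered at full resolution.

```python
import numpy as np

def fuse_interleaved(mono, rgb, eps=1e-6):
    """Toy fusion of a high-sensitivity monochrome image with a
    co-registered low-sensitivity RGB image: keep the color ratios
    from the RGB pixels, reimpose the monochrome luminance."""
    luma = rgb.sum(axis=-1, keepdims=True)
    chroma = rgb / np.maximum(luma, eps)   # per-pixel color ratios
    return chroma * mono[..., None]        # scale by mono luminance

mono = np.array([[6.0]])                   # bright, low-noise reading
rgb = np.array([[[0.1, 0.2, 0.3]]])        # dim but correctly colored
fused = fuse_interleaved(mono, rgb)
```

Under moonlight the RGB ratios would be heavily smoothed or dropped entirely, which is where the graceful-transition machinery the abstract mentions comes in.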


Proceedings of SPIE | 2009

Dictionaries for sparse representation and recovery of reflectances

Steven Lansel; Manu Parmar; Brian A. Wandell

The surface reflectance function of many common materials varies slowly over the visible wavelength range. For this reason, linear models with a small number of basis functions (5-8) are frequently used for representation and estimation of these functions. In other signal representation and recovery applications, it has recently been demonstrated that dictionary-based sparse representations can outperform linear-model approaches. In this paper, we describe methods for building dictionaries for sparse estimation of reflectance functions. We describe a method for building dictionaries that account for the measurement system; in estimation applications, these dictionaries outperform ones designed for sparse representation without accounting for the measurement system. Sparse recovery methods typically outperform traditional linear methods by 20-40% (in terms of RMSE).
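The key design choice, accounting for the measurement system, amounts to sparse-coding the camera response against S @ D rather than D alone, where S is the measurement matrix and D the reflectance dictionary. A 1-sparse toy example with hand-made numbers (none of them from the paper):

```python
import numpy as np

# Hypothetical setup: 5 wavelength samples, a 4-atom reflectance
# dictionary D, and a 3-channel narrowband measurement system S.
D = np.array([[1.0, 0.2, 1.0, 1.0],
              [1.0, 0.4, 0.8, 0.0],
              [1.0, 0.6, 0.6, 1.0],
              [1.0, 0.8, 0.4, 0.0],
              [1.0, 1.0, 0.2, 1.0]])
S = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

r_true = 0.7 * D[:, 1]        # a reflectance that is 1-sparse in D
m = S @ r_true                # the camera measurement

# Sparse-code m against S @ D (the measurement-aware dictionary),
# then reconstruct the full spectrum with D itself.
SD = S @ D
corr = (SD / np.linalg.norm(SD, axis=0)).T @ m
j = int(np.argmax(np.abs(corr)))                  # best-matching atom
coef = (SD[:, j] @ m) / (SD[:, j] @ SD[:, j])     # 1-sparse least squares
r_hat = coef * D[:, j]
```

Matching atoms in camera space is what lets a 3-channel measurement pick out the right 5-sample spectrum; coding against D alone has no access to the measurements at all.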


Asilomar Conference on Signals, Systems and Computers | 2009

A low-complexity location estimation scheme for indoor wireless local area networks

Manu Parmar; Santosh Pandey

The ability to estimate the location of wireless device users has led to many applications that use location information to provide users and service providers with commercially valuable real-time information. In this paper, we propose a low-complexity method for location estimation in an indoor wireless local area network (WLAN) deployment. The proposed method relies on measurements of signal strength (SS) from multiple access points (APs) in a WLAN and is based on a piecewise-linear model that represents distances from APs in terms of SS measurements. The proposed scheme has significantly lower computational and storage requirements than conventional location estimation methods.
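The two ingredients can be sketched directly: a piecewise-linear SS-to-distance map (here just linear interpolation between hypothetical calibration knots) and a linearized least-squares position fix from the resulting distances. All knot values and AP positions below are made up.

```python
import numpy as np

# Hypothetical piecewise-linear calibration: signal strength (dBm) to
# distance (m), evaluated by interpolation between the knots.
ss_knots   = np.array([-90.0, -70.0, -50.0, -30.0])
dist_knots = np.array([ 40.0,  20.0,   8.0,   2.0])

def ss_to_distance(ss):
    return np.interp(ss, ss_knots, dist_knots)   # e.g. -60 dBm -> 14 m

def trilaterate(aps, dists):
    """Least-squares position from AP coordinates and distances:
    subtract the first range equation to linearize, then solve."""
    p1, d1 = aps[0], dists[0]
    A = 2.0 * (aps[1:] - p1)
    b = (np.sum(aps[1:] ** 2, axis=1) - np.sum(p1 ** 2)
         - dists[1:] ** 2 + d1 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(aps - true_pos, axis=1)   # ideal ranges
est = trilaterate(aps, dists)
```

Both steps are a handful of table lookups and one small linear solve, which is the low computational and storage footprint the abstract claims relative to fingerprinting-style approaches.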


Color Imaging Conference | 2008

Estimating printer misregistration from color shifts: a new paradigm

Jon S. McElvain; Vishal Monga; Charles M. Hains; Manu Parmar

Inherent to most multi-color printing systems is the inability to achieve perfect registration between the primary separations. Because of this, dot-on-dot or dot-off-dot halftone screen sets are generally not used, due to the significant color shift observed in the presence of even the slightest misregistration. Much previous work has focused on characterizing these effects, and it is well known that dot-off-dot printed patterns result in a higher chroma (C*) relative to dot-on-dot. Rotated dot sets are used instead for these systems, as they exhibit much greater robustness against misregistration. In this paper, we make the crucial observation that while previous work has used the color shifts caused by misregistration to design robust screens, we can in fact exploit these color shifts to obtain estimates of the misregistration itself. In particular, we demonstrate that even low-resolution macroscopic color measurements of a carefully designed test patch can yield misregistration estimates that are accurate to the sub-pixel level. The contributions of our work are: (1) a simple methodology to construct test patches that may be measured to obtain misregistration estimates; (2) derivation of a reflectance printer model for the test patch, so that color deviations in the spectral or reflectance space can be mapped to misregistration estimates; and (3) a practical method to estimate misregistration via scanner RGB measurements. Experimental results show that our method achieves accuracy comparable to that of the state-of-the-art but expensive geometric methods currently used by high-end color printing devices to estimate misregistration.
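The estimation idea, inverting a color-shift-versus-offset model, can be illustrated with a toy linear forward model. The paper derives an actual reflectance printer model; the function and numbers below are fabricated for illustration only.

```python
import numpy as np

# Toy forward model (hypothetical): over a small range, the mean
# scanner response of a suitably designed test patch varies linearly
# with the separation offset in pixels.
def patch_response(offset_px, slope=-0.08, base=0.62):
    return base + slope * offset_px

# Calibrate a linear inverse from a few known offsets...
offsets = np.array([0.0, 0.5, 1.0, 2.0])
resp = patch_response(offsets)
A = np.stack([resp, np.ones_like(resp)], axis=1)
coef, *_ = np.linalg.lstsq(A, offsets, rcond=None)

# ...then read a sub-pixel misregistration off one color measurement.
est = coef @ np.array([patch_response(0.75), 1.0])
```

Because a single averaged color reading carries the offset information, sub-pixel estimates fall out of macroscopic measurements, with no microscopic geometric inspection of the dots required.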
