Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Steven Lansel is active.

Publication


Featured research published by Steven Lansel.


International Conference on Image Processing | 2008

Spatio-spectral reconstruction of the multispectral datacube using sparse recovery

Manu Parmar; Steven Lansel; Brian A. Wandell

Multispectral scene information is useful for radiometric graphics, material identification and imaging systems simulation. The multispectral scene can be described as a datacube, which is a 3D representation of energy at multiple wavelength samples at each scene spatial location. Typically, multispectral scene data are acquired using costly methods that either employ tunable filters or light sources to capture multiple narrow-bands of the spectrum at each spatial point. In this paper, we present new computational methods that estimate the datacube from measurements with a conventional digital camera. Existing methods reconstruct spectra at single locations independently of their neighbors. In contrast, we present a method that jointly recovers the spatio-spectral datacube by exploiting the data sparsity in a transform representation.
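
The joint spatio-spectral recovery described above rests on standard sparse-recovery machinery. As a minimal illustration (not the paper's algorithm), the sketch below recovers a sparse coefficient vector from underdetermined linear measurements with ISTA (iterative soft-thresholding); the measurement matrix, sparsity level, and parameters are all made up for the toy problem.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=5000):
    """Recover a sparse x with y ~ A @ x via iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # step from the spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of the least-squares term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy problem: a 3-sparse signal observed through a random wide matrix,
# standing in for camera measurements of a sparse transform representation.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true

x_hat = ista(A, y)
print(np.argsort(-np.abs(x_hat))[:3])           # three largest coefficients
```

The three largest recovered coefficients land on the true support, which is the property joint spatio-spectral recovery exploits at much larger scale.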


Proceedings of SPIE | 2012

An LED-based lighting system for acquiring multispectral scenes

Manu Parmar; Steven Lansel; Joyce E. Farrell

The availability of multispectral scene data makes it possible to simulate a complete imaging pipeline for digital cameras, beginning with a physically accurate radiometric description of the original scene followed by optical transformations to irradiance signals, models for sensor transduction, and image processing for display. Certain scenes with animate subjects, e.g., humans, pets, etc., are of particular interest to consumer camera manufacturers because of their ubiquity in common images, and the importance of maintaining colorimetric fidelity for skin. Typical multispectral acquisition methods rely on techniques that use multiple acquisitions of a scene with a number of different optical filters or illuminants. Such schemes require long acquisition times and are best suited for static scenes. In scenes where animate objects are present, movement leads to problems with registration and methods with shorter acquisition times are needed. To address the need for shorter image acquisition times, we developed a multispectral imaging system that captures multiple acquisitions during a rapid sequence of differently colored LED lights. In this paper, we describe the design of the LED-based lighting system and report results of our experiments capturing scenes with human subjects.


Asilomar Conference on Signals, Systems and Computers | 2009

A case for denoising before demosaicking color filter array data

Sung Hee Park; Hyung-Suk Kim; Steven Lansel; Manu Parmar; Brian A. Wandell

Denoising algorithms are well developed for grayscale and color images, but less so for color filter array (CFA) data. Consequently, the common color imaging pipeline demosaics CFA data before denoising. In this paper we explore the noise-related properties of the imaging pipeline that demosaics CFA data before denoising. We then propose and explore a way to transform CFA data into a form that is amenable to existing grayscale and color denoising schemes. Since CFA data contain a third as many samples as demosaicked data, we can expect to reduce processing time and power requirements to about a third of current requirements.
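
One simple way to make CFA data amenable to grayscale denoisers, sketched below, is to split a Bayer mosaic into its four subsampled planes (R, G1, G2, B), denoise each plane as an ordinary grayscale image, and reassemble the mosaic. This plane-splitting is a common simplification, not necessarily the transform the authors propose.

```python
import numpy as np

def split_bayer(cfa):
    """Split an RGGB mosaic into four half-resolution grayscale planes."""
    return {"R": cfa[0::2, 0::2], "G1": cfa[0::2, 1::2],
            "G2": cfa[1::2, 0::2], "B": cfa[1::2, 1::2]}

def assemble_bayer(planes):
    """Inverse of split_bayer: interleave the four planes back into a mosaic."""
    h, w = planes["R"].shape
    cfa = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
    cfa[0::2, 0::2] = planes["R"]; cfa[0::2, 1::2] = planes["G1"]
    cfa[1::2, 0::2] = planes["G2"]; cfa[1::2, 1::2] = planes["B"]
    return cfa

# Round trip on a toy 4x4 mosaic: each plane could be denoised in between.
cfa = np.arange(16, dtype=float).reshape(4, 4)
planes = split_bayer(cfa)
assert np.array_equal(assemble_bayer(planes), cfa)
```

Each plane is a regular grid of a single color channel, so existing grayscale denoisers apply to it unmodified, and the pipeline only touches a third as many color samples as it would after demosaicking.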


Proceedings of SPIE | 2009

Dictionaries for sparse representation and recovery of reflectances

Steven Lansel; Manu Parmar; Brian A. Wandell

The surface reflectance function of many common materials varies slowly over the visible wavelength range. For this reason, linear models with a small number of bases (5-8) are frequently used for representation and estimation of these functions. In other signal representation and recovery applications, it has recently been demonstrated that dictionary-based sparse representations can outperform linear model approaches. In this paper, we describe methods for building dictionaries for sparse estimation of reflectance functions. We describe a method for building dictionaries that account for the measurement system; in estimation applications these dictionaries outperform those designed for sparse representation without accounting for the measurement system. Sparse recovery methods typically outperform traditional linear methods by 20-40% (in terms of RMSE).
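
The dictionary-based alternative to a low-dimensional linear model can be sketched with orthogonal matching pursuit (OMP), a standard greedy sparse-coding algorithm (used here for illustration; the paper's dictionaries and recovery method differ). The dictionary and "reflectance" below are synthetic: 61 wavelength samples, 100 unit-norm atoms, and a signal built from two of them.

```python
import numpy as np

def omp(D, y, k):
    """Greedy k-sparse approximation of y over the columns of dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best new atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # refit support
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((61, 100))       # 61 wavelength samples, 100 atoms
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
y = 1.0 * D[:, 10] - 0.6 * D[:, 60]      # a synthetic 2-sparse "reflectance"

x = omp(D, y, k=2)
print(np.flatnonzero(x))                 # indices of the atoms OMP selected
```

A fixed 5-8 basis linear model would project every reflectance onto the same subspace; the sparse code instead picks the few atoms that best explain each individual signal, which is the advantage the abstract quantifies.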


Rundbrief der GI-Fachgruppe 5.10 Informationssystem-Architekturen | 2011

Local Linear Learned Image Processing Pipeline

Steven Lansel; Brian A. Wandell

We present the local, linear, learned (L3) algorithm, which simultaneously performs the demosaicking, denoising, and color transform calculations of an image processing pipeline for a digital camera with any color filter array.


Proceedings of SPIE | 2014

Automating the design of image processing pipelines for novel color filter arrays: local, linear, learned (L3) method

Qiyuan Tian; Steven Lansel; Joyce E. Farrell; Brian A. Wandell

The high density of pixels in modern color sensors provides an opportunity to experiment with new color filter array (CFA) designs. A significant bottleneck in evaluating new designs is the need to create demosaicking, denoising and color transform algorithms tuned for the CFA. To address this issue, we developed a method (local, linear, learned, or L3) for automatically creating an image processing pipeline. In this paper we describe the L3 algorithm and illustrate how we created a pipeline for a CFA organized as a 2×2 RGB/W block containing a clear (W) pixel. Under low light conditions, the L3 pipeline developed for the RGB/W CFA produces images that are superior to those from a matched Bayer RGB sensor. We also use L3 to learn pipelines for other RGB/W CFAs with different spatial layouts. The L3 algorithm shortens the development time for producing a high-quality image pipeline for novel CFA designs.


Electronic Imaging | 2015

Efficient illuminant correction in the local, linear, learned (L3) method

François G. Germain; Iretiayo A. Akinola; Qiyuan Tian; Steven Lansel; Brian A. Wandell

To speed the development of novel camera architectures we proposed a method, L3 (Local, Linear, and Learned), that automatically creates an optimized image processing pipeline. The L3 method assigns each sensor pixel to one of 400 classes, and applies class-dependent local linear transforms that map the sensor data from a pixel and its neighbors into the target output (e.g., CIE XYZ rendered under a D65 illuminant). The transforms are precomputed from training data and stored in a table used for image rendering. The training data are generated by camera simulation, consisting of sensor responses and rendered CIE XYZ outputs. The sensor and rendering illuminant can be equal (same-illuminant table) or different (cross-illuminant table). In the original implementation, illuminant correction is achieved with cross-illuminant tables, and one table is required for each illuminant. We find, however, that a single same-illuminant table (D65) effectively converts sensor data for many different same-illuminant conditions. Hence, we propose to render the data by applying the same-illuminant D65 table to the sensor data, followed by a linear illuminant correction transform. The mean color reproduction error using the same-illuminant table is on the order of 4 ΔE units, which is only slightly larger than the cross-illuminant table error. This approach reduces table storage requirements significantly without substantially degrading color reproduction accuracy.
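
The rendering step described above (class assignment, class-dependent local linear transform, then a shared linear illuminant correction) can be sketched as follows. Everything here is a made-up toy: the classification rule, the table contents, the 5-pixel patches, and the identity "correction" matrix stand in for the 400-class tables learned by simulation.

```python
import numpy as np

def l3_render(patches, classify, table, illum_correction):
    """patches: (n, p) sensor patches -> (n, 3) illuminant-corrected XYZ."""
    out = np.empty((len(patches), 3))
    for i, patch in enumerate(patches):
        c = classify(patch)                 # assign the pixel's patch to a class
        out[i] = table[c] @ patch           # class-dependent local linear transform
    return out @ illum_correction.T         # shared 3x3 illuminant correction

# Toy setup: 2 classes keyed on patch brightness, 5-pixel patches.
rng = np.random.default_rng(2)
table = {0: rng.standard_normal((3, 5)), 1: rng.standard_normal((3, 5))}
classify = lambda p: int(p.mean() > 0.5)
illum = np.eye(3)                           # identity = no correction applied

patches = rng.random((4, 5))
xyz = l3_render(patches, classify, table, illum)
print(xyz.shape)                            # → (4, 3)
```

The storage saving in the abstract falls out of this structure: instead of one full table per illuminant, a single D65 table plus one small 3×3 matrix per illuminant suffices.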


Electronic Imaging | 2015

Automatically designing an image processing pipeline for a five-band camera prototype using the local, linear, learned (L3) method

Qiyuan Tian; Henryk Blasinski; Steven Lansel; Haomiao Jiang; Munenori Fukunishi; Joyce E. Farrell; Brian A. Wandell

The development of an image processing pipeline for each new camera design can be time-consuming. To speed camera development, we developed a method named L3 (Local, Linear, Learned) that automatically creates an image processing pipeline for any design. In this paper, we describe how we used the L3 method to design and implement an image processing pipeline for a prototype camera with five color channels. The process includes calibrating and simulating the prototype, learning local linear transforms and accelerating the pipeline using graphics processing units (GPUs).


Archive | 2012

Learning of image processing pipeline for digital imaging devices

Steven Lansel; Brian A. Wandell


Archive | 2014

Image sensor for depth estimation

Steven Lansel

Collaboration


Dive into Steven Lansel's collaborations.

Top Co-Authors
