Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where André Zaccarin is active.

Publication


Featured research published by André Zaccarin.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

Learning and Removing Cast Shadows through a Multidistribution Approach

Nicolas Martel-Brisson; André Zaccarin

Moving cast shadows are a major concern for foreground detection algorithms. The processing of foreground images in surveillance applications typically requires that such shadows be identified and removed from the detected foreground. This paper presents a novel pixel-based statistical approach to model moving cast shadows of nonuniform and varying intensity. This approach uses the Gaussian mixture model (GMM) learning ability to build statistical models describing moving cast shadows on surfaces. This statistical modeling can deal with scenes with complex and time-varying illumination, including light saturated areas, and prevent false detection in regions where shadows cannot be detected. The proposed approach can be used with pixel-based descriptions of shadowed surfaces found in the literature. It significantly reduces their false detection rate without increasing the missed detection rate. Results obtained with different scene types and shadow models show the robustness of the approach.
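
The per-pixel GMM learning that this line of work builds on can be illustrated with a minimal online mixture update in the style of Stauffer and Grimson; the component count, learning rate, and match threshold below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

class PixelGMM:
    """Minimal per-pixel Gaussian mixture over grayscale values,
    illustrating the kind of online learning the paper builds on
    (a sketch, not the authors' exact shadow model)."""

    def __init__(self, k=3, lr=0.01, match_sigmas=2.5):
        self.k = k                      # number of mixture components
        self.lr = lr                    # learning rate (assumed value)
        self.match = match_sigmas       # match threshold in std devs
        self.mu = np.zeros(k)           # component means
        self.var = np.full(k, 100.0)    # component variances
        self.w = np.full(k, 1.0 / k)    # component weights

    def update(self, x):
        """Update the mixture with a new pixel value x and return the
        index of the matched (or replaced) component."""
        matched = np.abs(x - self.mu) < self.match * np.sqrt(self.var)
        if matched.any():
            i = int(np.argmax(matched))          # first matching mode
            d = x - self.mu[i]
            self.mu[i] += self.lr * d            # pull mean toward x
            self.var[i] += self.lr * (d ** 2 - self.var[i])
            ind = np.zeros(self.k)
            ind[i] = 1.0
            self.w += self.lr * (ind - self.w)   # reinforce matched mode
        else:
            # Replace the least probable component with a new one.
            i = int(np.argmin(self.w))
            self.mu[i], self.var[i], self.w[i] = x, 100.0, self.lr
        self.w /= self.w.sum()
        return i
```

Repeatedly shadowed surface points keep reinforcing the same component, which is how a mixture of this kind can come to describe cast shadow appearance as well as the background itself.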


Computer Vision and Pattern Recognition | 2005

Moving cast shadow detection from a Gaussian mixture shadow model

Nicolas Martel-Brisson; André Zaccarin

Moving cast shadows are a major concern for foreground detection algorithms. Processing of foreground images in surveillance applications typically requires that such shadows have been identified and removed from the detected foreground. This paper presents a novel pixel-based statistical approach to model moving cast shadows of non-uniform and varying intensity. This approach uses the Gaussian mixture model (GMM) learning ability to build statistical models describing moving cast shadows on surfaces. This statistical modeling can deal with scenes with complex and time-varying illumination, and prevent false detection in regions where shadows cannot be detected. Gaussian mixture shadow models (GMSM) are automatically constructed and updated over time, are easily added to GMM architecture for foreground detection, and require only a small number of parameters. Results obtained with different scene types show the robustness of the approach.
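
As a rough illustration of how learned shadow states can sit alongside background states at classification time, here is a hypothetical per-pixel decision rule over Gaussian components; the grayscale simplification and the threshold are assumptions, not the actual GMSM architecture.

```python
import numpy as np

def classify_pixel(x, bg_mu, bg_var, sh_mu, sh_var, n_sigma=2.5):
    """Label a grayscale pixel value as background, shadow, or
    foreground against learned Gaussian states (illustrative only;
    bg_* come from the background GMM, sh_* from a shadow state)."""
    def matches(mu, var):
        return abs(x - mu) < n_sigma * np.sqrt(var)
    if matches(bg_mu, bg_var):
        return "background"
    if matches(sh_mu, sh_var):   # shadow state learned over time
        return "shadow"
    return "foreground"
```

The appeal of this arrangement, as the abstract notes, is that shadow states reuse the existing GMM machinery and add only a few parameters per pixel.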


Computer Vision and Pattern Recognition | 2008

Kernel-based learning of cast shadows from a physical model of light sources and surfaces for low-level segmentation

Nicolas Martel-Brisson; André Zaccarin

In background subtraction, cast shadows induce silhouette distortions and object fusions that hinder the performance of high-level scene-monitoring algorithms. We introduce a nonparametric framework to model the behavior of surfaces when shadows are cast on them. Based on physical properties of light sources and surfaces, we identify a direction in RGB space along which background surface values under cast shadows are found. We then model the posterior distribution of lighting attenuation under cast shadows and foreground objects, which allows us to differentiate foreground and cast shadow values of similar chromaticity. The algorithms are completely unsupervised and take advantage of scene activity to learn model parameters. Spatial gradient information is also used to reinforce the learning process. Our contributions are twofold. First, with a better model describing cast shadows on surfaces, we achieve a higher success rate in segmenting moving cast shadows in complex scenes. Second, obtaining such models is a step toward a full scene parametrization in which light source properties, surface reflectance models, and scene 3D geometry are estimated for low-level segmentation.
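
The geometric idea of a shadow direction in RGB space, together with a nonparametric density over lighting attenuation, can be sketched as follows; the projection rule and kernel bandwidth are assumptions for illustration rather than the paper's estimator.

```python
import numpy as np

def shadow_attenuation(pixel_rgb, bg_rgb):
    """Project an observed RGB value onto the background color
    direction; return the attenuation factor along that direction and
    the residual (distance off the shadow line). Illustrative geometry:
    a pure intensity attenuation keeps the pixel on the bg direction."""
    bg = np.asarray(bg_rgb, dtype=float)
    px = np.asarray(pixel_rgb, dtype=float)
    alpha = float(px @ bg) / float(bg @ bg)     # attenuation along bg
    residual = float(np.linalg.norm(px - alpha * bg))
    return alpha, residual

def shadow_likelihood(alpha, samples, bandwidth=0.05):
    """Gaussian kernel density estimate of p(attenuation | shadow)
    from previously observed attenuation samples (assumed bandwidth)."""
    samples = np.asarray(samples, dtype=float)
    k = np.exp(-0.5 * ((alpha - samples) / bandwidth) ** 2)
    return k.mean() / (bandwidth * np.sqrt(2.0 * np.pi))
```

A small residual with attenuation below one is consistent with a cast shadow; a large residual suggests a foreground object even when its brightness matches a shadowed surface.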


Proceedings of the 1st ACM Workshop on Vision Networks for Behavior Analysis | 2008

Unsupervised approach for building non-parametric background and foreground models of scenes with significant foreground activity

Nicolas Martel-Brisson; André Zaccarin

Kernel-based density estimation has been successful for background subtraction in complex environments where background statistics at the pixel level cannot be described parametrically. These methods, however, typically require a training sequence free, or mostly free, of foreground activity in order to get a good initial estimate of the background distribution. We present an approach for non-parametric statistical modeling of both foreground and background in complex and busy environments, without any restrictions or constraints on the scene's foreground activity at initialization. Our unsupervised approach uses the difference in relative frequency and probability mass between background and foreground modes to generate foreground and background likelihood functions as well as estimates of foreground and background priors. For each frame, the output is a non-binary mask of foreground probabilities which can easily be combined with spatial and temporal constraints in an intelligent decision process. Results show that our approach performs well in a variety of complex scenarios where foreground probabilities can be as high as 80%.
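
A minimal sketch of this kind of kernel-based foreground/background modeling, assuming 1-D grayscale samples, a Gaussian kernel, and an illustrative fixed prior; in the paper the likelihoods and priors are learned from scene activity rather than supplied.

```python
import numpy as np

def kde(x, samples, h=5.0):
    """1-D Gaussian kernel density estimate over intensity samples
    (bandwidth h is an assumed value)."""
    samples = np.asarray(samples, dtype=float)
    k = np.exp(-0.5 * ((x - samples) / h) ** 2)
    return k.mean() / (h * np.sqrt(2.0 * np.pi))

def foreground_probability(x, bg_samples, fg_samples, p_fg=0.3):
    """Posterior foreground probability from background and foreground
    likelihoods and a prior; returns a soft (non-binary) value that a
    later decision process can combine with spatial/temporal cues."""
    p_b = kde(x, bg_samples) * (1.0 - p_fg)
    p_f = kde(x, fg_samples) * p_fg
    return p_f / (p_f + p_b + 1e-12)   # epsilon guards empty regions
```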


International Conference on 3-D Digital Imaging and Modeling | 2007

A Cable-driven Parallel Mechanism for Capturing Object Appearance from Multiple Viewpoints

Jean-Daniel Deschênes; Philippe Lambert; Simon Perreault; Nicolas Martel-Brisson; Nathaniel Zoso; André Zaccarin; Patrick Hebert; Samuel Bouchard; Clément Gosselin

This paper presents the full proof of concept of a system for capturing the light field of an object. It is based on a single high resolution camera that is moved all around the object on a cable-driven end-effector. The main advantages of this system are its scalability and low interference with scene lighting. The camera is accurately positioned along hemispheric trajectories by observing target features. From the set of gathered images, the visual hull is extracted and can be used as an approximate geometry for mapping a surface light field. The paper describes the acquisition system as well as the modeling process. The ability of the system to produce models is validated with four different objects whose sizes range from 20 cm to 3 m.
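
Visual hull extraction from silhouettes can be sketched with a toy space-carving routine; the voxel grid, the 3x4 projection matrices, and the boolean silhouette masks are assumed inputs, and this is not the authors' pipeline.

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Keep the voxels whose projection falls inside every silhouette.
    voxels: (N, 3) array of candidate 3-D points; cameras: list of 3x4
    projection matrices; silhouettes: list of boolean masks. A toy
    version of silhouette carving for illustration only."""
    voxels = np.asarray(voxels, dtype=float)
    keep = np.ones(len(voxels), dtype=bool)
    pts_h = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous
    for P, mask in zip(cameras, silhouettes):
        uvw = pts_h @ P.T                        # project to the image
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (0 <= u) & (u < w) & (0 <= v) & (v < h)
        vis = np.zeros(len(voxels), dtype=bool)
        vis[inside] = mask[v[inside], u[inside]]
        keep &= vis                              # carve away the rest
    return voxels[keep]
```

The carved point set approximates the visual hull, which the system then uses as the geometry onto which the surface light field is mapped.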


International Conference on Acoustics, Speech, and Signal Processing | 1993

Coding of interlaced sequences using a decoder-based adaptive deinterlacer

André Zaccarin; Bede Liu

Motion-compensated coding techniques developed for progressively scanned sequences are not entirely suitable for coding interlaced sequences. Direct approaches for coding interlaced sequences use the last two decoded fields for the motion-compensated prediction of the present field. The authors propose instead to adaptively deinterlace the last decoded field, combining in a single frame the information needed for the motion-compensated prediction. The adaptation is performed at the decoder and requires no overhead information. The proposed approach allows the use of true half-pixel accuracy motion estimates, facilitates the use of fast search algorithms compared with the direct approaches, and has comparable performance.
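
A minimal sketch of decoder-side adaptive deinterlacing: the missing lines of the last decoded field are filled by choosing, per pixel, between spatial line averaging and the co-located line of the previous opposite-parity field. The selection rule and threshold below are stand-ins; the point is that the encoder can repeat the same test, so no side information is needed.

```python
import numpy as np

def adaptive_deinterlace(cur_field, prev_field, thresh=8.0):
    """Build a full frame from the last decoded field (assumed to hold
    the even lines) by filling the odd lines adaptively. prev_field is
    the previous opposite-parity field, co-located with the missing
    lines. The threshold is an assumed value, not the paper's rule."""
    h, w = cur_field.shape
    cur = cur_field.astype(float)
    frame = np.zeros((2 * h, w))
    frame[0::2] = cur                            # known even lines
    below = np.vstack([cur[1:], cur[-1:]])       # clamp at the bottom
    spatial = 0.5 * (cur + below)                # intra-field average
    temporal = prev_field.astype(float)          # inter-field copy
    # Use the temporal candidate only where it agrees with the local
    # spatial estimate (i.e., where the area appears static).
    use_temporal = np.abs(temporal - spatial) < thresh
    frame[1::2] = np.where(use_temporal, temporal, spatial)
    return frame
```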


International Conference on Acoustics, Speech, and Signal Processing | 1998

Adaptive thresholding for detection of nonsignificant vectors in noisy image sequences

Luc Martel; André Zaccarin

In noisy image sequences, block matching motion estimation generates erroneous motion vectors because the algorithm tries to correlate noise. We present an adaptive threshold test to detect blocks for which only nonsignificant motion vectors can be estimated. These blocks are assigned the zero vector before any block motion estimation is performed. By nonsignificant, we mean motion vectors of nonmoving areas as well as vectors of moving areas where the noise level is too high to allow good motion estimation. Detecting these vectors reduces the computational complexity of the block matching algorithm (BMA) and the entropy of the motion field. The algorithm is embedded in a hierarchical BMA and exploits the different spectral characteristics of noise and motion to discriminate between frame-difference energy caused by each. The algorithm is also effective for low-noise sequences, where it can be used to initialize a segmentation of moving objects from the background.
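
The threshold test can be sketched as follows, assuming additive Gaussian noise and a zero-motion frame-difference energy statistic; the MAD noise estimator and the factor k are illustrative choices, not the paper's adaptive rule.

```python
import numpy as np

def nonsignificant_blocks(cur, prev, block=16, k=2.0, noise_var=None):
    """Flag blocks whose zero-motion frame-difference energy is
    consistent with noise alone, so their vectors can be set to zero
    before block matching. The test statistic is illustrative."""
    diff = cur.astype(float) - prev.astype(float)
    h, w = diff.shape
    if noise_var is None:
        # Crude robust noise estimate from the median absolute difference.
        noise_var = (np.median(np.abs(diff)) / 0.6745) ** 2
    flags = []
    for y in range(0, h - block + 1, block):
        row = []
        for x in range(0, w - block + 1, block):
            energy = np.mean(diff[y:y + block, x:x + block] ** 2)
            # Differencing two noisy frames roughly doubles the variance.
            row.append(energy < k * 2.0 * noise_var)
        flags.append(row)
    return np.array(flags)   # True where the zero vector is assigned
```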


Signal Processing: Image Communication | 1993

Block motion compensated coding of interlaced sequences using adaptively deinterlaced fields

André Zaccarin; Bede Liu

Video coding algorithms using block motion compensation were first developed for progressively scanned sequences and, as such, are not entirely suitable for interlaced sequences. In this paper we present a new approach for block-based coding of interlaced sequences. The proposed algorithm processes the interlaced sequence as a sequence of even and odd fields, using the last decoded field, adaptively deinterlaced, for the motion compensated prediction of the current field. The deinterlacing is performed at the decoder, and no extra information has to be sent to guide the adaptation. The algorithm is a simple and efficient alternative to algorithms using the last two decoded fields for the motion compensated prediction of the current field. The new approach can easily incorporate fast search algorithms and allows true half-pixel accuracy in the estimates of the vertical component of the motion vectors. On the HDTV sequences tested, the algorithm achieves superior performance due to this half-pixel accuracy.
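
Half-pixel accuracy in the motion estimates can be illustrated with a generic refinement step that bilinearly interpolates the reference around the best integer-pel match; the 3x3 half-pel search pattern and the interior-block assumption are simplifications, not the paper's exact method.

```python
import numpy as np

def half_pel_refine(ref, blk, y, x, mv):
    """Refine an integer motion vector mv = (vy, vx) for the block blk
    located at (y, x) to half-pixel accuracy, using bilinear
    interpolation of the reference. Assumes the search stays away from
    the image border; a sketch, not a production motion estimator."""
    ref = ref.astype(float)
    blk = blk.astype(float)
    bh, bw = blk.shape
    best_mv, best_sad = mv, np.inf
    for dy in (-0.5, 0.0, 0.5):
        for dx in (-0.5, 0.0, 0.5):
            vy, vx = mv[0] + dy, mv[1] + dx
            y0, x0 = int(np.floor(y + vy)), int(np.floor(x + vx))
            fy, fx = (y + vy) - y0, (x + vx) - x0
            # Bilinear blend of the four neighboring integer patches.
            patch = ((1 - fy) * (1 - fx) * ref[y0:y0 + bh, x0:x0 + bw]
                     + (1 - fy) * fx * ref[y0:y0 + bh, x0 + 1:x0 + bw + 1]
                     + fy * (1 - fx) * ref[y0 + 1:y0 + bh + 1, x0:x0 + bw]
                     + fy * fx * ref[y0 + 1:y0 + bh + 1, x0 + 1:x0 + bw + 1])
            sad = np.abs(blk - patch).sum()
            if sad < best_sad:
                best_mv, best_sad = (vy, vx), sad
    return best_mv
```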


International Conference on Image Processing | 2001

Scene break detection and classification using a block-wise difference method

Mehran Yazdi; André Zaccarin

We introduce a new approach to the detection and classification of effects in video sequences, covering cuts, fades, dissolves, and camera motion. Global motion compensation based on block matching and a measure of block mean intensities are used to detect all the effects. We compute the dominant motion vectors to detect camera motion within each shot, and we use the percentage of blocks with sudden intensity variations or gradual intensity change to detect transitions between video shots. The approach can handle complex motion during gradual effects and precisely detects effect lengths. Results on both synthetic and real sequences demonstrate that this approach can efficiently classify effects in video sequences involving significant motion.
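
A toy version of the block-wise difference idea, assuming grayscale frames, fixed thresholds, and no global motion compensation (which the paper does apply); the cut/gradual decision rule here is purely illustrative.

```python
import numpy as np

def block_means(frame, block=16):
    """Mean intensity of each block in a grayscale frame."""
    h, w = frame.shape
    hb, wb = h // block, w // block
    f = frame[:hb * block, :wb * block].astype(float)
    return f.reshape(hb, block, wb, block).mean(axis=(1, 3))

def classify_transition(prev, cur, cut_frac=0.7, grad_frac=0.3):
    """Rough cut / gradual / none decision from the fraction of blocks
    whose mean intensity changes sharply or drifts; all thresholds are
    assumed values for illustration."""
    d = np.abs(block_means(cur) - block_means(prev))
    changed = (d > 20.0).mean()              # blocks that jumped
    drifted = ((d > 3.0) & (d <= 20.0)).mean()  # blocks that drifted
    if changed > cut_frac:
        return "cut"
    if drifted > grad_frac:
        return "gradual (fade/dissolve candidate)"
    return "none"
```

Counting drifting blocks over several consecutive frames is what lets a detector of this kind estimate the length of a gradual effect, not just its presence.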


International Conference on Image Processing | 1994

Transmission of the color information using quad-trees and segmentation-based approaches for the compression of color images with limited palette

Marc Tremblay; André Zaccarin

The compression of color images is usually performed independently along each of the three axes of a luminance-chrominance color space. When applied to images using a limited color palette, this generates images that take on more values than those found in the original palette; such images must be color quantized before they can be displayed with a limited palette. In this paper, we present two new approaches for the lossy compression of color quantized images that do not require color quantization of the decoded images. The algorithms restrict the pixels of the decoded image to take values only in the original color palette. The first algorithm does so by using lists of colors taken by pixels in variable-size blocks. The second uses a color segmentation of the image. These two approaches improve on previously proposed algorithms: for comparable quality and similar bit rate, they have lower decoding complexity than standard DCT-based coding algorithms.
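
The first approach's variable-block-size color lists can be sketched with a toy quad-tree split over an indexed-color image; the max_colors stopping rule and the power-of-two square image are assumptions for illustration.

```python
import numpy as np

def quadtree_color_lists(img, x0=0, y0=0, size=None, max_colors=4):
    """Recursively split an indexed-color image (2-D array of palette
    indices, assumed square with power-of-two side) into quadrants
    until a block uses at most max_colors palette entries, then record
    the block and its color list. A toy version of the variable-block
    color-list idea; max_colors is an assumed parameter."""
    if size is None:
        size = img.shape[0]
    block = img[y0:y0 + size, x0:x0 + size]
    colors = np.unique(block)
    if len(colors) <= max_colors or size == 1:
        # Leaf: pixels in this block index into a short local list,
        # which is what keeps decoded values inside the palette.
        return [((x0, y0, size), colors.tolist())]
    half = size // 2
    out = []
    for dy in (0, half):
        for dx in (0, half):
            out += quadtree_color_lists(img, x0 + dx, y0 + dy,
                                        half, max_colors)
    return out
```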

Collaboration


Dive into André Zaccarin's collaboration.

Top Co-Authors


Bede Liu

Princeton University
