Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pierre-Marc Jodoin is active.

Publication


Featured research published by Pierre-Marc Jodoin.


computer vision and pattern recognition | 2012

Changedetection.net: A new change detection benchmark dataset

Nil Goyette; Pierre-Marc Jodoin; Fatih Porikli; Janusz Konrad; Prakash Ishwar

Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video dataset exists for benchmarking different methods. Presented here is a unique change detection benchmark dataset consisting of nearly 90,000 frames in 31 video sequences representing 6 categories selected to cover a wide range of challenges in 2 modalities (color and thermal IR). A distinguishing characteristic of this dataset is that each frame is meticulously annotated for ground-truth foreground, background, and shadow area boundaries - an effort that goes much beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of change detection algorithms. This paper presents and discusses various aspects of the new dataset, quantitative performance metrics used, and comparative results for over a dozen previous and new change detection algorithms. The dataset, evaluation tools, and algorithm rankings are available to the public on a website and will be updated with feedback from academia and industry in the future.
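As a rough illustration of the pixel-wise comparison such a benchmark enables, here is a minimal sketch of the usual recall/precision/F-measure computation on binary masks; the function name and interface are illustrative and not part of the dataset's own evaluation tools.

```python
import numpy as np

def change_detection_scores(gt_mask, det_mask):
    """Compare a binary ground-truth mask with a detection result.

    gt_mask, det_mask: 2D boolean arrays (True = change/foreground).
    Returns recall, precision and F-measure, the kind of pixel-wise
    statistics commonly reported on change detection benchmarks.
    """
    tp = np.logical_and(det_mask, gt_mask).sum()
    fp = np.logical_and(det_mask, ~gt_mask).sum()
    fn = np.logical_and(~det_mask, gt_mask).sum()
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return recall, precision, f_measure
```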


international conference on pattern recognition | 2008

Review and evaluation of commonly-implemented background subtraction algorithms

Yannick Benezeth; Pierre-Marc Jodoin; Bruno Emile; Hélène Laurent; Christophe Rosenberger

Locating moving objects in a video sequence is the first step of many computer vision applications. Among the various motion-detection techniques, background subtraction methods are commonly implemented, especially for applications relying on a fixed camera. Since the basic inter-frame difference with a global threshold is often too simplistic, more elaborate (and often probabilistic) methods have been proposed. These methods often aim at making the detection process more robust to noise, background motion and camera jitter. In this paper, we present commonly-implemented background subtraction algorithms and we evaluate them quantitatively. In order to gauge the performance of each method, tests are performed on a wide range of real, synthetic and semi-synthetic video sequences representing different challenges.
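For reference, the "too simplistic" baseline mentioned above, inter-frame difference with a global threshold, can be sketched in a few lines; the threshold value is an arbitrary assumption.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Basic inter-frame difference with a global threshold.

    prev_frame, curr_frame: 2D uint8 grayscale images.
    Returns a boolean mask marking pixels whose intensity changed by more
    than `threshold` -- the simplistic baseline that more elaborate
    probabilistic methods aim to improve on.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```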


Journal of Electronic Imaging | 2010

Comparative study of background subtraction algorithms

Yannick Benezeth; Pierre-Marc Jodoin; Bruno Emile; Hélène Laurent; Christophe Rosenberger

In this paper, we present a comparative study of several state-of-the-art background subtraction methods. Approaches ranging from simple background subtraction with global thresholding to more sophisticated statistical methods have been implemented and tested on different videos with ground truth. The goal of this study is to provide a solid analytic ground to underscore the strengths and weaknesses of the most widely implemented motion detection methods. The methods are compared based on their robustness to different types of video, their memory requirements, and the computational effort they require. The impact of a Markovian prior, as well as of some post-processing operators, is also evaluated. Most of the videos used in this study come from state-of-the-art benchmark databases and represent different challenges such as poor signal-to-noise ratio, multimodal background motion and camera jitter. Overall, this study not only helps to better understand which types of videos each method is best suited to, but also to estimate how much better the sophisticated methods are compared to basic background subtraction.
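As a hedged illustration of the mixture-of-Gaussians family of methods this kind of study compares, the sketch below uses OpenCV's off-the-shelf MOG2 subtractor; it is not necessarily one of the implementations evaluated in the paper, and the file name and parameter values are placeholders.

```python
import cv2

# Off-the-shelf mixture-of-Gaussians background subtractor (OpenCV),
# representative of the statistical methods such a study compares.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("input_video.avi")  # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 255 = foreground, 127 = shadow, 0 = background
    fg_mask = subtractor.apply(frame)
cap.release()
```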


international conference on computer graphics and interactive techniques | 2004

Fast hierarchical importance sampling with blue noise properties

Victor Ostromoukhov; Charles Donohue; Pierre-Marc Jodoin

This paper presents a novel method for efficiently generating a good sampling pattern given an importance density over a 2D domain. A Penrose tiling is hierarchically subdivided creating a sufficiently large number of sample points. These points are numbered using the Fibonacci number system, and these numbers are used to threshold the samples against the local value of the importance density. Pre-computed correction vectors, obtained using relaxation, are used to improve the spectral characteristics of the sampling pattern. The technique is deterministic and very fast; the sampling time grows linearly with the required number of samples. We illustrate our technique with importance-based environment mapping, but the technique is versatile enough to be used in a large variety of computer graphics applications, such as light transport calculations, digital halftoning, geometry processing, and various rendering techniques.
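The sketch below illustrates only the end goal of such a technique, keeping more sample points where a 2D importance density is high, using plain jittered-grid thinning; it deliberately omits the paper's actual machinery (Penrose tiling, Fibonacci numbering, correction vectors) and makes no claim to its blue-noise quality or speed.

```python
import numpy as np

def threshold_sample(importance, rng=None):
    """Toy density-proportional sampler on a regular grid.

    importance: 2D array with values in [0, 1]. Each cell of a dense
    jittered grid contributes one candidate point, kept with probability
    equal to the local importance. A stand-in for illustration only.
    """
    rng = np.random.default_rng(rng)
    h, w = importance.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = (xs + rng.random((h, w))) / w   # jittered x in [0, 1)
    ys = (ys + rng.random((h, w))) / h   # jittered y in [0, 1)
    keep = rng.random((h, w)) < importance
    return np.column_stack([xs[keep], ys[keep]])
```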


computer vision and pattern recognition | 2014

CDnet 2014: An Expanded Change Detection Benchmark Dataset

Yi Wang; Pierre-Marc Jodoin; Fatih Porikli; Janusz Konrad; Yannick Benezeth; Prakash Ishwar

Change detection is one of the most important low-level tasks in video analytics. In 2012, we introduced the changedetection.net (CDnet) benchmark, a video dataset devoted to the evaluation of change and motion detection approaches. Here, we present the latest release of the CDnet dataset, which includes 22 additional videos (70,000 pixel-wise annotated frames) spanning 5 new categories that incorporate challenges encountered in many surveillance settings. We describe these categories in detail and provide an overview of the results of more than a dozen methods submitted to the IEEE Change Detection Workshop 2014. We highlight strengths and weaknesses of these methods and identify remaining issues in change detection.


IEEE Signal Processing Letters | 2009

Foreground-Adaptive Background Subtraction

J.M. McHugh; Janusz Konrad; Venkatesh Saligrama; Pierre-Marc Jodoin

Background subtraction is a powerful mechanism for detecting change in a sequence of images that finds many applications. The most successful background subtraction methods apply probabilistic models to background intensities evolving in time; nonparametric and mixture-of-Gaussians models are but two examples. The main difficulty in designing a robust background subtraction algorithm is the selection of a detection threshold. In this paper, we adapt this threshold to varying video statistics by means of two statistical models. In addition to a nonparametric background model, we introduce a foreground model based on a small spatial neighborhood to improve discrimination sensitivity. We also apply a Markov model to change labels to improve spatial coherence of the detections. The proposed methodology is applicable to other background models as well.
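A minimal sketch of the underlying idea, replacing a fixed threshold with a likelihood-ratio test between a background and a foreground model, is given below; the kernel bandwidth, foreground variance and prior are illustrative assumptions, and both models are simplified relative to the paper.

```python
import numpy as np

def classify_pixel(x, bg_samples, fg_mean, fg_var=400.0, prior_fg=0.1):
    """Likelihood-ratio pixel classification instead of a fixed threshold.

    x: observed intensity. bg_samples: recent background intensities at
    this pixel (kernel density estimate). fg_mean: foreground estimate,
    e.g. from a small spatial neighborhood. All parameter values here
    are illustrative assumptions.
    """
    bw = 20.0  # kernel bandwidth (assumption)
    bg = np.asarray(bg_samples, dtype=float)
    p_bg = np.mean(np.exp(-0.5 * ((x - bg) / bw) ** 2)) / (bw * np.sqrt(2 * np.pi))
    p_fg = np.exp(-0.5 * (x - fg_mean) ** 2 / fg_var) / np.sqrt(2 * np.pi * fg_var)
    # Declare foreground when the posterior ratio favours it.
    return p_fg * prior_fg > p_bg * (1.0 - prior_fg)
```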


computer vision and pattern recognition | 2009

Abnormal events detection based on spatio-temporal co-occurences

Yannick Benezeth; Pierre-Marc Jodoin; Venkatesh Saligrama; Christophe Rosenberger

We explore a location-based approach for behavior modeling and abnormality detection. In contrast to the conventional object-based approach, where an object may first be tagged, identified, classified, and tracked, we proceed directly with event characterization and behavior modeling at the pixel level based on motion labels obtained from background subtraction. Since events are temporally and spatially dependent, this calls for techniques that account for statistics of spatiotemporal events. Based on motion labels, we learn co-occurrence statistics for normal events across space-time. For one (or many) key pixel(s), we estimate a co-occurrence matrix that accounts for any two active labels which co-occur simultaneously within the same spatiotemporal volume. This co-occurrence matrix is then used as a potential function in a Markov random field (MRF) model to describe the probability of observations within the same spatiotemporal volume. The MRF distribution implicitly accounts for speed, direction, as well as the average size of the objects passing in front of each key pixel. Furthermore, when the spatiotemporal volume is large enough, the co-occurrence distribution contains the average normal path followed by moving objects. The learned normal co-occurrence distribution can be used for abnormality detection. Our method has been tested on various outdoor videos representing different challenges.
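A simplified reading of these co-occurrence statistics might be sketched as follows, counting how often pixels around one key pixel are active at the same time as that pixel; the window size and normalization are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cooccurrence_map(motion_labels, key, radius=10):
    """Count co-activations around one key pixel.

    motion_labels: boolean array of shape (T, H, W) from background
    subtraction (True = moving). For the given key pixel (y, x), counts
    how often every pixel in a spatial window is active at the same time
    the key pixel is active, normalized by the key pixel's activity.
    """
    y0, x0 = key
    t_active = motion_labels[:, y0, x0]        # frames where the key pixel moves
    window = motion_labels[t_active,
                           max(0, y0 - radius):y0 + radius + 1,
                           max(0, x0 - radius):x0 + radius + 1]
    counts = window.sum(axis=0)
    return counts / max(1, t_active.sum())     # co-occurrence frequency
```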


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Statistical Background Subtraction Using Spatial Cues

Pierre-Marc Jodoin; Max Mignotte; Janusz Konrad

Most statistical background subtraction techniques are based on the analysis of temporal color/intensity distribution. However, learning statistics on a series of time frames can be problematic, especially when no frame absent of moving objects is available or when the available memory is not sufficient to store the series of frames needed for learning. In this letter, we propose a spatial variation to the traditional temporal framework. The proposed framework allows statistical motion detection with methods trained on one background frame instead of a series of frames as is usually the case. Our framework includes two spatial background subtraction approaches suitable for different applications. The first approach is meant for scenes having a nonstatic background due to noise, camera jitter or animation in the scene (e.g., waving trees, fluttering leaves). This approach models each pixel with two PDFs: one unimodal PDF and one multimodal PDF, both trained on one background frame. In this way, the method can handle backgrounds with static and nonstatic areas. The second spatial approach is designed to use as little processing time and memory as possible. Based on the assumption that neighboring pixels often share similar temporal distribution, this second approach models the background with one global mixture of Gaussians.
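The unimodal branch of the first approach can be loosely illustrated as follows: per-pixel Gaussian statistics are estimated spatially, from the neighborhood of a single background frame, rather than temporally over many frames; the window radius and detection threshold are illustrative choices, not the paper's values.

```python
import numpy as np

def spatial_gaussian_model(background_frame, radius=2):
    """Per-pixel Gaussian learned spatially from ONE background frame."""
    f = background_frame.astype(np.float64)
    h, w = f.shape
    mean = np.empty_like(f)
    var = np.empty_like(f)
    for y in range(h):
        for x in range(w):
            patch = f[max(0, y - radius):y + radius + 1,
                      max(0, x - radius):x + radius + 1]
            mean[y, x] = patch.mean()
            var[y, x] = patch.var() + 1e-6   # avoid zero variance
    return mean, var

def detect(frame, mean, var, k=2.5):
    """Flag pixels more than k standard deviations from the spatial model."""
    return np.abs(frame.astype(np.float64) - mean) > k * np.sqrt(var)
```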


IEEE Signal Processing Magazine | 2010

Video Anomaly Identification

Venkatesh Saligrama; Janusz Konrad; Pierre-Marc Jodoin

This article describes a family of unsupervised approaches to video anomaly detection based on statistical activity analysis. Approaches based on activity analysis provide intriguing possibilities for region-of-interest (ROI) processing since relevant activities and their locations are detected prior to higher-level processing such as object tracking, tagging, and classification. This strategy is essential for scalability of video analysis to cluttered environments with a multitude of objects and activities. Activity analysis approaches typically do not involve object tracking, and yet they inherently account for spatiotemporal dependencies. They are robust to clutter arising from multiple activities and contamination arising from poor background subtraction or occlusions. They can sometimes also be employed for fusing activities from multiple cameras. We illustrate successful application of activity analysis to anomaly detection in various scenarios, including the detection of abandoned objects, crowds of people, and illegal U-turns.


Pattern Recognition Letters | 2011

Abnormality detection using low-level co-occurring events

Yannick Benezeth; Pierre-Marc Jodoin; Venkatesh Saligrama

We propose in this paper a method for behavior modeling and abnormal events detection which uses low-level features. In conventional object-based approaches, objects are identified, classified, and tracked to locate those with suspicious behavior. We proceed directly with event characterization and behavior modeling using low-level features. We first learn statistics about co-occurring events in a spatio-temporal volume in order to build the normal behavior model, called the co-occurrence matrix. The notion of co-occurring events is defined using mutual information between motion label sequences. Then, in the second phase, the co-occurrence matrix is used as a potential function in a Markov random field framework to describe, as the video streams in, the probability of observing new volumes of activity. The co-occurrence matrix is thus used for detecting moving objects whose behavior differs from the ones observed during the training phase. Interestingly, the Markov random field distribution implicitly accounts for speed, direction, as well as the average size of the objects without any higher-level intervention. Furthermore, when the spatio-temporal volume is sufficiently large, the co-occurrence distribution contains the average normal path followed by moving objects. Our method has been tested on various indoor and outdoor videos representing different challenges.
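The notion of co-occurring events via mutual information can be illustrated with a plug-in estimate between two binary motion-label sequences; this is a generic estimator in bits, not necessarily the paper's exact computation.

```python
import numpy as np

def motion_mutual_information(a, b):
    """Mutual information between two binary motion-label sequences.

    a, b: boolean arrays of equal length (per-frame motion labels of two
    pixels). Returns a plug-in estimate of I(A; B) in bits; a score of
    this kind can decide which locations count as "co-occurring".
    """
    a = np.asarray(a, dtype=int)
    b = np.asarray(b, dtype=int)
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a = np.mean(a == va)
            p_b = np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi
```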

Collaboration


Dive into Pierre-Marc Jodoin's collaborations.

Top Co-Authors

Max Mignotte, Université de Montréal
Hugo Larochelle, Université de Sherbrooke
Mohammad Havaei, Université de Sherbrooke