
Publication


Featured research published by Pierre-Luc St-Charles.


IEEE Transactions on Image Processing | 2015

SuBSENSE: A Universal Change Detection Method With Local Adaptive Sensitivity

Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau; Robert Bergevin

Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which uses no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
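The pixel-level feedback loop described above can be illustrated with a minimal sketch. This is a toy illustration, not the authors' implementation: the variable names (R for per-pixel sensitivity thresholds, T for adaptation-rate dividers, d_min for the running minimal model distance, v for the local segmentation-noise estimate) and the exact update rules are assumptions for demonstration purposes only.

```python
import numpy as np

def feedback_update(R, T, d_min, v, segmented, t_lower=2.0, t_upper=256.0):
    """One toy iteration of pixel-level feedback (hypothetical update rules).

    R         : per-pixel distance thresholds (model sensitivity)
    T         : per-pixel update-rate dividers (adaptation speed)
    d_min     : running average of minimal model distances, in [0, 1]
    v         : local segmentation-noise estimate (higher = noisier region)
    segmented : boolean foreground mask for the current frame
    """
    # Thresholds grow in noisy/dynamic regions, shrink back in stable ones.
    noisy = R < (1.0 + d_min * 2.0) ** 2
    R = np.where(noisy, R + v, np.maximum(R - 1.0 / v, 1.0))
    # Model updates slow down on foreground pixels, speed up on background.
    T = np.where(segmented,
                 T + 1.0 / (v * d_min + 1e-6),
                 T - v / (d_min + 1e-6))
    return R, np.clip(T, t_lower, t_upper)
```

The key point is that every parameter is a per-pixel array adjusted online, so no frame-wide constant needs manual tuning.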


Computer Vision and Pattern Recognition | 2014

Flexible Background Subtraction with Self-Balanced Local Sensitivity

Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau; Robert Bergevin

Most background subtraction approaches offer decent results in baseline scenarios, but adaptive and flexible solutions are still uncommon as many require scenario-specific parameter tuning to achieve optimal performance. In this paper, we introduce a new strategy to tackle this problem that focuses on balancing the inner workings of a non-parametric model based on pixel-level feedback loops. Pixels are modeled using a spatiotemporal feature descriptor for increased sensitivity. Using the video sequences and ground truth annotations of the 2012 and 2014 CVPR Change Detection Workshops, we demonstrate that our approach outperforms all previously ranked methods in the original dataset while achieving good results in the most recent one.


Workshop on Applications of Computer Vision | 2014

Improving background subtraction using Local Binary Similarity Patterns

Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau

Most of the recently published background subtraction methods can still be classified as pixel-based, as most of their analysis is still only done using pixel-by-pixel comparisons. Few others might be regarded as spatial-based (or even spatiotemporal-based) methods, as they take into account the neighborhood of each analyzed pixel. Although the latter types can be viewed as improvements in many cases, most of the methods that have been proposed so far suffer in complexity, processing speed, and/or versatility when compared to their simpler pixel-based counterparts. In this paper, we present an adaptive background subtraction method, derived from the low-cost and highly efficient ViBe method, which uses a spatiotemporal binary similarity descriptor instead of simply relying on pixel intensities as its core component. We then test this method on multiple video sequences and show that by only replacing the core component of a pixel-based method it is possible to dramatically improve its overall performance while keeping memory usage, complexity and speed at acceptable levels for online applications.
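The spatiotemporal binary similarity descriptor at the core of this method compares each pixel's neighbourhood against a reference intensity. The sketch below computes a 16-bit binary string over a 5x5 neighbourhood; the sampling offsets and relative threshold value are plausible assumptions for illustration, not the exact pattern from the paper.

```python
import numpy as np

# Assumed 16-point sampling layout within a 5x5 window (illustrative only).
OFFSETS = [(-2, -2), (-2, 0), (-2, 2), (0, -2), (0, 2), (2, -2), (2, 0), (2, 2),
           (-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def binary_descriptor(img, y, x, ref=None, t_rel=0.3):
    """Compute a 16-bit binary similarity string at (y, x).

    With `ref` given, neighbours are compared against a reference intensity
    (inter-image, as used to match against a background model); otherwise
    against the pixel's own intensity (intra-image).
    """
    center = img[y, x] if ref is None else ref
    thresh = t_rel * center  # relative threshold: robust to illumination shifts
    code = 0
    for i, (dy, dx) in enumerate(OFFSETS):
        if abs(int(img[y + dy, x + dx]) - int(center)) > thresh:
            code |= 1 << i
    return code
```

Matching a pixel against background samples then reduces to a Hamming distance between such bit strings, which is what keeps the method cheap enough for online use.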


IEEE Transactions on Image Processing | 2016

Universal Background Subtraction Using Word Consensus Models

Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau; Robert Bergevin

Background subtraction is often used as the first step in video analysis and smart surveillance applications. However, the issue of inconsistent performance across different scenarios due to a lack of flexibility remains a serious concern. To address this, we propose a novel non-parametric, pixel-level background modeling approach based on word dictionaries that draws from traditional codebooks and sample consensus approaches. In this new approach, the importance of each background sample (or word) is evaluated online based on its recurrence among all local observations. This helps build smaller pixel models that are better suited for long-term foreground detection. Combining these models with a frame-level dictionary and local feedback mechanisms leads us to our proposed background subtraction method, coined “PAWCS.” Experiments on the 2012 and 2014 versions of the ChangeDetection.net data set show that PAWCS outperforms 26 previously tested and published methods in terms of overall F-Measure as well as in most categories taken individually. Our results can be reproduced with a C++ implementation available online.
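A per-pixel word dictionary with online persistence weighting can be sketched as follows. This is a loose toy illustration of the idea, not the authors' code: the persistence formula, capacity, and matching threshold are assumed values chosen for clarity.

```python
class PixelWordModel:
    """Toy per-pixel background dictionary with persistence-weighted words."""

    def __init__(self, max_words=8, match_thresh=10):
        # Each word: [value, n_occurrences, first_seen_t, last_seen_t]
        self.words = []
        self.max_words = max_words
        self.match_thresh = match_thresh

    def persistence(self, w, t):
        value, n, first, last = w
        # Words seen often, recently, and over a long time span weigh more
        # (hypothetical weighting; the paper's formula differs).
        return n / ((last - first) + 2.0 * (t - last) + 1.0)

    def observe(self, value, t):
        """Return True if `value` matches a background word; update weights."""
        for w in self.words:
            if abs(value - w[0]) <= self.match_thresh:
                w[1] += 1   # recurrence reinforces the word
                w[3] = t
                return True
        # No match: insert a candidate word, evicting the least persistent one.
        if len(self.words) >= self.max_words:
            self.words.remove(min(self.words,
                                  key=lambda w: self.persistence(w, t)))
        self.words.append([value, 1, t, t])
        return False
```

Because rarely recurring words decay and get evicted, the model stays small while still representing long-term background appearance.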


International Conference on Image Processing | 2015

Reproducible evaluation of Pan-Tilt-Zoom tracking

Gengjie Chen; Pierre-Luc St-Charles; Wassim Bouachir; Guillaume-Alexandre Bilodeau; Robert Bergevin

Tracking with a Pan-Tilt-Zoom (PTZ) camera has been a research topic in computer vision for many years. However, it is difficult to assess the progress that has been made because there is no standard evaluation methodology. The difficulty in evaluating PTZ tracking algorithms arises from their dynamic nature. In contrast to other forms of tracking, PTZ tracking involves both locating the target in the image and controlling the motors of the camera to aim it so that the target stays in its field of view. This type of tracking can only be performed online. In this paper, we propose a new evaluation framework based on a virtual PTZ camera. With this framework, tracking scenarios do not change for each experiment and we are able to replicate the main principles of online PTZ camera control and behavior including camera positioning delays, tracker processing delays, and numerical zoom. We tested our evaluation framework with the Camshift tracker to show its viability and to establish baseline results.
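The virtual-camera idea that makes PTZ tracking reproducible can be sketched as extracting a pan/tilt/zoom-dependent sub-view from a wide pre-recorded frame. The sketch below is a minimal toy version under assumed conventions (normalized pan/tilt in [0, 1], nearest-neighbour resampling); the actual framework also models spherical projection, motor positioning delays, and tracker processing delays.

```python
import numpy as np

def virtual_ptz_view(panorama, pan, tilt, zoom, out_size=(120, 160)):
    """Extract a PTZ-like sub-view from a wide pre-recorded frame.

    pan, tilt : normalized view-center coordinates in [0, 1]
    zoom      : >= 1; larger values crop a smaller window (more magnification)
    """
    h, w = panorama.shape[:2]
    vh, vw = int(h / zoom), int(w / zoom)
    cy, cx = int(tilt * h), int(pan * w)
    # Clamp the window so the virtual camera never leaves the panorama.
    y0 = int(np.clip(cy - vh // 2, 0, h - vh))
    x0 = int(np.clip(cx - vw // 2, 0, w - vw))
    crop = panorama[y0:y0 + vh, x0:x0 + vw]
    # Nearest-neighbour resize to a fixed output resolution.
    ys = np.arange(out_size[0]) * vh // out_size[0]
    xs = np.arange(out_size[1]) * vw // out_size[1]
    return crop[np.ix_(ys, xs)]
```

Because the source frame is fixed, the same tracking scenario can be replayed identically for every tracker under evaluation, which is exactly what real PTZ hardware cannot offer.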


Computer Vision and Pattern Recognition | 2015

Online multimodal video registration based on shape matching

Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau; Robert Bergevin

The registration of video sequences captured using different types of sensors often relies on dense feature matching methods, which are very costly. In this paper, we study the problem of “almost planar” scene registration (i.e. where the planar ground assumption is almost respected) in multimodal imagery using target shape information. We introduce a new strategy for robustly aligning scene elements based on the random sampling of shape contour correspondences and on the continuous update of our transformation model's parameters. We evaluate our solution on a public dataset and show its superiority by comparing it to a recently published method that targets the same problem. To make comparisons between such methods easier in the future, we provide our evaluation tools along with a full implementation of our solution online.
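Random sampling of contour correspondences can be illustrated with a simple RANSAC-style sketch: repeatedly fit a 2D affine transform to a few sampled point pairs and keep the model with the most inliers. This toy version (with an assumed inlier tolerance and a batch keep-the-best loop rather than the paper's continuous parameter updates) shows the core mechanism.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M  # shape (3, 2); apply as [x, y, 1] @ M

def ransac_align(src, dst, n_iters=200, inlier_tol=2.0, seed=0):
    """Sample minimal correspondence sets, keep the model with most inliers."""
    rng = np.random.default_rng(seed)
    src_h = np.hstack([src, np.ones((len(src), 1))])
    best_M, best_inliers = None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal affine set
        M = estimate_affine(src[idx], dst[idx])
        err = np.linalg.norm(src_h @ M - dst, axis=1)
        n_in = int((err < inlier_tol).sum())
        if n_in > best_inliers:
            best_M, best_inliers = M, n_in
    return best_M, best_inliers
```

Sampling only contour points keeps the correspondence pool small, which is what makes this far cheaper than dense multimodal feature matching.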


Computer Vision and Pattern Recognition | 2016

Fast Image Gradients Using Binary Feature Convolutions

Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau; Robert Bergevin

The recent increase in popularity of binary feature descriptors has opened the door to new lightweight computer vision applications. Most research efforts thus far have been dedicated to the introduction of new large-scale binary features, which are primarily used for keypoint description and matching. In this paper, we show that the side products of small-scale binary feature computations can efficiently filter images and estimate image gradients. The improved efficiency of low-level operations can be especially useful in time-constrained applications. Through our experiments, we show that efficient binary feature convolutions can be used to mimic various image processing operations, and even outperform Sobel gradient estimation in the edge detection problem, both in terms of speed and F-Measure.
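The idea of reusing the center-vs-neighbour differences that a small binary descriptor computes anyway can be sketched as a signed sum over a 3x3 pattern. This is a toy illustration of the principle only; the paper's actual kernels, weights, and pattern sizes differ.

```python
import numpy as np

def binary_feature_gradients(img):
    """Estimate image gradients from the per-neighbour differences that a
    small binary feature computation already produces (toy sketch)."""
    f = img.astype(np.float32)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    # Accumulate the 8 signed differences of a 3x3 binary pattern.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            diff = np.roll(f, (-dy, -dx), axis=(0, 1)) - f
            gx += dx * diff
            gy += dy * diff
    return gx, gy
```

Since the differences are byproducts of descriptor extraction, the gradient estimate comes almost for free once the binary features have been computed, which is where the speed advantage over a separate Sobel pass comes from.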


Computer Vision and Pattern Recognition | 2016

Non-planar Infrared-Visible Registration for Uncalibrated Stereo Pairs

Dinh-Luan Nguyen; Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau

Thermal infrared-visible video registration for non-planar scenes is a new area in visual surveillance. It allows the combination of information from two spectra for better human detection and segmentation. In this paper, we present a novel online framework for visible and thermal infrared registration for non-planar scenes that includes foreground segmentation, feature matching, rectification and disparity calculation. Our proposed approach is based on sparse correspondences of contour points. The key ideas of the proposed framework are the removal of spurious regions at the beginning of videos and a registration methodology for non-planar scenes. In addition, a new non-planar dataset with an associated evaluation protocol is proposed as a standard assessment. We evaluate our method on both public planar and non-planar datasets. Experimental results reveal that the proposed method not only successfully handles non-planar scenes but also achieves state-of-the-art results on planar ones.


Workshop on Applications of Computer Vision | 2015

A Self-Adjusting Approach to Change Detection Based on Background Word Consensus

Pierre-Luc St-Charles; Guillaume-Alexandre Bilodeau; Robert Bergevin


Infrared Physics & Technology | 2014

Thermal-Visible Registration of Human Silhouettes: a Similarity Measure Performance Evaluation

Guillaume-Alexandre Bilodeau; Atousa Torabi; Pierre-Luc St-Charles; Dorra Riahi

Collaboration


Dive into Pierre-Luc St-Charles's collaborations.

Top Co-Authors

Atousa Torabi

École Polytechnique de Montréal


Dorra Riahi

École Polytechnique de Montréal
