Thierry Bouwmans
University of La Rochelle
Publications
Featured research published by Thierry Bouwmans.
Computer Science Review | 2017
Thierry Bouwmans; Andrews Sobral; Sajid Javed; Soon Ki Jung; El-hadi Zahzah
Background/foreground separation is the first step in a video surveillance system for detecting moving objects. Recent research on problem formulations based on decomposition into low-rank plus sparse matrices provides a suitable framework for separating moving objects from the background. The most representative formulation is Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit (PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix. However, similar robust implicit or explicit decompositions arise in the following problem formulations: Robust Non-negative Matrix Factorization (RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal of these formulations is to obtain, explicitly or implicitly, a decomposition into a low-rank matrix plus additive matrices. They differ in the implicit or explicit decomposition, the loss function, the optimization problem and the solvers. As the problem can be NP-hard in its original formulation, and convex or not depending on the constraints and loss functions used, the key challenges concern the design of relaxed models and solvers that are as efficient as possible and require as few iterations as possible. For background/foreground separation, constraints inherent to the specificities of the background and the foreground, such as their temporal and spatial properties, need to be taken into account in the design of the problem formulation. In practice, the background sequence is then modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers.
Although many efforts have been made to develop methods for decomposition into low-rank plus additive matrices that perform visually well in foreground detection while reducing their computational cost, no algorithm today seems able to simultaneously address all the key challenges of real-world videos. This is due, in part, to the absence of a rigorous quantitative evaluation on synthetic and realistic large-scale datasets with accurate ground truth providing balanced coverage of the range of challenges present in the real world. In this context, this work aims to initiate a rigorous and comprehensive review of the similar problem formulations in robust subspace learning and tracking based on decomposition into low-rank plus additive matrices, and to test and rank existing algorithms for background/foreground separation. To do so, we first provide a preliminary review of recent developments in the different problem formulations, which allows us to define a unified view that we call Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we carefully examine each method in each robust subspace learning/tracking framework, with its decomposition, loss function, optimization problem and solver. Furthermore, we investigate whether incremental algorithms and real-time implementations can be achieved for background/foreground separation. Finally, experimental results on a large-scale dataset called Background Models Challenge (BMC 2012) show the comparative performance of 32 different robust subspace learning/tracking methods.
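As an illustration of the PCP formulation discussed above, here is a minimal NumPy sketch of RPCA solved by an ADMM scheme with singular value thresholding and soft thresholding. The defaults for λ and μ follow common heuristics from the RPCA literature; this is a generic sketch, not the implementation of any specific method surveyed here.

```python
import numpy as np

def shrink(X, tau):
    # soft-thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca_pcp(D, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Decompose D ~ L + S with L low-rank and S sparse (PCP via ADMM)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(D).sum()
    Y = np.zeros_like(D)                      # dual variable
    S = np.zeros_like(D)
    norm_d = np.linalg.norm(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)     # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)  # sparse update
        R = D - L - S                         # primal residual
        Y += mu * R
        if np.linalg.norm(R) < tol * norm_d:
            break
    return L, S
```

In the background subtraction setting, each column of `D` would be a vectorized frame; `L` then holds the background and `S` the moving objects.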
international conference on computer vision theory and applications | 2015
Caroline Silva; Thierry Bouwmans; Carl Frélicot
In this paper, we propose an eXtended Center-Symmetric Local Binary Pattern (XCS-LBP) descriptor for background modeling and subtraction in videos. By combining the strengths of the original LBP and its center-symmetric (CS-LBP) variant, it proves robust to illumination changes and noise while producing short histograms. Experiments conducted on both synthetic and real videos (from the Background Models Challenge) of outdoor urban scenes under various conditions show that the proposed XCS-LBP outperforms its direct competitors on the background subtraction task.
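For readers unfamiliar with center-symmetric patterns, the following is a minimal sketch of the classical CS-LBP operator (8 neighbours, radius 1), which XCS-LBP extends; the exact XCS-LBP coding is defined in the paper itself, and the threshold `T` below is an illustrative choice.

```python
import numpy as np

def cs_lbp(img, T=0.01):
    """CS-LBP codes (8 neighbours, radius 1) for each interior pixel.

    Only the 4 center-symmetric neighbour pairs are compared, so codes
    use 4 bits (16 bins) instead of the 256 bins of the original LBP.
    """
    img = img.astype(np.float64)
    # (dy, dx) offsets of the four center-symmetric neighbour pairs
    pairs = [((-1, 0), (1, 0)), ((-1, 1), (1, -1)),
             ((0, 1), (0, -1)), ((1, 1), (-1, -1))]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = img[1 + dy1 : h - 1 + dy1, 1 + dx1 : w - 1 + dx1]
        b = img[1 + dy2 : h - 1 + dy2, 1 + dx2 : w - 1 + dx2]
        codes |= ((a - b) > T).astype(np.uint8) << bit
    return codes
```

A histogram of these codes over a local region gives the short descriptor used for background/foreground comparison.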
international conference on image processing | 2012
Charles Guyon; Thierry Bouwmans; El-hadi Zahzah
Foreground detection is the first step in a video surveillance system for detecting moving objects. Principal Component Analysis (PCA) provides a convenient framework for separating moving objects from the background, but without a mechanism for robust analysis, the moving objects may be absorbed into the background model. This drawback can be addressed by recent research on Robust Principal Component Analysis (RPCA). The background sequence is then modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. In this paper, we propose to use an RPCA method based on low-rank and block-sparse matrix decomposition to achieve foreground detection. This decomposition enforces the low-rankness of the background and the block-sparsity of the foreground. Experimental results on different datasets show the pertinence of the proposed approach.
workshop on image analysis for multimedia interactive services | 2008
F. El Baf; Thierry Bouwmans; Bertrand Vachon
Foreground detection is a key step in the background subtraction problem. This approach consists of detecting moving objects from static cameras through a classification of pixels as foreground or background. Critical situations such as noise, illumination changes and structural background changes produce uncertainty in the classification of image pixels, which can generate false detections. In this context, we propose a fuzzy approach using the Choquet integral to handle this classification uncertainty. Experiments on different video datasets were carried out by testing different color spaces and by fusing color and texture features. The proposed method is characterized by robustness against illumination changes, shadows and small background changes, and it is validated by the experimental results.
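The discrete Choquet integral used for fusing per-feature similarities can be sketched in a few lines. The feature names and fuzzy-measure values in the usage example are illustrative, not those of the paper; the measure must only satisfy μ(∅) = 0 and μ(all features) = 1.

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of per-feature scores w.r.t. a fuzzy measure.

    scores:  dict feature -> similarity in [0, 1]
    measure: dict frozenset of features -> measure value, with
             measure[frozenset()] == 0 and measure[all features] == 1
    """
    feats = sorted(scores, key=scores.get)  # features by ascending score
    total, prev = 0.0, 0.0
    for i, f in enumerate(feats):
        # coalition of features whose score is >= the current one
        coalition = frozenset(feats[i:])
        total += (scores[f] - prev) * measure[coalition]
        prev = scores[f]
    return total
```

With an additive measure this reduces to a weighted mean; non-additive measures let the fusion reward (or penalize) agreement between specific feature subsets, which is the point of using it for color/texture fusion.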
multimedia signal processing | 2012
Zhenjie Zhao; Thierry Bouwmans; Xuebo Zhang; Yongchun Fang
Based on the Type-2 Fuzzy Gaussian Mixture Model (T2-FGMM) and Markov Random Field (MRF), we propose a novel background modeling method for motion detection in dynamic scenes. The key idea of the proposed approach is the introduction of spatial-temporal constraints into the T2-FGMM through a Bayesian framework. Pixel-level evaluation results demonstrate that the proposed method performs better than the standard Gaussian Mixture Model (GMM) and T2-FGMM on typical dynamic backgrounds such as waving trees and rippling water.
international conference on computer vision | 2012
Charles Guyon; Thierry Bouwmans; El-hadi Zahzah
Foreground detection is the first step in a video surveillance system for detecting moving objects. Robust Principal Component Analysis (RPCA) provides a convenient framework for separating moving objects from the background. The background sequence is then modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. In this paper, we propose to use a low-rank matrix factorization with an IRLS (iteratively reweighted least squares) scheme and to address, in the minimization process, the spatial connectivity and the temporal sparseness of moving objects (i.e., outliers). Experimental results on the BMC 2012 datasets show the pertinence of the proposed approach.
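The IRLS idea, re-solving a weighted least-squares problem so that large residuals (outliers) are progressively down-weighted, can be illustrated on a simple robust regression; this is a generic sketch of the scheme, not the paper's low-rank factorization.

```python
import numpy as np

def irls(A, b, p=1.0, n_iter=50, eps=1e-6):
    """Approximately minimise ||A x - b||_p^p by IRLS (p <= 2)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # ordinary LS start
    for _ in range(n_iter):
        r = A @ x - b
        # small residuals get large weight; eps avoids division by zero
        w = (r ** 2 + eps) ** (p / 2 - 1)
        WA = A * w[:, None]                      # rows of A scaled by weights
        x = np.linalg.solve(A.T @ WA, WA.T @ b)  # weighted normal equations
    return x
```

With `p = 1` the fit approximates an L1 regression, so a minority of gross outliers barely moves the solution, which is the robustness property exploited for moving objects.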
computer vision and pattern recognition | 2009
Fida El Baf; Thierry Bouwmans; Bertrand Vachon
The Mixture of Gaussians (MOG) is the most popular technique for background modeling, but it presents some limitations when dynamic changes occur in the scene, such as camera jitter and movement in the background. Furthermore, the MOG is initialized using a training sequence which may be noisy and/or insufficient to model the background correctly. All these critical situations generate false classifications in the foreground detection mask due to the related uncertainty. In this context, we present a background modeling algorithm based on a Type-2 Fuzzy Mixture of Gaussians which is particularly suitable for infrared videos. The use of Type-2 Fuzzy Set Theory makes it possible to take this uncertainty into account. Results on the OTCBVS benchmark/test dataset videos show the robustness of the proposed method in the presence of dynamic backgrounds.
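For context, a minimal per-pixel MOG in the spirit of the classical Stauffer-Grimson scheme can be sketched as follows (grayscale, simplified update rule; the Type-2 fuzzy extension of the paper is not implemented here and all parameter values are illustrative).

```python
import numpy as np

class PixelMOG:
    """Per-pixel mixture of K Gaussians, Stauffer-Grimson style (simplified)."""

    def __init__(self, k=3, alpha=0.05, var0=225.0, m_sigma=2.5, t_bg=0.7):
        self.w = np.full(k, 1.0 / k)          # component weights
        self.mu = np.linspace(0.0, 255.0, k)  # component means
        self.var = np.full(k, var0)           # component variances
        self.alpha, self.m_sigma, self.t_bg, self.var0 = alpha, m_sigma, t_bg, var0

    def update(self, x):
        """Fold pixel value x into the mixture; return True if x is background."""
        d = np.abs(x - self.mu)
        match = d < self.m_sigma * np.sqrt(self.var)
        if match.any():
            i = int(np.argmin(np.where(match, d, np.inf)))  # closest matching comp.
            self.w *= 1.0 - self.alpha
            self.w[i] += self.alpha
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
        else:
            i = int(np.argmin(self.w))                       # recycle weakest comp.
            self.mu[i], self.var[i], self.w[i] = x, self.var0, self.alpha
        self.w /= self.w.sum()
        # components with the largest weight/sigma ratio form the background model
        order = np.argsort(-self.w / np.sqrt(self.var))
        cum, bg = 0.0, set()
        for j in order:
            bg.add(int(j))
            cum += self.w[j]
            if cum > self.t_bg:
                break
        return bool(match.any()) and i in bg
```

A stable pixel value comes to dominate one component (high weight, low variance) and is classified as background, while a sudden change matches no dominant component and is flagged as foreground, which is exactly the failure mode the paper's fuzzy extension targets when the training data is uncertain.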
international conference on image analysis and recognition | 2014
Andrews Sobral; Christopher G. Baker; Thierry Bouwmans; El-hadi Zahzah
Background subtraction (BS) is the art of separating moving objects from their background, and background modeling (BM) is one of the main steps of the BS process. Several subspace learning (SL) algorithms based on matrix and tensor tools have been used to perform the BM of scenes. However, many SL algorithms operate in batch mode, which increases memory consumption when the data size is very large; moreover, these algorithms are not suitable for streaming data when the full size of the data is unknown. In this work, we propose an incremental tensor subspace learning method that uses only a small part of the entire data and updates the low-rank model incrementally when new data arrive. In addition, the multi-feature model allows us to build a robust low-rank background model of the scene. Experimental results show that the proposed method achieves promising results on the background subtraction task.
asian conference on computer vision | 2014
Sajid Javed; Seon Ho Oh; Andrews Sobral; Thierry Bouwmans; Soon Ki Jung
Accurate and efficient foreground detection is an important task in video surveillance systems. The task becomes more critical when the background scene shows strong variations, such as water surfaces, waving trees, and varying illumination conditions. Recently, Robust Principal Component Analysis (RPCA) has provided a very effective framework for moving object detection: the background sequence is modeled by a low-dimensional subspace, the low-rank matrix, while the sparse error constitutes the foreground objects. But RPCA suffers from the computational complexity and memory storage limitations of batch optimization methods, making it difficult to apply in real-time systems. To handle these challenges, this paper presents a robust foreground detection algorithm via Online Robust PCA (OR-PCA) using image decomposition along with a continuous constraint, namely a Markov Random Field (MRF). OR-PCA with a good initialization scheme based on image decomposition improves both the accuracy of foreground detection and the computation time. Moreover, solving the MRF with graph cuts exploits structural information using a spatial neighborhood system and similarities to further improve the foreground segmentation in highly dynamic backgrounds. Experimental results on challenging datasets such as Wallflower, I2R, BMC 2012 and Change Detection 2014 demonstrate that our proposed scheme significantly outperforms state-of-the-art approaches and works effectively on a wide range of complex background scenes.
IEEE Transactions on Circuits and Systems for Video Technology | 2018
Sajid Javed; Arif Mahmood; Thierry Bouwmans; Soon Ki Jung
Background modeling constitutes the building block of many computer-vision tasks. Traditional schemes model the background as a low-rank matrix with corrupted entries. These schemes operate in batch mode and do not scale well with the data size. Moreover, without enforcing spatiotemporal information in the low-rank component, and because of occlusions by foreground objects and redundancy in video data, the design of a background initialization method robust against outliers is very challenging. To overcome these limitations, this paper presents a spatiotemporal low-rank modeling method on dynamic video clips for estimating a robust background model. The proposed method encodes spatiotemporal constraints by regularizing spectral graphs. Initially, a motion-compensated binary matrix is generated using optical flow information to remove redundant data and to create a set of dynamic frames from the input video sequence. Then two graphs are constructed, one between frames for temporal consistency and the other between features for spatial consistency, to encode the local structure and continuously promote the intrinsic behavior of the low-rank model against outliers. These two terms are then incorporated into an iterative matrix completion framework for improved background segmentation. Rigorous evaluation on severely occluded and dynamic background sequences demonstrates the superior performance of the proposed method over state-of-the-art approaches.
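The frame and feature graphs described above rely on standard graph Laplacians as regularizers. A minimal sketch of building an unnormalised Laplacian from a symmetrised k-NN graph (with an illustrative Gaussian-kernel bandwidth, not the paper's construction) might look like:

```python
import numpy as np

def knn_graph_laplacian(X, k=3):
    """Unnormalised Laplacian L = D - W of a symmetrised k-NN graph.

    X: (n_samples, n_features); each sample is connected to its k nearest
    neighbours (Euclidean), with Gaussian-kernel edge weights.
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2[d2 > 0]) + 1e-12    # illustrative bandwidth choice
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1 : k + 1]     # skip self (distance 0)
        W[i, nn] = np.exp(-d2[i, nn] / sigma2)
    W = np.maximum(W, W.T)                     # symmetrise the adjacency
    return np.diag(W.sum(1)) - W               # L = D - W
```

Minimising a term like trace(Zᵀ L Z) then forces columns of the low-rank estimate Z that are neighbours in the graph to stay similar, which is how temporal and spatial consistency are encoded.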