Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Tomas Crivelli is active.

Publication


Featured research published by Tomas Crivelli.


International Journal of Computer Vision | 2011

Simultaneous Motion Detection and Background Reconstruction with a Conditional Mixed-State Markov Random Field

Tomas Crivelli; Patrick Bouthemy; Bruno Cernuschi-Frías; Jianfeng Yao

In this work we present a new way of simultaneously solving the problems of motion detection and background image reconstruction. An accurate estimation of the background is only possible if we locate the moving objects; meanwhile, a correct motion detection is achieved if we have a good background model available. The key of our joint approach is to define a single random process that can take two types of values, instead of defining two different processes, one symbolic (motion detection) and one numeric (background intensity estimation). It thus allows us to exploit the (spatio-temporal) interaction between a decision (motion detection) and an estimation (intensity reconstruction) problem. Consequently, solving both tasks jointly means obtaining a single optimal estimate of such a process. The intrinsic interaction and simultaneity between both problems is shown to be better modeled within the so-called mixed-state statistical framework, which is extended here to account for symbolic states and conditional random fields. Experiments on real sequences and comparisons with existing motion detection methods support our proposal. Further implications for video sequence inpainting are also discussed.
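A caricature of the single two-valued process can be sketched in a few lines. This is not the paper's conditional mixed-state MRF optimization, just a minimal illustration of a state that is either symbolic (moving, encoded here as NaN) or numeric (a background intensity); the threshold `tau` and learning rate `alpha` are assumed values:

```python
import numpy as np

def joint_update(state, frame, tau=20.0, alpha=0.05):
    """Toy joint decision/estimation step (not the paper's MRF scheme):
    pixels far from the current background estimate are declared moving
    (symbolic state, NaN); the rest refine the background (numeric state)."""
    bg = np.where(np.isnan(state), frame, state)   # bootstrap missing background
    moving = np.abs(frame - bg) > tau              # decision part: motion detection
    bg = np.where(moving, bg, (1 - alpha) * bg + alpha * frame)  # estimation part
    out = bg.copy()
    out[moving] = np.nan                           # symbolic "moving" value
    return out
```

The point of the single-process encoding is that one array carries both the decision (NaN vs. number) and the estimate (the number itself), mirroring the mixed-state idea in a deliberately crude way.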


European Conference on Computer Vision | 2008

Simultaneous Motion Detection and Background Reconstruction with a Mixed-State Conditional Markov Random Field

Tomas Crivelli; Gwénaëlle Piriou; Patrick Bouthemy; Bruno Cernuschi-Frías; Jianfeng Yao

We consider the problem of motion detection by background subtraction. An accurate estimation of the background is only possible if we locate the moving objects; meanwhile, a correct motion detection is achieved if we have a good background model available. This work proposes a new direction in the way such problems are considered. The main idea is to formulate this class of problems as a single joint decision-estimation step. The goal is to exploit the way two processes interact, even if they are of a dissimilar nature (symbolic-continuous), by means of a recently introduced framework called mixed-state Markov random fields. In this paper, we describe the theory behind this novel statistical framework, which subsequently allows us to formulate the specific joint problem of motion detection and background reconstruction. Experiments on real sequences and comparisons with existing methods give significant support to our approach. Further implications for video sequence inpainting are also discussed.


International Conference on Image Processing | 2006

Mixed-State Markov Random Fields for Motion Texture Modeling and Segmentation

Tomas Crivelli; Bruno Cernuschi-Frías; Patrick Bouthemy; Jianfeng Yao

The aim of this work is to model the apparent motion in image sequences depicting natural dynamic scenes. We adopt the mixed-state Markov random field (MRF) models recently introduced to represent so-called motion textures. The approach consists in describing the spatial distribution of motion measurements which exhibit a mixed-state nature: a discrete component related to the absence of motion and a continuous part for measurements different from zero. We propose several significant extensions to this model. We define an original motion texture segmentation method which does not assume conditional independence of the observations for each texture, and in which normalization factors are properly handled. Results on real examples demonstrate the accuracy and efficiency of our method.
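The mixed-state density at the heart of this model (a point mass at zero for absent motion plus a continuous part, here assumed Gaussian purely for illustration) is easy to simulate:

```python
import numpy as np

def sample_mixed_state(rho, mu, sigma, n, rng):
    """Draw n values from a mixed-state density: with probability rho the
    value is exactly 0 (no motion), otherwise it is drawn from a Gaussian
    N(mu, sigma^2). The Gaussian choice is an assumption of this sketch."""
    u = rng.random(n)
    x = rng.normal(mu, sigma, n)
    x[u < rho] = 0.0
    return x

rng = np.random.default_rng(0)
x = sample_mixed_state(0.4, 1.0, 0.2, 100_000, rng)
zero_frac = np.mean(x == 0.0)   # empirical mass of the discrete component
```

Sampling confirms the defining feature the abstract describes: an exact point mass at zero coexisting with a continuous spread of non-null motion values.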


International Conference on Image Processing | 2012

From optical flow to dense long term correspondences

Tomas Crivelli; Pierre-Henri Conze; Philippe Robert; Patrick Pérez

Dense point matching and tracking in image sequences is an open issue with implications in several domains, from content analysis to video editing. We observe that for long term dense point matching, some regions of the image are better matched by concatenation of consecutive motion vectors, while for others a direct long term matching is preferred. We propose a method to optimally estimate the correspondence of a point w.r.t. a reference image from a set of input motion estimations over different temporal intervals. Results on texture insertion by point tracking in the context of video editing are presented and compared with a state-of-the-art approach.
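The per-pixel choice between concatenated and direct displacements can be illustrated with a simplistic selection rule. The paper estimates the correspondence optimally from a set of input motion estimations; this sketch just keeps, at each pixel, the candidate with the lower matching cost, with both costs assumed given:

```python
import numpy as np

def fuse_correspondences(disp_concat, disp_direct, cost_concat, cost_direct):
    """Per-pixel selection between two candidate long-term displacement
    fields (a toy stand-in for the paper's optimal estimation): keep the
    candidate with the lower matching cost at each pixel.
    disp_* have shape (H, W, 2); cost_* have shape (H, W)."""
    pick_direct = cost_direct < cost_concat
    return np.where(pick_direct[..., None], disp_direct, disp_concat)
```

The design point matches the observation in the abstract: some regions are better served by concatenated motion vectors and others by a direct long-term match, so the fusion must be decided locally.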


IEEE Transactions on Image Processing | 2015

Robust Optical Flow Integration

Tomas Crivelli; Matthieu Fradet; Pierre-Henri Conze; Philippe Robert; Patrick Pérez

We analyze the problem of how to correctly construct dense point trajectories from optical flow fields. First, we show that simple Euler integration is unavoidably inaccurate, no matter how good the optical flow estimator is. Then, an inverse integration scheme is analyzed, which is more robust to bias and input noise and shows better stability properties. Our contribution is threefold: 1) a theoretical analysis that demonstrates why and in what sense inverse integration is more accurate; 2) a rich experimental validation on both synthetic and real (image) data; and 3) an algorithm for approximate online inverse integration. This new technique is valuable whether one is trying to propagate information densely available on a reference frame to the other frames in the sequence or, conversely, to assign information densely over each frame by pulling it from the reference.
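The Euler integration baseline that the paper analyzes (accumulating per-frame flow vectors along the trajectory, sampling each field with bilinear interpolation) can be sketched as follows; the flow layout `flow[y, x] = (u, v)` is an assumption of this sketch:

```python
import numpy as np

def bilinear(flow, x, y):
    """Sample a flow field of shape (H, W, 2) at a sub-pixel position."""
    h, w = flow.shape[:2]
    x0 = min(max(int(np.floor(x)), 0), w - 2)
    y0 = min(max(int(np.floor(y)), 0), h - 2)
    dx, dy = x - x0, y - y0
    return (flow[y0, x0] * (1 - dx) * (1 - dy)
            + flow[y0, x0 + 1] * dx * (1 - dy)
            + flow[y0 + 1, x0] * (1 - dx) * dy
            + flow[y0 + 1, x0 + 1] * dx * dy)

def euler_track(flows, x0, y0):
    """Forward Euler accumulation of per-frame flows (the drift-prone
    baseline): each new position is the old one plus the flow sampled
    there. Interpolation and flow errors compound frame after frame."""
    x, y = float(x0), float(y0)
    traj = [(x, y)]
    for flow in flows:
        u, v = bilinear(flow, x, y)
        x, y = x + u, y + v
        traj.append((x, y))
    return traj
```

Each step resamples the flow at an already-approximate position, which is why errors accumulate regardless of the estimator's quality; the paper's inverse scheme avoids this compounding.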


SIAM Journal on Imaging Sciences | 2013

Motion Textures: Modeling, Classification, and Segmentation Using Mixed-State Markov Random Fields

Tomas Crivelli; Bruno Cernuschi-Frías; Patrick Bouthemy; Jianfeng Yao

A motion texture is an instantaneous motion map extracted from a dynamic texture. We observe that such motion maps exhibit values of two types: a discrete component at zero (absence of motion) and continuous motion values. We thus develop a mixed-state Markov random field model to represent motion textures. The core of our approach is to show that motion information is powerful enough to classify and segment dynamic textures if it is properly modeled regarding its specific nature and the local interactions involved. A parsimonious set of 11 parameters constitutes the descriptive feature of a motion texture. The motivation of the proposed formulation runs toward the analysis of dynamic video contents, and we tackle two related problems. First, we present a method for recognition and classification of motion textures, by means of the Kullback-Leibler distance between mixed-state statistical models. Second, we define a two-frame motion texture maximum a posteriori (MAP)-based segmentation method applicable ...


British Machine Vision Conference | 2012

Multi-step flow fusion: towards accurate and dense correspondences in long video shots

Tomas Crivelli; Pierre-Henri Conze; Philippe Robert; Matthieu Fradet; Patrick Pérez

The aim of this work is to estimate dense displacement fields over long video shots. Put in sequence, they are useful for representing point trajectories but also for propagating (pulling) information from a reference frame to the rest of the video. Highly elaborate optical flow estimation algorithms are at hand, and they have been applied before for dense point tracking by simple accumulation, though with unavoidable position drift. On the other hand, direct long-term point matching is more robust to such deviations, but it is very sensitive to ambiguous correspondences. Why not combine the benefits of both approaches? Following this idea, we develop a multi-step flow fusion method that optimally generates dense long-term displacement fields by first merging several candidate estimated paths and then filtering the tracks in the spatio-temporal domain. Our approach can handle small and large displacements with improved accuracy, and it is able to recover a trajectory after temporary occlusions. With video editing applications especially in mind, we attack the problem of graphic element insertion and video volume segmentation, together with a number of quantitative comparisons on ground-truth data with state-of-the-art approaches.
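A toy stand-in for the merging step: given several candidate displacement fields computed over different step sizes, a per-pixel median rejects outlier candidates such as drifted concatenations or ambiguous direct matches. The paper optimizes a fusion energy and then filters tracks spatio-temporally; this sketch only conveys the robust-merge idea:

```python
import numpy as np

def fuse_candidates(candidates):
    """Robust per-pixel merge of candidate displacement fields, each of
    shape (H, W, 2). The median along the candidate axis discards gross
    outliers as long as a majority of candidates roughly agree."""
    return np.median(np.stack(candidates, axis=0), axis=0)
```

The median is the simplest robust estimator that captures why fusing multiple multi-step estimates beats trusting any single accumulation.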


International Conference on Computer Vision | 2009

Learning mixed-state Markov models for statistical motion texture tracking

Tomas Crivelli; Patrick Bouthemy; Bruno Cernuschi-Frías; Jianfeng Yao

A motion texture is the instantaneous scalar map of apparent motion values extracted from a dynamic or temporal texture. It is mostly displayed by natural scene elements (fire, smoke, water) but also involves more general textured motion patterns (e.g., a crowd of people, a flock). In this work we are interested in the modeling and tracking of motion textures. Experimentally we observe that such motion maps exhibit values of a mixed type: a discrete component at zero and a continuous component of non-null motion values. Thus, we propose a statistical characterization of motion textures based on a mixed-state causal modeling. Next, the problem of tracking is considered. A set of mixed-state model parameters is learned as a descriptive feature of the motion texture to track, and displacement estimation is solved using the conditional Kullback-Leibler divergence for statistical window matching. Results and comparisons are presented on real sequences.
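The window-matching criterion can be illustrated with a plain discrete KL divergence between motion histograms. The paper uses a conditional KL divergence between learned mixed-state models, so this is a simplified stand-in with an assumed histogram representation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) between two
    histograms; eps avoids log-of-zero for empty bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def best_match(ref_hist, window_hists):
    """Statistical window matching: pick the candidate window whose
    motion histogram is closest to the reference in the KL sense."""
    return int(np.argmin([kl_divergence(ref_hist, h) for h in window_hists]))
```

Minimizing a divergence over candidate windows, rather than comparing pixel intensities, is what lets tracking survive the constantly changing appearance of fire, smoke, or water.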


Proceedings of SPIE | 2006

Segmentation of motion textures using mixed-state Markov random fields

Tomas Crivelli; Bruno Cernuschi-Frías; Patrick Bouthemy; Jianfeng Yao

The aim of this work is to model the apparent motion in image sequences depicting natural dynamic scenes (rivers, sea waves, smoke, fire, grass, etc.) where some sort of stationarity and homogeneity of motion is present. We adopt the mixed-state Markov random field models recently introduced to represent so-called motion textures. The approach consists in describing the distribution of motion measurements which exhibit a mixed nature: a discrete component related to the absence of motion and a continuous part for measurements different from zero. We propose several extensions to the spatial schemes. In this context, Gibbs distributions are analyzed, and the associated partition functions are studied in depth. Our approach is valid for general Gibbs distributions, and some particular cases of interest for motion texture modeling are analyzed. This is crucial for problems of segmentation, detection and classification. We then propose an original approach for image motion segmentation based on these models, where normalization factors are properly handled. Results for motion textures on real natural sequences demonstrate the accuracy and efficiency of our method.


Computer Vision and Image Understanding | 2016

Object-guided motion estimation

Juan-Manuel Pérez-Rúa; Tomas Crivelli; Patrick Pérez

We provide a new, previously uninvestigated approach for motion analysis in image sequences. We show the benefits of equipping a dense motion estimator with object-level tracking. We provide a new algorithm for object-aware and occlusion-aware point tracking. We evaluate our method quantitatively on available ground-truth trajectories. Motion estimation in image sequences is classically addressed in one of the two following forms: estimation of optical flow (instantaneous apparent motion over the whole image) and visual tracking (estimation of the motion of a certain scene region over time). Major progress has recently been achieved on both fronts, with robust and accurate techniques available for each problem. However, these problems are mostly studied as if they were independent, while they in fact address two faces of the same reality. This paper analyzes the benefits and consequences of combining tracking methods for the estimation of per-object dense motion trajectories. We show experimentally that studying the global translation of an object benefits the motion estimation accuracy of sample points inside the larger structure.
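One way to picture object-guided estimation (not the paper's algorithm): derive the object's global translation robustly from the raw point displacements, then bound each point's deviation from it, so gross outlier tracks are pulled toward the object motion. The clamp bound of 2.0 pixels is an arbitrary assumption of this sketch:

```python
import numpy as np

def object_guided_displacements(raw_disp):
    """Guide per-point displacements (shape (N, 2)) with an object-level
    motion: the per-axis median serves as the global object translation,
    and each point's residual is clamped to an assumed bound."""
    t = np.median(raw_disp, axis=0)   # robust global object translation
    dev = np.clip(raw_disp - t, -2.0, 2.0)  # limit per-point deviation
    return t + dev
```

Even this crude coupling shows the paper's premise: information about the object's global motion constrains and stabilizes the motion estimates of individual points inside it.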

Collaboration


Dive into Tomas Crivelli's collaborations.

Top Co-Authors

Jianfeng Yao
University of Hong Kong

Jian-Feng Yao
École normale supérieure de Cachan

Patrick Bouthemy
University of Buenos Aires