
Publication


Featured research published by Lionel Oisel.


International Conference on Image Processing | 2006

A Video Fingerprint Based on Visual Digest and Local Fingerprints

Ayoub Massoudi; Frédéric Lefebvre; Claire-Hélène Demarty; Lionel Oisel; Bertrand Chupeau

A fingerprinting scheme extracts discriminating features, called fingerprints, that are unique and specific to each image or video. A visual hash is usually a global fingerprinting technique with crypto-system constraints. In this paper, we propose a video content identification process that combines a visual hash function with local fingerprinting. Using the visual hash function, we observe the variation of the video content and detect key frames. A local image fingerprinting technique then characterizes the detected key frames, and the set of local fingerprints summarizes the whole video or fragments of it. The video fingerprinting algorithm identifies an unknown video, or a fragment of one, within a video fingerprint database: it compares the local fingerprints of the candidate video with all local fingerprints in the database, even when strong distortions have been applied to the original content.
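As a toy illustration of the pipeline just described (key frames detected from a global visual hash, a local fingerprint per key frame, then database matching), here is a minimal sketch. The frame representation (flat lists of ints), the hash, the descriptor, and the threshold are all invented stand-ins for the paper's actual components:

```python
# Hedged sketch of the hash-then-local-fingerprint identification pipeline.
# All representations and thresholds below are illustrative assumptions.

def visual_hash(frame):
    # Toy global hash: average intensity per quadrant of the frame.
    n = len(frame) // 4
    return tuple(sum(frame[i * n:(i + 1) * n]) // max(n, 1) for i in range(4))

def detect_key_frames(frames, threshold=10):
    # A key frame is declared wherever the global hash changes sharply,
    # i.e. where the video content varies.
    keys, prev = [], None
    for idx, frame in enumerate(frames):
        h = visual_hash(frame)
        if prev is None or sum(abs(a - b) for a, b in zip(h, prev)) > threshold:
            keys.append(idx)
        prev = h
    return keys

def local_fingerprint(frame):
    # Toy local descriptor: (min, max) per quadrant instead of real
    # interest-point features.
    n = len(frame) // 4
    return tuple((min(frame[i * n:(i + 1) * n]), max(frame[i * n:(i + 1) * n]))
                 for i in range(4))

def match(candidate_frames, database):
    # Compare the candidate's key-frame fingerprints against every video
    # in the database; the best-overlapping video id wins.
    keys = detect_key_frames(candidate_frames)
    cand = {local_fingerprint(candidate_frames[k]) for k in keys}
    best, best_score = None, -1
    for vid, prints in database.items():
        score = len(cand & set(prints))
        if score > best_score:
            best, best_score = vid, score
    return best
```

A real system would use robust image hashes and distortion-tolerant local descriptors; the cascade structure (cheap global digest, then local verification) is the point being illustrated.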


Multimedia Tools and Applications | 2006

Audiovisual integration for tennis broadcast structuring

Ewa Kijak; Guillaume Gravier; Lionel Oisel; Patrick Gros

This paper focuses on the integration of multimodal features for sport video structure analysis. The method relies on a statistical model which takes into account both the shot content and the interleaving of shots. This stochastic modelling is performed in the global framework of Hidden Markov Models (HMMs) that can be efficiently applied to merge audio and visual cues. Our approach is validated in the particular domain of tennis videos. The model integrates prior information about tennis content and editing rules. The basic temporal unit is the video shot. Visual features are used to characterize the type of shot view. Audio features describe the audio events within a video shot. Two sets of audio features are used in this study: the first one is extracted from a manual segmentation of the soundtrack and is more reliable. The second one is provided by an automatic segmentation and classification process. As a result of the overall HMM process, typical tennis scenes are simultaneously segmented and identified. The experiments illustrate the improvement of HMM-based fusion over indexing using only the best single media, when both media are of similar quality.
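The shot-labeling idea above (a prior on shot interleaving plus merged audio and visual emissions, decoded jointly) can be sketched with a tiny Viterbi decoder over two hidden states. The states, transition prior, and emission tables below are invented for illustration, not the paper's trained tennis model:

```python
# Minimal HMM/Viterbi sketch of audio-visual fusion for shot labeling.
# States, transitions and emissions are illustrative assumptions.
import math

STATES = ["rally", "break"]
TRANS = {"rally": {"rally": 0.7, "break": 0.3},
         "break": {"rally": 0.4, "break": 0.6}}
# Audio and visual cues are merged by multiplying their likelihoods,
# i.e. adding log-probabilities, assuming conditional independence.
EMIT_VISUAL = {"rally": {"global_view": 0.8, "close_up": 0.2},
               "break": {"global_view": 0.1, "close_up": 0.9}}
EMIT_AUDIO = {"rally": {"ball_hits": 0.9, "applause": 0.1},
              "break": {"ball_hits": 0.2, "applause": 0.8}}

def viterbi(shots):
    # shots: list of (visual_cue, audio_cue) pairs, one per video shot.
    logp = {s: math.log(0.5) + math.log(EMIT_VISUAL[s][shots[0][0]])
               + math.log(EMIT_AUDIO[s][shots[0][1]]) for s in STATES}
    back = []
    for v, a in shots[1:]:
        nxt, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: logp[p] + math.log(TRANS[p][s]))
            nxt[s] = (logp[prev] + math.log(TRANS[prev][s])
                      + math.log(EMIT_VISUAL[s][v]) + math.log(EMIT_AUDIO[s][a]))
            ptr[s] = prev
        logp, back = nxt, back + [ptr]
    # Backtrack the most likely label sequence.
    last = max(STATES, key=logp.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

Decoding the whole shot sequence at once is what lets the interleaving prior correct locally ambiguous shots, which is the advantage HMM fusion has over per-shot classification.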


International Conference on Computer Vision | 2007

Probabilistic Color and Adaptive Multi-Feature Tracking with Dynamically Switched Priority Between Cues

Vijay Badrinarayanan; Patrick Pérez; F. Le Clerc; Lionel Oisel

We present a probabilistic multi-cue tracking approach that combines a novel randomized template tracker with a particle filter based on a constant color model. The approach derives simple binary confidence measures for each tracker, which drive priority-based switching between the two fundamental cues for state estimation: at each tracking step, the state of the object is estimated from one of the two distributions associated with the cues. This switching also brings about interaction between the cues at irregular intervals, in the form of cross sampling. Within this scheme, we tackle the important aspect of dynamic target-model adaptation under randomized template tracking, which, by construction, possesses the ability to adapt to changing object appearances. Further, to track the object through occlusions, we interrupt sequential resampling and achieve relock using the color cue. To evaluate the efficacy of this scheme, we put it to the test against several state-of-the-art trackers using the VIVID online evaluation program and make quantitative comparisons.
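The switching logic can be caricatured in a few lines: each cue produces an estimate and a binary confidence, the template cue has priority, the color cue handles relock, and neither cue confident means the last state is held. The 1-D "state", the observation format, and the confidence rules are invented for illustration; the paper's trackers are a randomized template tracker and a color-model particle filter:

```python
# Hedged sketch of priority-based switching between two tracking cues.
# Observations, state space and confidence tests are illustrative assumptions.

def track(observations, template, color_model,
          max_template_err=5, max_color_err=20):
    states = []
    for obs in observations:
        template_est, color_est = obs["template"], obs["color"]
        # Binary confidences: a cue is trusted if its estimate stays
        # close to its reference model.
        template_ok = abs(template_est - template) <= max_template_err
        color_ok = abs(color_est - color_model) <= max_color_err
        if template_ok:
            state = template_est      # template cue has priority
            template = template_est   # dynamic target-model adaptation
        elif color_ok:
            state = color_est         # relock using the color cue
            template = color_est      # cross sampling: reseed the template
        else:
            state = template          # occlusion: hold the last state
        states.append(state)
    return states
```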


International Conference on Image Processing | 2003

Hierarchical structure analysis of sport videos using HMMs

Ewa Kijak; Lionel Oisel; Patrick Gros

This paper focuses on the use of hidden Markov models (HMMs) for structure analysis of sport videos. The video structure parsing relies on the analysis of the temporal interleaving of video shots, with respect to a priori information about video content and editing rules. The basic temporal unit is the video shot and visual features are used to characterize its type of view. Our approach is validated in the particular domain of tennis videos. As a result, typical tennis scenes are identified. In addition, each shot is assigned to a level in the hierarchy described in terms of point, game and set.


IEEE Transactions on Image Processing | 2003

One-dimensional dense disparity estimation for three-dimensional reconstruction

Lionel Oisel; Etienne Mémin; Luce Morin; Franck Galpin

We present a method for fully automatic three-dimensional (3D) reconstruction from a pair of weakly calibrated images, in order to deal with the modeling of complex rigid scenes. A two-dimensional (2D) triangular mesh model of the scene is computed using a two-step algorithm mixing sparse matching and dense motion estimation approaches. The 2D mesh is iteratively refined to fit any arbitrary 3D surface; at convergence, each triangular patch corresponds to the projection of a 3D plane. The proposed algorithm relies first on a dense disparity field. The dense field estimation, modeled within a robust framework, is constrained by the epipolar geometry. The resulting field is then segmented according to homographic models using an iterative Delaunay triangulation. In association with a weak calibration and camera motion estimation algorithm, this 2D planar model is used to obtain a VRML-compatible 3D model of the scene.
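The core of the robust regularized estimation can be sketched in one dimension: disparity is iteratively updated by a brightness-constancy data term along the (horizontal) epipolar line plus a smoothness term whose influence is down-weighted across large disparity jumps, so depth discontinuities survive. The 1-D signals, Lorentzian-style weight, and step sizes are illustrative assumptions, not the paper's actual energy:

```python
# Toy 1-D robust regularized disparity refinement for a rectified pair.
# All parameters and the robust weight are illustrative assumptions.

def refine_disparity(left, right, d, iters=200, lam=0.5, sigma=2.0, step=0.1):
    n = len(d)
    for _ in range(iters):
        new = d[:]
        for i in range(n):
            # Data term: brightness constancy along the epipolar line,
            # linearized with a finite-difference image gradient.
            j = min(max(int(round(i + d[i])), 0), n - 1)
            grad = right[min(j + 1, n - 1)] - right[j]
            data = (right[j] - left[i]) * grad
            # Robust smoothness: neighbours pull d[i] toward them, but the
            # pull is down-weighted across large disparity jumps.
            smooth = 0.0
            for k in (i - 1, i + 1):
                if 0 <= k < n:
                    diff = d[k] - d[i]
                    w = 1.0 / (1.0 + (diff / sigma) ** 2)  # Lorentzian weight
                    smooth += w * diff
            new[i] = d[i] + step * (lam * smooth - data)
        d = new
    return d
```

The same update structure, in 2-D and embedded in a multiresolution scheme, is what the robust framework in the paper regularizes.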


International Conference on Image Processing | 1998

Planar facets segmentation using a multiresolution dense disparity field estimation

Lionel Oisel; Luce Morin; Etienne Mémin; Claude Labit

In this paper we propose a new algorithm for planar-facet segmentation of sequences of uncalibrated images, in order to recover 3D models of complex scenes. This is performed using a two-step algorithm. First, a robust and regularized dense disparity field is computed under the epipolar geometry constraint. The resulting field is then segmented according to homographic models using an iterative Delaunay triangulation.


European Conference on Computer Vision | 2000

Geometric Driven Optical Flow Estimation and Segmentation for 3D Reconstruction

Lionel Oisel; Etienne Mémin; Luce Morin

We present a method for fully automatic 3D reconstruction from a pair of uncalibrated images, in order to deal with the modeling of complex rigid scenes. A 2D triangular mesh model of the scene is computed using a two-step algorithm mixing sparse matching and dense motion estimation approaches. The 2D mesh is iteratively refined to fit any arbitrary 3D surface; at convergence, each triangular patch corresponds to the projection of a 3D plane. The algorithm proposed here relies first on a dense disparity field. The dense field estimation, modeled within a robust framework, is constrained by the epipolar geometry. The resulting field is then segmented according to homographic models using an iterative Delaunay triangulation. In association with a simplified self-calibration algorithm, this 2D planar model is used to obtain a VRML-compatible 3D model of the scene.


Visual Communications and Image Processing | 1998

Epipolar constrained motion estimation for reconstruction from video sequences

Lionel Oisel; Etienne Mémin; Luce Morin; Claude Labit

In this paper we present a method for matching two different views of a static scene without any calibration information on the camera. To that end, we use a technique derived from optical flow estimation which takes the epipolar constraint into account; the epipolar geometry is computed directly from image data. We assume the dense disparity field to be smooth in any planar region, and smoothness is enforced using a regularization method. By adding a robust M-estimator on the smoothness term, the resulting model implicitly takes depth discontinuities into account. We use a multiresolution scheme that allows large displacements to be recovered. Results are shown on real pairs of images.
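The multiresolution idea can be sketched as coarse-to-fine search: estimate the displacement on downsampled signals first, then refine around the upscaled estimate at each finer level, so large displacements are reachable with a small search radius per level. The 1-D signals, zero padding outside the image, and plain SSD block matching are simplifying assumptions; the paper uses 2-D robust regularized estimation along epipolar lines:

```python
# Hedged coarse-to-fine sketch of multiresolution displacement estimation.

def downsample(sig):
    # Halve the resolution by averaging neighbouring pairs.
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def best_shift(a, b, center, radius):
    # Exhaustive search for the integer shift minimizing the SSD within
    # `radius` of the initial guess `center`; out-of-range samples read 0.
    def ssd(s):
        return sum((a[i] - (b[i + s] if 0 <= i + s < len(b) else 0.0)) ** 2
                   for i in range(len(a)))
    return min(range(center - radius, center + radius + 1), key=ssd)

def multires_shift(a, b, levels=3, radius=2):
    if levels == 0 or len(a) < 4:
        return best_shift(a, b, 0, radius)
    # A shift of s at the coarser level corresponds to ~2*s at this level.
    coarse = multires_shift(downsample(a), downsample(b), levels - 1, radius)
    return best_shift(a, b, 2 * coarse, radius)
```

Note how a displacement of 6 samples is found even though each level only searches a radius of 2, because the coarse estimate is doubled at every refinement step.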


Visual Communications and Image Processing | 2003

Object detection in cinematographic video sequences for automatic indexing

Jürgen Stauder; Bertrand Chupeau; Lionel Oisel

This paper presents an object detection framework applied to cinematographic post-processing of video sequences; post-processing takes place after production and before editing. At the beginning of each shot of a video, a slate (also called a clapperboard) is shown. The slate notably contains an electronic audio timecode that is necessary for audio-visual synchronization. The framework detects slates in video sequences for automatic indexing and post-processing, and is based on five steps. The first two steps drastically reduce the amount of video data to be analyzed: they ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot that may show a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is to miss no slate while eliminating long parts of video without slate appearances. The third and fourth steps apply statistical classification and pattern matching to detect and precisely locate slates in the candidate regions. These steps ensure both a high recall rate and high precision; the objective is to detect slates with very few false alarms, so as to minimize interactive corrections. In a last step, electronic timecodes are read from the slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, far more than 89% of shots in dailies are detected, and by timecode coherence analysis the precision can be raised as well. Issues for future work are to accelerate the system to run faster than real time and to extend the framework to several slate types.
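The five-step design above can be sketched as a generic filter cascade: early stages are cheap and tuned for recall, later stages are costly and tuned for precision, and each stage only sees what the previous one kept. The frame representation (dicts of boolean analysis results) and the stage predicates are illustrative stand-ins for the real image-analysis routines:

```python
# Hedged sketch of a recall-then-precision detection cascade.

def run_cascade(frames, stages):
    # Each stage keeps a subset of its input, so the expensive late
    # stages only ever see a small fraction of the video.
    candidates = list(frames)
    for stage in stages:
        candidates = [f for f in candidates if stage(f)]
    return candidates

# Stages ordered cheap-to-expensive; names mirror the paper's five steps
# (the fifth step, timecode reading, runs on the surviving detections).
STAGES = [
    lambda f: f["near_shot_start"],    # step 1: frames near a shot start
    lambda f: f["slate_like_colors"],  # step 2: slate-like color distribution
    lambda f: f["classifier_hit"],     # step 3: statistical classification
    lambda f: f["pattern_match"],      # step 4: precise pattern matching
]
```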


Archive | 2007

Method for computing a fingerprint of a video sequence

Frédéric Lefebvre; Claire-Hélène Demarty; Lionel Oisel; Ayoub Massoudi

Collaboration


Dive into Lionel Oisel's collaborations.

Top Co-Authors

Frédéric Lefebvre

Université catholique de Louvain


Luce Morin

INSA Rennes
