Publications

Featured research published by Jerome Meessen.


International Conference on Image Processing | 2005

Scene analysis for reducing motion JPEG 2000 video surveillance delivery bandwidth and complexity

Jerome Meessen; Christophe Parisot; Xavier Desurmont; Jean-Francois Delaigle

In this paper, we propose a new object-based video coding/transmission system using the emerging motion JPEG 2000 standard for the efficient storage and delivery of video surveillance over low-bandwidth channels. Some recent papers deal with JPEG 2000 coding/transmission based on the region of interest (ROI) feature and the multi-layer capability provided by this coding system. Those approaches allow delivering more quality for mobile objects (or ROIs) than for the background when the bandwidth is too narrow for sufficient video quality. The method proposed here provides the same features while significantly improving the average bitrate/quality ratio of delivered video when cameras are static. We transmit only the ROIs of each frame, as well as an automatic estimation of the background at a lower frame rate, in two separate motion JPEG 2000 streams. The frames are then reconstructed at the client side without the need for external data. Our method provides both better video quality and reduced client CPU usage with negligible storage overhead. Video surveillance streams stored on the server are fully compliant with existing motion JPEG 2000 decoders.
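The client-side reconstruction described above (ROIs transmitted every frame, background at a lower rate) can be sketched as follows; the function and variable names are illustrative, not from the paper's implementation.

```python
import numpy as np

def reconstruct_frame(background, roi_patches):
    """Paste the ROI patches received for the current frame onto the
    most recent background estimate (sent at a lower frame rate)."""
    frame = background.copy()
    for top, left, patch in roi_patches:
        h, w = patch.shape
        frame[top:top + h, left:left + w] = patch
    return frame

# Toy example: an 8x8 static background and one 2x2 moving object.
background = np.full((8, 8), 100, dtype=np.uint8)
mobile = np.full((2, 2), 255, dtype=np.uint8)
frame = reconstruct_frame(background, [(3, 4, mobile)])
```

Because the background stream is refreshed only occasionally, the same background estimate is reused across many frames, which is where the bandwidth saving comes from.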


Multimedia Tools and Applications | 2010

Visual event recognition using decision trees

Cédric Simon; Jerome Meessen; Christophe De Vleeschouwer

This paper presents a classifier-based approach to recognize dynamic events in video surveillance sequences. The goal of this work is to propose a flexible event recognition system that can be used without relying on a long-term explicit tracking procedure. It is composed of three stages. The first one aims at defining and building a set of relevant features describing the shape and movements of the foreground objects in the scene. To this aim, we introduce new motion descriptors based on space-time volumes. Second, an unsupervised learning-based method is used to cluster the objects, thereby defining a set of coarse-to-fine local patterns of features representing primitive events in the video sequences. Finally, events are modeled as a spatio-temporal organization of patterns based on an ensemble of randomized trees. In particular, we want this classifier to discover the temporal and causal correlations between the most discriminative patterns. Our system is evaluated and validated on both simulated and real-life data.
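As a toy illustration of the ensemble-of-randomized-trees idea, the sketch below votes over extremely randomized depth-1 trees (stumps) on made-up motion features; the real system uses deeper trees over spatio-temporal patterns, and all names and data here are invented.

```python
import random
from collections import Counter

def fit_random_stump(X, y, rng):
    """Extremely randomized split: random feature, random threshold,
    majority label on each side of the split."""
    f = rng.randrange(len(X[0]))
    values = [x[f] for x in X]
    t = rng.uniform(min(values), max(values))
    left = [yi for xi, yi in zip(X, y) if xi[f] <= t]
    right = [yi for xi, yi in zip(X, y) if xi[f] > t]
    majority = lambda labels: Counter(labels or y).most_common(1)[0][0]
    return f, t, majority(left), majority(right)

def predict(ensemble, x):
    """Majority vote over the randomized stumps."""
    votes = Counter(l if x[f] <= t else r for f, t, l, r in ensemble)
    return votes.most_common(1)[0][0]

# Toy motion descriptors (e.g. mean speed, vertical extent) for two events.
X = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.3), (0.15, 0.25),
     (0.7, 0.8), (0.8, 0.9), (0.9, 0.7)]
y = ["walk", "walk", "walk", "walk", "run", "run", "run"]
rng = random.Random(0)
ensemble = [fit_random_stump(X, y, rng) for _ in range(25)]
```

Randomizing both the feature and the threshold keeps the individual trees weak but diverse, which is what makes the ensemble vote robust.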


International Conference on Acoustics, Speech, and Signal Processing | 2007

A Flexible Video Transmission System Based on JPEG 2000 Conditional Replenishment with Multiple References

François-Olivier Devaux; Jerome Meessen; C. Parisot; Jean-Francois Delaigle; Benoît Macq; C. De Vleeschouwer

The image compression standard JPEG 2000 offers high compression efficiency as well as great flexibility in the way it accesses the content in terms of spatial location, quality level, and resolution. This paper explores how transmission systems conveying video surveillance sequences can benefit from this flexibility. Rather than transmitting each frame independently, as is generally done in the literature for JPEG 2000 based systems, we adopt a conditional replenishment scheme to exploit the temporal correlation of the video sequence. As a first contribution, we propose a rate-distortion optimal strategy to select the most profitable packets to transmit. As a second contribution, we provide the client with two references, the previous reconstructed frame and an estimation of the current scene background, which improves the transmission system performance.
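The "most profitable packets" idea can be illustrated, as a simplification of a true rate-distortion optimal strategy, by a greedy selection on distortion reduction per bit under a rate budget; the numbers and names below are invented for the sketch.

```python
def select_packets(packets, rate_budget):
    """Greedy rate-distortion selection: take the packets with the best
    distortion-reduction-per-bit ratio until the rate budget is spent.
    packets: list of (size_in_bits, distortion_drop)."""
    chosen, spent = [], 0
    for bits, gain in sorted(packets, key=lambda p: p[1] / p[0], reverse=True):
        if spent + bits <= rate_budget:
            chosen.append((bits, gain))
            spent += bits
    return chosen, spent

# Toy packets: (size in bits, distortion reduction if transmitted).
packets = [(100, 50), (200, 60), (50, 40)]
chosen, spent = select_packets(packets, rate_budget=200)
```

Conditional replenishment then only sends packets for code-blocks whose content changed relative to one of the two references, so static regions cost almost nothing per frame.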


Computer Vision and Pattern Recognition | 2007

Progressive Learning for Interactive Surveillance Scenes Retrieval

Jerome Meessen; Xavier Desurmont; Jean-Francois Delaigle; C. De Vleeschouwer; Benoît Macq

This paper tackles the challenge of interactively retrieving visual scenes within surveillance sequences acquired with a fixed camera. Contrary to today's solutions, we assume that no a priori knowledge is available, so the system must progressively learn the target scenes through interactive labelling of a few frames by the user. The proposed method is based on very low-cost feature extraction and integrates relevance feedback, multiple-instance SVM classification and active learning. Each of these three steps runs iteratively over the session and takes advantage of the progressively increasing training set. Repeatable experiments on both simulated and real data demonstrate the efficiency of the approach and show how it reaches high retrieval performance.
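A common active-learning step, which may differ from the paper's exact criterion, is to ask the user to label the frames the current classifier is least certain about; the sketch below uses uncertainty sampling with invented scores standing in for SVM outputs mapped to [0, 1].

```python
def select_queries(unlabeled, score, n=3):
    """Active-learning step: return the n frames whose classifier score is
    closest to the decision boundary (0.5), i.e. whose labels would be
    most informative for the next training round."""
    return sorted(unlabeled, key=lambda x: abs(score(x) - 0.5))[:n]

# Hypothetical per-frame relevance scores from the current classifier.
scores = {"f1": 0.05, "f2": 0.48, "f3": 0.95, "f4": 0.55, "f5": 0.10}
queries = select_queries(scores, score=scores.get, n=2)
```

Each labelled answer is added to the training set, the classifier is retrained, and the loop repeats, which is why a few labels per round are enough to make the session converge.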


ACM Multimedia | 2006

Content-based retrieval of video surveillance scenes

Jerome Meessen; Matthieu Coulanges; Xavier Desurmont; Jean-Francois Delaigle

A novel method for content-based retrieval of surveillance video data is presented. The study starts from the realistic assumption that the automatic feature extraction is kept simple, i.e. only segmentation and low-cost filtering operations have been applied. The solution is based on a new and generic dissimilarity measure for discriminating video surveillance scenes. This weighted compound measure can be interactively adapted during a session in order to capture the user's subjectivity. Upon this, a key-frame selection and a content-based retrieval system have been developed and tested on several actual surveillance sequences. Experiments have shown that the proposed method is efficient and robust to segmentation errors.
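A weighted compound dissimilarity can be sketched generically as a weighted sum of per-feature distances whose weights the session re-tunes; the specific distances, weights and update rule here are assumptions, not the paper's definitions.

```python
def dissimilarity(a, b, weights):
    """Weighted compound dissimilarity between two scene descriptors:
    a weighted sum of per-feature absolute differences."""
    return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))

def renormalize(weights):
    """Keep the weights comparable after interactive adaptation."""
    s = sum(weights)
    return [w / s for w in weights]

# Toy 3-feature scene descriptors; the user has emphasized feature 2.
a, b = (0.2, 0.9, 0.4), (0.6, 0.1, 0.4)
weights = renormalize([1.0, 2.0, 1.0])
d = dissimilarity(a, b, weights)
```

Raising the weight of a feature makes scenes that differ on it rank as less similar, which is how the measure tracks the user's subjectivity across the session.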


Workshop on Image Analysis for Multimedia Interactive Services | 2007

Using Decision Trees for Knowledge-Assisted Topologically Structured Data Analysis

Cédric Simon; Jerome Meessen; D. Tzovaras; C. De Vleeschouwer

Supervised learning of an ensemble of randomized trees is considered to recognize classes of events in topologically structured data (e.g. images or time series). We are primarily interested in classification problems that are characterized by severe scarcity of the training samples. The main idea of our paper consists in favoring the selection of attributes that are known to efficiently discriminate the minority class in those nodes of the tree that are close to the leaves and where classes are represented by a small number of training examples. In practice, the knowledge about the ability of an attribute to discriminate the classes represented in a particular node is either provided by an expert or inferred based on a pre-analysis of the entire initial training set. The experimental validation of our approach considers sign language and human behavior recognition. It reveals that the proposed knowledge-assisted tree induction mechanism efficiently compensates for the shortage of the training samples, and significantly improves the tree classifier accuracy in such scenarios.
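The biased attribute selection might look like the sketch below: uniform random choice high in the tree, prior-guided choice near the leaves. The attribute names, prior scores and the `n_min` cut-off are all illustrative assumptions.

```python
import random

def choose_attribute(attributes, prior_score, n_node_samples, rng, n_min=20):
    """Knowledge-assisted split selection: near the leaves, where few
    training samples remain, prefer the attribute that expert knowledge
    (or a pre-analysis of the full training set) scored as most
    discriminative; higher in the tree, pick uniformly at random as in
    standard randomized-tree induction."""
    if n_node_samples < n_min:
        return max(attributes, key=prior_score.get)
    return rng.choice(attributes)

# Hypothetical prior discriminative-power scores per attribute.
priors = {"hand_speed": 0.9, "torso_angle": 0.4, "frame_mean": 0.1}
rng = random.Random(0)
near_leaf = choose_attribute(list(priors), priors, n_node_samples=5, rng=rng)
deep_node = choose_attribute(list(priors), priors, n_node_samples=100, rng=rng)
```

The intuition is that with only a handful of samples in a node, empirical split scores are unreliable, so the prior knowledge takes over exactly where the data runs out.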


Conference on Image and Video Communications and Processing | 2005

Technologies for multimedia and video surveillance convergence

Didier Nicholson; Jerome Meessen

In this paper, we present an integrated system for video surveillance developed within the European IST WCAM project, using only standard multimedia and networking tools. The advantage of such a system, besides cost reduction and interoperability, is that it benefits from the fast technological evolution of video encoding and distribution tools.


Conference on Image and Video Communications and Processing | 2005

WCAM: smart encoding for wireless surveillance

Jerome Meessen; C. Parisot; C. Le Barz; Didier Nicholson; Jean-Francois Delaigle

In this paper, we present an integrated system for smart encoding in video surveillance. This system, developed within the European IST WCAM project, aims at defining an optimized JPEG 2000 codestream organization directly based on the semantic content produced by the video surveillance analysis module. The proposed system produces a fully compliant Motion JPEG 2000 stream that contains region-of-interest data (typically mobile objects) in a separate layer from regions of less interest (e.g. static background). First, the system performs a real-time unsupervised segmentation of mobiles in each frame of the video. The smart encoding module then uses these region-of-interest maps to construct a Motion JPEG 2000 codestream that allows an optimized rendering of the video surveillance stream in low-bandwidth wireless applications, allocating more quality to mobiles than to the background. Our integrated system improves the coding representation of the video content without data overhead. It can also be used in applications requiring selective scrambling of regions of interest, as well as in any other application dealing with regions of interest.
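The "separate layer, no data overhead" organization can be sketched as a per-code-block layer assignment driven by the segmentation mask; the grid layout and names are illustrative, not the codec's actual data structures.

```python
def assign_quality_layers(blocks, roi_mask):
    """Codestream organization without data overhead: code-blocks that
    overlap a detected mobile go to layer 0 (delivered first, so rendered
    at higher quality on narrow channels); background blocks go to
    layer 1. Only the packet ordering changes, not the coded data."""
    return {blk: (0 if roi_mask[blk] else 1) for blk in blocks}

# Toy 2x2 grid of code-blocks; one block contains a moving object.
roi_mask = {(0, 0): False, (0, 1): True, (1, 0): False, (1, 1): False}
layers = assign_quality_layers(list(roi_mask), roi_mask)
```

Because layers are a native JPEG 2000 concept, any compliant decoder can consume the stream, which is how the system stays standard-compatible.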


Electronic Imaging | 2007

Real-Time 3D Video Conference on Generic Hardware

Xavier Desurmont; Jean-Luc Bruyelle; Diego Ruiz; Jerome Meessen; Benoît Macq

Nowadays, video conferencing is increasingly attractive because of the economic and ecological cost of transport. Several platforms exist. The goal of the TIFANIS immersive platform is to let users interact as if they were physically together. Unlike previous tele-immersion systems, TIFANIS uses generic hardware to achieve an economically realistic implementation. The basic functions of the system are to capture the scene, transmit it through digital networks to other partners, and then render it according to each partner's viewing characteristics. The image-processing part should run in real time. We analyze the whole system, which can be split into different services such as central processing unit (CPU), graphical rendering, direct memory access (DMA), and communication through the network. Most of the processing is done by the CPU resource. It is composed of the 3D reconstruction and the detection and tracking of faces from the video stream. However, the processing needs to be parallelized into several threads that have as few dependencies as possible. In this paper, we present these issues and the way we deal with them.
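The thread decomposition can be sketched minimally: the two CPU-heavy stages named in the abstract have no mutual dependency, so they can run concurrently per frame. The stage functions here are stubs, not the system's actual processing.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub stages; in the real system these are CPU-heavy image-processing tasks.
def reconstruct_3d(frame):
    return ("mesh", frame)

def track_faces(frame):
    return ("faces", frame)

def process_frame(frame, pool):
    """3D reconstruction and face tracking do not depend on each other,
    so submit both and render only once both results are available."""
    f_mesh = pool.submit(reconstruct_3d, frame)
    f_faces = pool.submit(track_faces, frame)
    return f_mesh.result(), f_faces.result()

with ThreadPoolExecutor(max_workers=2) as pool:
    mesh, faces = process_frame("frame-0", pool)
```

In CPython, a process pool would suit truly CPU-bound stages better than threads; the thread version is kept here only to mirror the shared-memory, minimal-dependency design the abstract describes.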


Proceedings of SPIE | 2009

Flexible user interface for efficient content-based video surveillance retrieval: design and evaluation

Jerome Meessen; Mathieu Coterot; Christophe De Vleeschouwer; Xavier Desurmont; Benoît Macq

The major drawback of interactive retrieval systems is the potential frustration of the user caused by excessive labelling work. Active learning has proven to help solve this issue by carefully selecting the examples to present to the user. In this context, the design of the user interface plays a critical role, since it should invite the user to label the examples selected by the active learning. This paper presents the design and evaluation of an innovative user interface for image retrieval. It has been validated using real-life IEEE PETS video surveillance data. In particular, we investigated the most appropriate division of the display area between the retrieved video frames and the active-learning examples, taking both objective and subjective user-satisfaction parameters into account. The flexibility of the interface relies on a scalable representation of the video content, such as Motion JPEG 2000 in our implementation.

Collaboration

Top Co-Authors

Xavier Desurmont (Université catholique de Louvain)
Benoît Macq (Université catholique de Louvain)
Christophe De Vleeschouwer (Université catholique de Louvain)
Cédric Simon (Université catholique de Louvain)
C. Parisot (University College London)
Didier Nicholson (Université catholique de Louvain)
C. De Vleeschouwer (Université catholique de Louvain)
Christophe Parisot (University of Nice Sophia Antipolis)