
Publication


Featured research published by Olivier Verscheure.


Digital Video Compression: Algorithms and Technologies 1996 | 1996

Perceptual quality measure using a spatiotemporal model of the human visual system

Christian J. van den Branden Lambrecht; Olivier Verscheure

This paper addresses the problem of quality estimation of digitally coded video sequences. The topic is of great interest since many digital video products are about to be released, and robust methodologies for testing and performance evaluation of such devices are therefore important. The inherent difficulty is that human vision has to be taken into account in order to assess the quality of a sequence with good correlation to human judgment. It is well known that the commonly used metric, the signal-to-noise ratio, is not correlated with human vision. A metric for the assessment of video coding quality is presented. It is based on a multi-channel model of human spatio-temporal vision that has been parameterized for video coding applications through psychophysical experiments. The mechanisms of visual perception are simulated by a spatio-temporal filter bank. The decomposition is then used to account for phenomena such as contrast sensitivity and masking. Once the amount of distortion actually perceived is known, quality can be assessed at various levels. The described metric is able to rate the overall quality of the decoded video sequence as well as the rendition of important features of the sequence, such as contours or textures.
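The structure the abstract describes (filter-bank decomposition, contrast-sensitivity weighting, pooling of per-channel errors) can be sketched in a few lines of numpy. This is an illustration of the general idea only: the Gaussian band-pass bank, the channel weights, and the squared-error pooling are invented stand-ins, not the filters or parameters of the paper, which also models temporal channels and masking.

```python
import numpy as np

def bandpass_bank(shape, n_bands=4, width=0.05):
    """Crude radial band-pass filters in the frequency domain -- a
    stand-in for the paper's spatial channels (temporal channels and
    masking are omitted here)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)
    centers = np.linspace(0.05, 0.4, n_bands)
    return [np.exp(-((r - c) ** 2) / (2 * width ** 2)) for c in centers]

def perceptual_distortion(ref, dist, weights=(0.5, 1.0, 0.8, 0.3)):
    """Decompose the coding error into channels, weight each channel
    (mimicking contrast sensitivity; the values are made up), then pool."""
    F_err = np.fft.fft2(ref) - np.fft.fft2(dist)
    err = 0.0
    for filt, w in zip(bandpass_bank(ref.shape), weights):
        band = np.fft.ifft2(F_err * filt).real   # per-channel error signal
        err += w * np.mean(band ** 2)            # pool channel energies
    return err

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
noisy = frame + 0.1 * rng.standard_normal((64, 64))
d_same = perceptual_distortion(frame, frame)
d_noisy = perceptual_distortion(frame, noisy)
```

Unlike a plain signal-to-noise ratio, such a measure can weight frequency channels unequally, which is what lets this family of metrics agree better with human judgment.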


IEEE Transactions on Image Processing | 2001

Joint source/FEC rate selection for quality-optimal MPEG-2 video delivery

Pascal Frossard; Olivier Verscheure

This paper deals with the optimal allocation of MPEG-2 encoding and media-independent forward error correction (FEC) rates under a given total bandwidth. Optimality is defined in terms of minimum perceptual distortion given a set of video and network parameters. We first derive the set of equations leading to the residual loss process parameters, namely the packet loss ratio (PLR) and the average burst length after FEC decoding. We then show that the perceptual source distortion decreases exponentially with increasing MPEG-2 source rate. We also demonstrate that the perceptual distortion due to data loss is directly proportional to the number of lost macroblocks, and therefore decreases with the amount of channel protection. Finally, we derive the global set of equations that lead to the optimal dynamic rate allocation. The optimal distribution is shown to outperform classical FEC schemes, thanks to its adaptivity to the scene complexity, the available bandwidth, and the network performance. Furthermore, our approach holds for any standard video compression algorithm (i.e., MPEG-x, H.26x).
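The rate split being optimized can be illustrated with a toy model: a source distortion term decaying exponentially with source rate, a loss term proportional to the residual loss after FEC decoding, and a grid search over the split. The distortion constants, the (20, k) block code, and the independent-loss assumption (the paper also models burst lengths) are all illustrative, not the paper's model.

```python
import math

def residual_plr(plr, n, k):
    """Residual loss of a systematic (n, k) block code under independent
    packet losses: recovery fails once more than n - k packets are lost."""
    return sum(math.comb(n, i) * plr ** i * (1 - plr) ** (n - i)
               for i in range(n - k + 1, n + 1))

def total_distortion(r_src, r_total, plr, a=10.0, b=1.5, c=500.0):
    """D = a*exp(-b*Rs)          source term (exponential decay in rate)
         + c*residual_plr(...)   loss term (proportional to lost data)."""
    n = 20
    k = max(1, round(n * r_src / r_total))   # leftover rate buys FEC packets
    return a * math.exp(-b * r_src) + c * residual_plr(plr, n, k)

def best_source_rate(r_total, plr, steps=50):
    """Grid search over the source/FEC split for a fixed total budget."""
    cands = [r_total * i / steps for i in range(1, steps + 1)]
    return min(cands, key=lambda rs: total_distortion(rs, r_total, plr))
```

Even this crude model reproduces the qualitative conclusion: the lossier the channel, the more of the fixed budget should be spent on protection rather than source coding.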


international conference on management of data | 2010

IBM InfoSphere Streams for scalable, real-time, intelligent transportation services

Alain Biem; Eric Bouillet; Hanhua Feng; Anand Ranganathan; Anton V. Riabov; Olivier Verscheure; Haris N. Koutsopoulos; Carlos Moran

With the widespread adoption of location tracking technologies like GPS, the domain of intelligent transportation services has seen growing interest in the last few years. Services in this domain make use of real-time location-based data from a variety of sources, combine this data with static location-based data such as maps and points of interest databases, and provide useful information to end-users. Some of the major challenges in this domain include i) scalability, in terms of processing large volumes of real-time and static data; ii) extensibility, in terms of being able to add new kinds of analyses on the data rapidly; and iii) user interaction, in terms of being able to support different kinds of one-time and continuous queries from the end-user. In this paper, we demonstrate the use of IBM InfoSphere Streams, a scalable stream processing platform, for tackling these challenges. We describe a prototype system that generates dynamic, multi-faceted views of transportation information for the city of Stockholm, using real vehicle GPS and road-network data. The system also continuously derives current traffic statistics, and provides useful value-added information such as shortest-time routes from real-time observed and inferred traffic conditions. Our performance experiments illustrate the scalability of the system. For instance, our system can process over 120,000 incoming GPS points per second, combine them with a map containing over 600,000 links, continuously generate different kinds of traffic statistics, and answer user queries.


international conference on distributed computing systems | 2006

Adaptive Control of Extreme-scale Stream Processing Systems

Lisa Amini; Navendu Jain; Anshul Sehgal; Jeremy I. Silber; Olivier Verscheure

Distributed stream processing systems offer a highly scalable and dynamically configurable platform for time-critical applications ranging from real-time, exploratory data mining to high performance transaction processing. Resource management for distributed stream processing systems is complicated by a number of factors: processing elements are constrained by their producer-consumer relationships, data and processing rates can be highly bursty, and traditional measures of effectiveness, such as utilization, can be misleading. In this paper, we propose a novel distributed, adaptive control algorithm that maximizes weighted throughput while ensuring stable operation in the face of highly bursty workloads. Our algorithm is designed to meet the challenges of extreme-scale stream processing systems, where overprovisioning is not an option, by making the best use of resources even when the offered load is greater than the available resources. We have implemented our algorithm in a real-world distributed stream processing system and a simulation environment. Our results show that our algorithm is not only self-stabilizing and robust to errors, but also outperforms traditional approaches over a broad range of buffer sizes, processing graphs, and burstiness types and levels.
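The feedback idea — each stage adapting its rate from local observations rather than from a global schedule — can be sketched with a toy proportional controller. This is not the paper's algorithm; the target occupancy, the gain, and the per-stage form are illustrative assumptions.

```python
def control_step(rates, queues, capacities, target=0.5, gain=0.2):
    """One control tick: each stage nudges its admission rate toward a
    target downstream-queue occupancy (up when the queue is underfull,
    down when overfull), clamped to [0, capacity]."""
    new_rates = []
    for r, q, c in zip(rates, queues, capacities):
        err = target - q / c        # positive when the queue is underfull
        new_rates.append(max(0.0, min(c, r + gain * err * c)))
    return new_rates

# An overloaded stage backs off; an idle one ramps up.
backed_off = control_step([1.0], [1.8], [2.0])[0]
ramped_up = control_step([1.0], [0.0], [2.0])[0]
```

Steering queues toward a mid-level occupancy rather than toward full utilization is one simple way to stay stable under bursts, echoing the abstract's point that utilization alone is a misleading measure of effectiveness.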


Real-time Imaging | 1999

User-Oriented QoS Analysis in MPEG-2 Video Delivery

Olivier Verscheure; Pascal Frossard; Maher Hamdi

We address the problem of video quality prediction and control for high-resolution video transmitted over lossy packet networks. In packet video, the bitstream flows through several subsystems (coder, network, decoder); each of them can impair the information, either by data loss or by introducing some delay. However, each of these subsystems can be fine-tuned in order to minimize these problems and to optimize the quality of the delivered signal, taking into account the available bitrate. The assessment of the end-user quality is a non-trivial issue. We analyse how the user-perceived quality is related to the average encoding bitrate for variable bit rate MPEG-2 video. We then show why simple distortion metrics may lead to inconsistent interpretations. Furthermore, for a given coder setup, we analyse the effect of packet loss on the user-level quality. We then demonstrate that, when jointly studying the impact of coding bit rate and packet loss, the reachable quality is upper-bounded and exhibits one optimal coding rate for a given packet loss ratio.


knowledge discovery and data mining | 2008

Direct mining of discriminative and essential frequent patterns via model-based search tree

Wei Fan; Kun Zhang; Hong Cheng; Jing Gao; Xifeng Yan; Jiawei Han; Philip S. Yu; Olivier Verscheure

Frequent patterns provide solutions to datasets that do not have well-structured feature vectors. However, frequent pattern mining is non-trivial since the number of unique patterns is exponential but many are non-discriminative and correlated. Currently, frequent pattern mining is performed in two sequential steps: enumerating a set of frequent patterns, followed by feature selection. Although many methods have been proposed in the past few years on how to perform each separate step efficiently, there is still limited success in eventually finding highly compact and discriminative patterns. The culprit is the inherent nature of this widely adopted two-step approach. This paper discusses these problems and proposes a new and different method. It builds a decision tree that partitions the data onto different nodes. Then at each node, it directly discovers a discriminative pattern to further divide its examples into purer subsets. Since the number of examples towards the leaf level is relatively small, the new approach is able to examine patterns with extremely low global support that could not be enumerated on the whole dataset by the two-step method. The discovered feature vectors are more accurate on some of the most difficult graph and frequent-itemset problems than most recently proposed algorithms, while the total size is typically at least 50% smaller. Importantly, the minimum support of some discriminative patterns can be extremely low (e.g. 0.03%). In order to enumerate these low support patterns, state-of-the-art frequent pattern algorithms either cannot finish due to huge memory consumption or have to enumerate 10^1 to 10^3 times more patterns before they can even be found. Software and datasets are available by contacting the authors.
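The core loop of the proposed method — grow a decision tree, and at each node mine the single most discriminative pattern on that node's examples only — can be sketched for itemsets. This toy version enumerates candidate patterns up to length 2 by brute force and scores them by information gain; the paper's actual search and pruning are more sophisticated, and all thresholds here are illustrative.

```python
from itertools import combinations
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def best_pattern(transactions, labels, min_sup=2, max_len=2):
    """Enumerate frequent itemsets on THIS node's examples only and
    return the one with the highest information gain (the 'direct
    mining' step). Transactions are sets of items."""
    items = sorted({i for t in transactions for i in t})
    base, best, best_gain = entropy(labels), None, 0.0
    for k in range(1, max_len + 1):
        for pat in combinations(items, k):
            match = [set(pat) <= t for t in transactions]
            if sum(match) < min_sup or all(match):
                continue
            pos = [l for l, m in zip(labels, match) if m]
            neg = [l for l, m in zip(labels, match) if not m]
            gain = base - (len(pos) * entropy(pos)
                           + len(neg) * entropy(neg)) / len(labels)
            if gain > best_gain:
                best, best_gain = set(pat), gain
    return best

def build_tree(transactions, labels, min_sup=2):
    """Recursively partition: each node owns fewer examples than its
    parent, so deeper nodes can afford patterns with tiny global support."""
    if len(set(labels)) == 1:
        return labels[0]
    pat = best_pattern(transactions, labels, min_sup)
    if pat is None:                       # no discriminative pattern left
        return max(set(labels), key=labels.count)
    yes = [(t, l) for t, l in zip(transactions, labels) if pat <= t]
    no = [(t, l) for t, l in zip(transactions, labels) if not pat <= t]
    return (pat,
            build_tree([t for t, _ in yes], [l for _, l in yes], min_sup),
            build_tree([t for t, _ in no], [l for _, l in no], min_sup))

def predict(tree, t):
    if not isinstance(tree, tuple):
        return tree
    pat, yes, no = tree
    return predict(yes if pat <= t else no, t)

data = [
    ({'a', 'b', 'c'}, 1), ({'a', 'b'}, 1), ({'a', 'b', 'd'}, 1),
    ({'a', 'c'}, 0), ({'c'}, 0), ({'b', 'c'}, 0),
]
tree = build_tree([t for t, _ in data], [l for _, l in data])
```

Because each node sees fewer examples than its parent, a pattern only needs local support at that node, which is how patterns with extremely low global support (the abstract's 0.03% example) become reachable without exhaustive enumeration.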


knowledge discovery and data mining | 2009

Cross domain distribution adaptation via kernel mapping

Erheng Zhong; Wei Fan; Jing Peng; Kun Zhang; Jiangtao Ren; Deepak S. Turaga; Olivier Verscheure

When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain. However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.
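The mapping step can be illustrated with a small stand-in: embed source and target data with one kernel PCA fitted on their union (so both domains share the same map), then classify target points against source-class centroids in that space. The RBF kernel, the gamma value, the nearest-centroid classifier, and the omission of the paper's sample-selection strategy are all simplifying assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.3):
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def common_kernel_space(X_src, X_tgt, dims=2, gamma=0.3):
    """Kernel PCA fitted on the UNION of both domains, so source and
    target points are embedded by the same map."""
    X = np.vstack([X_src, X_tgt])
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = H @ rbf_kernel(X, X, gamma) @ H
    vals, vecs = np.linalg.eigh(Kc)            # ascending eigenvalues
    alpha = vecs[:, -dims:] / np.sqrt(np.maximum(vals[-dims:], 1e-12))
    Z = Kc @ alpha                             # embedded coordinates
    return Z[:len(X_src)], Z[len(X_src):]

def transfer_predict(X_src, y_src, X_tgt):
    """Nearest source-class centroid in the common space."""
    Z_src, Z_tgt = common_kernel_space(X_src, X_tgt)
    cents = {c: Z_src[y_src == c].mean(axis=0) for c in np.unique(y_src)}
    return np.array([min(cents, key=lambda c: np.linalg.norm(z - cents[c]))
                     for z in Z_tgt])

rng = np.random.default_rng(1)
X_src = np.vstack([rng.normal(0.0, 0.25, (10, 2)),
                   rng.normal(5.0, 0.25, (10, 2))])
y_src = np.array([0] * 10 + [1] * 10)
# Target domain: the same two classes, shifted by 0.8 in each coordinate.
X_tgt = np.vstack([rng.normal(0.8, 0.25, (5, 2)),
                   rng.normal(5.8, 0.25, (5, 2))])
pred = transfer_predict(X_src, y_src, X_tgt)
```

The point of the shared embedding is that a fixed classifier trained on source coordinates remains meaningful on target coordinates, which is the property the paper's bounds formalize.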


IEEE Transactions on Circuits and Systems for Video Technology | 2001

AMISP: a complete content-based MPEG-2 error-resilient scheme

Pascal Frossard; Olivier Verscheure

We present a new error-resilient scheme for broadcast-quality MPEG-2 video streams to be transmitted over lossy packet networks. A new scene-complexity adaptive mechanism, namely Adaptive MPEG-2 Information Structuring (AMIS), is introduced. AMIS modulates the number of resynchronization points (i.e., slice headers and intra-coded macroblocks) in order to maximize the perceived video quality, assuming that the encoder is aware of the underlying packetization scheme, the packet loss ratio (PLR), and the error-concealment technique implemented at the decoding side. The end-to-end video quality depends both on the encoding quality and the degradation due to data loss. Therefore, AMIS constantly determines the best compromise between the rate allocated to encode pure video information and the rate aiming at reducing the sensitivity to packet loss. Experimental results show that AMIS dramatically outperforms existing structuring techniques, thanks to its efficient adaptivity. We then extend AMIS with a forward-error-correction (FEC)-based protection algorithm to become AMISP. AMISP triggers the insertion of FEC packets in the MPEG-2 video packet stream. Finally, the performance of the AMISP scheme in an MPEG-2 over RTP/UDP/IP scenario is evaluated.


Computer Communications | 2002

Joint server scheduling and proxy caching for video delivery

Olivier Verscheure; Chitra Venkatramani; Pascal Frossard; Lisa Amini

We consider the delivery of video assets over a best-effort network, possibly through a caching proxy located close to the clients generating the requests. We are interested in the joint server scheduling and prefix/partial caching strategy that minimizes the aggregate transmission rate over the backbone network (i.e. average output server rate) under a cache of given capacity. We present multiple schemes to address various service levels and client resources by enabling bandwidth and cache space tradeoffs. We also propose an optimization algorithm selecting the working set of asset prefixes. We detail algorithms for practical implementation of our schemes. Simulation results show that our scheme dramatically outperforms the full caching technique.


network and operating system support for digital audio and video | 2002

Optimal proxy management for multimedia streaming in content distribution networks

Chitra Venkatramani; Olivier Verscheure; Pascal Frossard; Kang-Won Lee

The widespread use of the Internet and the maturing of digital video technology have led to an increase in various streaming media applications. As broadband to the home becomes more prevalent, the bottleneck of delivering quality streaming media is shifting upstream to the backbone, peering links, and the best-effort Internet. In this paper, we address the problem of efficiently streaming video assets to the end clients over a distributed infrastructure consisting of origin servers and proxy caches. We build on earlier work and propose a unified mathematical framework under which various server scheduling and proxy cache management algorithms for video streaming can be analyzed. More precisely, we incorporate known server scheduling algorithms (batching/patching/batch-patching) and proxy caching algorithms (full/partial/no caching, with or without caching patch bytes) in our framework and analyze the minimum backbone bandwidth consumption under the optimal joint scheduling and caching strategies. We start by studying the optimal policy for streaming a single video object and derive a simple gradient-descent-based cache allocation algorithm to manage multiple heterogeneous videos efficiently. We then show that the performance of our heuristic is close to that of the optimal scheme under a wide range of parameters.
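The gradient-descent cache allocation can be sketched with a toy cost model in which caching a longer prefix of a video reduces backbone bandwidth with diminishing returns, scaled by the video's popularity. The quadratic cost, learning rate, and projection step are illustrative assumptions standing in for the paper's batching/patching cost model.

```python
def backbone_rate(popularity, prefix, duration=100.0):
    """Toy cost model: backbone bandwidth for one video falls with the
    cached prefix length, with diminishing returns, scaled by popularity.
    (Illustrative stand-in for the paper's scheduling cost model.)"""
    return popularity * (duration - prefix) ** 2 / duration

def allocate_cache(popularities, capacity, duration=100.0,
                   lr=0.05, iters=2000):
    """Projected gradient descent on total backbone rate, subject to
    sum(prefixes) == capacity and 0 <= prefix <= duration."""
    n = len(popularities)
    x = [capacity / n] * n                    # start from an even split
    for _ in range(iters):
        # gradient of backbone_rate w.r.t. each prefix length
        g = [-2 * p * (duration - xi) / duration
             for p, xi in zip(popularities, x)]
        x = [xi - lr * gi for xi, gi in zip(x, g)]
        # project back onto the constraint set
        x = [min(duration, max(0.0, xi)) for xi in x]
        shift = (capacity - sum(x)) / n
        x = [min(duration, max(0.0, xi + shift)) for xi in x]
    return x

# Two videos, one five times more popular, sharing 60 units of cache.
alloc = allocate_cache([5.0, 1.0], capacity=60.0)
```

At the optimum the marginal bandwidth saving per cached byte is equalized across videos (up to the box constraints), so more popular videos end up with longer cached prefixes.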

Collaboration


Dive into Olivier Verscheure's collaboration.

Top Co-Authors


Pascal Frossard

École Polytechnique Fédérale de Lausanne


Jean-Pierre Hubaux

École Polytechnique Fédérale de Lausanne
