Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Axel Carlier is active.

Publication


Featured research published by Axel Carlier.


ACM Multimedia | 2011

Combining content-based analysis and crowdsourcing to improve user interaction with zoomable video

Axel Carlier; Guntur Ravindra; Vincent Charvillat; Wei Tsang Ooi

This paper introduces a new paradigm for interacting with zoomable video. Our interaction technique reduces the number of zooms and pans required by providing recommended viewports to the users, and replaces multiple zoom and pan actions with a simple click on the recommended viewport. The usefulness of our technique hinges on the quality of the recommended viewports, which need to match the user's intention, track movement in the scene, and properly frame the scene in the video. To this end, we propose a hybrid method where content analysis is complemented by the implicit feedback of a community of users in order to recommend viewports. We first compute preliminary sets of recommended viewports by analyzing the content of the video. These viewports allow tracking of moving objects in the scene, and are framed without violating basic aesthetic rules. To improve the relevance of the recommended viewports, we collect viewing statistics as users view a video, and use the viewports they select to reinforce the importance of certain recommendations and penalize others. New recommendations that were not previously recognized by content analysis may also emerge. The resulting recommended viewports converge towards the regions in the video that are relevant to users. A user study involving 70 participants shows that a user interface incorporating our paradigm leads to more zooming into informative regions with fewer interactions.
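
To make the reinforcement idea concrete, here is a minimal sketch, in Python, of how content-analysis viewport scores could be nudged by the viewports users actually select. The Viewport fields, the IoU threshold, and the reward/penalty update rule are illustrative assumptions, not the exact formulation used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    x: int          # top-left corner, in pixels
    y: int
    w: int          # width and height of the recommended window
    h: int
    score: float    # initial score from content analysis (e.g. motion saliency)

def overlap_ratio(a: Viewport, b: Viewport) -> float:
    """Intersection-over-union of two viewports."""
    ix = max(0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union else 0.0

def update_scores(recommended: list, selected: Viewport,
                  reward: float = 0.1, penalty: float = 0.02) -> None:
    """Reinforce recommendations close to the viewport a user actually chose,
    and slightly penalize the others, so scores drift towards user interest."""
    for vp in recommended:
        if overlap_ratio(vp, selected) > 0.5:
            vp.score += reward
        else:
            vp.score = max(0.0, vp.score - penalty)
```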


International Conference on Multimedia Retrieval | 2014

VideoJot: A Multifunctional Video Annotation Tool

Michael Riegler; Mathias Lux; Vincent Charvillat; Axel Carlier; Raynor Vliegendhart; Martha Larson

Videos are increasingly becoming a tool of communication. There are how-to videos, people discuss the actions of others based on their recorded performance, e.g., in soccer, or they simply record videos of great moments and show them to friends and family. In this paper we focus on very specific how-to videos and present a novel, web-based annotation tool that combines (i) zoom, (ii) drawing, and (iii) temporal social bookmarking in video streams. Moreover, we present a short study on the usefulness of the tool for communicating general concepts of a specific video game based on a captured game session.
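
As a small illustration of what combining zoom, drawing, and temporal bookmarking implies at the data level, the sketch below shows one possible annotation record. The field names and types are assumptions made for this example; they are not VideoJot's actual data model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VideoAnnotation:
    time_sec: float                                          # position on the video timeline
    zoom_rect: Optional[Tuple[int, int, int, int]] = None    # (x, y, w, h) zoomed region
    drawing_svg: Optional[str] = None                        # free-hand drawing overlay, serialized
    bookmark_label: Optional[str] = None                     # temporal social bookmark label

# Example: a zoom at 12.4 s and a labeled bookmark at 47 s.
annotations = [
    VideoAnnotation(time_sec=12.4, zoom_rect=(320, 180, 640, 360)),
    VideoAnnotation(time_sec=47.0, bookmark_label="boss fight starts"),
]
```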


Multimedia Tools and Applications | 2016

Assessment of crowdsourcing and gamification loss in user-assisted object segmentation

Axel Carlier; Amaia Salvador; Ferran Cabezas; Xavier Giro-i-Nieto; Vincent Charvillat; Oge Marques

There has been a growing interest in applying human computation – particularly crowdsourcing techniques – to assist in the solution of multimedia, image processing, and computer vision problems which are still too difficult to solve using fully automatic algorithms, and yet relatively easy for humans. In this paper we focus on a specific problem – object segmentation within color images – and compare different solutions which combine color image segmentation algorithms with human efforts, either in the form of an explicit interactive segmentation task or through an implicit collection of valuable human traces with a game. We use Click’n’Cut, a friendly, web-based, interactive segmentation tool that allows segmentation tasks to be assigned to many users, and Ask’nSeek, a game with a purpose designed for object detection and segmentation. The two main contributions of this paper are: (i) We use the results of Click’n’Cut campaigns with different groups of users to examine and quantify the crowdsourcing loss incurred when an interactive segmentation task is assigned to paid crowd-workers, comparing their results to the ones obtained when computer vision experts are asked to perform the same tasks. (ii) Since interactive segmentation tasks are inherently tedious and prone to fatigue, we compare the quality of the results obtained with Click’n’Cut to the ones obtained using a (fun, interactive, and potentially less tedious) game designed for the same purpose. We call this contribution the assessment of the gamification loss, since it refers to how much segmentation quality may be lost when we switch to a game-based approach to the same task. We demonstrate that the crowdsourcing loss is significant when using all the data points from workers, but decreases substantially (and becomes comparable to the quality of expert users performing similar tasks) after a modest amount of data analysis and the filtering out of users whose data are clearly not useful. We also show that, on the other hand, the gamification loss is significantly more severe: the quality of the results drops roughly by half when switching from a focused (yet tedious) task to a more fun and relaxed game environment.
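
The two losses discussed above are, at heart, differences in segmentation quality between user groups. The sketch below shows one common way to quantify this, using mean intersection-over-union against ground truth; it mirrors the spirit of the evaluation but is not the paper's exact pipeline.

```python
import numpy as np

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0

def mean_quality(masks, ground_truths) -> float:
    """Average IoU of one group's segmentations over a set of images."""
    return float(np.mean([jaccard(m, g) for m, g in zip(masks, ground_truths)]))

# Crowdsourcing loss: expert quality minus paid-worker quality on the same images.
# Gamification loss: interactive-tool quality minus game-based quality.
def quality_loss(reference_quality: float, other_quality: float) -> float:
    return reference_quality - other_quality
```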


International Conference on Pattern Recognition | 2016

A 2D shape structure for decomposition and part similarity

Kathryn Leonard; Géraldine Morin; Stefanie Hahmann; Axel Carlier

This paper presents a multilevel analysis of 2D shapes and uses it to find similarities between the different parts of a shape. Such an analysis is important for many applications such as shape comparison, editing, and compression. Our robust and stable method decomposes a shape into parts, determines a parts hierarchy, and measures similarity between parts based on a salience measure on the medial axis, the Weighted Extended Distance Function, providing a multi-resolution partition of the shape that is stable across scale and articulation. Comparison with an extensive user study on the MPEG-7 database demonstrates that our geometric results are consistent with user perception.
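
The sketch below illustrates the general ingredients of such a decomposition: a medial axis with a per-point salience value that separates a shape's core from its parts. The paper's salience measure is the Weighted Extended Distance Function; here the plain distance transform from scikit-image stands in as a much simpler proxy, purely for illustration.

```python
import numpy as np
from skimage.morphology import medial_axis  # requires scikit-image

def decompose_by_salience(shape_mask: np.ndarray, threshold: float = 0.5):
    """Split a binary 2D shape into a high-salience 'core' of its medial axis
    and lower-salience branches that roughly correspond to parts."""
    skeleton, distance = medial_axis(shape_mask, return_distance=True)
    salience = distance * skeleton             # salience defined only on the axis
    if salience.max() > 0:
        salience = salience / salience.max()   # normalize to [0, 1]
    core_axis = salience >= threshold          # medial points of the main body
    branch_axis = (salience > 0) & ~core_axis  # medial points of candidate parts
    return core_axis, branch_axis
```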


ACM Transactions on Multimedia Computing, Communications, and Applications | 2014

Bandwidth adaptation for 3D mesh preview streaming

Shanghong Zhao; Wei Tsang Ooi; Axel Carlier; Géraldine Morin; Vincent Charvillat

Online galleries of 3D models typically provide two ways to preview a model before the model is downloaded and viewed by the user: (i) by showing a set of thumbnail images of the 3D model taken from representative views (or keyviews); (ii) by showing a video of the 3D model as viewed from a moving virtual camera along a path determined by the content provider. We propose a third approach called preview streaming for mesh-based 3D objects: by streaming and showing parts of the mesh surfaces visible along the virtual camera path. This article focuses on the preview streaming architecture and framework and presents our investigation into how such a system can best handle network congestion. We present three basic methods: (a) stop-and-wait, where the camera pauses until sufficient data is buffered; (b) reduce-speed, where the camera slows down in accordance with the reduced network bandwidth; and (c) reduce-quality, where the camera continues to move at the same speed but fewer vertices are sent and displayed, leading to lower mesh quality. We further propose two advanced methods: (d) keyview-aware, which trades off mesh quality and camera speed appropriately depending on how close the current view is to the keyviews, and (e) adaptive-zoom, which improves visual quality by moving the virtual camera away from the original path. A user study reveals that our keyview-aware method is preferred over the basic methods. Moreover, the adaptive-zoom scheme compares favorably to the keyview-aware method, showing that path adaptation is a viable approach to handling bandwidth variation.
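
The sketch below condenses the adaptation policies described above into a single per-chunk decision function. The thresholds, the keyview-distance weighting, and the function signature are assumptions made for illustration; they are not the exact scheme evaluated in the article.

```python
def adapt(buffer_sec: float, bandwidth_bps: float, required_bps: float,
          dist_to_keyview: float, policy: str = "keyview-aware"):
    """Return (camera_speed_factor, quality_factor) in [0, 1] for the next chunk."""
    if policy == "stop-and-wait":
        # pause the camera until enough data is buffered
        return (0.0, 1.0) if buffer_sec < 1.0 else (1.0, 1.0)
    if policy == "reduce-speed":
        # slow the camera so the data rate matches the available bandwidth
        return (min(1.0, bandwidth_bps / required_bps), 1.0)
    if policy == "reduce-quality":
        # keep the camera moving at full speed, send fewer vertices instead
        return (1.0, min(1.0, bandwidth_bps / required_bps))
    # keyview-aware: near a keyview favor quality, far from it favor speed
    w = max(0.0, 1.0 - dist_to_keyview)      # 1.0 at a keyview, 0.0 far away
    budget = min(1.0, bandwidth_bps / required_bps)
    quality = budget ** (1.0 - 0.5 * w)      # degrade quality less near keyviews
    speed = budget / quality                 # spend the remaining budget on speed
    return (speed, quality)
```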


ACM SIGMM Conference on Multimedia Systems | 2013

3D mesh preview streaming

Shanghong Zhao; Wei Tsang Ooi; Axel Carlier; Géraldine Morin; Vincent Charvillat

Publishers of 3D models online typically provide two ways to preview a model before the model is downloaded and viewed by the user: (i) by showing a set of thumbnail images of the 3D model taken from representative views (or keyviews); (ii) by showing a video of the 3D model as viewed from a moving virtual camera along a path determined by the content provider. We propose a third approach called preview streaming for mesh-based 3D objects: by streaming and showing parts of the mesh surfaces visible along the virtual camera path. This paper focuses on the preview streaming architecture and framework, and presents our investigation into how such a system can best handle network congestion. We study three basic methods: (a) stop-and-wait, where the camera pauses until sufficient data is buffered; (b) reduce-speed, where the camera slows down in accordance with the reduced network bandwidth; and (c) reduce-quality, where the camera continues to move at the same speed but fewer vertices are sent and displayed, leading to lower mesh quality. We further propose a keyview-aware method that trades off mesh quality and camera speed appropriately depending on how close the current view is to the keyviews. A user study reveals that our keyview-aware method is preferred over the basic methods.


ACM Multimedia | 2016

Impact of 3D bookmarks on navigation and streaming in a networked virtual environment

Thomas Forgione; Axel Carlier; Géraldine Morin; Wei Tsang Ooi; Vincent Charvillat

A 3D bookmark in a networked virtual environment (NVE) provides a navigation aid, allowing the user to move quickly from their current viewpoint to a bookmarked viewpoint by simply clicking on the bookmark. In this paper, we first validate the positive impact that 3D bookmarks have in easing navigation in a 3D scene. Then, we show that, in the context of an NVE that streams content on demand from server to client, navigating with bookmarks leads to lower rendering quality at the bookmarked viewpoint, due to lower locality of data. We then investigate how prefetching the 3D data at the bookmarks and precomputing the visible faces at the bookmarks help to improve the rendering quality.
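
A minimal sketch of the prefetching idea follows: when bandwidth is available, fetch the faces precomputed as visible from each bookmarked viewpoint, starting with the bookmark nearest to the current camera position. The data structures and function names are assumptions for illustration, not the paper's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Bookmark:
    position: tuple          # (x, y, z) of the bookmarked viewpoint
    visible_faces: list      # face ids precomputed as visible from this viewpoint

def prefetch_plan(bookmarks, camera_pos, already_fetched: set) -> list:
    """Order face ids to prefetch: the nearest bookmark's missing faces first."""
    def sq_dist(b: Bookmark) -> float:
        return sum((a - c) ** 2 for a, c in zip(b.position, camera_pos))
    plan, planned = [], set()
    for b in sorted(bookmarks, key=sq_dist):
        for f in b.visible_faces:
            if f not in already_fetched and f not in planned:
                plan.append(f)
                planned.add(f)
    return plan
```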


ACM Multimedia | 2014

3D Interest Maps From Simultaneous Video Recordings

Axel Carlier; Lilian Calvet; Duong Trung Dung Nguyen; Wei Tsang Ooi; Pierre Gurdjos; Vincent Charvillat

We consider an emerging situation where multiple cameras are filming the same event simultaneously from a diverse set of angles. The captured videos provide us with the multiple-view geometry and an understanding of the 3D structure of the scene. We further extend this understanding by introducing the concept of a 3D interest map in this paper. As most users naturally film what they find interesting from their respective viewpoints, the 3D structure can be annotated with a level of interest, naturally crowdsourced from the users. A 3D interest map can be understood as an extension of saliency maps to 3D space that captures the semantics of the scene. We evaluate the idea of 3D interest maps on two real datasets, captured in settings where the environment or the cameras are sufficiently instrumented to provide an estimation of the camera poses and a reasonable synchronization between them. We study two aspects of the 3D interest maps in our evaluation. First, by projecting them into 2D, we compare them to state-of-the-art saliency maps. Second, to demonstrate the usefulness of the 3D interest maps, we apply them to a video mashup system that automatically produces an edited video from one of the datasets.
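
The "project into 2D" evaluation step can be illustrated with a short sketch: 3D points carrying interest weights are splatted through a pinhole camera into an image-plane heat map, which can then be compared with a 2D saliency map. The simple camera model and the names used here are simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def project_interest(points_3d: np.ndarray,   # (N, 3) points in camera coordinates
                     interest: np.ndarray,    # (N,) interest weight per point
                     K: np.ndarray,           # (3, 3) camera intrinsics
                     h: int, w: int) -> np.ndarray:
    """Accumulate per-point interest weights into an (h, w) 2D interest map."""
    in_front = points_3d[:, 2] > 0            # keep points in front of the camera
    pts, wts = points_3d[in_front], interest[in_front]
    proj = (K @ pts.T).T                      # pinhole projection
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    heat = np.zeros((h, w))
    for (u, v), wt in zip(uv, wts):
        if 0 <= v < h and 0 <= u < w:
            heat[v, u] += wt
    return heat
```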


ACM Multimedia | 2015

A Video Timeline with Bookmarks and Prefetch State for Faster Video Browsing

Axel Carlier; Vincent Charvillat; Wei Tsang Ooi

Reducing seek latency by predicting what the users will access is important for user experience, particularly during video browsing, where users seek frequently to skim through a video. Much existing research has strived to predict user access patterns more accurately to improve the prefetching hit rate. This paper proposes a different approach whereby the prefetch hit rate is improved by biasing the users to seek to prefetched content with higher probability, through changes to the video player user interface. Through a user study, we demonstrate that our player interface can lead to up to a 4× improvement in prefetch hit rate.
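
The metric implied above can be sketched in a few lines: the prefetch hit rate is the fraction of user seeks that land inside content the player has already prefetched. Representing prefetched content as time ranges is an assumption made for this example, not the paper's exact bookkeeping.

```python
def prefetch_hit_rate(seek_times, prefetched_ranges) -> float:
    """Fraction of seeks (in seconds) that fall inside an already-prefetched
    time range, given ranges as (start_sec, end_sec) tuples."""
    if not seek_times:
        return 0.0
    hits = sum(any(lo <= t <= hi for lo, hi in prefetched_ranges)
               for t in seek_times)
    return hits / len(seek_times)
```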


ACM Multimedia | 2014

Jiku director 2.0: a mobile video mashup system with zoom and pan using motion maps

Duong Trung Dung Nguyen; Axel Carlier; Wei Tsang Ooi; Vincent Charvillat


Collaboration


Dive into Axel Carlier's collaborations.

Top Co-Authors

Wei Tsang Ooi
National University of Singapore

Oge Marques
Florida Atlantic University

Amaia Salvador
Polytechnic University of Catalonia

Emmanuel Faure
Centre national de la recherche scientifique

Xavier Giro-i-Nieto
Polytechnic University of Catalonia