
Publication


Featured research published by Pierre Gurdjos.


ACM Multimedia | 2009

MARCH: mobile augmented reality for cultural heritage

Omar Choudary; Vincent Charvillat; Romulus Grigoras; Pierre Gurdjos

We present MARCH, a complete solution for enhanced cultural heritage discovery using the mobile phone. Simply point the camera of a mobile device at prehistoric cave engravings: MARCH augments the captured images with the experts' drawings, highlighting in real time the animal engravings, which are almost impossible to observe with the naked eye. We have created a mobile augmented reality application which runs at 14 FPS on 320x240 frames on a Nokia N95 smartphone. We describe the optimizations and the requirements needed to obtain these results on mobile devices.


Network and Operating System Support for Digital Audio and Video | 2014

Interactive Zoom and Panning from Live Panoramic Video

Vamsidhar Reddy Gaddam; Ragnar Langseth; Sigurd Ljødal; Pierre Gurdjos; Vincent Charvillat; Carsten Griwodz; Pål Halvorsen

Panorama video is becoming increasingly popular, and we present an end-to-end real-time system to interactively zoom and pan into high-resolution panoramic videos. Compared to existing systems that use perspective panoramas with cropping, our approach creates a cylindrical panorama in which the perspective is corrected in real-time, yielding a better and more natural zoom. Our experimental results also indicate that such zoomed virtual views can be generated well within the per-frame time budget. Taking into account recent trends in device development, our approach should be able to scale to a large number of concurrent users in the near future.
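The core of such perspective correction is mapping each pixel of the virtual perspective view back to cylindrical panorama coordinates. A minimal numerical sketch of that cylinder-to-perspective lookup (all function and parameter names are hypothetical, not the paper's GPU code):

```python
import numpy as np

def cyl_lookup(x, y, f, pan, pano_w, pano_h, hfov, vscale):
    """Map a pixel (x, y) of a virtual perspective view (origin at the
    principal point, focal length f in pixels) to (u, v) coordinates in
    a cylindrical panorama covering `hfov` radians horizontally.
    A sketch with hypothetical names, not the paper's implementation."""
    theta = np.arctan2(x, f) + pan      # azimuth of the viewing ray
    h = y / np.hypot(x, f)              # height of the ray on the unit cylinder
    u = (theta / hfov + 0.5) * pano_w   # azimuth -> panorama column
    v = (h / vscale + 0.5) * pano_h     # cylinder height -> panorama row
    return u, v
```

Panning simply offsets the azimuth, and zooming corresponds to increasing the virtual focal length f, so both operations reduce to re-sampling the panorama through this mapping.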


ACM SIGMM Conference on Multimedia Systems | 2014

Be your own cameraman: real-time support for zooming and panning into stored and live panoramic video

Vamsidhar Reddy Gaddam; Ragnar Langseth; Håkon Kvale Stensland; Pierre Gurdjos; Vincent Charvillat; Carsten Griwodz; Dag Johansen; Pål Halvorsen

High-resolution panoramic video with a wide field-of-view is popular in many contexts. However, in many scenarios, like surveillance and sports, it is often desirable to zoom and pan into the generated video. A challenge in this respect is real-time support, but in this demo, we present an end-to-end real-time panorama system with interactive zoom and panning. Our system, installed at Alfheim stadium (home of a Norwegian premier league soccer team), generates a cylindrical panorama live from five 2K cameras, where the perspective is corrected in real-time when presented to the client. This gives a better and more natural zoom compared to existing systems that use perspective panoramas and zoom by plain cropping. Our experimental results indicate that virtual views can be generated well within the frame budget, i.e., on a GPU, the processing requirement per frame is about 10 milliseconds. The proposed demo lets participants interactively zoom and pan into stored panorama videos generated at Alfheim stadium and from a live 2-camera array on-site.


International Conference on Image Processing | 2012

Camera tracking using concentric circle markers: Paradigms and algorithms

Lilian Calvet; Pierre Gurdjos; Vincent Charvillat

A C2Tag refers to a set of concentric circles of different radii. C2Tags have been recently introduced in computer vision, in particular for camera calibration, as they offer highly interesting photometric and geometric properties, compared to the classical “checkerboard” tags. In this work, we propose the general paradigm of camera tracking based on a planar marker consisting of at least two C2Tags. All the involved steps are described: detection, identification, 2D reconstruction, calibration and 3D reconstruction. Our contribution is to introduce the following missing steps required to deal with long video sequences under time constraints: key-frame selection, bundle adjustment and intermediate pose refinement.
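A toy illustration of why concentric-circle markers are identifiable: in a fronto-parallel (or weak-perspective) view the radius ratios of the circles are preserved up to a global scale, so normalizing by the outer radius gives a simple signature. This is only a sketch under that simplifying assumption; the paper's actual pipeline handles full perspective, and all names here are hypothetical:

```python
import numpy as np

def c2tag_signature(radii):
    """Scale-invariant signature of a concentric-circle marker: sort the
    detected radii and normalize by the outer one. Valid only under a
    fronto-parallel / weak-perspective assumption."""
    r = np.sort(np.asarray(radii, dtype=float))[::-1]   # outer radius first
    return tuple(np.round(r / r[0], 3))

def match_tag(signature, library, tol=0.02):
    """Return the name of a library tag whose ratio signature matches
    within tol, or None if nothing matches."""
    for name, ref in library.items():
        if len(ref) == len(signature) and \
           max(abs(a - b) for a, b in zip(ref, signature)) < tol:
            return name
    return None
```

For example, a marker detected with radii 20 and 10 pixels yields the signature (1.0, 0.5) regardless of its apparent size in the image.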


ACM Multimedia | 2014

3D Interest Maps From Simultaneous Video Recordings

Axel Carlier; Lilian Calvet; Duong Trung Dung Nguyen; Wei Tsang Ooi; Pierre Gurdjos; Vincent Charvillat

We consider an emerging situation where multiple cameras are filming the same event simultaneously from a diverse set of angles. The captured videos provide us with the multiple view geometry and an understanding of the 3D structure of the scene. We further extend this understanding by introducing the concept of a 3D interest map in this paper. As most users naturally film what they find interesting from their respective viewpoints, the 3D structure can be annotated with the level of interest, naturally crowdsourced from the users. A 3D interest map can be understood as an extension of saliency maps into 3D space that captures the semantics of the scene. We evaluate the idea of 3D interest maps on two real datasets, captured in environments where the cameras are instrumented well enough to provide an estimate of the camera poses and reasonable synchronization between them. We study two aspects of the 3D interest maps in our evaluation. First, by projecting them into 2D, we compare them to state-of-the-art saliency maps. Second, to demonstrate the usefulness of the 3D interest maps, we apply them to a video mashup system that automatically produces an edited video from one of the datasets.
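The projection-into-2D step can be pictured as splatting weighted 3D points through a camera matrix. A simplified sketch with hypothetical names (real use would also handle occlusion and smoothing; this is not the authors' code):

```python
import numpy as np

def project_interest_map(points3d, interest, P, width, height):
    """Splat a 3D interest map (3D points with interest weights) into a
    2D map through a 3x4 projection matrix P, so it can be compared
    against a 2D saliency map."""
    out = np.zeros((height, width))
    for X, s in zip(points3d, interest):
        x = P @ np.append(np.asarray(X, dtype=float), 1.0)  # homogeneous projection
        if x[2] <= 0:
            continue                                        # point behind the camera
        u = int(round(x[0] / x[2]))
        v = int(round(x[1] / x[2]))
        if 0 <= u < width and 0 <= v < height:
            out[v, u] += s                                  # accumulate interest
    return out
```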


International Conference on Image Processing | 2016

Towards multi-scale feature detection repeatable over intensity and depth images

Hatem A. Rashwan; Sylvie Chambon; Pierre Gurdjos; Géraldine Morin; Vincent Charvillat

Object recognition based on local features computed at multiple locations is robust to occlusions, strong viewpoint changes and object deformations. These features should be repeatable, precise and distinctive. We present an operator for repeatable feature detection on depth images (relative to 3D models) as well as 2D intensity images. The proposed detector is based on estimating the curviness saliency at multiple scales in each kind of image. We also propose quality measures that evaluate the repeatability of the features between depth and intensity images. The experiments show that the proposed detector outperforms both the most powerful, classical point detectors (e.g., SIFT) and edge detection techniques.
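Curvature-based responses of this kind are typically derived from the eigenvalues of the image Hessian. A rough single-scale sketch using finite differences (the paper's exact multi-scale operator may differ; this only illustrates the idea):

```python
import numpy as np

def curviness(img):
    """Dominant principal-curvature magnitude of the intensity surface,
    from the eigenvalues of the finite-difference Hessian. A single-scale
    sketch; a multi-scale detector would take the maximum of
    scale-normalized responses over several smoothing levels."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)           # first derivatives (rows, cols)
    ixx = np.gradient(gx, axis=1)       # second derivatives
    iyy = np.gradient(gy, axis=0)
    ixy = np.gradient(gx, axis=0)
    # eigenvalues of [[ixx, ixy], [ixy, iyy]] are mean +/- disc
    mean = (ixx + iyy) / 2.0
    disc = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    return np.abs(mean) + disc          # magnitude of the stronger curvature
```

On a ridge-like intensity profile this response is high along the ridge, which is exactly the structure shared by depth and intensity renderings of the same object contour.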


Signal Processing: Image Communication | 2016

A full photometric and geometric model for attached webcam/matte screen devices

Yvain Quéau; Richard Modrzejewski; Pierre Gurdjos; Jean-Denis Durou

We present a thorough photometric and geometric study of multimedia devices composed of both a matte screen and an attached camera. We show that the light emitted by an image displayed on the monitor can be expressed in closed-form at any point facing the screen, and that the geometric calibration of the camera attached to the screen can be simplified by introducing simple geometric constraints. These theoretical contributions are experimentally validated in a photometric stereo application with extended sources, where a colored scene is reconstructed while watching a collection of graylevel images displayed on the screen, providing a cheap and entertaining way to acquire realistic 3D-representations for, e.g., augmented reality.

Highlights:
- We provide a closed-form expression for the lighting emitted by an infinitely small planar light source.
- This infinitesimal model is extended to arbitrary extended planar illuminants.
- The lighting model can be used to model light emitted by a screen displaying graylevel images.
- A thorough geometrical study of webcam/screen devices is provided.
- Both photometric and geometric applications are validated in a photometric stereo application with extended sources.
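The ingredients of such a model can be illustrated numerically: an infinitesimal matte (Lambertian) source of luminance L contributes on the order of L cos(theta_e) cos(theta_r) / d^2 per unit area at a receiver at distance d. A brute-force summation sketch over screen pixels, with our own hypothetical names and geometry conventions (not the paper's closed-form expression):

```python
import numpy as np

def screen_irradiance(point, recv_normal, pixel_pos, pixel_lum, pixel_area,
                      screen_normal=np.array([0.0, 0.0, 1.0])):
    """Brute-force irradiance at `point` from a matte screen, summing
    each pixel as an infinitesimal Lambertian planar source."""
    total = 0.0
    for p, lum in zip(pixel_pos, pixel_lum):
        d = np.asarray(point, dtype=float) - np.asarray(p, dtype=float)
        r = np.linalg.norm(d)
        cos_e = max(float(np.dot(screen_normal, d)) / r, 0.0)  # emission angle
        cos_r = max(float(np.dot(recv_normal, -d)) / r, 0.0)   # incidence angle
        total += lum * cos_e * cos_r * pixel_area / (r * r)
    return total
```

Replacing this summation by an analytic integral over the screen rectangle is precisely what a closed-form model buys: the same quantity without per-pixel work.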


Archive | 2017

Querying Multiple Simultaneous Video Streams with 3D Interest Maps

Axel Carlier; Lilian Calvet; Pierre Gurdjos; Vincent Charvillat; Wei Tsang Ooi

With the proliferation of mobile devices equipped with cameras and video recording applications, it is now common to observe multiple mobile cameras filming the same scene at an event from a diverse set of view angles. These recorded videos provide a rich set of data for someone to re-experience the event at a later time. Not all the videos recorded, however, show a desirable view. Navigating through a large collection of videos to find a video with a better viewing angle can be time consuming. We propose a query-response interface in which users can intuitively switch to another video with an alternate, better, view, by selecting a 2D region within a video as a query. The system then responds with another video that has a better view of the selected region, maximizing the viewpoint entropy. The key to our system is a lightweight 3D scene structure, also termed a 3D interest map. A 3D interest map is naturally an extension of saliency maps into 3D space, since most users film what they find interesting from their respective viewpoints. A user study with more than 35 users shows that our video query system achieves a suitable compromise between accuracy and run-time.
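Viewpoint-entropy selection can be sketched as picking the camera whose visible interest mass is most evenly spread. A minimal sketch of that criterion only, with hypothetical names (the full 3D-interest-map pipeline is not reproduced):

```python
import numpy as np

def viewpoint_entropy(visible_interest):
    """Shannon entropy of the interest mass visible from one camera;
    higher means a more balanced view of the interesting regions."""
    w = np.asarray(visible_interest, dtype=float)
    w = w[w > 0]                          # ignore occluded / empty bins
    p = w / w.sum()                       # normalize to a distribution
    return float(-(p * np.log2(p)).sum())

def best_camera(per_camera_interest):
    """Index of the camera maximizing viewpoint entropy."""
    return max(range(len(per_camera_interest)),
               key=lambda i: viewpoint_entropy(per_camera_interest[i]))
```

A camera seeing four equally interesting regions scores 2 bits, while one seeing a single region scores 0, so the former would be returned as the better view.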


International Conference on Pattern Recognition Applications and Methods | 2015

Image Quality Assessment for Photo-consistency Evaluation on Planar Classification in Urban Scenes

Marie-Anne Bauda; Sylvie Chambon; Pierre Gurdjos; Vincent Charvillat

In the context of semantic segmentation of urban scenes, calibrated multi-views and the flatness assumption are commonly used to estimate a warped image based on homography estimation. In order to classify planar and non-planar areas, we propose an evaluation protocol that compares several Image Quality Assessment (IQA) measures between a reference zone and its warped zone. We show that cosine angle distance-based measures are more efficient than Euclidean distance-based ones for planar/non-planar classification, and that the Universal Quality Image (UQI) measure outperforms the other evaluated measures.
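For reference, the Universal Quality Index of Wang and Bovik combines correlation, luminance closeness and contrast closeness between two images. A simplified global version (the standard definition averages the index over sliding windows):

```python
import numpy as np

def uqi(x, y, eps=1e-12):
    """Universal Quality Index between a reference zone x and its warped
    zone y, computed globally rather than over sliding windows."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # 4 * covariance * mean_x * mean_y / ((var_x + var_y) * (mean_x^2 + mean_y^2))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)
```

Identical zones score 1, and the score drops as structure, luminance or contrast diverge, which is why a low UQI between a reference zone and its homography-warped counterpart signals a non-planar area.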


International Conference on Pattern Recognition | 2010

Bubble Tag Identification Using an Invariant Under Perspective Signature

Viorica Patraucean; Pierre Gurdjos; Jean Conter

We have at our disposal a large database containing images of various configurations of coplanar circles, randomly laid out, called "Bubble Tags". The images are taken from different viewpoints. Given a new image (query image), the goal is to find in the database the image containing the same bubble tag as the query image. We propose representing the images through projective invariant signatures, which allow identifying the bubble tag without passing through a Euclidean reconstruction step. This is justified by the size of the database, which imposes the use of queries in 1D/vectorial form, i.e. not in 2D/matrix form. The experiments carried out confirm the efficiency of our approach, in terms of precision and complexity.
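Such signatures rest on projective invariants. The simplest example of this kind is the cross-ratio of four collinear points, which survives any projective transformation; it is shown here only as an illustrative primitive, the paper's signature for coplanar circles being more elaborate:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC * BD) / (BC * AD) of four collinear points,
    the classic projective invariant."""
    def dist(p, q):
        return np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    return (dist(a, c) * dist(b, d)) / (dist(b, c) * dist(a, d))
```

Applying any projective map of the line to the four points leaves this value unchanged, which is what makes invariant-based indexing possible without Euclidean reconstruction.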

Collaboration


Dive into Pierre Gurdjos's collaborations.

Top Co-Authors

Lilian Calvet (Simula Research Laboratory)

Jean Conter (University of Toulouse)

Hatem A. Rashwan (Rovira i Virgili University)