
Publication


Featured research published by Jean-François Macq.


Bell Labs Technical Journal | 2012

Interactive omnidirectional video delivery: A bandwidth-effective approach

Patrice Rondao Alface; Jean-François Macq; Nico Verzijp

Omnidirectional video (cylindrical or spherical) is a new medium that is becoming increasingly popular thanks to its interactivity for online multimedia applications such as Google Street View, as well as for video surveillance and robotics applications. Interactivity in this context means that the user is able to explore and navigate audio-visual scenes by freely choosing the viewpoint and viewing direction. In order to provide this key feature, omnidirectional video is typically represented as a classical two-dimensional (2D) rectangular panorama video that is mapped onto a (spherical or cylindrical) mesh and then rendered on the client's screen. Early transmission models of this full panorama video and mesh content simply treat the panorama as a high-resolution video to be encoded at uniform quality. Generally, the user can only view a restricted field of view of the content and then interact with pan-tilt-zoom commands. This means that a significant part of the bandwidth is wasted by transmitting high-quality video in regions that are not being visualized. In this paper we evaluate the relevance and optimality of a personalized transmission where quality is modulated in spherical or cylindrical regions depending on their likelihood to be viewed during a live user interaction. We show, under interaction-delay as well as bandwidth constraints, how tiling and predictive methods can improve on existing methods.
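
To make the idea concrete, here is a minimal sketch of viewport-likelihood-driven tile quality allocation, assuming a cylindrical panorama split into fixed horizontal tiles, a Gaussian likelihood around the predicted viewing direction, and a small bitrate ladder; these choices are illustrative assumptions, not the scheme evaluated in the paper.

```python
# Hypothetical sketch: tiles more likely to fall inside the user's future
# viewport are upgraded to higher bitrates first, under a bandwidth budget.
import math

TILE_COLUMNS = 12                            # panorama split into 12 horizontal tiles (30 deg each)
BITRATE_LADDER = [0.5, 1.0, 2.0, 4.0]        # Mbit/s per tile, low -> high quality

def view_likelihood(tile_center_deg, predicted_yaw_deg, sigma_deg=40.0):
    """Likelihood that a tile is viewed, given a predicted viewing direction."""
    diff = (tile_center_deg - predicted_yaw_deg + 180.0) % 360.0 - 180.0
    return math.exp(-0.5 * (diff / sigma_deg) ** 2)

def allocate_tile_bitrates(predicted_yaw_deg, budget_mbps):
    """Greedy allocation: upgrade the most likely tiles first until the budget is spent."""
    centers = [(i + 0.5) * 360.0 / TILE_COLUMNS for i in range(TILE_COLUMNS)]
    quality = [0] * TILE_COLUMNS             # start every tile at the lowest quality
    spent = BITRATE_LADDER[0] * TILE_COLUMNS
    order = sorted(range(TILE_COLUMNS),
                   key=lambda i: view_likelihood(centers[i], predicted_yaw_deg),
                   reverse=True)              # decreasing likelihood of being visualized
    upgraded = True
    while upgraded:
        upgraded = False
        for i in order:
            if quality[i] + 1 < len(BITRATE_LADDER):
                extra = BITRATE_LADDER[quality[i] + 1] - BITRATE_LADDER[quality[i]]
                if spent + extra <= budget_mbps:
                    quality[i] += 1
                    spent += extra
                    upgraded = True
    return quality, spent

if __name__ == "__main__":
    levels, used = allocate_tile_bitrates(predicted_yaw_deg=90.0, budget_mbps=15.0)
    print("quality level per tile:", levels, "| total Mbit/s:", round(used, 1))
```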


ACM SIGMM Conference on Multimedia Systems | 2013

Towards a format-agnostic approach for production, delivery and rendering of immersive media

O.A. Niamut; Axel Kochale; Javier Ruiz Hidalgo; Rene Kaiser; Jens Spille; Jean-François Macq; Gert Kienast; Oliver Schreer; Ben Shirley

The media industry is currently being pulled in the often-opposing directions of increased realism (high resolution, stereoscopic, large screen) and personalization (selection and control of content, availability on many devices). We investigate the feasibility of an end-to-end format-agnostic approach to support both of these trends. In this paper, different aspects of a format-agnostic capture, production, delivery and rendering system are discussed. At the capture stage, the concept of layered scene representation is introduced, including panoramic video and 3D audio capture. At the analysis stage, a virtual director component is discussed that allows for the automatic execution of cinematographic principles, using feature tracking and saliency detection. At the delivery stage, resolution-independent audiovisual transport mechanisms for both managed and unmanaged networks are treated. In the rendering stage, a rendering process that includes the manipulation of audiovisual content to match the connected display and loudspeaker properties is introduced. Different parts of the complete system are revisited, demonstrating the requirements and the potential of this advanced concept.
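
For intuition about the staged architecture described above, here is a minimal, purely illustrative sketch that chains capture, analysis, delivery and rendering as composable stages operating on a shared scene representation; the Scene dictionary and stage names are assumptions for illustration only, not the system's actual components.

```python
# Toy staged media pipeline: each stage enriches a shared scene representation.
from typing import Callable, Dict, List

Scene = Dict[str, object]   # stand-in for a "layered scene representation"

def capture(scene: Scene) -> Scene:
    scene["layers"] = ["panoramic_video", "3d_audio"]          # layered capture
    return scene

def analyse(scene: Scene) -> Scene:
    scene["framing"] = "virtual_director_crop"                 # tracking/saliency result
    return scene

def deliver(scene: Scene) -> Scene:
    scene["transport"] = "resolution_independent_segments"     # managed or unmanaged network
    return scene

def render(scene: Scene) -> Scene:
    scene["output"] = "adapted_to_display_and_speakers"        # device-specific rendering
    return scene

def run_pipeline(stages: List[Callable[[Scene], Scene]]) -> Scene:
    scene: Scene = {}
    for stage in stages:
        scene = stage(scene)
    return scene

if __name__ == "__main__":
    print(run_pipeline([capture, analyse, deliver, render]))
```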


Archive | 2013

Media Production, Delivery and Interaction for Platform Independent Systems: Format-Agnostic Media

Oliver Schreer; Jean-François Macq; O.A. Niamut; Javier Ruiz-Hidalgo; Ben Shirley; Georg Thallinger; Graham Thomas

The underlying audio and video processing technology discussed in the book relates to areas such as 3D object extraction, audio event detection, 3D sound rendering, face detection, and gesture analysis and tracking using video and depth information. The book gives an insight into current trends and developments in future media production, delivery and reproduction. Consideration of the complete production, processing and distribution chain allows a full picture to be presented to the reader. Production developments covered include integrated workflows developed by researchers and industry practitioners, as well as the capture of ultra-high-resolution panoramic video and 3D object-based audio across a range of programme genres. Distribution developments include script-based, format-agnostic network delivery to a full range of devices, from large-scale public panoramic displays with wave field synthesis and Ambisonic audio reproduction to ‘small screen’ mobile devices. Key developments at the consumer end of the chain apply to both passive and interactive viewing modes and incorporate user interfaces such as gesture recognition and ‘second screen’ devices to allow manipulation of the audiovisual content.


International Conference on Distributed Smart Cameras | 2011

Demo: Omnidirectional video navigation on a tablet PC using a camera-based orientation tracker

Jean-François Macq; Nico Verzijp; Maarten Aerts; Frederik Vandeputte; Erwin Six

This paper describes a set-up for navigating omnidirectional video on a tablet PC, whose backside camera is used to sense the device orientation. This allows the system to automatically control the video navigation based on the device rotation, enabling the end-user to interact with the content in a very natural manner.
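
As an illustration of the interaction principle, the sketch below maps per-frame rotation deltas, as a camera-based tracker might report them, onto the pan/tilt angles of the rendered viewport; the tracker output format and angle conventions are assumptions, not the demo's actual implementation.

```python
# Hypothetical control loop: device rotation drives the omnidirectional-video viewport.
from dataclasses import dataclass

@dataclass
class Viewport:
    pan_deg: float = 0.0    # yaw of the rendered field of view
    tilt_deg: float = 0.0   # pitch of the rendered field of view

def apply_device_rotation(view: Viewport, d_yaw_deg: float, d_pitch_deg: float) -> Viewport:
    """Rotate the viewport by the device rotation measured since the last frame."""
    pan = (view.pan_deg + d_yaw_deg) % 360.0
    tilt = max(-90.0, min(90.0, view.tilt_deg + d_pitch_deg))  # clamp at the poles
    return Viewport(pan, tilt)

if __name__ == "__main__":
    view = Viewport()
    # Simulated per-frame rotation deltas from the camera-based tracker.
    for d_yaw, d_pitch in [(5.0, 0.0), (5.0, 2.0), (-3.0, 1.0)]:
        view = apply_device_rotation(view, d_yaw, d_pitch)
    print(f"pan={view.pan_deg:.1f} deg, tilt={view.tilt_deg:.1f} deg")
```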


ACM Multimedia | 2017

16K Cinematic VR Streaming

Patrice Rondao Alface; Maarten Aerts; Donny Tytgat; Sammy Lievens; Christoph Stevens; Nico Verzijp; Jean-François Macq

We present an end-to-end system for streaming Cinematic Virtual Reality (VR) content (also called 360° or omnidirectional content). Content is captured and ingested at a resolution of 16K at 25 Hz and streamed to untethered mobile VR devices. Besides the usual navigation interactions such as panning and tilting offered by common VR systems, we also provide zooming interactivity. This allows the VR client to fetch high-quality pixels captured at a spatial resolution of 16K, which greatly increases perceived quality compared to a 4K VR streaming solution. Since current client devices are not capable of receiving and decoding a 16K video, several optimizations are provided to stream only the pixels required for the user's current viewport, while meeting strict latency and bandwidth requirements for a high-quality immersive VR experience.
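
The viewport-dependent fetching could look roughly like the following sketch, which selects only the tiles of a 16K-wide equirectangular frame that overlap the current field of view; the tile grid, projection mapping and function names are assumptions for illustration, not the system's actual design.

```python
# Hypothetical viewport-driven tile selection for a 16K equirectangular panorama.
PANORAMA_W, PANORAMA_H = 15360, 7680        # "16K" equirectangular frame
TILE_W, TILE_H = 1280, 1280                 # fixed tile size in pixels
COLS, ROWS = PANORAMA_W // TILE_W, PANORAMA_H // TILE_H

def tiles_for_viewport(yaw_deg, pitch_deg, hfov_deg, vfov_deg):
    """Return (col, row) indices of tiles overlapping the requested viewport."""
    # Map viewing angles to pixel coordinates of the equirectangular frame.
    cx = (yaw_deg % 360.0) / 360.0 * PANORAMA_W
    cy = (90.0 - pitch_deg) / 180.0 * PANORAMA_H
    half_w = hfov_deg / 360.0 * PANORAMA_W / 2.0
    half_h = vfov_deg / 180.0 * PANORAMA_H / 2.0
    tiles = set()
    row_lo = max(0, int((cy - half_h) // TILE_H))
    row_hi = min(ROWS - 1, int((cy + half_h) // TILE_H))
    col_lo = int((cx - half_w) // TILE_W)
    col_hi = int((cx + half_w) // TILE_W)
    for row in range(row_lo, row_hi + 1):
        for col in range(col_lo, col_hi + 1):
            tiles.add((col % COLS, row))     # wrap horizontally around the sphere
    return sorted(tiles)

if __name__ == "__main__":
    # Zooming in narrows the field of view, so fewer (higher-detail) tiles are fetched.
    print("wide view :", len(tiles_for_viewport(0, 0, 110, 90)), "tiles")
    print("zoomed in :", len(tiles_for_viewport(0, 0, 30, 25)), "tiles")
```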


International Symposium on Broadband Multimedia Systems and Broadcasting | 2015

An optimized adaptive streaming framework for interactive immersive video experiences

Maarten Wijnants; Peter Quax; Gustavo Alberto Rovelo Ruiz; Wim Lamotte; Johan Claes; Jean-François Macq

This paper describes how optimized streaming strategies, based on MPEG-DASH, can be employed to power a new generation of interactive applications based on immersive video. The latter encompasses ultra-high-resolution, omnidirectional and panoramic video. The goal is to deliver experiences that are made up of multiple videos of short duration, which can be joined at run-time in an order defined through user interactions. Applications of the technology are widespread, ranging from virtual walkthroughs to interactive storytelling, the former of which is featured in detail. The main technological challenges tackled in this paper are to deliver these experiences in a seamless fashion, at the highest quality level allowed by network conditions, and on a wide range of platforms, including the Web. The paper also presents the two-tier software architecture of the proposed framework, as well as a short evaluation that substantiates the validity of the proposed solutions.
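
A hypothetical sketch of the two main ingredients, run-time chaining of short clips along user-chosen branches and DASH-style per-clip rate adaptation, is given below; clip names, bitrates and the adaptation rule are illustrative assumptions, not the framework described in the paper.

```python
# Toy interactive walkthrough: short clips are chained at run time in a
# user-chosen order, and a bitrate representation is picked per clip.
REPRESENTATIONS = [1.0, 2.5, 5.0, 10.0]          # available bitrates, Mbit/s

def pick_representation(measured_throughput_mbps, safety=0.8):
    """Choose the highest bitrate that fits within a safety margin of throughput."""
    feasible = [r for r in REPRESENTATIONS if r <= measured_throughput_mbps * safety]
    return feasible[-1] if feasible else REPRESENTATIONS[0]

def play_walkthrough(clip_graph, start, choose_next, throughput_trace):
    """Follow user choices through a graph of short clips, adapting each one."""
    clip, schedule = start, []
    for throughput in throughput_trace:
        schedule.append((clip, pick_representation(throughput)))
        options = clip_graph.get(clip, [])
        if not options:
            break
        clip = choose_next(options)               # user interaction decides the branch
    return schedule

if __name__ == "__main__":
    # Tiny branching "virtual walkthrough": each clip lists the clips reachable from it.
    graph = {"lobby": ["hall", "stairs"], "hall": ["room"], "stairs": ["room"], "room": []}
    schedule = play_walkthrough(graph, "lobby",
                                choose_next=lambda opts: opts[0],   # always take the first branch
                                throughput_trace=[8.0, 3.5, 12.0, 2.0])
    for clip, bitrate in schedule:
        print(f"play {clip:6s} at {bitrate} Mbit/s")
```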


European Conference on Interactive TV | 2013

A hybrid architecture for delivery of panoramic video

Martin Prins; O.A. Niamut; Ray Van Brandenburg; Jean-François Macq; Patrice Rondao Alface; Nico Verzijp

The media industry is being pulled in the often-opposing directions of increased realism (high resolution, stereoscopic, large screen) and personalisation (selection and control of content, availability on many devices). Within the EU FP7 project FascinatE, a capture, production and delivery system is being developed that allows end-users to interactively view and navigate around an ultra-high-resolution video panorama showing a live event. In this paper we report on the latest developments of the FascinatE delivery network. We build upon an initial version of this delivery network architecture and its constituent functional components, and propose a hybrid element that combines the two underlying delivery mechanisms that have previously been reported on. This hybrid aspect enables the delivery network to function in an end-to-end live delivery scenario.


International Conference on Multimedia and Expo | 2011

Evaluation of bandwidth performance for interactive spherical video

Patrice Rondao Alface; Jean-François Macq; Nico Verzijp

Spherical video is nowadays popular, in particular for interactive online entertainment multimedia applications such as Google Street View, as well as for Geographical Information Systems (GIS), video surveillance applications, etc. The user is able to explore and navigate audio/visual scenes by freely choosing the viewpoint and viewing direction. Spherical video is typically represented as a classical 2D rectangular panorama video that is then mapped onto a sphere mesh and finally rendered on the user's screen. Existing transmission models of spherical video encode the full panorama video with uniform quality while the sphere mesh is kept implicit. However, in general, the user only watches a restricted field of view of the spherical content, which can be interactively modified with pan-tilt-zoom commands. In this paper we evaluate the relevance and optimality of a personalized transmission where the panorama video is decomposed into tiles, which are transmitted at a quality modulated per spherical region according to its likelihood of being visualized, as inferred from the user interactions.
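
As a back-of-the-envelope illustration of the bandwidth argument, the sketch below computes the fraction of the sphere covered by a typical field of view, which bounds the ideal bitrate ratio of viewport-aware transmission versus a uniform-quality panorama; the formula and the example field of view are generic geometry, not results from the paper.

```python
# Fraction of the unit sphere covered by a centered hfov x vfov viewing window
# in longitude/latitude (equirectangular) coordinates.
import math

def sphere_fraction(hfov_deg, vfov_deg):
    lon = math.radians(hfov_deg)
    lat_half = math.radians(vfov_deg) / 2.0
    # The zone between latitudes +/-lat_half has area 4*pi*sin(lat_half);
    # take the lon/(2*pi) longitudinal slice and divide by the sphere area 4*pi.
    return (lon / (2.0 * math.pi)) * math.sin(lat_half)

if __name__ == "__main__":
    frac = sphere_fraction(hfov_deg=90.0, vfov_deg=60.0)
    print(f"viewport covers ~{frac * 100:.1f}% of the sphere, "
          f"an ideal bandwidth ratio of ~{1.0 / frac:.1f}x vs uniform quality")
```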


Proceedings of the 1st International Workshop on Multimedia Alternate Realities | 2016

Efficient Encoding of Interactive Personalized Views Extracted from Immersive Video Content

Johan De Praeter; Pieter Duchi; Glenn Van Wallendael; Jean-François Macq; Peter Lambert

Traditional television limits people to a single viewpoint. However, with new technologies such as virtual reality glasses, the way in which people experience video will change. Instead of being limited to a single viewpoint, people will demand a more immersive experience that gives them a sense of being present in a sports stadium, a concert hall, or at other events. To satisfy these users, video such as 360-degree or panoramic video needs to be transported to their homes. Since these videos have an extremely high resolution, sending the entire video requires a high bandwidth capacity and also results in a high decoding complexity on the viewer's device. The traditional approach to this problem is to split the original video into tiles and only send the required tiles to the viewer. However, this approach still has a large bit rate overhead compared to sending only the required view. Therefore, we propose to send only a personalized view to each user. Since this paper focuses on reducing the computational cost of such a system, we accelerate the encoding of each personalized view based on coding information obtained from a pre-analysis of the entire ultra-high-resolution video. By doing this using the High Efficiency Video Coding Test Model (HM), the complexity of each individual encode of a personalized view is reduced by more than 96.5% compared to a full encode of the view. This acceleration results in a bit rate overhead of at most 19.5%, which is smaller than the bit rate overhead of the tile-based method.
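
A toy sketch of the acceleration principle is shown below: block-level decisions computed once on the full panorama are simply looked up when encoding any cropped personalized view, instead of being re-derived per view; the variance-based split flag stands in for real HEVC/HM mode decisions, and all names are hypothetical.

```python
# Toy pre-analysis reuse: one pass over the full frame, many cheap view encodes.
import random

BLOCK = 64                                        # analysis block size in pixels

def analyse_full_frame(frame, width, height, threshold=300.0):
    """Pre-analysis pass: one split/no-split decision per 64x64 block of the panorama."""
    decisions = {}
    for by in range(0, height, BLOCK):
        for bx in range(0, width, BLOCK):
            pixels = [frame[(y, x)] for y in range(by, by + BLOCK)
                                     for x in range(bx, bx + BLOCK)]
            mean = sum(pixels) / len(pixels)
            variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
            decisions[(bx // BLOCK, by // BLOCK)] = variance > threshold
    return decisions

def decisions_for_view(decisions, view_x, view_y, view_w, view_h):
    """Encoding a personalized view only looks up the precomputed block decisions."""
    return {(bx, by): decisions[(bx, by)]
            for by in range(view_y // BLOCK, (view_y + view_h) // BLOCK)
            for bx in range(view_x // BLOCK, (view_x + view_w) // BLOCK)}

if __name__ == "__main__":
    W, H = 512, 256                               # tiny stand-in for an 8K/16K panorama
    random.seed(0)
    frame = {(y, x): random.randint(0, 255) for y in range(H) for x in range(W)}
    full = analyse_full_frame(frame, W, H)        # done once, shared by all views
    view = decisions_for_view(full, view_x=128, view_y=64, view_w=256, view_h=128)
    print(f"{len(view)} block decisions reused for this view (of {len(full)} total)")
```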


Archive | 2011

Method and Arrangement for Multi-Camera Calibration

Maarten Aerts; Donny Tytgat; Jean-François Macq; Sammy Lievens

Collaboration


Dive into Jean-François Macq's collaborations.

Top Co-Authors

O.A. Niamut

Delft University of Technology
