Marius Preda
Institut Mines-Télécom
Publications
Featured research published by Marius Preda.
Signal Processing: Image Communication | 2013
Euee S. Jang; Marco Mattavelli; Marius Preda; Mickaël Raulet; Huifang Sun
This paper provides an overview of the rationale behind the Reconfigurable Media Coding framework, developed by the MPEG standardization committee to overcome the limits of traditional ways of providing decoder specifications. The framework is an extension of the Reconfigurable Video Coding framework that now also encompasses the 3D Graphics coding standards. The idea of this approach is to specify decoders using an actor-based dataflow representation consisting of self-contained processing units (coding tools) connected together and communicating by explicitly exchanging data. Such a representation provides a specification in which several algorithmic properties relevant to codec implementation are explicitly exposed and can be used to explore different implementation objectives.
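The actor-dataflow idea can be illustrated with a toy network of self-contained processing units that fire whenever input tokens are available. This is a minimal sketch: the three-stage "decoder" and its functions are hypothetical placeholders, not actual RVC coding tools.

```python
from collections import deque

class Actor:
    """A self-contained coding tool: consumes input tokens, produces outputs."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.inbox = deque()
        self.successors = []

    def connect(self, other):
        self.successors.append(other)

    def fire(self):
        # Dataflow semantics: fire only when input data is available.
        if not self.inbox:
            return False
        result = self.fn(self.inbox.popleft())
        for succ in self.successors:
            succ.inbox.append(result)
        return True

def run(network, source, tokens):
    source.inbox.extend(tokens)
    # Keep firing actors until no actor can fire (a fixed point).
    fired = True
    while fired:
        fired = any(a.fire() for a in network)

# Hypothetical three-stage pipeline: parse -> inverse-quantize -> reconstruct
out = []
parse = Actor("parse", lambda t: t + 1)
iq = Actor("iq", lambda t: t * 2)
recon = Actor("recon", lambda t: out.append(t))
parse.connect(iq)
iq.connect(recon)
run([parse, iq, recon], parse, [1, 2, 3])
print(out)  # [4, 6, 8]
```

Because each coding tool only touches its own ports, the same network description can be retargeted to different schedules or platforms, which is the implementation freedom the framework exposes.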
Signal, Image and Video Processing | 2015
Madjid Maidi; Fakhreddine Ababsa; Malik Mallem; Marius Preda
An effective augmented reality system requires accurate registration of virtual graphics on real images. In this work, we developed a multi-modal tracking architecture for object identification and occlusion handling. Our approach combines several sensors and techniques to cope with environment changes. The architecture's first module performs coded-target registration based on a hybrid pose-estimation algorithm. To manage partial target occlusions, a second module based on a robust feature-point tracking method is developed. The last component of the system is the hybrid tracking module: this multi-sensor part handles total target occlusions. Experiments with the multi-modal system demonstrated the effectiveness of the proposed tracking approach and of the occlusion handling in augmented reality applications.
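The module cascade can be sketched as a simple fallback chain that tries the most accurate modality first. The tracker callables below are hypothetical stand-ins, each returning a pose or None when its modality fails:

```python
def track(frame, marker_tracker, feature_tracker, sensor_tracker):
    # 1. Coded-target registration: most accurate when the target is visible.
    pose = marker_tracker(frame)
    if pose is None:
        # 2. Partial occlusion: fall back to feature-point tracking.
        pose = feature_tracker(frame)
    if pose is None:
        # 3. Total occlusion: fall back to the hybrid multi-sensor module.
        pose = sensor_tracker(frame)
    return pose

# Example: the coded target is fully occluded, so only the sensor answers.
pose = track("frame", lambda f: None, lambda f: None, lambda f: "sensor-pose")
print(pose)  # sensor-pose
```

The ordering encodes the trade-off described in the abstract: vision-based modules are preferred for accuracy, while the sensor module guarantees continuity when the target is not visible at all.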
ACM International Conference on Interactive Experiences for TV and Online Video | 2015
Alberto Messina; Francisco Morán Burgos; Marius Preda; Skjalg Lepsoy; Miroslaw Bober; Davide Bertola; Stavros Paschalakis
This paper presents work in progress of the European Commission FP7 project BRIDGET (BRIDging the Gap for Enhanced broadcasT). The project is developing innovative technology, and the underlying architecture, for the efficient production of second-screen applications for broadcasters and media companies. The project's advancements include novel front-end authoring tools as well as back-end enabling technologies such as visual search, media structure analysis and 3D A/V reconstruction to support new editorial workflows.
International Conference on Image Processing | 2014
Madjid Maidi; Marius Preda; Yassine Lehiani
In this paper, we present a novel approach for object identification and tracking in large image datasets. Objects of interest are represented by feature points and descriptors, which are extracted and compared to a set of reference data. An optimized matching paradigm is designed to handle scalable image databases while maintaining a good recognition rate under real-life environment conditions. Experiments are conducted to evaluate the effectiveness of the method, and the obtained results demonstrate the merit of the proposed approach.
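A common way to match extracted descriptors against reference data is nearest-neighbour search with a ratio test. The sketch below uses Lowe's ratio test as an assumption; the paper's optimized matching paradigm is not detailed here.

```python
import numpy as np

def match_descriptors(query, reference, ratio=0.75):
    """Return (query_idx, reference_idx) pairs passing the ratio test."""
    matches = []
    for i, q in enumerate(query):
        dist = np.linalg.norm(reference - q, axis=1)  # distance to every reference
        nn = np.argsort(dist)[:2]                     # two closest candidates
        # Keep only unambiguous matches: best clearly better than second best.
        if dist[nn[0]] < ratio * dist[nn[1]]:
            matches.append((i, int(nn[0])))
    return matches

ref = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 0.0]])
qry = np.array([[0.1, 0.0], [7.0, 5.0]])
print(match_descriptors(qry, ref))  # [(0, 0)] -- the second query is ambiguous
```

The ratio test discards descriptors whose two best candidates are nearly equidistant, which keeps the recognition rate stable as the reference database grows.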
Archive | 2012
Francisco Morán Burgos; Marius Preda
The Tri-Dimensional Graphics (3DG) information coding tools of MPEG-4 are mostly contained in MPEG-4 Part 16, “Animation Framework eXtension (AFX)”, and focus on three important requirements for 3DG applications: compression, streamability and scalability. There are tools for the efficient representation and coding of both individual 3D objects and whole interactive scenes composed of several objects. Usually, the shape, appearance and animation of a 3D object are treated separately, so we devote a different section to each of those subjects; we also devote another section to scene graph coding, in which we describe how to integrate MPEG-4’s 3DG compression tools with other (non-MPEG-4-compliant) XML-based scene graph definitions. A final section on application examples gives hints on the flexibility provided by the 3DG toolset of MPEG-4.
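Geometry compression in such toolsets typically starts from uniform quantization of vertex coordinates. The sketch below shows only that basic idea, as a simplification; it is not the actual AFX bitstream syntax.

```python
import numpy as np

def quantize_vertices(vertices, bits=10):
    """Map float coordinates to integers on a 2^bits grid per axis."""
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    extent = np.where(vmax > vmin, vmax - vmin, 1.0)   # avoid division by zero
    scale = (2 ** bits - 1) / extent
    q = np.round((vertices - vmin) * scale).astype(np.int32)
    return q, vmin, scale                              # integers + side info

def dequantize_vertices(q, vmin, scale):
    return q / scale + vmin

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.25], [0.2, 0.9, 0.7]])
q, vmin, scale = quantize_vertices(verts)
err = np.abs(dequantize_vertices(q, vmin, scale) - verts).max()
```

Raising `bits` trades bitrate for precision: with 10 bits per axis the reconstruction error is bounded by half a grid step, and the resulting integers are what an entropy coder would then compress.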
International Symposium on Mixed and Augmented Reality | 2017
Gregory F. Welch; Jérôme Royan; Marius Preda
Mixed and augmented reality (MAR) is on the brink of large-scale consumer-level commercialization. Standards will be required for MAR to succeed and proliferate as an information medium and new content platform. Standards will enable the development of MAR system components able to interoperate through defined interfaces, and hence the development of end-to-end solutions built from system components easily plugged together to achieve content sharing and interoperability. The workshop will be a place to present existing standards and demonstrate how they could ease the adoption of MAR in many domains, and a place to present recent initiatives in order to coordinate efforts and share requirements coming from the industry. The discussion will lay a foundation for many issues of standardization for MAR: proper subareas for standards and abstraction levels, physical and environment object representation, content file formats, calibration processes, tracking and recognition, augmentation and display style standards, sensors and processing units dedicated to MAR, standards for non-visual and multimodal augmentation, object feature presentation, benchmarking, industry requirements, etc.
About the organizers: Gerard J. Kim (Korea University, South Korea; contact: [email protected]), Jérôme Royan (Technological Research Institute b<>com, France) and Marius Preda (Institut MINES-Telecom, France).
International Conference on Signal and Image Processing Applications | 2015
Yassine Lehiani; Marius Preda; Madjid Maidi; Adrian Gabrielli
International Conference on Signal and Image Processing Applications | 2015
Yassine Lehiani; Marius Preda; Madjid Maidi; Faouzi Ghorbel
This paper presents a novel approach for object recognition in extended image databases using a mobile client-server architecture. The proposed approach relies on feature detection and description to characterize textured objects within the image. The similarity search is performed on descriptor arrays by computing the distance between the query descriptor and the reference descriptors extracted offline. The key contributions of the approach are its high accuracy, its time-effectiveness and its scalability to large image datasets. The developed method is first integrated on a mobile platform and then deployed on a client-server architecture to deal with high-volume image galleries. Experiments are performed to evaluate the performance of the system under real-life environment conditions, and the obtained results demonstrate the relevance of the proposed approach.
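On the server side, the offline-extracted reference descriptors can be searched quickly by vectorizing the distance computation. The class below is a brute-force sketch with hypothetical labels; the paper's actual scalable indexing scheme is not specified here.

```python
import numpy as np

class DescriptorIndex:
    """Reference descriptors and their object labels, built offline."""
    def __init__(self, descriptors, labels):
        self.D = np.asarray(descriptors, dtype=np.float64)
        self.labels = list(labels)
        self.norms = (self.D ** 2).sum(axis=1)   # precomputed once, offline

    def query(self, q):
        q = np.asarray(q, dtype=np.float64)
        # ||d - q||^2 = ||d||^2 - 2 d.q + ||q||^2; the last term is the same
        # for every reference, so it can be dropped when only argmin matters.
        d2 = self.norms - 2.0 * (self.D @ q)
        return self.labels[int(np.argmin(d2))]

# Hypothetical two-object gallery; the mobile client sends only `q`.
index = DescriptorIndex([[0.0, 1.0], [4.0, 4.0]], ["poster", "book"])
label = index.query([3.8, 4.1])
print(label)  # book
```

Precomputing the squared norms moves most of the arithmetic offline, which is the same division of labour as the client-server split in the paper: lightweight extraction on the device, heavyweight search on the server.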
International Symposium on Mixed and Augmented Reality | 2014
Christine Perey; Rob Manson; Marius Preda; Neil Trevett; Martin Lechner; George Percivall; Timo Engelke; Peter Lefkin; Bruce Mahone; Mary Lynne Nielsen
Today an experience developer must choose tools for authoring AR experiences based on many factors, including ease of use, performance across a variety of platforms, reach, discoverability and cost. The commercially viable options are organized in closed technology silos (beginning with SDKs). A publisher of experiences must either choose one viewing application or develop for several, then promote one or more applications to the largest possible audience. Developers of applications must then maintain the customized viewing application over time across multiple platforms, or let the experience (and the application) expire at the end of a campaign.
International Conference on Signal and Image Processing Applications | 2017
Yassine Lehiani; Madjid Maidi; Marius Preda; Faouzi Ghorbel
This paper presents a novel approach for object identification and steady tracking in mobile augmented reality applications. First, the system identifies the object of interest using the KAZE algorithm. Then, target tracking is performed with optical flow throughout the camera's live video stream. Next, the camera pose is determined by estimating the transformation relating the camera reference frame to the world coordinate system. The visual perception is thus augmented with 3D virtual graphics overlaid on the target object within the scene images. Finally, experiments are conducted to evaluate the system's performance in terms of accuracy, robustness and computational efficiency.
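Several of the tracking papers above conclude by overlaying 3D virtual graphics on the image once the camera pose is known. A minimal pinhole-projection sketch of that final step follows; all intrinsic and pose values below are hypothetical.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project world-space points into pixel coordinates.
    K: 3x3 intrinsics, R: 3x3 rotation, t: translation 3-vector."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # camera -> homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> pixels

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])        # focal lengths and principal point
R = np.eye(3)                          # camera aligned with world axes
t = np.array([0.0, 0.0, 5.0])         # target 5 units in front of camera
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
px = project(pts, K, R, t)
print(px)  # [[320. 240.] [480. 240.]]
```

Given the (R, t) recovered by the tracker, projecting each vertex of the virtual model this way places the overlay exactly on the target object in the camera image.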