Maxime Meilland
University of Nice Sophia Antipolis
Publications
Featured research published by Maxime Meilland.
intelligent robots and systems | 2013
Maxime Meilland; Andrew I. Comport
This paper proposes an approach to real-time dense localisation and mapping that aims at unifying two different representations commonly used to define dense models. On one hand, much research has looked at dense model representations using 3D voxel grids. On the other hand, image-based key-frame representations for dense environment mapping have been developed. Both techniques have their relative advantages and disadvantages, which will be analysed in this paper. In particular, each representation's space requirements, effective resolution, computational efficiency, accuracy and robustness will be compared. This paper then proposes a new model which unifies these concepts and exhibits the main advantages of each approach within a common framework. One of the main results of the proposed approach is its ability to perform accurate large-scale reconstruction, at the scale of mapping a building.
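To make the comparison concrete: an image-based key-frame can be used much like a volumetric model by lifting its depth map into 3D and re-rendering the points from a new viewpoint. The sketch below is an illustrative outline of that step only, not the paper's implementation; the pinhole model, function names, and the nearest-point splatting are assumptions.

```python
import numpy as np

def backproject_keyframe(depth, K):
    """Lift a key-frame depth map (H x W, metric depth) to a 3D point cloud
    in the camera frame, using pinhole intrinsics K (3 x 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix
    return (rays * depth.reshape(1, -1)).T                             # N x 3

def render_from_pose(points, colors, K, T, shape):
    """Splat coloured 3D points (N x 3, world frame) into a virtual camera
    with 4x4 world-to-camera pose T; the nearest point along each ray wins."""
    H, W = shape
    pts_cam = T[:3, :3] @ points.T + T[:3, 3:4]
    valid = pts_cam[2] > 1e-6
    proj = K @ pts_cam[:, valid]
    uv = np.round(proj[:2] / proj[2]).astype(int)
    z = pts_cam[2, valid]
    col = colors[valid]
    inside = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H)
    img = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    for x, y, d, c in zip(uv[0][inside], uv[1][inside], z[inside], col[inside]):
        if d < zbuf[y, x]:
            zbuf[y, x], img[y, x] = d, c
    return img
```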
international symposium on mixed and augmented reality | 2013
Maxime Meilland; Christian Barat; Andrew I. Comport
Acquiring High Dynamic Range (HDR) light-fields from several images with different exposures (sensor integration periods) has been widely considered for static camera positions. In this paper a new approach is proposed that enables 3D HDR environment maps to be acquired directly from a dynamic set of images in real-time. In particular, a method will be proposed to use an RGB-D camera as a dynamic light-field sensor, based on a dense real-time 3D tracking and mapping approach, that avoids the need for a light-probe or the observation of reflective surfaces. The 6 DOF pose and dense scene structure will be estimated simultaneously with the observed dynamic range so as to compute the radiance map of the scene and fuse a stream of low dynamic range (LDR) images into an HDR image. This will then be used to create an arbitrary number of virtual omni-directional light-probes that will be placed at the positions where virtual augmented objects will be rendered. In addition, a solution is provided for the problem of automatic shutter variations in visual SLAM. Augmented reality results are provided which demonstrate real-time 3D HDR mapping, virtual light-probe synthesis and light source detection for rendering reflective objects with shadows seamlessly with the real video stream in real-time.
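The core fusion step can be pictured as exposure-weighted averaging of aligned LDR frames into a radiance map. The sketch below assumes a linear camera response and frames already registered into a common view (in the paper this alignment comes from the dense RGB-D tracking); it is a generic illustration with assumed names, not the authors' code.

```python
import numpy as np

def fuse_ldr_to_hdr(images, exposures, eps=1e-6):
    """Fuse registered LDR frames (each H x W, values in [0, 1]) into a
    radiance map by exposure-weighted averaging."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: trust mid-range pixels
        num += w * img / t                  # per-pixel radiance estimate
        den += w
    return num / (den + eps)
```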
international conference on robotics and automation | 2013
Maxime Meilland; Andrew I. Comport
This paper proposes a new visual SLAM technique that not only integrates 6 degrees of freedom (DOF) pose and dense structure but also simultaneously integrates the colour information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only accounts for the full range of image deformations but also allows a novel criterion to be proposed for combining the low resolution images, based on the difference in resolution between images in 6D space. Another novelty of the proposed approach with respect to the current state of the art lies in the minimisation of both colour (RGB) and depth (D) errors, whilst competing approaches only minimise geometry. Several results are given showing that this technique runs in real-time (30Hz) and is able to map large scale environments in high resolution whilst simultaneously improving the accuracy and robustness of the tracking.
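The joint RGB-D error mentioned above can be sketched as a stacked residual that mixes a photometric term and a depth term before being fed to a robust non-linear optimiser. The snippet below is a minimal illustration of such a cost; the weighting, robust kernel, and names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def rgbd_residuals(I_ref, D_ref, I_warp, D_warp, lambda_d=0.1):
    """Stack photometric (intensity) and geometric (depth) residuals for
    direct RGB-D registration; all inputs are H x W arrays already warped
    into the reference frame, lambda_d balances the two terms."""
    r_rgb = (I_warp - I_ref).ravel()
    r_d = (D_warp - D_ref).ravel()
    return np.concatenate([r_rgb, lambda_d * r_d])

def huber_cost(residuals, delta=0.05):
    """Robust Huber cost over the stacked residuals."""
    a = np.abs(residuals)
    return np.sum(np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta)))
```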
european conference on computer vision | 2012
Glauco Garcia Scandaroli; Maxime Meilland; Rogério Richa
Direct visual tracking can be impaired by changes in illumination if the right choice of similarity function and photometric model is not made. Tracking using the sum of squared differences, for instance, often needs to be coupled with a photometric model to mitigate illumination changes. More sophisticated similarities, e.g. mutual information and cross cumulative residual entropy, however, can cope with complex illumination variations at the cost of a reduction of the convergence radius, and an increase of the computational effort. In this context, the normalized cross correlation (NCC) represents an interesting alternative. The NCC is intrinsically invariant to affine illumination changes, and also presents low computational cost. This article proposes a new direct visual tracking method based on the NCC. Two techniques have been developed to improve the robustness to complex illumination variations and partial occlusions. These techniques are based on subregion clusterization, and weighting by a residue invariant to affine illumination changes. The last contribution is an efficient Newton-style optimization procedure that does not require the explicit computation of the Hessian. The proposed method is compared against the state of the art using a benchmark database with ground-truth, as well as real-world sequences.
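As a reminder of why the NCC is attractive here: it is invariant to any affine change of intensity, and evaluating it over subregions localizes the effect of occlusions. A minimal sketch of both points (generic, with assumed names, not the authors' implementation):

```python
import numpy as np

def ncc(a, b, eps=1e-12):
    """Normalized cross correlation between two equally sized patches;
    invariant to affine intensity changes b -> alpha * b + beta."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + eps))

def blockwise_ncc(ref, cur, grid=(4, 4)):
    """Score an alignment by averaging NCC over a grid of subregions, which
    keeps a partial occlusion from corrupting the whole score."""
    H, W = ref.shape
    gh, gw = grid
    scores = []
    for i in range(gh):
        for j in range(gw):
            ys = slice(i * H // gh, (i + 1) * H // gh)
            xs = slice(j * W // gw, (j + 1) * W // gw)
            scores.append(ncc(ref[ys, xs], cur[ys, xs]))
    return float(np.mean(scores))
```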
international conference on computer vision | 2013
Maxime Meilland; Tom Drummond; Andrew I. Comport
Motion blur and rolling shutter deformations both inhibit visual motion registration, whether it be due to a moving sensor or a moving target. Whilst both deformations exist simultaneously, no models have been proposed to handle them together. Furthermore, neither deformation has been considered previously in the context of monocular full-image 6 degrees of freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines. Blur is a function of the pixel exposure time and the motion vector. In this paper a complete dense 3D registration model will be derived to account for both motion blur and rolling shutter deformations simultaneously. Various approaches will be compared with respect to ground truth and live real-time performance will be demonstrated for complex scenarios where both blur and shutter deformations are dominant.
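The rolling shutter part of such a model boils down to giving each scan-line its own camera pose, obtained by integrating the camera velocity over the line's readout offset. The sketch below illustrates that idea under a constant-velocity assumption (rotation and translation are integrated separately here for brevity; this is an illustration with assumed names, not the paper's full model).

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector w (3,) -> 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def scanline_poses(v, w, readout_time, num_rows):
    """One pose per image row for a rolling-shutter camera moving with
    constant linear velocity v and angular velocity w (camera frame)."""
    poses = []
    for r in range(num_rows):
        t = readout_time * r / max(num_rows - 1, 1)
        T = np.eye(4)
        T[:3, :3] = so3_exp(w * t)
        T[:3, 3] = v * t
        poses.append(T)
    return poses
```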
british machine vision conference | 2011
Maxime Meilland; Andrew I. Comport; Patrick Rives
This paper proposes a model for large illumination variations to improve direct 3D tracking techniques, since they are highly prone to illumination changes. Within this context, dense monocular and multi-camera tracking techniques are presented which each perform in real-time (45Hz). The proposed approach exploits the relative advantages of both model-based and visual odometry techniques for tracking. In the case of direct model-based tracking, photometric models are usually acquired under lighting conditions that differ significantly from those observed by the current camera view; however, model-based approaches avoid drift. Incremental visual odometry, on the other hand, suffers relatively less lighting variation but accumulates drift. To solve this problem, a hybrid approach is proposed that simultaneously minimises drift via a 3D model whilst using locally consistent illumination to correct large photometric differences. Direct 6 dof tracking is performed by an accurate method which iteratively minimizes dense image measurements using non-linear optimisation. A stereo technique for automatically acquiring the 3D photometric model has also been optimised for the purpose of this paper. Real experiments are shown on complex 3D scenes for a hand-held camera undergoing fast 3D movement and various illumination changes, including daylight, artificial lights, significant shadows, non-Lambertian reflections, occlusions and saturations.
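A very reduced version of the illumination-correction idea is a global affine (gain/bias) fit between the live image and the photometric model before the photometric residual is evaluated. The paper's model is richer and locally consistent, so the snippet below is only a simplified sketch with assumed names.

```python
import numpy as np

def fit_affine_illumination(I_ref, I_cur):
    """Least-squares estimate of a global gain/bias (alpha, beta) such that
    alpha * I_cur + beta best matches the reference image I_ref."""
    x = I_cur.ravel()
    y = I_ref.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta

# Typical use inside a direct tracking loop: correct the live image before
# computing the photometric residual against the model key-frame.
# alpha, beta = fit_affine_illumination(I_ref, I_cur)
# residual = (alpha * I_cur + beta) - I_ref
```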
Journal of Field Robotics | 2015
Maxime Meilland; Andrew I. Comport; Patrick Rives
This paper presents a novel method and innovative apparatus for building three-dimensional (3D) dense visual maps of large-scale unstructured environments for autonomous navigation and real-time localization. The main contribution of the paper is an efficient and accurate 3D world representation that allows us to extend the boundaries of state-of-the-art dense visual mapping to large scales. This is achieved via an omnidirectional key-frame representation of the environment, which is able to synthesize photorealistic views of captured environments at arbitrary locations. Locally, the representation is image-based and egocentric, and is composed of accurate augmented spherical panoramas combining photometric (RGB) information, depth (D) information, and saliency for all viewing directions at a particular point in space (i.e., a point in the light field). The spheres are related by a graph of six-degree-of-freedom (DOF) poses (3 DOF translation and 3 DOF rotation) that are estimated through multiview spherical registration. It is shown that this world representation can be used to perform robust real-time localization in 6 DOF of any configuration of visual sensors within their environment, whether they be monocular, stereo, or multiview. Contrary to feature-based approaches, an efficient direct image registration technique is formulated. This approach directly exploits the advantages of the spherical representation by minimizing a photometric error between a current image and a reference sphere. Two novel multicamera acquisition systems have been developed and calibrated to acquire this information, and this paper reports the second system for the first time. Given the robustness and efficiency of this representation, field experiments demonstrating autonomous navigation and large-scale mapping are reported in detail for challenging unstructured environments containing vegetation, pedestrians, varying illumination conditions, trams, and dense traffic.
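Registering a live image against a reference sphere requires mapping viewing directions onto the spherical panorama; one common parameterization is equirectangular. The helper below is a generic illustration of that lookup (the equirectangular layout and names are assumptions; the paper's spheres may be parameterized differently).

```python
import numpy as np

def direction_to_equirect(d, width, height):
    """Map unit viewing directions (N x 3, sphere frame) to pixel coordinates
    on an equirectangular panorama: longitude along x (measured from the z
    axis towards x), latitude along y."""
    lon = np.arctan2(d[:, 0], d[:, 2])               # [-pi, pi]
    lat = np.arcsin(np.clip(d[:, 1], -1.0, 1.0))     # [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * (width - 1)
    v = (lat / np.pi + 0.5) * (height - 1)
    return np.stack([u, v], axis=1)
```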
ieee international conference on cyber technology in automation control and intelligent systems | 2014
Damien Petit; Pierre Gergondet; Andrea Cherubini; Maxime Meilland; Andrew I. Comport; Abderrahmane Kheddar
We present an assisted navigation scheme designed to control a humanoid robot via a brain-computer interface in order to let it interact with the environment and with humans. The interface is based on the well-known steady-state visually evoked potentials (SSVEP), and the stimuli are integrated into the live feedback from the robot's embedded camera displayed on a Head Mounted Display (HMD). One user controlled the HRP-2 humanoid robot in an experiment designed to measure the performance of the new navigation scheme, which is based on visual SLAM feedback: the user is asked to navigate to a given location in order to perform a task. The results show that, without the navigation assistance, it is much more difficult to reach the appropriate pose for performing the task. The detailed results of the experiments are reported in this paper, and we discuss possible improvements to our scheme.
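For readers unfamiliar with SSVEP interfaces: the attended flickering stimulus is typically identified from the EEG spectrum at the stimulation frequencies. The toy detector below illustrates that principle only; it is a single-channel, power-based sketch with assumed names, whereas practical systems (likely including the one in this paper) use more robust multi-channel classifiers.

```python
import numpy as np

def detect_ssvep_target(eeg, fs, stim_freqs):
    """Pick the attended stimulus by comparing spectral power at each flicker
    frequency and its first harmonic in a single EEG channel.
    eeg: 1D signal, fs: sampling rate (Hz), stim_freqs: flicker rates (Hz)."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    scores = []
    for f in stim_freqs:
        score = sum(spectrum[np.argmin(np.abs(freqs - h))] for h in (f, 2 * f))
        scores.append(score)
    return int(np.argmax(scores))
```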
international conference on ubiquitous robots and ambient intelligence | 2014
Pierre Gergondet; Damien Petit; Maxime Meilland; Abderrahmane Kheddar; Andrew I. Comport; Andrea Cherubini
In this paper, we draw perspectives on endowing a humanoid robot with the capability to reach a known object in an indoor environment by combining continuous environment monitoring and map building using SLAM and visual tracking. We integrate and exploit two key components: object recognition using the BLORT toolbox, and SLAM (Simultaneous Localization And Mapping) software that unifies volumetric 3D modeling and image-based key-frame modeling for tracking. Using these two modules, we show that it is possible to reach a given object in the environment provided its model is registered and known. Our integration is demonstrated on the HRP-2 humanoid robot, and we present experimental results that illustrate the performance of our approach.
international conference on robotics and automation | 2011
Cédric Audras; Andrew I. Comport; Maxime Meilland; Patrick Rives
Collaboration
Dive into Maxime Meilland's collaborations.
French Institute for Research in Computer Science and Automation
National Institute of Advanced Industrial Science and Technology