
Publication


Featured research published by Andrew I. Comport.


Intelligent Robots and Systems | 2013

On unifying key-frame and voxel-based dense visual SLAM at large scales

Maxime Meilland; Andrew I. Comport

This paper proposes an approach to real-time dense localisation and mapping that aims at unifying two different representations commonly used to define dense models. On one hand, much research has looked at dense model representations based on 3D voxel grids. On the other hand, image-based key-frame representations for dense environment mapping have been developed. Both techniques have their relative advantages and disadvantages, which will be analysed in this paper. In particular, each representation's space requirements, effective resolution, computational efficiency, accuracy and robustness will be compared. This paper then proposes a new model which unifies these concepts and exhibits the main advantages of each approach within a common framework. One of the main results of the proposed approach is its ability to perform accurate large-scale reconstruction, at the scale of mapping a building.
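
As a rough, hypothetical illustration of the space-size comparison discussed above, the sketch below contrasts the memory footprint of a dense voxel (TSDF) grid with that of a set of RGB-D key-frames. All numbers (scene extent, voxel size, image resolution, key-frame count, bytes per element) are assumptions for illustration, not values from the paper.

```python
def voxel_grid_bytes(extent_m=10.0, voxel_m=0.01, bytes_per_voxel=6):
    """TSDF value + weight per voxel over a cubic volume of side `extent_m`."""
    n = int(extent_m / voxel_m)
    return n ** 3 * bytes_per_voxel

def keyframe_bytes(num_keyframes=100, width=640, height=480, bytes_per_pixel=5):
    """RGB (3 bytes) + 16-bit depth (2 bytes) per pixel, per key-frame."""
    return num_keyframes * width * height * bytes_per_pixel

if __name__ == "__main__":
    print(f"voxel grid : {voxel_grid_bytes() / 1e9:.2f} GB")   # ~6.00 GB
    print(f"key-frames : {keyframe_bytes() / 1e9:.2f} GB")     # ~0.15 GB
```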


International Symposium on Mixed and Augmented Reality | 2013

3D High Dynamic Range dense visual SLAM and its application to real-time object re-lighting

Maxime Meilland; Christian Barat; Andrew I. Comport

Acquiring High Dynamic Range (HDR) light-fields from several images with different exposures (sensor integration periods) has been widely considered for static camera positions. In this paper a new approach is proposed that enables 3D HDR environment maps to be acquired directly from a dynamic set of images in real-time. In particular a method will be proposed to use an RGB-D camera as a dynamic light-field sensor, based on a dense real-time 3D tracking and mapping approach, that avoids the need for a light-probe or the observation of reflective surfaces. The 6dof pose and dense scene structure will be estimated simultaneously with the observed dynamic range so as to compute the radiance map of the scene and fuse a stream of low dynamic range images (LDR) into an HDR image. This will then be used to create an arbitrary number of virtual omni-directional light-probes that will be placed at the positions where virtual augmented objects will be rendered. In addition, a solution is provided for the problem of automatic shutter variations in visual SLAM. Augmented reality results are provided which demonstrate real-time 3D HDR mapping, virtual light-probe synthesis and light source detection for rendering reflective objects with shadows seamlessly with the real video stream in real-time.
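
The following is a minimal sketch of the LDR-to-HDR fusion step the abstract describes, assuming a linearised camera response and known per-frame exposure times (in the paper, shutter variations are themselves estimated). It uses a generic weighted radiance average rather than the authors' exact formulation.

```python
import numpy as np

def fuse_hdr(ldr_frames, exposures):
    """Fuse a list of linearised LDR frames (float images in [0, 1]) taken with
    known exposure times (seconds) into a single radiance map."""
    acc = np.zeros_like(ldr_frames[0])
    weight_sum = np.zeros_like(ldr_frames[0])
    for img, t in zip(ldr_frames, exposures):
        # Hat weighting: trust well-exposed pixels, discount saturated/dark ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)              # per-frame radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)
```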


International Conference on Robotics and Automation | 2013

Super-resolution 3D tracking and mapping

Maxime Meilland; Andrew I. Comport

This paper proposes a new visual SLAM technique that not only integrates 6 degrees of freedom (DOF) pose and dense structure but also simultaneously integrates the colour information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only accounts for the full range of image deformations but also makes it possible to propose a novel criterion for combining the low resolution images based on the difference in resolution between images in 6D space. Another originality of the proposed approach with respect to the current state of the art lies in the minimisation of both colour (RGB) and depth (D) errors, whilst competing approaches only minimise geometry. Several results are given showing that this technique runs in real-time (30 Hz) and is able to map large-scale environments in high resolution whilst simultaneously improving the accuracy and robustness of the tracking.
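
As a hedged illustration of the joint colour and depth minimisation mentioned above, the sketch below stacks photometric and depth residuals between a warped reference and the current frame into a single least-squares objective. The warping itself, the super-resolved key-frame and the weighting lambda are placeholders, not the paper's actual model.

```python
import numpy as np

def rgbd_residual(ref_intensity_warped, cur_intensity,
                  ref_depth_warped, cur_depth, lam=0.1):
    """Stack photometric (RGB) and geometric (D) residuals between a warped
    reference and the current frame; `lam` balances the two terms."""
    r_i = (cur_intensity - ref_intensity_warped).ravel()   # photometric error
    r_d = (cur_depth - ref_depth_warped).ravel()           # depth error
    return np.concatenate([r_i, lam * r_d])

def cost(residual):
    """Least-squares objective minimised over the 6 DOF pose update."""
    return 0.5 * float(np.dot(residual, residual))
```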


International Conference on Computer Vision | 2013

A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration

Maxime Meilland; Tom Drummond; Andrew I. Comport

Motion blur and rolling shutter deformations both inhibit visual motion registration, whether it be due to a moving sensor or a moving target. Whilst both deformations exist simultaneously, no models have been proposed to handle them together. Furthermore, neither deformation has been considered previously in the context of monocular full-image 6 degrees of freedom registration or RGB-D structure and motion. As will be shown, rolling shutter deformation is observed when a camera moves faster than a single pixel in parallax between subsequent scan-lines. Blur is a function of the pixel exposure time and the motion vector. In this paper a complete dense 3D registration model will be derived to account for both motion blur and rolling shutter deformations simultaneously. Various approaches will be compared with respect to ground truth and live real-time performance will be demonstrated for complex scenarios where both blur and shutter deformations are dominant.
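
The core rolling-shutter idea, that each scan-line is captured at a slightly different camera pose, can be sketched as follows under a constant-velocity (constant twist) assumption. The se(3) exponential and the uniform row-readout model are generic illustrations, not the paper's full registration model (which also handles blur).

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map of a twist xi = (v, w) to a 4x4 rigid transform."""
    T = np.zeros((4, 4))
    T[:3, :3] = hat(xi[3:])
    T[:3, 3] = xi[:3]
    return expm(T)

def row_pose(xi_frame, row, num_rows, readout_fraction=1.0):
    """Pose of scan-line `row`, assuming rows are read out uniformly in time
    while the camera moves with the constant inter-frame twist `xi_frame`."""
    alpha = readout_fraction * row / max(num_rows - 1, 1)
    return se3_exp(alpha * np.asarray(xi_frame, dtype=float))
```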


Journal of Visual Communication and Image Representation | 2014

Live RGB-D camera tracking for television production studios

Tommi Tykkälä; Andrew I. Comport; Joni-Kristian Kamarainen; Hannu Hartikainen

Highlights: a novel low-cost tool for camera tracking in broadcasting studio environments; driftless tracking with keyframes; real-time performance using a GPU; allows moving actors in the scene while tracking; comparison with KinFu. In this work, a real-time image-based camera tracker is designed for live television production studios. The major concern is to decrease camera tracking expenses by an affordable vision-based approach. First, a dense keyframe model of the static studio scene is generated using image-based dense tracking and bundle adjustment. Online camera tracking is then defined as a registration problem between the current RGB-D measurement and the nearest keyframe. With accurate keyframe poses, our camera tracking becomes virtually driftless. The static model is also used to avoid moving actors in the scene. Processing dense RGB-D measurements requires special attention when aiming for real-time performance at 30 Hz. We derive a real-time tracker from our cost function for a low-end GPU. The system requires merely an RGB-D sensor, a laptop and a low-end GPU. Camera tracking properties are compared with KinectFusion. Our solution demonstrates robust and driftless real-time camera tracking in a television production studio environment.
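
A minimal sketch of the key-frame lookup step described above: choose the reference key-frame whose stored pose is closest to the current pose estimate, then register the live RGB-D frame against it. The pose-distance metric and its rotation weight are illustrative choices, and the registration itself is omitted.

```python
import numpy as np

def nearest_keyframe(keyframe_poses, current_pose, rot_weight=1.0):
    """Return the index of the key-frame whose 4x4 camera-to-world pose is
    closest to `current_pose` (translation distance + weighted rotation angle)."""
    best_idx, best_dist = None, np.inf
    for i, T in enumerate(keyframe_poses):
        dt = np.linalg.norm(T[:3, 3] - current_pose[:3, 3])              # translation gap
        dR = T[:3, :3].T @ current_pose[:3, :3]
        ang = np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0))  # rotation gap
        dist = dt + rot_weight * ang
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx
```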


Journal of Field Robotics | 2015

Dense Omnidirectional RGB-D Mapping of Large-scale Outdoor Environments for Real-time Localization and Autonomous Navigation

Maxime Meilland; Andrew I. Comport; Patrick Rives

This paper presents a novel method and innovative apparatus for building three-dimensional (3D) dense visual maps of large-scale unstructured environments for autonomous navigation and real-time localization. The main contribution of the paper is an efficient and accurate 3D world representation that allows us to extend the boundaries of state-of-the-art dense visual mapping to large scales. This is achieved via an omnidirectional key-frame representation of the environment, which is able to synthesize photorealistic views of captured environments at arbitrary locations. Locally, the representation is egocentric and image-based, composed of accurate augmented spherical panoramas combining photometric information (RGB), depth information (D), and saliency for all viewing directions at a particular point in space (i.e., a point in the light field). The spheres are related by a graph of six-degree-of-freedom (DOF) poses (3 DOF translation and 3 DOF rotation) that are estimated through multiview spherical registration. It is shown that this world representation can be used to perform robust real-time localization in 6 DOF of any configuration of visual sensors within their environment, whether they be monocular, stereo, or multiview. Contrary to feature-based approaches, an efficient direct image registration technique is formulated. This approach directly exploits the advantages of the spherical representation by minimizing a photometric error between a current image and a reference sphere. Two novel multicamera acquisition systems have been developed and calibrated to acquire this information, and this paper reports on the second system for the first time. Given the robustness and efficiency of this representation, field experiments demonstrating autonomous navigation and large-scale mapping are reported in detail for challenging unstructured environments containing vegetation, pedestrians, varying illumination conditions, trams, and dense traffic.
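
As a hedged sketch of the direct registration idea above, the snippet below evaluates a robust (Huber-weighted) photometric error between a live image and the reference sphere re-rendered at a candidate pose. The spherical warping is passed in as a placeholder callable; only the error term and weighting are shown, not the paper's full multiview spherical registration.

```python
import numpy as np

def photometric_error(live_image, render_sphere_at, pose, huber_delta=0.1):
    """Robust photometric error between `live_image` and the reference sphere
    re-rendered at `pose`. `render_sphere_at(pose)` is a placeholder returning a
    predicted image and a validity mask; it stands in for the spherical warp."""
    predicted, valid = render_sphere_at(pose)
    r = (live_image - predicted)[valid]                 # intensity residuals
    a = np.abs(r)
    huber = np.where(a <= huber_delta,
                     0.5 * r ** 2,
                     huber_delta * (a - 0.5 * huber_delta))
    return huber.sum()
```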


Intelligent Robots and Systems | 2013

Photorealistic 3D mapping of indoors by RGB-D scanning process

Tommi Tykkälä; Andrew I. Comport; Joni-Kristian Kamarainen

In this work, an RGB-D input stream is utilized for GPU-boosted 3D reconstruction of textured indoor environments. The goal is to develop a process that produces standard 3D models of indoor spaces that can be explored virtually. Camera motion is tracked in 3D space by registering the current view with a reference view. Depending on the trajectory shape, the reference is either fetched from a concurrently built keyframe model or from a previous RGB-D measurement. Real-time tracking (30 Hz) is executed on a low-end GPU, which is possible because structural data is not fused concurrently. After camera poses have been estimated, both trajectory and structure are refined in post-processing. The global point cloud is compressed into a watertight polygon mesh using the Poisson reconstruction method. The Poisson method is well suited because it compresses the raw data without introducing multiple geometries and also fills holes efficiently; holes are typically introduced at occluded regions. Texturing is generated by backprojecting the nearest RGB image onto the mesh. The final model is stored in a standard 3D model format to allow easy user exploration and navigation in a virtual 3D environment.
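
The meshing step described above can be sketched with Open3D's Poisson surface reconstruction, shown below. Open3D and the file names are assumptions for illustration; the original pipeline is not tied to this library, and only the point-cloud-to-watertight-mesh step is shown.

```python
import open3d as o3d

# Placeholder file names; the fused global point cloud comes from the tracking
# and post-processing stages described above.
pcd = o3d.io.read_point_cloud("global_cloud.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Poisson reconstruction: a watertight mesh that compresses the raw points and
# fills holes (typically at occluded regions).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```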


International Conference on Robotics and Automation | 2011

A visual servoing model for generalised cameras: Case study of non-overlapping cameras

Andrew I. Comport; Robert E. Mahony; Fabien Spindler

This paper proposes an adaptation of classical image-based visual servoing to a generalised imaging model where cameras are modelled as sets of 3D viewing rays. This new model leads to a generalised visual servoing control formalism that can be applied to any type of imaging system, whether it be multi-camera, catadioptric, non-central, etc. In this paper the generalised 3D viewing cones are parameterised geometrically via Plücker line coordinates. The new visual servoing model is tested on an atypical stereo-camera system with non-overlapping cameras. In this case no 3D information is available from triangulation and the system is comparable to a 2D visual servoing system with a non-central ray-based control law.
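
For reference, the Plücker parameterisation mentioned above represents a viewing ray through a point p with unit direction d as the pair (d, m), with moment m = p x d. The sketch below builds such coordinates for an arbitrary ray; the numeric values are placeholders.

```python
import numpy as np

def plucker_ray(p, d):
    """Plücker coordinates (direction, moment) of the ray through point p with
    direction d: the moment m = p x d encodes where the ray lies in space."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    m = np.cross(np.asarray(p, dtype=float), d)
    return d, m

# Example: a viewing ray whose origin is offset from the robot/base frame,
# as with a non-central or non-overlapping camera rig (values are placeholders).
direction, moment = plucker_ray(p=[0.10, 0.00, 0.05], d=[0.0, 0.0, 1.0])
```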


IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems | 2014

Navigation assistance for a BCI-controlled humanoid robot

Damien Petit; Pierre Gergondet; Andrea Cherubini; Maxime Meilland; Andrew I. Comport; Abderrahmane Kheddar

We present an assisted navigation scheme designed to control a humanoid robot via a brain-computer interface, allowing it to interact with the environment and with humans. The interface is based on the well-known steady-state visually evoked potentials (SSVEP), and the stimuli are integrated into the live feedback from the robot's embedded camera displayed on a Head-Mounted Display (HMD). One user controlled the HRP-2 humanoid robot in an experiment designed to measure the performance of the new navigation scheme, which is based on visual SLAM feedback: the user is asked to navigate to a given location in order to perform a task. The results show that, without the navigation assistance, it is much more difficult to reach the pose required to perform the task. The detailed results of the experiments are reported in this paper, and we discuss possible improvements to our novel scheme.


IEEE/SICE International Symposium on System Integration | 2016

Closed-loop RGB-D SLAM multi-contact control for humanoid robots

Arnaud Tanguy; Pierre Gergondet; Andrew I. Comport; Abderrahmane Kheddar

We report the integration of state-of-the-art dense 6D simultaneous localisation and mapping (D6DSLAM) to close the QP control loop on a humanoid robot during multi-contact motions. Our multi-contact planning defines desired contacts based on 3D (CAD) models and generates a reaching plan. Registration of the 3D model onto an RGB-D key-frame graph representation of the explored environment allows visual odometry to be used to make real-time adjustments to the reaching plan, leading to improved robustness against a wide range of perturbations. Extensive results are presented on various complex tasks using the HRP-2Kai humanoid robot, including valve and car steering-wheel grasping and multi-contact stair climbing from approximate initial humanoid poses.
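
A hedged sketch of the model-to-map registration step mentioned above, using plain point-to-plane ICP in Open3D to align a sampled CAD model to the reconstructed environment. Open3D, ICP and the file names are illustrative assumptions; the paper's registration is embedded in its dense RGB-D SLAM system rather than performed this way.

```python
import open3d as o3d

# Placeholder inputs: a CAD model of the target object and the reconstructed
# key-frame map exported as a point cloud.
model = o3d.io.read_triangle_mesh("valve_cad.stl").sample_points_uniformly(50000)
scene = o3d.io.read_point_cloud("keyframe_map.ply")
scene.estimate_normals()

# Point-to-plane ICP aligns the CAD model to the map; the resulting transform
# would be used to adjust the planned contact locations. In practice a coarse
# initial pose estimate would be supplied via the `init` argument.
result = o3d.pipelines.registration.registration_icp(
    model, scene, max_correspondence_distance=0.02,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(result.transformation)
```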

Collaboration


Dive into Andrew I. Comport's collaborations.

Top Co-Authors

Maxime Meilland (University of Nice Sophia Antipolis)
Joni-Kristian Kamarainen (Tampere University of Technology)
Abderrahmane Kheddar (National Institute of Advanced Industrial Science and Technology)
Tommi Tykkälä (Lappeenranta University of Technology)
Pierre Gergondet (National Institute of Advanced Industrial Science and Technology)
Arnaud Tanguy (University of Nice Sophia Antipolis)
Christian Barat (Centre national de la recherche scientifique)
Damien Petit (National Institute of Advanced Industrial Science and Technology)