
Publication


Featured research published by Mariano Jaimez.


International Conference on Robotics and Automation (ICRA) | 2015

A primal-dual framework for real-time dense RGB-D scene flow

Mariano Jaimez; Mohamed Souiai; Javier Gonzalez-Jimenez; Daniel Cremers

This paper presents the first method to compute dense scene flow in real time for RGB-D cameras. It is based on a variational formulation in which brightness constancy and geometric consistency are imposed. To exploit the depth data provided by RGB-D cameras, regularization of the flow field is imposed on the 3D surface (or set of surfaces) of the observed scene instead of on the image plane, leading to more geometrically consistent results. The minimization problem is efficiently solved by a primal-dual algorithm implemented on a GPU, achieving previously unseen temporal performance. Several tests have been conducted to compare our approach with a state-of-the-art method (RGB-D flow), with both quantitative and qualitative evaluation. Moreover, an additional set of experiments has been carried out to show the applicability of our work to real-time motion estimation. Results demonstrate the accuracy of our approach, which outperforms RGB-D flow and is able to estimate heterogeneous and non-rigid motions at a high frame rate.
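The primal-dual solver at the heart of the method can be illustrated on a much simpler variational problem. Below is a toy Chambolle-Pock iteration for 1D total-variation denoising; this is a sketch of the same class of algorithm, not the paper's actual scene flow energy, and all names, step sizes, and parameters are illustrative.

```python
import numpy as np

def tv_denoise_1d(f, lam=1.0, iters=200, tau=0.25, sigma=0.25):
    """Toy Chambolle-Pock primal-dual scheme for 1D ROF denoising:

        min_u  (lam / 2) * ||u - f||^2 + ||grad u||_1

    The dual variable p lives on the discrete gradient (forward differences).
    """
    u = f.copy()
    u_bar = f.copy()
    p = np.zeros(len(f) - 1)
    for _ in range(iters):
        # dual ascent, then projection onto the unit ball (|p| <= 1)
        p = np.clip(p + sigma * np.diff(u_bar), -1.0, 1.0)
        # discrete divergence of p (negative adjoint of forward differences)
        div_p = np.concatenate(([p[0]], np.diff(p), [-p[-1]]))
        # primal descent step with closed-form prox of the quadratic data term
        u_new = (u + tau * div_p + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2 * u_new - u
        u = u_new
    return u
```

The alternation between a dual update (with a simple projection) and a cheap primal update is what makes this family of solvers so amenable to GPU parallelization.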


IEEE Transactions on Robotics | 2015

Fast Visual Odometry for 3-D Range Sensors

Mariano Jaimez; Javier Gonzalez-Jimenez

This paper presents a new dense method to compute the odometry of a free-flying range sensor in real time. The method applies the range flow constraint equation to sensed points in the temporal flow to derive the linear and angular velocity of the sensor in a rigid environment. Although this approach is applicable to any range sensor, we particularize its formulation to estimate the 3-D motion of a range camera. The proposed algorithm is tested with different image resolutions and compared with two state-of-the-art methods: generalized iterative closest point (GICP) [1] and robust dense visual odometry (RDVO) [2]. Experiments show that our approach clearly outperforms GICP, which uses the same geometric input data, whereas it achieves results similar to those of RDVO, which requires both geometric and photometric data to work. Furthermore, experiments are carried out to demonstrate that our approach is able to estimate fast motions at 60 Hz running on a single CPU core, a performance that had not previously been reported in the literature. The algorithm is available online under an open-source license so that the robotic community can benefit from it.
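Stacking one range flow constraint per sensed point yields an overdetermined linear system in the sensor's 6-DoF twist, which can be solved in closed form. The sketch below assumes the per-point constraint rows (the Jacobian J and temporal range derivatives r_t) have already been computed; how those rows are built from the range images is the paper's contribution and is not reproduced here.

```python
import numpy as np

def estimate_twist(J, r_t):
    """Least-squares solve of the stacked range flow constraints J @ xi = -r_t.

    J   : (N, 6) geometric constraint rows, one per sensed point
    r_t : (N,)   temporal range derivatives
    Returns the 6-DoF twist (vx, vy, vz, wx, wy, wz).
    """
    xi, *_ = np.linalg.lstsq(J, -r_t, rcond=None)
    return xi

# Synthetic check: build residuals from a known twist and recover it.
rng = np.random.default_rng(0)
J = rng.standard_normal((500, 6))
xi_true = np.array([0.1, 0.0, -0.05, 0.01, 0.02, 0.0])
r_t = -(J @ xi_true)
print(np.allclose(estimate_twist(J, r_t), xi_true))  # True
```

A single dense linear solve per frame, rather than an iterative correspondence search, is what allows this class of method to run at sensor frame rate on a CPU.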


Mechanics Based Design of Structures and Machines | 2012

Design and Modelling of Omnibola©, A Spherical Mobile Robot

Mariano Jaimez; Juan J. Castillo; Francisco José García; J.A. Cabrera

A spherical mobile robot called Omnibola© is introduced and analyzed in this article. Some advantages of this kind of robot over typical wheeled robots are described. Its geometry and features are presented, emphasizing those that distinguish it from other ball-shaped robots. A mathematical model has been developed as a tool to study the robot's dynamics. Experiments were conducted to confirm that the model's results are similar to the behavior observed in the real robot. In both experiments and simulations, the robot's behavior is quite oscillatory. Because of this, a simple control law is proposed to stabilize the oscillatory motion.


International Conference on 3D Vision (3DV) | 2015

Motion Cooperation: Smooth Piece-wise Rigid Scene Flow from RGB-D Images

Mariano Jaimez; Mohamed Souiai; Jörg Stückler; Javier Gonzalez-Jimenez; Daniel Cremers

We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation) of the estimated rigid motions, which provides better results than traditional sharp piecewise segmentations. Within a variational framework, the smooth segments of the scene and their corresponding rigid velocities are alternately refined until convergence. A K-means-based segmentation is employed as an initialization, and the number of regions is subsequently adapted during the optimization process to capture any arbitrary number of independently moving objects. We evaluate our approach with both synthetic and real RGB-D images that contain varied and large motions. The experiments show that our method estimates the scene flow more accurately than the most recent works in the field, and at the same time provides a meaningful segmentation of the scene based on 3D motion.
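The key idea of the non-binary labels can be written down directly: the velocity of a point is a convex combination of the K estimated rigid motions. The sketch below shows only this interpolation step, with illustrative names; estimating the labels and rigid motions themselves is the variational part of the paper and is not reproduced.

```python
import numpy as np

def blended_velocity(p, labels, v, w):
    """Velocity of point p as a convex combination of K rigid motions.

    p      : (3,)   point position
    labels : (K,)   non-binary weights for the K rigid parts, summing to 1
    v, w   : (K, 3) linear / angular velocity of each rigid part
    """
    per_part = v + np.cross(w, p)   # rigid velocity of p under each part
    return labels @ per_part        # soft interpolation between the parts

# A point halfway between a translating part and a static part
p = np.array([1.0, 0.0, 0.0])
labels = np.array([0.5, 0.5])
v = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
w = np.zeros((2, 3))
print(blended_velocity(p, labels, v, w))  # half of part 0's velocity
```

At transitions between rigid parts, intermediate label values produce smoothly varying velocities instead of the hard discontinuities of a sharp segmentation.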


International Journal of Advanced Robotic Systems | 2015

Efficient Reactive Navigation with Exact Collision Determination for 3D Robot Shapes

Mariano Jaimez; Jose-Luis Blanco; Javier Gonzalez-Jimenez

This paper presents a reactive navigator for wheeled mobile robots moving on a flat surface which takes into account both the actual 3D shape of the robot and the 3D surrounding obstacles. The robot volume is modelled by a number of prisms consecutive in height, and the detected obstacles, which can be provided by different kinds of range sensor, are segmented into these heights. Then, the reactive navigation problem is tackled by a number of concurrent 2D navigators, one for each prism, which are consistently and efficiently combined to yield an overall solution. Our proposal for each 2D navigator is based on the concept of the “Parameterized Trajectory Generator” which models the robot shape as a polygon and embeds its kinematic constraints into different motion models. Extensive testing has been conducted in office-like and real house environments, covering a total distance of 18.5 km, to demonstrate the reliability and effectiveness of the proposed method. Moreover, additional experiments are performed to highlight the advantages of a 3D-aware reactive navigator. The implemented code is available under an open-source licence.
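The first step the abstract describes, segmenting detected obstacle points into the height intervals of the robot's prisms, can be sketched in a few lines. Function and parameter names are illustrative, not taken from the paper's implementation.

```python
def segment_by_height(points, prism_bounds):
    """Assign 3D obstacle points to height prisms.

    points       : iterable of (x, y, z) obstacle points
    prism_bounds : sorted z boundaries [z0, z1, ..., zP] defining P prisms
    Returns a list of P lists holding the (x, y) projections per prism,
    ready to be handed to one 2D reactive navigator per prism.
    """
    prisms = [[] for _ in range(len(prism_bounds) - 1)]
    for x, y, z in points:
        for i in range(len(prism_bounds) - 1):
            if prism_bounds[i] <= z < prism_bounds[i + 1]:
                prisms[i].append((x, y))
                break  # points above/below all prisms are ignored
    return prisms

pts = [(1, 0, 0.1), (2, 1, 0.6), (0, 0, 1.4)]
print(segment_by_height(pts, [0.0, 0.5, 1.0, 1.5]))
```

Each prism then only needs a 2D navigator, which is what keeps the 3D-aware method cheap enough for reactive control.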


International Conference on Robotics and Automation (ICRA) | 2017

Fast odometry and scene flow from RGB-D cameras based on geometric clustering

Mariano Jaimez; Christian Kerl; Javier Gonzalez-Jimenez; Daniel Cremers

In this paper we propose an efficient solution to jointly estimate the camera motion and a piecewise-rigid scene flow from an RGB-D sequence. The key idea is to perform a two-fold segmentation of the scene, dividing it into geometric clusters that are, in turn, classified as static or moving elements. Representing the dynamic scene as a set of rigid clusters drastically accelerates the motion estimation, while segmenting it into static and dynamic parts allows us to separate the camera motion (odometry) from the rest of motions observed in the scene. The resulting method robustly and accurately determines the motion of an RGB-D camera in dynamic environments with an average runtime of 80 milliseconds on a multi-core CPU. The code is available for public use/test.
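The alternation between motion estimation and static/moving classification of clusters can be sketched as below. This is a toy version under strong simplifying assumptions: the per-point constraint rows are taken as given, and the residual threshold and round count are illustrative, not the paper's.

```python
import numpy as np

def odometry_with_clusters(J, r, cluster_ids, thresh=0.5, rounds=3):
    """Alternate between estimating camera motion from static points and
    relabelling whole geometric clusters as static or moving.

    J           : (N, 6) constraint rows per point
    r           : (N,)   residual terms per point
    cluster_ids : (N,)   geometric cluster index per point
    Returns the camera twist and a boolean static mask per point.
    """
    static = np.ones(len(r), dtype=bool)
    xi = np.zeros(6)
    for _ in range(rounds):
        if not static.any():
            break
        # odometry from the points currently labelled static
        xi, *_ = np.linalg.lstsq(J[static], -r[static], rcond=None)
        res = np.abs(J @ xi + r)
        # clusters move as a whole: relabel each one by its mean residual
        for c in np.unique(cluster_ids):
            m = cluster_ids == c
            static[m] = res[m].mean() < thresh
    return xi, static
```

Treating whole clusters, rather than individual pixels, as the unit of labelling is what makes both the segmentation and the motion estimation fast.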


International Conference on Robotics and Automation (ICRA) | 2016

Planar odometry from a radial laser scanner. A range flow-based approach

Mariano Jaimez; Javier G. Monroy; Javier Gonzalez-Jimenez

In this paper we present a fast and precise method to estimate the planar motion of a lidar from consecutive range scans. For every scanned point we formulate the range flow constraint equation in terms of the sensor velocity, and minimize a robust function of the resulting geometric constraints to obtain the motion estimate. In contrast to traditional approaches, this method does not search for correspondences but performs dense scan alignment based on the scan gradients, in the fashion of dense 3D visual odometry. The minimization problem is solved in a coarse-to-fine scheme to cope with large displacements, and a smooth filter based on the covariance of the estimate is employed to handle uncertainty in underconstrained scenarios (e.g. corridors). Simulated and real experiments have been performed to compare our approach with two prominent scan matchers and with wheel odometry. Quantitative and qualitative results demonstrate the superior performance of our approach, which, along with its very low computational cost (0.9 milliseconds on a single CPU core), makes it suitable for robotic applications that require planar odometry. For this purpose, we also provide the code so that the robotics community can benefit from it.
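For the planar case the unknown reduces to a 3-DoF twist (vx, vy, omega), and the robust minimization can be sketched with iteratively reweighted least squares. The Cauchy weighting below is one common choice for a robust function, not necessarily the paper's exact one, and all names and constants are illustrative.

```python
import numpy as np

def planar_twist_irls(J, r, iters=10, c=0.2):
    """IRLS solve of the stacked planar range flow constraints J @ xi = -r.

    J : (N, 3) constraint rows, one per scanned point
    r : (N,)   temporal range derivatives
    Returns the planar twist (vx, vy, omega).
    """
    w = np.ones(len(r))
    xi = np.zeros(3)
    for _ in range(iters):
        # reweighted least squares step
        xi, *_ = np.linalg.lstsq(J * w[:, None], -(r * w), rcond=None)
        res = J @ xi + r
        # Cauchy-style weights: large residuals (outliers) are down-weighted
        w = 1.0 / (1.0 + (res / c) ** 2)
    return xi
```

Down-weighting inconsistent constraints lets the estimate survive dynamic objects or bad returns without any explicit correspondence rejection step.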


Archive | 2015

Design of a Driving Module for a Hybrid Locomotion Robot

Juan J. Castillo; J.A. Cabrera; Mariano Jaimez; F. Vidal; Antonio Simón

One of the challenges in today’s mobile robotics is the design of high mobility and maneuverability robots. In this work we present the design and construction of a new concept of a locomotion system for mobile robots. It consists of a hybrid leg-wheel module that can be attached to the main body of a robot in a similar way to a conventional wheel. The mechanical configuration of the driving module is described, emphasizing the characteristics which make it different from other hybrid locomotion systems. A dynamic model that simulates the movement of the module was developed to analyze its behavior and to test different control algorithms that were subsequently implemented on the real module. Finally, we have carried out a series of simple experiments that demonstrate the correct operation of the module on flat ground without obstacles.


Computer Vision and Pattern Recognition (CVPR) | 2017

An Efficient Background Term for 3D Reconstruction and Tracking with Smooth Surface Models

Mariano Jaimez; Thomas J. Cashman; Andrew W. Fitzgibbon; Javier Gonzalez-Jimenez; Daniel Cremers

We present a novel strategy to shrink and constrain a 3D model, represented as a smooth spline-like surface, within the visual hull of an object observed from one or multiple views. This new background or silhouette term combines the efficiency of previous approaches based on an image-plane distance transform with the accuracy of formulations based on raycasting or ray potentials. The overall formulation is solved by alternating an inner nonlinear minimization (raycasting) with a joint optimization of the surface geometry, the camera poses and the data correspondences. Experiments on 3D reconstruction and object tracking show that the new formulation corrects several deficiencies of existing approaches, for instance when modelling non-convex shapes. Moreover, our proposal is more robust against defects in the object segmentation and inherently handles the presence of uncertainty in the measurements (e.g. null depth values in images provided by RGB-D cameras).


2017 ISOCS/IEEE International Symposium on Olfaction and Electronic Nose (ISOEN) | 2017

Online estimation of 2D wind maps for olfactory robots

Javier G. Monroy; Mariano Jaimez; Javier Gonzalez-Jimenez

This work introduces a novel solution to approximate in real time the 2D wind flow present in a geometrically known environment. It is grounded in the probabilistic framework provided by a Markov random field and enables the estimation of the most probable wind field from a set of noisy observations, for the case of incompressible and steady wind flow. Our method delivers reasonably precise results without resorting to common unrealistic assumptions such as homogeneous wind flow or the absence of obstacles, and performs very efficiently (less than 0.5 seconds for an environment represented with a 100×100 cell grid). This makes the approach well suited to applications that require real-time estimation of the wind flow, such as localizing gas sources, predicting gas dispersion, or mapping the distribution of different chemicals released in a given scenario.
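The core combination the abstract describes, noisy point observations plus an incompressibility (zero-divergence) coupling between neighbouring cells, can be sketched as a small linear least-squares problem on a grid. This is a toy MAP estimate under illustrative discretization choices, not the paper's MRF formulation; obstacles and boundary conditions are omitted.

```python
import numpy as np

def estimate_wind(obs, n, lam=10.0):
    """Tiny wind-field MAP sketch on an n x n grid of (u, v) cells.

    obs : dict {(i, j): (u_obs, v_obs)} of noisy wind readings
    lam : weight of the zero-divergence (incompressibility) coupling
    Returns an (n, n, 2) array with the estimated (u, v) per cell.
    """
    idx = lambda i, j, k: 2 * (i * n + j) + k  # k=0 -> u, k=1 -> v
    N = 2 * n * n
    rows, rhs = [], []
    # observation terms pull cells toward the noisy readings
    for (i, j), (uo, vo) in obs.items():
        for k, val in ((0, uo), (1, vo)):
            row = np.zeros(N)
            row[idx(i, j, k)] = 1.0
            rows.append(row)
            rhs.append(val)
    # discrete zero-divergence: u(i,j+1) - u(i,j) + v(i+1,j) - v(i,j) = 0
    for i in range(n - 1):
        for j in range(n - 1):
            row = np.zeros(N)
            row[idx(i, j + 1, 0)] += lam
            row[idx(i, j, 0)] -= lam
            row[idx(i + 1, j, 1)] += lam
            row[idx(i, j, 1)] -= lam
            rows.append(row)
            rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol.reshape(n, n, 2)
```

Because all terms are linear-Gaussian, the whole field reduces to one sparse least-squares solve, which is why the estimate stays fast even on large grids.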

Collaboration


Dive into Mariano Jaimez's collaborations.

Top Co-Authors

F. Vidal

University of Málaga