
Publications


Featured research published by Roberto G. Valenti.


International Conference on Robotics and Automation (ICRA) | 2013

Fast visual odometry and mapping from RGB-D data

Ivan Dryanovski; Roberto G. Valenti; Jizhong Xiao

An RGB-D camera is a sensor that outputs color and depth information about the scene it observes. In this paper, we present a real-time visual odometry and mapping system for RGB-D cameras. The system runs at frequencies of 30 Hz and higher in a single thread on a desktop CPU, with no GPU acceleration required. We recover the unconstrained 6-DoF trajectory of a moving camera by aligning sparse features observed in the current RGB-D image against a model of previous features. The model is persistent and dynamically updated from new observations using a Kalman filter. We formulate a novel uncertainty measure for sparse RGB-D features, based on a Gaussian mixture model, for the filtering stage. Our registration algorithm is capable of closing small-scale loops in indoor environments online, without any additional SLAM back-end techniques.
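
The persistent, dynamically updated feature model can be sketched as a per-feature Kalman update, where each mapped 3D feature keeps a mean and covariance that are blended with each new observation. This is a minimal illustration, not the paper's actual formulation (which also uses a Gaussian mixture uncertainty measure); the helper name and the direct-observation model (H = I) are assumptions.

```python
import numpy as np

def kf_update_feature(mean, cov, z, R):
    """Blend a stored 3D feature (mean, 3x3 cov) with a new observation z
    carrying measurement covariance R. Direct observation model (H = I)."""
    S = cov + R                      # innovation covariance
    K = cov @ np.linalg.inv(S)       # Kalman gain
    new_mean = mean + K @ (z - mean)
    new_cov = (np.eye(3) - K) @ cov
    return new_mean, new_cov
```

With equal prior and measurement covariances, the updated mean lands halfway between model and observation and the feature's covariance shrinks, which is exactly the "dynamically updated model" behavior the abstract describes.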


Sensors | 2015

Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs

Roberto G. Valenti; Ivan Dryanovski; Jizhong Xiao

Orientation estimation using low-cost sensors is an important task for Micro Aerial Vehicles (MAVs) in order to obtain good feedback for the attitude controller. The challenges come from the low accuracy and noisy data of the MicroElectroMechanical System (MEMS) technology, which is the basis of modern, miniaturized inertial sensors. In this article, we describe a novel approach to obtain an estimate of the orientation in quaternion form from the observations of gravity and magnetic field. Our approach provides a quaternion estimate as the algebraic solution of a system built from inertial/magnetic observations. We separate the problems of finding the "tilt" quaternion and the heading quaternion into two sub-parts of our system. This procedure is the key to avoiding the impact of magnetic disturbances on the roll and pitch components of the orientation when the sensor is surrounded by unwanted magnetic flux. We demonstrate the validity of our method first analytically and then empirically using simulated data. We propose a novel complementary filter for MAVs that fuses gyroscope data with accelerometer and magnetic field readings. The correction part of the filter is based on the method described above and works for both IMU (Inertial Measurement Unit) and MARG (Magnetic, Angular Rate, and Gravity) sensors. We evaluate the effectiveness of the filter and show that it significantly outperforms other common methods, using publicly available datasets with ground-truth data recorded during a real flight experiment of a micro quadrotor helicopter.
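
The "tilt" half of the algebraic solution can be illustrated with a closed-form quaternion computed from a normalized accelerometer reading alone, assuming the accelerometer observes only gravity. This is a generic sketch of the idea for the az >= 0 hemisphere; the exact sign conventions and the handling of the other hemisphere in the published filter are not reproduced here.

```python
import numpy as np

def tilt_quaternion(acc):
    """Closed-form 'tilt' quaternion (roll/pitch only, zero yaw) from an
    accelerometer reading assumed to measure gravity. One common algebraic
    form, valid for the az >= 0 hemisphere; conventions vary."""
    ax, ay, az = acc / np.linalg.norm(acc)
    if az < 0:
        raise ValueError("use the complementary formula for az < 0")
    s = np.sqrt(2.0 * (az + 1.0))
    return np.array([s / 2.0, -ay / s, ax / s, 0.0])  # [w, x, y, z]
```

A level sensor (acc pointing straight up along z) yields the identity quaternion, and the result is unit-norm by construction, so no iterative optimization is needed, which is the appeal of the algebraic approach.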


International Conference on Robotics and Automation (ICRA) | 2014

Autonomous quadrotor flight using onboard RGB-D visual odometry

Roberto G. Valenti; Ivan Dryanovski; Carlos Jaramillo; Daniel Perea Strom; Jizhong Xiao

In this paper we present a navigation system for Micro Aerial Vehicles (MAVs) based on information provided by a visual odometry algorithm processing data from an RGB-D camera. The visual odometry algorithm uses an uncertainty analysis of the depth information to align newly observed features against a global sparse model of previously detected 3D features. The visual odometry provides updates at roughly 30 Hz, which are fused at 1 kHz with the inertial sensor data through a Kalman filter. The high-rate pose estimate is used as feedback for the controller, enabling autonomous flight. We developed a 4-DoF path planner and implemented a real-time 3D SLAM system, with the entire system running onboard. The experimental results and live video demonstrate the autonomous flight and 3D SLAM capabilities of the quadrotor with our system.
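
The multi-rate fusion idea (slow visual-odometry fixes correcting a fast inertial prediction) can be sketched in one dimension with a toy constant-velocity Kalman filter. The 1-D state, rates, and noise values below are illustrative assumptions, not the system's actual filter.

```python
import numpy as np

def fuse(imu_accels, vo_positions, dt_imu=0.001, vo_every=33, vo_var=0.01):
    """Toy multi-rate fusion: integrate IMU acceleration every dt_imu,
    apply a Kalman position correction whenever a slower VO fix arrives."""
    x = np.zeros(2)                         # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt_imu], [0.0, 1.0]])
    Q = 1e-6 * np.eye(2)
    H = np.array([[1.0, 0.0]])              # VO observes position only
    vo_iter = iter(vo_positions)
    for k, a in enumerate(imu_accels):
        # fast inertial prediction step
        x = F @ x + np.array([0.5 * dt_imu**2, dt_imu]) * a
        P = F @ P @ F.T + Q
        if (k + 1) % vo_every == 0:         # slow visual-odometry update
            z = next(vo_iter, None)
            if z is None:
                continue
            S = (H @ P @ H.T).item() + vo_var
            K = (P @ H.T) / S               # 2x1 Kalman gain
            x = x + (K * (z - x[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
    return x
```

The controller would consume the high-rate predicted state every step, while the VO correction only lands every `vo_every` steps, mirroring the 30 Hz / 1 kHz split described above.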


IEEE Transactions on Instrumentation and Measurement | 2016

A Linear Kalman Filter for MARG Orientation Estimation Using the Algebraic Quaternion Algorithm

Roberto G. Valenti; Ivan Dryanovski; Jizhong Xiao

Real-time orientation estimation using low-cost inertial sensors is essential for all applications where size and power consumption are critical constraints. Such applications include robotics, human motion analysis, and mobile devices. This paper presents a linear Kalman filter for magnetic, angular rate, and gravity (MARG) sensors that processes angular rate, acceleration, and magnetic field data to obtain an estimate of the orientation in quaternion representation. Acceleration and magnetic field observations are preprocessed through a novel external algorithm, which computes the quaternion orientation as the composition of two algebraic quaternions. The decoupled nature of the two quaternions makes the roll and pitch components of the orientation immune to magnetic disturbances. The external algorithm reduces the complexity of the filter, making the measurement equations linear. The real-time implementation and test results of the Kalman filter are presented and compared against a typical quaternion-based extended Kalman filter and a constant-gain filter based on the gradient-descent algorithm.
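
Because the measurement arrives already in quaternion form from the external algorithm, the update stage can stay linear (H = I): no Jacobians are needed, unlike an EKF. The sketch below shows one predict/update step under that structure; the scalar-first quaternion convention, the noise values, and the first-order gyro propagation are illustrative assumptions.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def linear_quat_kf_step(q, P, gyro, q_meas, dt, q_var=1e-4, r_var=1e-2):
    """One step of a linear quaternion Kalman filter sketch: propagate the
    quaternion with the gyro rate, then apply a LINEAR update against a
    quaternion measurement (e.g. computed algebraically from accelerometer
    and magnetometer data), and renormalize."""
    # predict: q <- q * dq(gyro * dt), first-order increment
    dq = np.concatenate(([1.0], 0.5 * dt * np.asarray(gyro)))
    q = quat_mult(q, dq)
    q /= np.linalg.norm(q)
    P = P + q_var * np.eye(4)
    # update: H = I, so the gain is a plain 4x4 blend
    K = P @ np.linalg.inv(P + r_var * np.eye(4))
    q = q + K @ (q_meas - q)
    P = (np.eye(4) - K) @ P
    return q / np.linalg.norm(q), P
```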


Sensors | 2016

Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

Carlos Jaramillo; Roberto G. Valenti; Ling Guo; Jizhong Xiao

We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics, such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances.
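
Computing 3D information from stereo correspondences comes down to triangulating back-projected rays. A generic midpoint triangulation for two non-parallel rays (not the paper's omnistereo-specific model) can be sketched as:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the midpoint of the closest points on two rays, each given
    by an origin o and a direction d. Assumes the rays are not parallel."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    c = d1 @ d2                          # cosine of angle between rays
    denom = 1.0 - c * c
    t1 = (b @ d1 - (b @ d2) * c) / denom
    t2 = ((b @ d1) * c - b @ d2) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For intersecting rays the midpoint coincides with the intersection; for noisy, skew rays the gap between the two closest points is a natural input to the kind of uncertainty model the abstract mentions.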


IEEE International Conference on Robotics and Biomimetics (ROBIO) | 2013

6-DoF pose localization in 3D point-cloud dense maps using a monocular camera

Carlos Jaramillo; Ivan Dryanovski; Roberto G. Valenti; Jizhong Xiao

We present a 6-degree-of-freedom (6-DoF) pose localization method for a monocular camera in a 3D point-cloud dense map prebuilt by depth sensors (e.g., RGB-D sensor, laser scanner, etc.). We employ fast and robust 2D feature detection on the real camera to be matched against features from a virtual view. The virtual view (color and depth images) is constructed by projecting the map's 3D points onto a plane using the previous localized pose of the real camera. 2D-to-3D point correspondences are obtained from the inherent relationship between the real camera's 2D features and their matches on the virtual depth image (projected 3D points). Thus, we can solve the Perspective-n-Point (PnP) problem in order to find the relative pose between the real and virtual cameras. With the help of RANSAC, the projection error is minimized even further. Finally, the real camera's pose is solved with respect to the map by a simple frame transformation. This procedure repeats for each time step (except for the initial case). Our results indicate that a monocular camera alone can be localized within the map in real time (at QVGA resolution). Our method differs from others in that no chain of poses is needed or kept. Our localization is not susceptible to drift because the history of motion (odometry) is mostly independent over each PnP + RANSAC solution, which discards past errors. In fact, the previously known pose only acts as a region of interest to associate 2D features on the real image with 3D points in the map. The applications of our proposed method are varied, and it is, perhaps, a solution that has not been attempted before.
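
The core of the method is rendering a virtual depth view of the map from the previously localized pose, so that 2D features on the real image can be associated with 3D map points. A minimal sketch of that projection step, assuming a plain pinhole camera model and hypothetical variable names:

```python
import numpy as np

def render_virtual_depth(points_map, R_prev, t_prev, K, width, height):
    """Project map 3D points into a virtual pinhole camera placed at the
    previously localized pose (R_prev, t_prev), producing a per-pixel
    depth image with z-buffering (nearest surface wins)."""
    depth = np.full((height, width), np.inf)
    # transform map points into the previous camera frame
    pts_cam = (R_prev @ points_map.T).T + t_prev
    for X in pts_cam:
        if X[2] <= 0:                        # behind the camera
            continue
        u, v, _ = K @ (X / X[2])             # perspective projection
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < width and 0 <= vi < height and X[2] < depth[vi, ui]:
            depth[vi, ui] = X[2]
    return depth
```

Each valid pixel of this virtual depth image back-projects to a known 3D map point, which is exactly what turns 2D feature matches into the 2D-to-3D correspondences consumed by PnP + RANSAC.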


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2016

GUMS: A generalized unified model for stereo omnidirectional vision (demonstrated via a folded catadioptric system)

Carlos Jaramillo; Roberto G. Valenti; Jizhong Xiao

This paper introduces GUMS, a complete projection model for omnidirectional stereo vision systems. GUMS is based on the existing generalized unified model (GUM), which we extend in order to satisfy a tight relationship between a pair of omnidirectional views for fixed-baseline sensors. We exemplify the proposed model's calibration via a single-camera coaxial omnistereo system in a jointly bundle-adjusted fashion. We compare our coupled method against the naive approach, in which the calibration of intrinsic parameters is first performed individually for each omnidirectional view using existing monocular implementations, and the extrinsic parameters are then solved for as an additional step that has no effect on the initially computed intrinsic model solutions. We validate GUMS and its calibration effectiveness using both real and synthetic systems against ground-truth data. Our calibration method proves successful at correcting the unavoidable misalignment present in vertically-configured catadioptric rigs. We also generate 3D point clouds with the calibrated GUMS systems in order to demonstrate the qualitative outcome of our contribution.


IEEE International Conference on Robotics and Biomimetics (ROBIO) | 2013

A non-inertial acceleration suppressor for low cost inertial measurement unit attitude estimation

Roberto G. Valenti; Ivan Dryanovski; Jizhong Xiao

This paper presents a method to evaluate the attitude of a rigid body under conditions of high non-gravitational acceleration. Most attitude estimation algorithms based on data from low-cost Inertial Measurement Units (IMUs) assume that the total acceleration perceived by the accelerometer is gravity, or at most a small variation of it. When the actual conditions are far from this assumption, the attitude estimation produces wrong results. We propose a method that uses an external RGB-D camera to measure the non-inertial linear acceleration. This acceleration is subtracted from the total acceleration reading of the accelerometer in order to obtain a truthful gravity direction that is fed into the fusion algorithm. The performance of our attitude estimation has been evaluated empirically under non-gravitational acceleration. We compare our results against the output of a commercially available IMU sensor based on a Kalman filter algorithm, as well as the estimate of a recently developed fusion algorithm based on gradient descent, showing significant improvement.
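
The correction itself is simple once the non-inertial acceleration is available in the right frame: subtract it from the raw accelerometer reading and normalize to recover the gravity direction. A minimal sketch, with frame handling and sign conventions simplified to assumptions:

```python
import numpy as np

def gravity_direction(acc_meas, lin_acc_body):
    """Remove the externally measured non-gravitational (linear)
    acceleration, expressed in the body frame, from the raw accelerometer
    reading, then normalize to obtain the gravity direction that is fed
    to the attitude fusion algorithm."""
    g_body = acc_meas - lin_acc_body
    return g_body / np.linalg.norm(g_body)
```

The interesting part of the paper is obtaining `lin_acc_body` reliably (here, from an external RGB-D camera); once it is known, the suppressor reduces to this subtraction.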


IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems | 2016

An autonomous flyer photographer

Roberto G. Valenti; Yong-Dian Jian; Kai Ni; Jizhong Xiao

In this paper we explore the combination of a latest-generation mobile device and a micro quadrotor platform to perform indoor autonomous navigation for the purpose of autonomous photography. We use the Yellowstone tablet from Google's Tango project [1], equipped with an onboard, fully integrated sensing platform and significant computational capability. To the best of our knowledge, we are the first to exploit Google's Tango tablet as a source of pose estimates to control a quadrotor's motion. Using the tablet's onboard camera, the system is able to detect people and generate a desired pose that the quadrotor must reach in order to take a well-framed picture of the detected subject. The experimental results and live video demonstrate the capabilities of the autonomous flying robot photographer using the system described throughout this manuscript.


Autonomous Robots | 2013

An open-source navigation system for micro aerial vehicles

Ivan Dryanovski; Roberto G. Valenti; Jizhong Xiao

Collaboration


Dive into Roberto G. Valenti's collaboration.

Top Co-Authors

Jizhong Xiao
City University of New York

Ivan Dryanovski
City University of New York

Carlos Jaramillo
City University of New York

Ling Guo
Nanjing University of Science and Technology