Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ivan Dryanovski is active.

Publication


Featured research published by Ivan Dryanovski.


international conference on robotics and automation | 2013

Fast visual odometry and mapping from RGB-D data

Ivan Dryanovski; Roberto G. Valenti; Jizhong Xiao

An RGB-D camera is a sensor which outputs color and depth information about the scene it observes. In this paper, we present a real-time visual odometry and mapping system for RGB-D cameras. The system runs at frequencies of 30 Hz and higher in a single thread on a desktop CPU, with no GPU acceleration required. We recover the unconstrained 6-DoF trajectory of a moving camera by aligning sparse features observed in the current RGB-D image against a model of previous features. The model is persistent and dynamically updated from new observations using a Kalman Filter. We formulate a novel uncertainty measure for sparse RGB-D features based on a Gaussian mixture model for the filtering stage. Our registration algorithm is capable of closing small-scale loops in indoor environments online, without any additional SLAM back-end techniques.
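The abstract describes keeping a persistent model of sparse 3D features and refining it with a Kalman Filter as new observations arrive. The sketch below is only an illustration of that per-feature update step, not the authors' code: the function name, the direct-position measurement model, and the example covariances are assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): fuse a newly
# observed 3D feature into a stored model feature with a Kalman update.
import numpy as np

def kalman_update_feature(model_mean, model_cov, obs_mean, obs_cov):
    """Blend a new 3D observation into a stored model feature (3-D position state)."""
    S = model_cov + obs_cov                 # innovation covariance (direct measurement)
    K = model_cov @ np.linalg.inv(S)        # Kalman gain
    new_mean = model_mean + K @ (obs_mean - model_mean)
    new_cov = (np.eye(3) - K) @ model_cov
    return new_mean, new_cov

# Example: an uncertain model point refined by a more confident observation.
mu_m, P_m = np.array([1.0, 0.5, 2.0]), np.diag([0.02, 0.02, 0.08])
mu_o, P_o = np.array([1.02, 0.48, 1.95]), np.diag([0.01, 0.01, 0.03])
print(kalman_update_feature(mu_m, P_m, mu_o, P_o))
```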


intelligent robots and systems | 2010

Multi-volume occupancy grids: An efficient probabilistic 3D mapping model for micro aerial vehicles

Ivan Dryanovski; William J. Morris; Jizhong Xiao

Advancing research into autonomous micro aerial vehicle navigation requires data structures capable of representing indoor and outdoor 3D environments. The vehicle must be able to update the map structure in real time using readings from range-finding sensors when mapping unknown areas; it must also be able to look up occupancy information from the map for the purposes of localization and path planning. Mapping models that have been used for these tasks include voxel grids, multi-level surface maps, and octrees. In this paper, we suggest a new approach to 3D mapping using a multi-volume occupancy grid, or MVOG. MVOGs explicitly store information about both obstacles and free space. This allows us to correct previous, potentially erroneous sensor readings by incrementally fusing in new positive or negative sensor information. In turn, this enables extracting more reliable probabilistic information about the occupancy of 3D space. MVOGs outperform existing probabilistic 3D mapping methods in terms of memory usage, because observations are grouped into continuous vertical volumes. We describe the techniques required for mapping using MVOGs, and analyze their performance using indoor and outdoor experimental data.
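To make the multi-volume idea concrete, here is a toy sketch of a 2D cell grid whose cells hold merged vertical volumes for occupied and free space. It is a hypothetical illustration of the data layout only; the class and method names, interval-merging policy, and default thickness are assumptions, and the real MVOG stores probabilistic information that this sketch omits.

```python
# Toy multi-volume occupancy grid: each (i, j) cell keeps vertical
# (z_min, z_max) volumes, and overlapping volumes are merged so memory
# scales with the number of surfaces rather than the number of voxels.
from collections import defaultdict

def add_volume(volumes, new):
    """Insert a vertical (z_min, z_max) interval, merging any overlaps."""
    lo, hi = new
    kept = []
    for a, b in volumes:
        if b < lo or a > hi:            # disjoint: keep as-is
            kept.append((a, b))
        else:                           # overlapping: absorb into the new volume
            lo, hi = min(lo, a), max(hi, b)
    kept.append((lo, hi))
    return sorted(kept)

class MVOG:
    def __init__(self):
        self.occupied = defaultdict(list)   # (i, j) -> [(z_min, z_max), ...]
        self.free = defaultdict(list)

    def insert_hit(self, i, j, z, thickness=0.1):
        self.occupied[(i, j)] = add_volume(self.occupied[(i, j)], (z, z + thickness))

    def insert_miss(self, i, j, z0, z1):
        self.free[(i, j)] = add_volume(self.free[(i, j)], (min(z0, z1), max(z0, z1)))

grid = MVOG()
grid.insert_hit(3, 4, 1.20)
grid.insert_hit(3, 4, 1.25)                 # merges into a single volume (1.20, 1.35)
print(grid.occupied[(3, 4)])
```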


Sensors | 2015

Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs

Roberto G. Valenti; Ivan Dryanovski; Jizhong Xiao

Orientation estimation using low-cost sensors is an important task for Micro Aerial Vehicles (MAVs), as it provides the feedback needed by the attitude controller. The challenges come from the low accuracy and noisy data of the microelectromechanical systems (MEMS) technology that underlies modern, miniaturized inertial sensors. In this article, we describe a novel approach to estimating the orientation, in quaternion form, from observations of gravity and the magnetic field. Our approach provides the quaternion estimate as the algebraic solution of a system built from inertial/magnetic observations. We separate the problems of finding the "tilt" quaternion and the heading quaternion into two sub-parts of our system. This decoupling is key to avoiding the impact of magnetic disturbances on the roll and pitch components of the orientation when the sensor is surrounded by unwanted magnetic flux. We demonstrate the validity of our method first analytically and then empirically using simulated data. We also propose a novel complementary filter for MAVs that fuses gyroscope data with accelerometer and magnetic field readings. The correction part of the filter is based on the method described above and works for both IMU (Inertial Measurement Unit) and MARG (Magnetic, Angular Rate, and Gravity) sensors. We evaluate the effectiveness of the filter and show that it significantly outperforms other common methods, using publicly available datasets with ground-truth data recorded during a real flight experiment with a micro quadrotor helicopter.
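The sketch below illustrates the decoupling idea in its simplest form: a "tilt" quaternion is derived purely from the gravity direction, so magnetometer disturbances cannot leak into roll and pitch. This is not the article's exact algebraic solution; the rotation-between-vectors construction, function name, and example values are assumptions.

```python
# Assumed illustration: tilt quaternion from the accelerometer alone.
# Heading would be recovered separately from the horizontal magnetic field.
import numpy as np

def tilt_quaternion(accel):
    """Quaternion (w, x, y, z) rotating the measured gravity direction onto the world z-axis."""
    a = accel / np.linalg.norm(accel)
    z = np.array([0.0, 0.0, 1.0])
    w = 1.0 + np.dot(a, z)          # degenerate when the sensor is exactly upside down
    xyz = np.cross(a, z)
    q = np.array([w, *xyz])
    return q / np.linalg.norm(q)

# Example: a sensor tilted 30 degrees about the x-axis.
accel = np.array([0.0, -0.5, np.sqrt(3) / 2]) * 9.81
print(tilt_quaternion(accel))
```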


international conference on robotics and automation | 2014

Autonomous quadrotor flight using onboard RGB-D visual odometry

Roberto G. Valenti; Ivan Dryanovski; Carlos Jaramillo; Daniel Perea Strom; Jizhong Xiao

In this paper, we present a navigation system for Micro Aerial Vehicles (MAVs) based on information provided by a visual odometry algorithm processing data from an RGB-D camera. The visual odometry algorithm uses an uncertainty analysis of the depth information to align newly observed features against a global sparse model of previously detected 3D features. The visual odometry provides updates at roughly 30 Hz, which are fused at 1 kHz with the inertial sensor data through a Kalman Filter. The high-rate pose estimate is used as feedback for the controller, enabling autonomous flight. We developed a 4-DoF path planner and implemented a real-time 3D SLAM system, with everything running onboard. The experimental results and live video demonstrate the autonomous flight and 3D SLAM capabilities of the quadrotor with our system.
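The key timing detail is that a slow visual-odometry correction is blended into a fast inertial prediction loop. The following sketch shows that rate structure with a deliberately tiny 1-D position/velocity state; the state choice, noise values, and measurement schedule are assumptions for illustration, not the system's actual filter.

```python
# Assumed sketch: 1 kHz Kalman prediction steps with ~30 Hz VO position updates.
import numpy as np

F = lambda dt: np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity process model
H = np.array([[1.0, 0.0]])                          # VO measures position only

def predict(x, P, dt, q=1e-3):
    A = F(dt)
    return A @ x, A @ P @ A.T + q * np.eye(2)

def update(x, P, z, r=1e-2):
    S = H @ P @ H.T + r                             # innovation covariance
    K = P @ H.T / S                                 # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for k in range(1000):                 # one second of 1 kHz prediction steps
    x, P = predict(x, P, dt=1e-3)
    if k % 33 == 0:                   # roughly 30 Hz visual odometry correction
        x, P = update(x, P, z=0.1 * k * 1e-3)
print(x)
```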


IEEE Transactions on Instrumentation and Measurement | 2016

A Linear Kalman Filter for MARG Orientation Estimation Using the Algebraic Quaternion Algorithm

Roberto G. Valenti; Ivan Dryanovski; Jizhong Xiao

Real-time orientation estimation using low-cost inertial sensors is essential for applications where size and power consumption are critical constraints, including robotics, human motion analysis, and mobile devices. This paper presents a linear Kalman filter for magnetic, angular rate, and gravity (MARG) sensors that processes angular rate, acceleration, and magnetic field data to obtain an orientation estimate in quaternion representation. Acceleration and magnetic field observations are preprocessed through a novel external algorithm, which computes the quaternion orientation as the composition of two algebraic quaternions. The decoupled nature of the two quaternions makes the roll and pitch components of the orientation immune to magnetic disturbances. The external algorithm reduces the complexity of the filter, making the measurement equations linear. A real-time implementation and test results of the Kalman filter are presented and compared against a typical quaternion-based extended Kalman filter and a constant-gain filter based on the gradient-descent algorithm.
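One reason a quaternion filter can stay linear is that, with the gyroscope rate treated as an input, discrete quaternion propagation is linear in the quaternion state, while the algebraically computed quaternion acts as a direct measurement of that state. The sketch below shows only the propagation step; it is an assumption-level illustration, not the paper's derivation, and the step size and rate values are arbitrary.

```python
# Assumed sketch: linear quaternion propagation q_{k+1} = (I + dt/2 * Omega(w)) q_k.
import numpy as np

def omega(w):
    """4x4 quaternion-kinematics matrix for angular rate w = (wx, wy, wz), q = (w, x, y, z)."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate(q, w, dt):
    A = np.eye(4) + 0.5 * dt * omega(w)   # linear transition matrix for this step
    q = A @ q
    return q / np.linalg.norm(q)          # renormalize to keep a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])        # identity orientation
print(propagate(q, w=np.array([0.0, 0.0, 0.1]), dt=0.01))
```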


international conference on robotics and automation | 2011

An open-source pose estimation system for micro-air vehicles

Ivan Dryanovski; William J. Morris; Jizhong Xiao

This paper presents the implementation of an open-source 6-DoF pose estimation system for micro-air vehicles and considers the future implications and benefits of open-source robotics. The system is designed to provide high-frequency pose estimates in unknown, GPS-denied indoor environments. It requires a minimal set of sensors, including a planar laser range-finder and an IMU. The code is optimized to run entirely onboard, so no wireless link or ground station is needed. A major focus of our work is modularity, allowing each component to be benchmarked individually or swapped out for a different implementation without changes to the rest of the system, as the sketch below suggests. We demonstrate how the pose estimation can be used for 2D SLAM or 3D mapping experiments. All the software and hardware we have developed, as well as extensive documentation and test data, are available online.
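This is a hypothetical sketch of the modularity idea only: components share a small interface so a scan matcher (or any other pose source) can be replaced or benchmarked in isolation. The class and method names are invented for illustration and do not reflect the actual open-source code.

```python
# Assumed architectural sketch: swappable pose-source components behind one interface.
from abc import ABC, abstractmethod

class PoseSource(ABC):
    @abstractmethod
    def update(self, sensor_data) -> tuple:
        """Return an incremental (dx, dy, dyaw) estimate from new sensor data."""

class LaserScanMatcher(PoseSource):
    def update(self, scan):
        # Placeholder: a real implementation would align `scan` against the
        # previous scan (e.g. correlative matching or ICP) and return the delta.
        return (0.0, 0.0, 0.0)

def fuse(sources, sensor_frames):
    """Naive composition of per-sensor increments (ignores frame rotation for brevity)."""
    x = y = yaw = 0.0
    for source, frame in zip(sources, sensor_frames):
        dx, dy, dyaw = source.update(frame)
        x, y, yaw = x + dx, y + dy, yaw + dyaw
    return x, y, yaw

print(fuse([LaserScanMatcher()], [None]))
```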


systems, man and cybernetics | 2013

Semantic Indoor Navigation with a Blind-User Oriented Augmented Reality

Samleo L. Joseph; Xiaochen Zhang; Ivan Dryanovski; Jizhong Xiao; Chucai Yi; Yingli Tian

The aim of this paper is to design an inexpensive, wearable navigation system that can aid the navigation of a visually impaired user. A novel approach of utilizing the floor plan maps posted in buildings is used to acquire a semantic plan. The extracted landmarks, such as room numbers and doors, act as parameters to infer waypoints to each room. This provides a mental map of the environment on which to build a navigation framework for future use. A human motion model is used to predict a path based on how real humans ambulate towards a goal while avoiding obstacles. We demonstrate the possibilities of augmented reality (AR) as a blind-user interface for perceiving the physical constraints of the real world using haptic and voice augmentation. The haptic belt vibrates to direct the user towards the travel destination based on metric localization at each step. Moreover, the travel route is presented using voice guidance, which is achieved by accurate estimation of the user's location and confirmed by extracting the landmarks, based on landmark localization. The results show that it is feasible to assist a blind user to travel independently by providing the constraints required for safe navigation with user-oriented augmented reality.


robotics science and systems | 2015

Chisel: Real Time Large Scale 3D Reconstruction Onboard a Mobile Device using Spatially Hashed Signed Distance Fields

Matthew Klingensmith; Ivan Dryanovski; Siddhartha S. Srinivasa; Jizhong Xiao

We describe CHISEL: a system for real-time, house-scale (300 square meters or more) dense 3D reconstruction onboard a Google Tango [1] mobile device, using a dynamic spatially-hashed truncated signed distance field [2] for mapping and visual-inertial odometry for localization. By aggressively culling parts of the scene that do not contain surfaces, we avoid needless computation and wasted memory. Even under very noisy conditions, we produce high-quality reconstructions through the use of space carving. We are able to reconstruct and render very large scenes at a resolution of 2-3 cm in real time on a mobile device without the use of GPU computing. The user is able to view and interact with the reconstruction in real time through an intuitive interface. We provide both qualitative and quantitative results on publicly available RGB-D datasets [3], and on datasets collected in real time from two devices.
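The following is a minimal sketch of a spatially hashed truncated signed distance field, the data structure the abstract names, not CHISEL itself: fixed-size voxel chunks are allocated lazily in a hash map keyed by chunk coordinates, so empty space costs no memory. The chunk size, voxel size, truncation distance, and weighting scheme are example assumptions.

```python
# Assumed sketch of a spatially hashed TSDF with lazy chunk allocation.
import numpy as np

CHUNK = 16            # voxels per chunk side
VOXEL = 0.03          # 3 cm voxels
TRUNC = 0.09          # truncation distance in meters

chunks = {}           # (cx, cy, cz) -> (sdf array, weight array)

def integrate(point, sdf_value):
    """Fuse one truncated SDF sample observed at a world-space point."""
    v = np.floor(np.asarray(point) / VOXEL).astype(int)
    key = tuple(v // CHUNK)                       # which chunk the voxel falls in
    idx = tuple(v % CHUNK)                        # voxel index within that chunk
    if key not in chunks:                         # allocate chunks only where needed
        chunks[key] = (np.zeros((CHUNK,) * 3), np.zeros((CHUNK,) * 3))
    sdf, w = chunks[key]
    d = np.clip(sdf_value, -TRUNC, TRUNC)
    sdf[idx] = (sdf[idx] * w[idx] + d) / (w[idx] + 1)   # weighted running average
    w[idx] += 1

integrate((0.50, 0.20, 1.10), 0.02)
print(len(chunks), "chunk(s) allocated")
```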


international conference on robotics and automation | 2012

Incremental registration of RGB-D images

Ivan Dryanovski; Carlos Jaramillo; Jizhong Xiao

An RGB-D camera is a sensor which outputs range and color information about objects. Recent technological advances in this area have introduced affordable RGB-D devices in the robotics community. In this paper, we present a real-time technique for 6-DoF camera pose estimation through the incremental registration of RGB-D images. First, a set of edge features are computed from the depth and color images. An initial motion estimation is calculated through aligning the features. This initial guess is refined by applying the Iterative Closest Point algorithm on the dense point cloud data. A rigorous error analysis assesses several sets of RGB-D ground truth data via an error accumulation metric. We show that the proposed two-stage approach significantly reduces error in the pose estimation, compared to a state-of-the-art ICP registration technique.
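The two-stage approach described above starts from a coarse rigid transform estimated from matched sparse features and hands it to dense ICP as the initial guess. The sketch below shows only a generic closed-form rigid alignment (Kabsch/SVD) of matched points; treating it as the paper's first stage is an assumption, and the example correspondences are synthetic.

```python
# Assumed sketch: closed-form rigid alignment of matched 3D features,
# usable as an initial guess for a subsequent dense ICP refinement.
import numpy as np

def rigid_align(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch/SVD over matched points)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Example: recover a known 90-degree yaw and a small translation.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.0])
R, t = rigid_align(src, dst)
print(np.round(R, 3), np.round(t, 3))
```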


international conference on multisensor fusion and integration for intelligent systems | 2012

Real-time pose estimation with RGB-D camera

Ivan Dryanovski; William J. Morris; Ravi Kaushik; Jizhong Xiao

An RGB-D camera is a sensor which outputs the distances to objects in a scene in addition to their RGB color. Recent technological advances in this area have introduced affordable devices in the robotics community. In this paper, we present a real-time feature extraction and pose estimation technique using the data from a single RGB-D camera. First, a set of edge features are computed from the depth and color images. The down-sampled point clouds consisting of the feature points are aligned using the Iterative Closest Point algorithm in 3D space. New features are aligned against a model consisting of previous features from a limited number of past scans. The system achieves a 10 Hz update rate running on a desktop CPU, using VGA resolution RGB-D scans.
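The abstract mentions aligning new features against a model built from a limited number of past scans. This is an illustrative sketch of such a bounded, sliding-window feature model; the class name, window size, and interface are assumptions rather than the system's actual design.

```python
# Assumed sketch: a feature model that keeps only the last K registered scans.
from collections import deque
import numpy as np

class FeatureModel:
    def __init__(self, max_scans=5):
        self.scans = deque(maxlen=max_scans)   # the oldest scan drops out automatically

    def add(self, features):
        """features: (N, 3) array of feature points from one registered scan."""
        self.scans.append(np.asarray(features))

    def points(self):
        """All model points the next scan would be aligned against (e.g. with ICP)."""
        return np.vstack(self.scans) if self.scans else np.empty((0, 3))

model = FeatureModel(max_scans=5)
model.add(np.random.rand(100, 3))
print(model.points().shape)
```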

Collaboration


Dive into Ivan Dryanovski's collaborations.

Top Co-Authors

Jizhong Xiao (City University of New York)
Carlos Jaramillo (City University of New York)
Chucai Yi (City University of New York)
Ravi Kaushik (City University of New York)
Samleo L. Joseph (City University of New York)
Xiaochen Zhang (City College of New York)