Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ravi Kaushik is active.

Publication


Featured research published by Ravi Kaushik.


international conference on mechatronics and automation | 2010

Fast planar clustering and polygon extraction from noisy range images acquired in indoor environments

Ravi Kaushik; Jizhong Xiao; Samleo L. Joseph; William J. Morris

This paper presents a novel algorithm to cluster planar points and extract polygons from 3D range images acquired in an indoor environment. The algorithm replaces a large number of data points in the range image with polygons that fit the planar regions of the indoor environment, resulting in high data compression. The 3D range image is acquired by panning a laser scanner, and the data is stored in a 2D array. The elements of the array are stored as spherical coordinates, and the indices of the array retain neighborhood information. The array is segmented into small patches and Hessian plane parameters are computed for each planar patch. We propose a Breadth First Search (BFS) graph-search algorithm that compares the plane parameters of neighboring patches and clusters the coplanar patches into respective planes. Experimental results show a 94.67% average compression rate for indoor scans. In addition, the algorithm shows a vast improvement in speed when compared to an improvised region-growing algorithm that extracts polygons from range images.
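The patch-clustering step described above can be sketched as a breadth-first flood fill over the patch grid, merging neighbors whose Hessian plane parameters (unit normal n and offset d in n·x = d) agree within tolerance. This is a minimal illustration, not the authors' implementation; the grid layout and the two tolerance values are assumptions:

```python
from collections import deque

import numpy as np

def cluster_coplanar_patches(normals, d, angle_tol=0.05, dist_tol=0.02):
    """Cluster grid patches into planes via breadth-first search.

    normals: (H, W, 3) unit normal per patch (Hessian form n.x = d)
    d:       (H, W) plane offsets
    Returns an (H, W) integer label map; patches on the same plane
    share a label.
    """
    H, W = d.shape
    labels = -np.ones((H, W), dtype=int)
    next_label = 0
    for sy in range(H):
        for sx in range(W):
            if labels[sy, sx] != -1:
                continue
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] == -1:
                        # Coplanarity test: normals nearly parallel and
                        # plane offsets close.
                        if (np.dot(normals[y, x], normals[ny, nx]) > 1 - angle_tol
                                and abs(d[y, x] - d[ny, nx]) < dist_tol):
                            labels[ny, nx] = labels[y, x]
                            queue.append((ny, nx))
            next_label += 1
    return labels
```

Because each patch is visited once and compared only to its four grid neighbors, the cost grows with the number of patches rather than the number of raw range points, which is where the speed advantage over point-based region growing comes from.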


international conference on robotics and automation | 2007

Implementation of Bio-Inspired Vestibulo-Ocular Reflex in a Quadrupedal Robot

Ravi Kaushik; Marek Marcinkiewicz; Jizhong Xiao; Simon Parsons; Theodore Raphan

Studies of primate locomotion have shown that the head and eyes are stabilized in space through the vestibulo-collic and vestibulo-ocular reflexes (VCR, VOR). The VOR is a reflex eye movement control system that stabilizes the image on the retina during head movements in space. This stabilization helps keep objects of interest approximately fixed on the retina during locomotion. In this paper we present the design and implementation of an artificial vestibular system, which drives a fully articulated binocular vision system for quadrupedal robots to maintain accurate gaze. The complete robot head has 9 degrees of freedom (DOF): pitch, yaw, and roll for the head and 3 DOF each for the left and right cameras. The Sony AIBO® quadruped robot has been modified with additional hardware to emulate the vestibular system and the vestibulo-ocular reflex in primates.


international conference on multisensor fusion and integration for intelligent systems | 2012

Real-time pose estimation with RGB-D camera

Ivan Dryanovski; William J. Morris; Ravi Kaushik; Jizhong Xiao

An RGB-D camera is a sensor which outputs the distances to objects in a scene in addition to their RGB color. Recent technological advances in this area have introduced affordable devices to the robotics community. In this paper, we present a real-time feature extraction and pose estimation technique using the data from a single RGB-D camera. First, a set of edge features is computed from the depth and color images. The down-sampled point clouds consisting of the feature points are aligned using the Iterative Closest Point algorithm in 3D space. New features are aligned against a model consisting of previous features from a limited number of past scans. The system achieves a 10 Hz update rate running on a desktop CPU, using VGA-resolution RGB-D scans.
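The alignment stage can be illustrated with a bare-bones point-to-point ICP: match each source point to its nearest destination point, solve for the best rigid transform in closed form (SVD/Kabsch), and iterate. This is a generic sketch, not the authors' feature-based pipeline; brute-force matching stands in for the accelerated nearest-neighbor lookups a real-time system would use:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: nearest-neighbor matching,
    then the closed-form (Kabsch/SVD) rigid transform fit."""
    # Nearest neighbor in dst for each src point (brute force for clarity).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best-fit rotation/translation between the matched sets.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Iteratively move the source cloud onto the destination cloud."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```

Restricting the clouds to sparse edge features, as the paper does, shrinks the point count and is what makes the 10 Hz rate reachable; plain ICP like this over full VGA clouds would be far too slow.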


international conference on robotics and automation | 2009

Learning to stabilize the head of a quadrupedal robot with an artificial vestibular system

Marek Marcinkiewicz; Ravi Kaushik; Igor Labutov; Simon Parsons; Theodore Raphan

During quadrupedal robot locomotion, stepping induces pitch, yaw, and roll of the head and body. This head motion adversely affects visual sensors embedded in the robot's head. Mammals stabilize the head using a vestibulocollic reflex that detects linear and rotational acceleration. In this paper we describe the use of a machine learning algorithm that utilizes signals from an artificial vestibular system embedded in the robot's head. Our approach can rapidly learn to compensate for the head movements that appear when no stabilization mechanism is present. On a Sony AIBO robot, stabilization is achieved in only a few gait cycles.
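The learning step can be caricatured with a least-mean-squares update: treat the vestibular signal as the input and adapt a compensation gain online until the residual head motion vanishes. A toy sketch under the assumption of a single scalar gain, not the paper's actual learner:

```python
def learn_compensation(signal, disturbance, lr=0.05, epochs=50):
    """LMS sketch: learn a gain w so that the neck command w * signal
    cancels the measured head disturbance.

    signal:      iterable of vestibular sensor readings
    disturbance: iterable of head motion measured without stabilization
    """
    w = 0.0
    for _ in range(epochs):
        for s, d in zip(signal, disturbance):
            residual = d - w * s   # head motion left after compensation
            w += lr * residual * s # LMS gradient step toward zero residual
    return w
```

Because gait-induced head motion is periodic, a gain learned on a few cycles generalizes to the following ones, which is consistent with the paper's claim of stabilization within a few gait cycles.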


intelligent robots and systems | 2009

3D Laser scan registration of dual-robot system using vision

Ravi Kaushik; Jizhong Xiao; William J. Morris; Zhigang Zhu

This paper presents a novel technique to register a set of two 3D laser scans, obtained from a ground robot and a wall-climbing robot operating on the ceiling, to construct a complete map of the indoor environment. Traditional laser scan registration methods such as the Iterative Closest Point (ICP) algorithm will not converge to a global minimum without a good initial estimate of the transformation matrix. Our technique uses an overhead camera on the wall-climbing robot to keep line of sight with the ground robot and solves the Perspective Three Point (P3P) problem to obtain the transformation matrix between the wall-climbing robot and the ground robot, which serves as a good initial estimate for the ICP algorithm to refine further. We propose a novel particle filter algorithm to identify the real pose of the wall-climbing robot out of the up to four possible solutions to the P3P problem given by Grunert's algorithm. The initial estimate ensures convergence of the ICP algorithm to a global minimum at all times. The simulation and experimental results indicate that the resulting composite laser map is accurate. In addition, the vision-based approach increases efficiency by reducing the number of iterations of the ICP algorithm.
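The disambiguation idea can be illustrated far more simply than a full particle filter: because the robot moves only a short distance between scans, the genuine P3P solution is the candidate consistent with the motion model, while the spurious solutions jump around. A single-hypothesis gating sketch (the function name, the `max_step` gate, and the single-hypothesis simplification are mine, not the paper's):

```python
import numpy as np

def disambiguate_p3p(candidate_seqs, max_step):
    """Pick, at each time step, the P3P candidate closest to the
    previous estimate, rejecting candidates implying impossible jumps.

    candidate_seqs: list over time of (k_t, 3) arrays of candidate
    positions (Grunert's P3P method returns up to four per scan).
    Assumes the first scan is already disambiguated and the true pose
    moves less than max_step between scans.
    """
    est = [candidate_seqs[0][0]]
    for cands in candidate_seqs[1:]:
        d = np.linalg.norm(cands - est[-1], axis=1)
        best = d.argmin()
        if d[best] > max_step:
            raise ValueError("no candidate consistent with motion model")
        est.append(cands[best])
    return np.array(est)
```

A particle filter generalizes this by carrying a weighted population of hypotheses instead of one, which is more robust when two candidates occasionally lie close together.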


Robotics and Autonomous Systems | 2012

Accelerated patch-based planar clustering of noisy range images in indoor environments for robot mapping

Ravi Kaushik; Jizhong Xiao

This paper introduces a methodology to cluster noisy range images, acquired in indoor environments, into planar regions. The noisy range images are segmented based on a Gaussian similarity metric, which compares the geometric attributes that satisfy the coplanarity conditions. The algorithm is designed to cluster coplanar noisy range data by means of patch-based sampling from range images. We discuss the advantages of patch-based clustering over point-based clustering of noisy range images: it eliminates computational redundancy to accelerate the clustering process while keeping the segmentation error to a minimum. The final output of the algorithm is a set of polygons, where each polygon is defined by a set of boundary points that replaces a large number of coplanar data points in a given planar region. The 3D range image is acquired by a rotating 2D range scanner and stored in a 2D array. Each element in the array explicitly stores the range distance; the indices of the array implicitly retain neighborhood and angular information. The array is grouped into mutually exclusive patches of size (k × k) and the Hessian plane parameters are computed for each patch. We propose a graph-search algorithm that compares the plane parameters of neighboring patches by searching breadth-wise and clusters the coplanar patches into respective planes. We compare the proposed Patch-based Plane Clustering (PPC) algorithm with the point-based Region Growing (RG) algorithm and the RANSAC plane segmentation method to analyze the performance of each algorithm in terms of speed and accuracy. Experimental results indicate that the PPC algorithm shows a significant improvement in computational speed over state-of-the-art segmentation algorithms while maintaining high accuracy in segmenting noisy range images.
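The Gaussian similarity metric can be sketched for two patches in Hessian normal form (n·x = d): score the angular deviation of the normals and the difference of the offsets with Gaussian kernels and multiply. The kernel widths below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_similarity(n1, d1, n2, d2, sigma_n=0.1, sigma_d=0.05):
    """Gaussian similarity between two patches in Hessian form (n.x = d).

    Combines the angle between the unit normals and the difference of
    the plane offsets into a score in (0, 1]; coplanar patches score
    near 1, patches on different planes near 0.
    """
    angle = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return np.exp(-(angle / sigma_n) ** 2) * np.exp(-((d1 - d2) / sigma_d) ** 2)
```

Thresholding such a score during the breadth-wise patch comparison gives a soft coplanarity test that tolerates sensor noise better than exact equality of plane parameters.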


robotics and biomimetics | 2010

Polygon-based laser scan registration by heterogeneous robots

Ravi Kaushik; Jizhong Xiao; Samleo L. Joseph; William J. Morris

This paper presents an algorithm to register two laser scans acquired by dual heterogeneous robots in a structured indoor environment. The dual-robot system consists of a ground robot and a wall-climbing robot equipped with a 3D range scanner and a perspective camera. Both robots alternately step in tandem and stop to acquire a panoramic range image. At each step, the range images from the two robots are registered together and the relative pose of the two robots is updated with respect to the world coordinate frame. An initial estimate of the relative pose between the two range images is computed using a camera pose estimation algorithm with the aid of the camera on the wall-climbing robot. The pose estimate is further refined by a laser scan registration algorithm. This novel algorithm registers two sets of polygons extracted from overlapping range images. The experimental results indicate that the algorithm is robust when fusing noisy range images acquired in structured indoor environments.


international conference on control, automation, robotics and vision | 2008

3D map construction using heterogeneous robots

Ravi Kaushik; Yi Feng; William J. Morris; Jizhong Xiao; Zhigang Zhu

This paper presents a novel method to construct a complete 3D map that includes all surfaces (ceiling, walls, furniture tops, etc.) in indoor environments. A team of four robots, three ground robots and one wall-climbing robot, is deployed in a tetrahedron configuration that satisfies the perspective three point (P3P) problem. The P3P problem is to estimate the pose of a perspective camera on the wall-climbing robot viewing three ground robots; it yields up to four solutions under Grunert's algorithm, only one of which is genuine. We propose a probabilistic Bayesian algorithm that identifies the unique solution of the P3P problem using the mobility of the camera. Based on this technique, we introduce an inter-robot localization method to determine the geometric relationship among the four robots. Each ground robot is equipped with a rotary laser range finder (LRF), a pan-tilt-zoom camera, and an LED cluster. The wall-climbing robot is fitted with an LRF, a perspective camera, and a motion sensor. Through the vision sensors, the robots obtain their relative poses by solving the P3P problem. Through the LRF on each robot, four laser point-cloud maps are produced, one from each robot's point of view. With the relative poses of the multiple robots and the calibration data of each LRF and camera pair, the four partial maps are fused into a complete 3D map rich with information about all surfaces. Our approach outperforms traditional range image fusion algorithms in terms of time complexity and is suitable for real-time implementation. Real experiments verified the effectiveness of the method.


international conference on control, automation, robotics and vision | 2008

Combining linear vestibulo-ocular and opto-kinetic reflexes in a humanoid robot

Igor Labutov; Ravi Kaushik; Marek Marcinkiewicz; Jizhong Xiao; Simon Parsons; Theodore Raphan

The angular vestibulo-ocular and opto-kinetic reflexes (aVOR and OKR) combine to compensate for head rotations in space and help maintain a steady image on the retina. We previously implemented an artificial angular vestibulo-ocular reflex with a fully articulated binocular control system in a quadruped robot head. In this paper, we describe the implementation of artificial opto-kinetic and linear vestibulo-ocular reflexes (OKR and lVOR) that use inputs from an artificial vestibular sensor and a binocular camera system to compensate for linear movements of the head and for visual motion, stabilizing images on the cameras. The object tracking algorithm was able to fixate a stationary object in the camera's field of view during linear perturbations of the robot's head in space at low frequencies of movement (0.2-0.6 Hz), simulating the linear VOR. We implemented an algorithm that combines the linear VOR and OKR models and computes changes in the relative pose of the cameras with respect to the tracked object. The system provides compensatory angular movements of the Ocular Servo Module (OSM) to stabilize images as the robot is moved laterally.
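The geometric core of the linear VOR is simple: unlike the angular VOR, the required compensatory rotation depends on how far away the target is. A sketch of that relation under an idealized pinhole geometry (not the paper's controller):

```python
import math

def lvor_pan(head_offset, target_distance):
    """Compensatory camera pan (radians) after a lateral head
    translation: shifting the head x metres sideways relative to a
    target d metres ahead requires panning atan(x / d) back toward it.
    """
    return math.atan2(head_offset, target_distance)
```

Note the distance dependence: the same 10 cm head shift demands roughly twice the pan for a target at 0.5 m as for one at 1 m. This is why the lVOR, unlike the aVOR, needs an estimate of target distance, which a binocular camera system can supply.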


international conference on robotics and automation | 2012

Polygon-based 3D scan registration with dual-robots in structured indoor environments

Ravi Kaushik; Jizhong Xiao; Samleo L. Joseph; William J. Morris

Collaboration


Dive into Ravi Kaushik's collaborations.

Top Co-Authors

Jizhong Xiao
City University of New York

Samleo L. Joseph
City University of New York

Theodore Raphan
City University of New York

Igor Labutov
City University of New York

Zhigang Zhu
City College of New York

Ivan Dryanovski
City University of New York

Yi Feng
City University of New York