
Publication


Featured research published by Rajive Joshi.


Systems, Man, and Cybernetics | 1999

Minimal representation multisensor fusion using differential evolution

Rajive Joshi; Arthur C. Sanderson

Fusion of information from multiple sensors is required for planning and control of robotic systems in complex environments. The minimal representation approach is based on an information measure as a universal yardstick for fusion and provides a framework for integrating information from a variety of sources. In this paper, we describe the principles of minimal representation multisensor fusion and evaluate a differential evolution approach to the search for solutions. Experiments in robot manipulation using both tactile and visual sensing demonstrate that this algorithm is effective in finding useful and practical solutions to this problem for real systems. Comparison of this differential evolution algorithm with more traditional genetic algorithms shows distinct advantages in both accuracy and efficiency.
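The differential evolution search described above can be sketched as follows. This is a minimal DE/rand/1/bin loop with a toy quadratic cost standing in for the minimal-representation objective; the population size, mutation factor `F`, and crossover rate `CR` are illustrative defaults, not the paper's settings.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, seed=0):
    """Minimal DE/rand/1/bin sketch: evolve a population of candidate
    parameter vectors toward the minimum of `cost`."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # mutation: combine three distinct members other than i
            choices = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(choices, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, forcing at least one mutant gene through
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # greedy selection: keep the trial only if it is no worse
            f = cost(trial)
            if f <= fitness[i]:
                pop[i], fitness[i] = trial, f
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Toy objective: squared error from a made-up 3-parameter "pose".
best_x, best_f = differential_evolution(
    lambda x: float(np.sum((x - 0.5) ** 2)), bounds=[(-2, 2)] * 3)
```

The greedy one-to-one replacement is what gives DE its simplicity relative to generational genetic algorithms: a trial vector only ever displaces its own parent.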


International Conference on Robotics and Automation | 1994

Model-based multisensor data fusion: a minimal representation approach

Rajive Joshi; Arthur C. Sanderson

A general approach to model-based multisensor data fusion using a minimal representation size criterion is described. Each sensor is modeled by a general constraint equation which defines a data constraint manifold (DCM), and observed sensor data populate the measurement space according to these constraints. The choice of multisensor interpretations is based on a minimal representation size criterion which evaluates the complexity through correspondence and encoded errors weighted by relative sensor accuracy and precision. This general framework automatically selects subsets of data features called constraining data feature sets (CDFS) and chooses the CDFS corresponding to a minimal representation interpretation of the observed data. The resulting procedure fuses heterogeneous sensor readings into a single estimation method. The method is illustrated for a visual and tactile data fusion example. The approach generalizes to problems with non-geometric models, and can be used for multisensor system identification in other domains such as process control.
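As a rough illustration of the minimal representation size idea (not the paper's exact encoding), each candidate interpretation can be scored as model-parameter bits plus bits to encode its residual errors, and the interpretation with the smallest total wins. All numbers below are hypothetical.

```python
import numpy as np

def representation_size(residuals, n_params, bits_per_param=8.0, sigma=1.0):
    """Illustrative description length in bits: cost of the model parameters
    plus a Gaussian code length for the residual (encoded) errors."""
    model_bits = n_params * bits_per_param
    error_bits = float(np.sum(0.5 * (np.asarray(residuals) / sigma) ** 2)
                       / np.log(2))
    return model_bits + error_bits

# Two hypothetical interpretations of the same observed data: the simple
# model fits poorly, so the complex model wins despite its parameter cost.
candidates = {
    "simple (2 params)":  ([4.0, -5.0, 4.5], 2),
    "complex (4 params)": ([0.1, -0.1, 0.05], 4),
}
best = min(candidates, key=lambda k: representation_size(*candidates[k]))
```

The trade-off runs both ways: with small residuals everywhere, the cheaper parameter cost of the simple model would win instead, which is how the criterion guards against over-fitting.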


International Conference on Robotics and Automation | 1993

Shape matching from grasp using a minimal representation size criterion

Rajive Joshi; Arthur C. Sanderson

A robust polynomial-time algorithm for model-based pose estimation from tactile grasp data is presented. A minimal representation criterion is used to formulate the matching problem as a global optimization. The hypothesize-and-test paradigm is invoked to search for the optimal solution. A three-on-three match of model features to data features is used to reduce the transform search space to polynomial size. A polynomial-time assignment algorithm is used to compute the optimal correspondence for each hypothesized pose. The strength of the algorithm lies in its ability to perform with noisy or incomplete data and with missing or spurious features. It is capable of rejecting outliers and finding partial matches, and it produces sensible results even in the presence of large noise. The algorithm has been implemented and tested on actual data.
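The hypothesize-and-test structure can be sketched in a simplified 2D analogue (the paper works in 3D with three-on-three feature hypotheses; here two-point hypotheses suffice to fix a planar rigid transform, and the optimal correspondence is found by brute force rather than a polynomial-time assignment algorithm). All data below are synthetic.

```python
import itertools
import numpy as np

def pose_from_pair(m0, m1, d0, d1):
    """Rigid 2D transform (R, t) hypothesized from one model/data pair match."""
    vm, vd = m1 - m0, d1 - d0
    th = np.arctan2(vd[1], vd[0]) - np.arctan2(vm[1], vm[0])
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return R, d0 - R @ m0

def assignment_cost(model, data, R, t):
    """Optimal model-to-data correspondence cost under a hypothesized pose
    (exhaustive assignment; fine for a handful of features)."""
    proj = model @ R.T + t
    return min(sum(np.linalg.norm(proj[i] - data[p[i]]) for i in range(len(model)))
               for p in itertools.permutations(range(len(data))))

# Synthetic example: data is the model rotated by 0.5 rad and translated.
model = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
th = 0.5
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
data = model @ R_true.T + np.array([2.0, -1.0])

# Hypothesize-and-test: try every model-pair / data-pair correspondence,
# keep the pose whose optimal assignment cost is lowest.
best_cost = min(
    assignment_cost(model, data, *pose_from_pair(model[i], model[j],
                                                 data[k], data[l]))
    for i, j in itertools.permutations(range(4), 2)
    for k, l in itertools.permutations(range(4), 2))
```

Enumerating feature-pair (or, in 3D, feature-triple) hypotheses is what bounds the transform search space polynomially: the continuous pose space is replaced by a finite set of data-derived candidates.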


International Conference on Robotics and Automation | 1996

Application of feature-based multi-view servoing for lamp filament alignment

Rajive Joshi; Arthur C. Sanderson

This paper presents an application of feature-based visual servoing to achieve accurate and robust 3D filament alignment. Two orthogonal cameras are used to localize the five degrees of freedom of an axi-symmetric filament. An algorithm for precisely estimating the center and orientation features of a filament in a camera view is described. The use of feature-based servoing overcomes the difficulty of 3D camera calibration in a factory environment. These features drive a PID control loop for each view. The two views are coupled in one degree of freedom, and the multiple-view controller switches between two single-view controllers to achieve 3D servoing. The overall accuracy achieved is in thousandths of an inch. Experimental results on the performance of the control algorithm are discussed. Models explaining the system behavior are presented. The use of visual servoing in this application far exceeds the positioning accuracy and the repeatability of human operators.
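A single-view feature-error PID loop of the kind described can be sketched as below. The plant here is a toy first-order stage driven by velocity commands, and all gains and values are made up for illustration, not taken from the paper.

```python
class PID:
    """Textbook PID controller (illustrative gains, not the paper's)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy single-view loop: the image-feature error drives a velocity command
# that moves the filament toward the target feature position.
target, pos, dt = 5.0, 0.0, 0.01
pid = PID(kp=2.0, ki=2.0, kd=0.05, dt=dt)
for _ in range(2000):                      # 20 s of simulated servoing
    pos += pid.update(target - pos) * dt
```

Because the controller acts directly on image-feature error rather than on a reconstructed 3D position, an accurate camera calibration is not needed: the loop converges as long as the sign and rough scale of the feature-to-motion mapping are right.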


International Conference on Advanced Robotics | 1997

Experimental studies on minimal representation multisensor fusion

Rajive Joshi; Arthur C. Sanderson

We describe laboratory experiments in which tactile data from the fingertips of a robot hand holding an object in front of a calibrated camera are fused with vision data from the camera to determine the object identity, the pose, and the touch and vision data correspondences. The touch data are incomplete due to the required hand configurations, while nearly half of the vision data are spurious due to the presence of the hand in the image. Using either sensor alone results in ambiguous or incorrect interpretations. A minimal representation size framework is used to formulate the multisensor fusion problem; it automatically selects the object class, the correspondence (data subsamples), and the pose parameters. The experiments demonstrate that it consistently finds the correct interpretation, and that it is a practical method for multisensor fusion and model selection.
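One way to see how a representation-size criterion tolerates spurious data (a loose sketch, not the paper's encoding): each datum is coded either as a Gaussian residual under the hypothesized interpretation or as an unexplained outlier at a fixed cost, and the interpretation with the smaller total wins even when some vision features are spurious. All residual values below are hypothetical.

```python
import numpy as np

def mdl_score(residuals, sigma=0.1, outlier_bits=16.0):
    """Per-datum cost in bits: encode as a Gaussian inlier residual or as an
    unexplained (spurious) datum at a fixed cost, whichever is cheaper."""
    r = np.abs(np.asarray(residuals, dtype=float))
    inlier_bits = 0.5 * (r / sigma) ** 2 / np.log(2)
    return float(np.minimum(inlier_bits, outlier_bits).sum())

# Hypothetical residuals under two candidate poses: the correct pose explains
# most features well (one spurious point), the wrong pose explains none.
correct_pose_residuals = [0.02, 0.05, 3.0, 0.01]   # 3.0 = spurious feature
wrong_pose_residuals = [1.5, 2.0, 0.8, 2.5]
correct_score = mdl_score(correct_pose_residuals)
wrong_score = mdl_score(wrong_pose_residuals)
```

Capping each datum's cost at the outlier price is what makes the score robust: a single spurious feature adds a bounded penalty instead of dominating the fit.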


Intelligent Robots and Systems | 1997

Multisensor fusion of touch and vision using minimal representation size

Rajive Joshi; Arthur C. Sanderson

Multisensor fusion has emerged as a central problem in the development of robotic systems where interaction with the environment is critical to the achievement of a given task. The Anthrobot five-fingered hand grasps an object and senses the contact points with the surface of the object using tactile sensors. The tactile sensors extract the touch position and approximate surface normal in the kinematic reference frame of the hand. In addition, a CCD camera views the position of the same object and extracts vertex/edge features of the object image. Both the tactile features and the visual features are related to the position and orientation of the object, and in practice we wish to combine these two sources of information to improve the robot's ability to accurately manipulate the object. The fusion of the tactile and image feature data is used to derive an improved estimate of the object pose which guides the manipulation.
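A standard inverse-variance fusion (a stand-in here for the paper's minimal-representation machinery, shown only for intuition) illustrates why combining the two feature streams improves the pose estimate: the fused variance is smaller than either sensor's alone. The numbers are hypothetical.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Hypothetical 1-D pose coordinate: touch is coarse, vision is finer.
touch_est, touch_var = 10.3, 0.50
vision_est, vision_var = 10.0, 0.10
pose, pose_var = fuse(touch_est, touch_var, vision_est, vision_var)
```

The fused estimate lands closer to the more precise sensor, yet even the coarse touch data tightens the result, which is the basic payoff of fusing the two modalities.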


Archive | 1999

Multisensor Fusion: A Minimal Representation Framework

Rajive Joshi; Arthur C. Sanderson


International Conference on Robotics and Automation | 1995

Multisensor fusion and unknown statistics

Rajive Joshi; Arthur C. Sanderson


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 1996

Multisensor fusion and model selection using a minimal representation size framework

Rajive Joshi; Arthur C. Sanderson


Archive | 1999

Minimal Representation Multisensor Fusion and Model Selection

Rajive Joshi; Arthur C. Sanderson

Collaboration


Rajive Joshi's collaborations.

Top Co-Authors


Arthur C. Sanderson

Rensselaer Polytechnic Institute
