

Publication


Featured research published by Max Bajracharya.


Unmanned ground vehicle technology conference | 2003

CLARAty: an architecture for reusable robotic software

Issa A. D. Nesnas; Anne Wright; Max Bajracharya; Reid G. Simmons; Tara Estlin; Won S. Kim

In this article, we present an overview of the Coupled Layered Architecture for Robotic Autonomy (CLARAty). CLARAty provides a framework for generic and reusable robotic components that can be adapted to a number of heterogeneous robot platforms. It also simplifies the integration of new technologies and enables the comparison of various elements. CLARAty consists of two distinct layers: a Functional Layer and a Decision Layer. The Functional Layer defines the various abstractions of the system and adapts the abstract components to real or simulated devices. It provides a framework and the algorithms for low- and mid-level autonomy. The Decision Layer provides the system's high-level autonomy, which reasons about global resources and mission constraints. The Decision Layer accesses information from the Functional Layer at multiple levels of granularity. We also present some of the challenges in developing interoperable software for various rover platforms, with examples from the locomotion and manipulation domains.


IEEE Computer | 2008

Autonomy for Mars Rovers: Past, Present, and Future

Max Bajracharya; Mark W. Maimone; Daniel M. Helmick

The vehicles used to explore the Martian surface require a high degree of autonomy to navigate challenging and unknown terrain, investigate targets, and detect scientific events. Increased autonomy will be critical to the success of future missions. In July 1997, as part of NASA's Mars Pathfinder mission, the Sojourner rover became the first spacecraft to autonomously drive on another planet. The twin Mars Exploration Rover (MER) vehicles landed in January 2004, and after four years Spirit had driven more than four miles and Opportunity more than seven miles, lasting well past their projected three-month lifetime and expected distances traveled. The newest member of the Mars rover family will have the ability to autonomously approach and inspect a target and automatically detect interesting scientific events. In fall 2009, NASA plans to launch the Mars Science Laboratory (MSL) rover, with a primary mission of two years of surface exploration and the ability to acquire and process rock samples. In the near future, the Mars Sample Return (MSR) mission, a cooperative project of NASA and the European Space Agency, will likely use a lightweight rover to drive out and collect samples and bring them back to an Earth return vehicle. This rover will use an unprecedented level of autonomy because of the limited lifetime of a return rocket on the Martian surface and the desire to obtain samples from distant crater walls.


intelligent robots and systems | 2003

CLARAty and challenges of developing interoperable robotic software

Issa A. D. Nesnas; Anne Wright; Max Bajracharya; Reid Simmons; Tara Estlin

We present an overview of the Coupled Layered Architecture for Robotic Autonomy. CLARAty provides a framework for generic and reusable robotic components that can be adapted to a number of heterogeneous robot platforms. It also simplifies the integration of new technologies and enables the comparison of various elements. CLARAty consists of two distinct layers: a functional layer and a decision layer. The functional layer defines the various abstractions of the system and adapts the abstract components to real or simulated devices. It provides a framework and the algorithms for low- and mid-level autonomy. The decision layer provides the system's high-level autonomy, which reasons about global resources and mission constraints. The decision layer accesses information from the functional layer at multiple levels of granularity. We also present some of the challenges in developing interoperable software for various rover platforms.


The International Journal of Robotics Research | 2009

A Fast Stereo-based System for Detecting and Tracking Pedestrians from a Moving Vehicle

Max Bajracharya; Baback Moghaddam; Andrew W. Howard; Shane Brennan; Larry H. Matthies

In this paper we describe a fully integrated system for detecting, localizing, and tracking pedestrians from a moving vehicle. The system can reliably detect upright pedestrians to a range of 40 m in lightly cluttered urban environments. The system uses range data from stereo vision to segment the scene into regions of interest, from which shape features are extracted and used to classify pedestrians. The regions are tracked using shape and appearance features. Tracking is used to temporally filter classifications to improve performance and to estimate the velocity of pedestrians for use in path planning. The end-to-end system runs at 5 Hz on 1,024 × 768 imagery using a standard 2.4 GHz Intel Core 2 Quad processor, and has been integrated and tested on multiple ground vehicles and environments. We show performance on a diverse set of datasets with groundtruth in outdoor environments with varying degrees of pedestrian density and clutter. In highly cluttered urban environments, the detection rates are on a par with state-of-the-art but significantly slower systems.
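The abstract above describes temporally filtering per-frame classifications over a track to suppress single-frame errors. A minimal sketch of that idea, assuming an exponential moving average over classifier confidences (the paper does not specify the exact filter; the class, parameter names, and thresholds below are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """One tracked region of interest produced by stereo segmentation."""
    scores: list = field(default_factory=list)  # per-frame classifier confidences in [0, 1]

    def filtered_score(self, alpha=0.7):
        """Exponential moving average of per-frame scores.

        Smoothing over the track suppresses isolated misclassifications,
        in the spirit of the track-level temporal filtering described above.
        """
        s = 0.0
        for x in self.scores:
            s = alpha * s + (1 - alpha) * x
        return s

    def is_pedestrian(self, threshold=0.5):
        return self.filtered_score() > threshold
```

A consistently high-scoring track crosses the threshold, while a single spurious high-confidence frame in an otherwise low-scoring track does not.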


Advanced Robotics | 2006

Slip-compensated path following for planetary exploration rovers

Daniel M. Helmick; Stergios I. Roumeliotis; Yang Cheng; Daniel S. Clouse; Max Bajracharya; Larry H. Matthies

A system that enables continuous slip compensation for a Mars rover has been designed, implemented and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage and reach intended goals in high-slip environments. These components include visual odometry, vehicle kinematics, a Kalman filter pose estimator and a slip-compensated path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The kinematics for a rocker–bogie suspension system estimates vehicle motion by measuring wheel rates, and rocker, bogie and steering angles. The Kalman filter processes measurements from an inertial measurement unit and visual odometry. The filter estimate is then compared to the kinematic estimate to determine whether slippage has occurred, taking into account estimate uncertainties. If slippage is detected, the slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.
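The core slip computation described above, differencing the Kalman filter pose estimate from the kinematic estimate, can be sketched in planar (x, y) form. This is an illustration only: the real system works in full pose space and tests slip against the estimate uncertainties, which the fixed threshold below merely stands in for.

```python
def estimate_slip(kalman_pose, kinematic_pose, threshold=0.02):
    """Difference the filter and kinematic (x, y) estimates (meters).

    Returns the slip vector and whether its magnitude exceeds the
    threshold, a stand-in for the covariance-aware test in the paper.
    """
    slip = (kalman_pose[0] - kinematic_pose[0],
            kalman_pose[1] - kinematic_pose[1])
    slipped = (slip[0] ** 2 + slip[1] ** 2) ** 0.5 > threshold
    return slip, slipped

def compensated_goal(path_goal, slip):
    """Shift the commanded goal against the measured slip so the path
    follower steers the rover back toward the designated path (one of
    several possible compensation schemes; shown for illustration)."""
    return (path_goal[0] - slip[0], path_goal[1] - slip[1])
```

The compensated goal then feeds the computation of wheel velocities and steering angles.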


Journal of Field Robotics | 2015

Mobile Manipulation and Mobility as Manipulation: Design and Algorithms of RoboSimian

Paul Hebert; Max Bajracharya; Jeremy Ma; Nicolas Hudson; Alper Aydemir; Jason Reid; Charles F. Bergh; James Borders; Matthew Frost; Michael Hagman; John Leichty; Paul G. Backes; Brett Kennedy; Paul Karplus; Brian W. Satzinger; Katie Byl; Krishna Shankar; Joel W. Burdick

This article presents the hardware design and software algorithms of RoboSimian, a statically stable quadrupedal robot capable of both dexterous manipulation and versatile mobility in difficult terrain. The robot has generalized limbs and hands capable of mobility and manipulation, along with almost fully hemispherical three-dimensional sensing with passive stereo cameras. The system is semiautonomous, enabling low-bandwidth, high-latency control from a standard laptop. Because the limbs are used for both mobility and manipulation, a single unified mobile manipulation planner generates autonomous behaviors, including walking, sitting, climbing, grasping, and manipulating. The remote operator interface is optimized to designate, parametrize, sequence, and preview behaviors, which are then executed by the robot. RoboSimian placed fifth in the DARPA Robotics Challenge Trials, demonstrating its ability to perform disaster recovery tasks in degraded human environments.


international conference on robotics and automation | 2012

End-to-end dexterous manipulation with deliberate interactive estimation

Nicolas Hudson; Thomas M. Howard; Jeremy Ma; Abhinandan Jain; Max Bajracharya; Steven Myint; Calvin Kuo; Larry H. Matthies; Paul G. Backes; Paul Hebert; Thomas J. Fuchs; Joel W. Burdick

This paper presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation (ARM) program. The developed autonomy system uses robot, object, and environment models to identify and localize objects, as well as to plan and execute required manipulation tasks. Deliberate interaction with objects and the environment increases system knowledge about the combined robot and environmental state, enabling high-precision tasks such as key insertion to be performed in a consistent framework. This approach has been demonstrated across a wide range of manipulation tasks, and in independent DARPA testing achieved the most successfully completed tasks with the fastest average task execution of any evaluated team.


international conference on robotics and automation | 2012

Robust multi-sensor, day/night 6-DOF pose estimation for a dynamic legged vehicle in GPS-denied environments

Jeremy Ma; Sara Susca; Max Bajracharya; Larry H. Matthies; Matthew Malchano; David Wooden

We present a real-time system that enables a highly capable dynamic quadruped robot to maintain an accurate 6-DOF pose estimate (better than 0.5m over every 50m traveled) over long distances traversed through complex, dynamic outdoor terrain, during day and night, in the presence of camera occlusion and saturation, and occasional large external disturbances, such as slips or falls. The system fuses a stereo-camera sensor, inertial measurement units (IMU), and leg odometry with an Extended Kalman Filter (EKF) to ensure robust, low-latency performance. Extensive experimental results obtained from multiple field tests are presented to illustrate the performance and robustness of the system over hours of continuous runs over hundreds of meters of distance traveled in a wide variety of terrains and conditions.
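The fusion structure described above (leg odometry driving the filter's prediction, visual odometry correcting it) can be illustrated with a scalar Kalman predict/update cycle. The paper's filter is a full 6-DOF EKF over pose; the one-dimensional version below only shows the fusion arithmetic, and all variable names are illustrative.

```python
def kf_fuse(x, P, u, Q, z, R):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    u, Q : proprioceptive (leg-odometry) motion increment and noise variance
    z, R : exteroceptive (visual-odometry) measurement and noise variance
    """
    # Predict: propagate state with odometry, inflate uncertainty.
    x, P = x + u, P + Q
    # Update: blend in the measurement via the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P
```

Fusing a noisy measurement always reduces the posterior variance below both the predicted variance and the measurement variance, which is what gives the combined estimate its robustness.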


international conference on robotics and automation | 2008

Gamma-SLAM: Using stereo vision and variance grid maps for SLAM in unstructured environments

Tim K. Marks; Andrew Howard; Max Bajracharya; Garrison W. Cottrell; Larry H. Matthies

We introduce a new method for stereo visual SLAM (simultaneous localization and mapping) that works in unstructured, outdoor environments. Unlike other grid-based SLAM algorithms, which use occupancy grid maps, our algorithm uses a new mapping technique that maintains a posterior distribution over the height variance in each cell. This idea was motivated by our experience with outdoor navigation tasks, which has shown height variance to be a useful measure of traversability. To obtain a joint posterior over poses and maps, we use a Rao-Blackwellized particle filter: the pose distribution is estimated using a particle filter, and each particle has its own map that is obtained through exact filtering conditioned on the particle's pose. Visual odometry provides good proposal distributions for the particle pose. In the analytical (exact) filter for the map, we update the sufficient statistics of a gamma distribution over the precision (inverse variance) of heights in each grid cell. We verify the algorithm's accuracy on two outdoor courses by comparing with ground truth data obtained using electronic surveying equipment. In addition, we solve for the optimal transformation from the SLAM map to georeferenced coordinates, based on a noisy GPS signal. We derive an online version of this alignment process, which can be used to maintain a running estimate of the robot's global position that is much more accurate than the GPS readings.
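The per-cell gamma update mentioned above can be sketched with the standard conjugate update for the precision of Gaussian observations. This sketch assumes the cell's height mean is known (e.g. tracked separately), which makes the gamma prior conjugate; the class and parameter names are illustrative, not taken from the paper.

```python
class GammaCell:
    """Gamma(alpha, beta) posterior over the precision (inverse variance)
    of stereo height measurements falling in one grid cell."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta

    def update(self, heights, mean):
        # Conjugate update of the sufficient statistics: half the number
        # of observations goes to alpha, half the sum of squared
        # deviations from the (assumed known) mean goes to beta.
        n = len(heights)
        ssd = sum((h - mean) ** 2 for h in heights)
        self.alpha += n / 2.0
        self.beta += ssd / 2.0

    def expected_variance(self):
        # Posterior mean of the variance (inverse-gamma), for alpha > 1;
        # high values flag rough, hard-to-traverse cells.
        return self.beta / (self.alpha - 1.0)
```

Each new stereo frame thus refines the cell's traversability estimate with a constant-size update, which is what makes the exact per-particle map filter cheap.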


intelligent robots and systems | 2005

Slip compensation for a Mars rover

Daniel M. Helmick; Yang Cheng; Daniel S. Clouse; Max Bajracharya; Larry H. Matthies; Stergios I. Roumeliotis

A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.

Collaboration


Dive into Max Bajracharya's collaborations.

Top Co-Authors

Paul G. Backes, California Institute of Technology
Daniel M. Helmick, California Institute of Technology
Jeremy Ma, Jet Propulsion Laboratory
Issa A. D. Nesnas, California Institute of Technology
Nicolas Hudson, California Institute of Technology
Andrew Howard, University of California
Paul Hebert, California Institute of Technology
Joel W. Burdick, California Institute of Technology