Publication


Featured research published by Jonathan Weisz.


International Conference on Robotics and Automation | 2012

Pose error robust grasping from contact wrench space metrics

Jonathan Weisz; Peter K. Allen

Grasp quality metrics that analyze the contact wrench space are commonly used to synthesize and analyze preplanned grasps. Preplanned grasping approaches rely on the robustness of stored solutions, and analyzing the robustness of those solutions across large databases of preplanned grasps is a limiting factor for the applicability of data-driven approaches to grasping. In this work, we focus on the stability of the widely used grasp wrench space epsilon quality metric over a large range of poses in simulation. We examine a large number of grasps from the Columbia Grasp Database for the Barrett hand. We find that in most cases the grasp with the most robust force closure with respect to pose error for a particular object is not the grasp with the highest epsilon quality. We demonstrate that grasps can be reranked by an estimate of the stability of their epsilon quality, and we find that the grasps ranked best by this method succeed more often in physical experiments than the grasps ranked best by the epsilon quality alone.
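
The epsilon metric discussed above is the Ferrari-Canny measure: the radius of the largest origin-centered ball that fits inside the convex hull of the contact wrenches. Below is a minimal Python sketch of that metric and of reranking by stability under sampled pose error; grasp_wrenches_fn is a hypothetical hook into a simulator such as GraspIt!, and the noise scales and lower-quartile statistic are illustrative assumptions rather than the paper's exact procedure.

    import numpy as np
    from scipy.spatial import ConvexHull

    def epsilon_quality(wrenches):
        """Ferrari-Canny epsilon metric: radius of the largest origin-centered
        ball inside the convex hull of the contact wrenches; 0 if the origin
        lies outside the hull (the grasp is not in force closure)."""
        hull = ConvexHull(wrenches)                # wrenches: (m, 6) array
        # Qhull facets satisfy n.x + b <= 0 inside, with unit normals,
        # so -b is the origin-to-facet distance.
        distances = -hull.equations[:, -1]
        return float(distances.min()) if (distances > 0).all() else 0.0

    def robust_score(grasp_wrenches_fn, pose, n=100, sigma_t=0.005, sigma_r=0.05):
        """Rerank score: a conservative statistic of epsilon under pose error.
        grasp_wrenches_fn(pose) -> (m, 6) contact wrenches (hypothetical)."""
        rng = np.random.default_rng(0)
        samples = []
        for _ in range(n):
            noisy_pose = pose + rng.normal(0.0, [sigma_t] * 3 + [sigma_r] * 3)
            samples.append(epsilon_quality(grasp_wrenches_fn(noisy_pose)))
        return np.percentile(samples, 25)          # rewards stable, not peak, quality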


International Conference on Robotics and Automation | 2011

Blind grasping: Stable robotic grasping using tactile feedback and hand kinematics

Hao Dang; Jonathan Weisz; Peter K. Allen

We propose a machine learning approach to the perception of a stable robotic grasp based on tactile feedback and hand kinematic data, which we call blind grasping. We first discuss a method for simulating tactile feedback using a soft finger contact model in GraspIt!, a robotic grasping simulator [10]. Using this simulation technique, we compute the tactile contacts of thousands of grasps with a robotic hand using the Columbia Grasp Database [6]. The tactile contacts, along with the hand kinematic data, are then input to a Support Vector Machine (SVM), which is trained to estimate the stability of a given grasp from this tactile feedback and the robotic hand kinematics. Experimental results indicate that the tactile feedback and hand kinematic data together carry meaningful information for predicting the stability of a blind robotic grasp.
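
As a rough illustration of the learning setup described above, the sketch below trains an RBF-kernel SVM on feature vectors that concatenate simulated tactile readings with hand joint angles. The file names and feature layout are hypothetical; the paper's exact feature encoding may differ.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # One row per simulated grasp: tactile sensor readings concatenated with
    # joint angles; labels mark grasps that survived a stability test.
    X = np.load("tactile_plus_kinematics.npy")     # shape (n_grasps, n_features)
    y = np.load("stability_labels.npy")            # shape (n_grasps,), values 0/1

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X, y)

    def is_stable(tactile, joint_angles):
        """Predict stability of a 'blind' grasp from sensor data alone."""
        features = np.concatenate([tactile, joint_angles]).reshape(1, -1)
        return bool(clf.predict(features)[0])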


Intelligent Robots and Systems | 2011

A highly-underactuated robotic hand with force and joint angle sensors

Long Wang; Joseph DelPreto; Sam Bhattacharyya; Jonathan Weisz; Peter K. Allen

This paper describes a novel underactuated robotic hand design. The hand is highly underactuated: it contains three fingers, each with three joints, all driven by a single motor. One of the fingers (the "thumb") can also be rotated about the base of the hand, yielding a total of two controllable degrees of freedom. A key component of the design is the addition of position and tactile sensors, which provide precise angle feedback and binary force feedback. Our mechanical design can be analyzed theoretically to predict contact forces as well as hand position given a particular object shape.
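
The kind of static analysis the abstract mentions can be sketched as follows: tendon tension and return-spring deflections determine joint torques, and a fingertip Jacobian relates those torques to a contact force. The planar three-link model and all parameter values below are illustrative assumptions, not the paper's measured design.

    import numpy as np

    def planar_jacobian(theta, link_lengths=(0.05, 0.03, 0.02)):
        """Fingertip Jacobian of a planar 3-link finger (column i: d tip / d theta_i)."""
        abs_angles = np.cumsum(theta)              # absolute link angles
        l = np.asarray(link_lengths)
        J = np.zeros((2, 3))
        for i in range(3):
            J[0, i] = -np.sum(l[i:] * np.sin(abs_angles[i:]))
            J[1, i] = np.sum(l[i:] * np.cos(abs_angles[i:]))
        return J

    def fingertip_force(theta, tendon_tension, moment_arms=(0.010, 0.008, 0.006),
                        spring_k=(0.2, 0.2, 0.2)):
        """Static fingertip contact force for a tendon-driven, spring-returned
        finger: joint torque = moment arm * tension - spring torque (rest angle
        assumed zero), then solve tau = J^T f in the least-squares sense."""
        tau = (np.asarray(moment_arms) * tendon_tension
               - np.asarray(spring_k) * np.asarray(theta))
        f, *_ = np.linalg.lstsq(planar_jacobian(theta).T, tau, rcond=None)
        return f                                   # planar (fx, fy) estimate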


International Conference on Robotics and Automation | 2012

Towards a design optimization method for reducing the mechanical complexity of underactuated robotic hands

Frank L. Hammond; Jonathan Weisz; Andres A. de la Llera Kurth; Peter K. Allen; Robert D. Howe

Underactuated compliant robotic hands exploit passive mechanics and joint coupling to reduce the number of actuators required to achieve grasp robustness in unstructured environments. Reduced actuation requirements generally serve to decrease design cost and improve grasp planning efficiency, but overzealous simplification of an actuation topology, coupled with insufficient tuning of mechanical compliance and hand kinematics, can adversely affect grasp quality and adaptability. This paper presents a computational framework for reducing the mechanical complexity of robotic hand actuation topologies without significantly decreasing grasp robustness. Open-source grasp planning software and well-established grasp quality metrics are used to simulate a fully-actuated, 24 DOF anthropomorphic robotic hand grasping a set of daily living objects. DOFs are systematically demoted or removed from the hand actuation topology according to their contribution to grasp quality. The resulting actuation topology contained 22% fewer DOFs, 51% less aggregate joint motion, and required 82% less grasp planning time than the fully-actuated design, but decreased average grasp quality by only 11%.
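
The demotion loop described above can be summarized in a few lines. In the greedy sketch below, quality_fn would wrap a planner such as GraspIt! evaluated over the object set, and hand.without_dof and hand.active_dofs are hypothetical wrappers; the stopping tolerance is likewise an assumed parameter.

    def simplify_actuation(hand, objects, quality_fn, min_quality_ratio=0.89):
        """Greedy sketch: repeatedly demote/remove the DOF whose absence hurts
        average grasp quality least, until quality would drop below tolerance."""
        baseline = quality_fn(hand, objects)
        while hand.active_dofs():
            candidates = [(quality_fn(hand.without_dof(d), objects), d)
                          for d in hand.active_dofs()]
            best_quality, best_dof = max(candidates)   # least harmful removal
            if best_quality < min_quality_ratio * baseline:
                break                                  # further removal costs too much
            hand = hand.without_dof(best_dof)
        return hand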


Intelligent Robots and Systems | 2015

Generating multi-fingered robotic grasps via deep learning

Jacob Varley; Jonathan Weisz; Jared Weiss; Peter K. Allen

This paper presents a deep learning architecture for detecting the palm and fingertip positions of stable grasps directly from partial object views. The architecture is trained using RGBD image patches of fingertip and palm positions from grasps computed on complete object models using a grasping simulator. At runtime, the architecture is able to estimate grasp quality metrics without the need to explicitly calculate the given metric. This ability is useful as the exact calculation of these quality functions is impossible from an incomplete view of a novel object without any tactile feedback. This architecture for grasp quality prediction provides a framework for generalizing grasp experience from known to novel objects.
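
A minimal version of such a patch-based quality predictor might look like the following PyTorch sketch, which maps a 4-channel RGBD patch around a candidate fingertip or palm placement to a scalar quality score; the layer sizes and patch format are illustrative, not the paper's architecture.

    import torch.nn as nn

    class GraspQualityNet(nn.Module):
        """Sketch: regress a grasp quality estimate from an RGBD image patch."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, 5, stride=2), nn.ReLU(),   # 4 channels: RGB + depth
                nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(64 * 16, 128), nn.ReLU(),
                nn.Linear(128, 1),                          # scalar quality estimate
            )

        def forward(self, patch):                           # patch: (B, 4, H, W)
            return self.head(self.features(patch))

    # Training pairs (patch, quality) come from grasps planned on complete
    # models in simulation; at runtime the net scores patches from a single
    # partial view, standing in for the incomputable exact metric.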


International Symposium on Experimental Robotics | 2013

Grasping with Your Face

Jonathan Weisz; Benjamin Shababo; Lixing Dong; Peter K. Allen

Brain-Computer Interface (BCI) technology shows great promise in the field of assistive robotics. In particular, severely impaired individuals lacking the use of their hands and arms would benefit greatly from a robotic grasping system that can be controlled by a simple and intuitive BCI. In this paper we describe an end-to-end robotic grasping system that is controlled by only four classified facial EMG signals, resulting in robust and stable grasps. A front-end vision system is used to identify and register objects to be grasped against a database of models. Once the model is aligned, it can be used in a real-time grasp planning simulator that is controlled through a non-invasive and inexpensive BCI in both discrete and continuous modes. The user can control the approach direction through the BCI and can also assist the planner in choosing the best grasp. Once the grasp is planned, a robotic hand/arm system can execute it. We show results using this system to pick up a variety of objects in real time, from a number of different approach directions, using facial BCI signals exclusively. We believe this system is a working prototype for a fully automated assistive grasping system.
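
The four control signals could be recovered from facial-muscle activity with something as simple as the nearest-template sketch below; the actual system relies on a commercial BCI's trained classifier, so the RMS features and labels here are hypothetical.

    import numpy as np

    def rms_features(window):                   # window: (n_samples, n_channels)
        """Per-channel RMS amplitude of a short EMG window."""
        return np.sqrt(np.mean(window ** 2, axis=0))

    def classify(window, templates, labels):    # templates: (4, n_channels)
        """Match the window against four calibrated templates, one per command."""
        distances = np.linalg.norm(templates - rms_features(window), axis=1)
        return labels[int(np.argmin(distances))]   # e.g. "left", "right", "next", "go"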


ISRR (1) | 2018

Grasping with Your Brain: A Brain-Computer Interface for Fast Grasp Selection

Robert Ying; Jonathan Weisz; Peter K. Allen

Brain-Computer Interfaces are promising technologies that can improve human-robot interaction, especially for disabled and impaired individuals. Non-invasive BCIs, which are very desirable from a medical and therapeutic perspective, can only deliver noisy, low-bandwidth signals, making their use in complex tasks difficult. To this end, we present a shared-control online grasp planning framework using an advanced EEG-based interface. Unlike commonly used paradigms, the EEG interface we incorporate allows online generation of a flexible number of options. This online planning framework allows the user to direct the planner towards grasps that reflect their intent for using the grasped object, by successively selecting grasps that move toward the desired approach direction of the hand. The planner divides the grasping task into phases and generates images that reflect the choices the planner can make at each phase. The EEG interface is used to recognize the user's preference among a set of options presented by the planner. The EEG signal classifier is fast and simple to train, and the system as a whole requires almost no learning on the part of the subject. Three subjects were able to successfully use the system to grasp and pick up a number of objects in a cluttered scene.
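
The phase-by-phase selection loop described above might be organized as in the sketch below, where planner and eeg are hypothetical objects standing in for the online grasp planner and the EEG option classifier.

    def shared_control_plan(planner, eeg, phases):
        """Sketch of the shared-control loop: each phase renders a flexible
        number of candidate options and the EEG interface picks one."""
        context = None
        for phase in phases:
            options = planner.generate_options(phase, context)  # candidate grasps
            images = [planner.render(option) for option in options]
            choice = eeg.select(images)        # classifier returns preferred index
            context = options[choice]
        return context                         # final grasp to execute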


Intelligent Robots and Systems | 2016

Towards automated system and experiment reproduction in robotics

Florian Lier; Marc Hanheide; Lorenzo Natale; Simon Schulz; Jonathan Weisz; Sven Wachsmuth; Sebastian Wrede

Even though research on autonomous robots and human-robot interaction has made great progress in recent years, and reusable software and hardware components are available, many of the reported findings are hardly reproducible by fellow scientists. Reproducibility is usually impeded because required information, such as the specification of software versions and their configuration, required data sets, and experiment protocols, is not mentioned or referenced in most publications. To address these issues, we recently introduced an integrated tool chain and its underlying development process to facilitate reproducibility in robotics. In this contribution we instantiate the complete tool chain in a user study in order to assess its applicability and usability. To this end, we chose three different robotic systems from independent institutions and modeled them in our tool chain, including three exemplary experiments. We then asked twelve researchers to reproduce one of the formerly unknown systems and its associated experiment. We show that all twelve scientists were able to replicate a formerly unknown robotics experiment using our tool chain.


Intelligent Robots and Systems | 2013

A user interface for assistive grasping

Jonathan Weisz; Carmine Elvezio; Peter K. Allen

There has been considerable interest in producing grasping platforms using non-invasive, low-bandwidth brain-computer interfaces (BCIs). Most of this work focuses on low-level control of simple hands. Using complex hands improves the versatility of a grasping platform at the cost of increasing its complexity. In order to control more complex hands with these low-bandwidth signals, we need to use higher-level abstractions. Here, we present a user interface that allows the user to combine the speed and convenience of offline preplanned grasps with the versatility of an online planner. This system incorporates a database of pre-planned grasps with the ability to refine those grasps using an online planner designed for arbitrarily complex hands. Only four commands are necessary to control the entire grasping pipeline, allowing us to use a low-cost, non-invasive commercial BCI device to produce robust grasps that reflect user intent. We demonstrate the efficacy of this system with results from five subjects and present results using this system to grasp unknown objects.
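
A four-command vocabulary is enough because the pipeline is a small state machine; the sketch below shows one plausible mapping (the stage names and command assignments are illustrative, not the paper's exact interface).

    from enum import Enum, auto

    class Command(Enum):       # the four BCI-classified inputs
        NEXT = auto()          # cycle through options at the current stage
        SELECT = auto()        # confirm the current option, advance a stage
        BACK = auto()          # return to the previous stage
        EXECUTE = auto()       # trigger the planned grasp at the final stage

    STAGES = ["choose_grasp", "refine_online", "confirm"]

    def step(stage, option, cmd, n_options):
        """One transition of the pipeline state machine."""
        if cmd is Command.NEXT:
            return stage, (option + 1) % n_options
        if cmd is Command.SELECT:
            return min(stage + 1, len(STAGES) - 1), 0
        if cmd is Command.BACK:
            return max(stage - 1, 0), 0
        return stage, option   # EXECUTE is handled by the robot execution layer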


Intelligent Robots and Systems | 2014

Single Muscle Site sEMG Interface for Assistive Grasping

Jonathan Weisz; Alexander G. Barszap; Sanjay S. Joshi; Peter K. Allen

We present a joint demonstration between the Robotics, Autonomous Systems, and Controls Laboratory (RASCAL) at UC Davis and the Columbia University Robotics Group, in which a C3-C4 spinal cord injury (SCI) subject in the UC Davis lab (Davis, CA) controls a human-in-the-loop robotic grasping platform in the Columbia lab (New York, NY) to select and grasp an object, using a new single-signal, multi-degree-of-freedom surface electromyography (sEMG) human-robot interface. The grasping system breaks the grasping task into a multi-stage pipeline that can be navigated with only a few inputs. It integrates pre-planned grasps with an online grasp planning capability and an object recognition and target selection system capable of handling multi-object scenes with moderate occlusion. Previous work in the RASCAL lab demonstrated that by continuously modulating the power in two individual bands in the frequency spectrum of a single sEMG signal, users were able to control a cursor in 2D for cursor-to-target tasks. Using this paradigm, four targets were presented with which the subject commanded the multi-stage grasping pipeline. We demonstrate that with this system, operators are able to grasp objects in a remote location using a robotic grasping platform.
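
The two-band control paradigm mentioned above reduces to tracking the power of two frequency bands of one sEMG channel; the sketch below uses Welch's method, with the sample rate, band edges, gain, and baselines as illustrative, subject-specific assumptions.

    import numpy as np
    from scipy.signal import welch

    FS = 1000                  # sEMG sample rate in Hz (assumed)
    BAND_X = (80, 100)         # two bands of the single-site signal; the real
    BAND_Y = (130, 150)        # bands are calibrated per subject

    def band_power(window, band):
        """Mean power in one frequency band of a short sEMG window."""
        freqs, psd = welch(window, fs=FS, nperseg=256)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()

    def cursor_velocity(window, gain=1.0, baseline=(0.0, 0.0)):
        """Map the two band powers of a single sEMG channel to a 2D cursor
        velocity: the single-signal, multi-DOF paradigm described above."""
        vx = gain * (band_power(window, BAND_X) - baseline[0])
        vy = gain * (band_power(window, BAND_Y) - baseline[1])
        return vx, vy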
