Publication


Featured research published by Berk Calli.


IEEE Robotics & Automation Magazine | 2015

Benchmarking in Manipulation Research: Using the Yale-CMU-Berkeley Object and Model Set

Berk Calli; Aaron Walsman; Arjun Singh; Siddhartha S. Srinivasa; Pieter Abbeel; Aaron M. Dollar

In this article, we present the Yale-Carnegie Mellon University (CMU)-Berkeley (YCB) object and model set, intended to facilitate benchmarking in robotic manipulation research. The objects in the set are designed to cover a wide range of aspects of the manipulation problem. The set includes objects of daily life with different shapes, sizes, textures, weights, and rigidities, as well as some widely used manipulation tests. The associated database provides high-resolution red, green, blue, plus depth (RGB-D) scans, physical properties, and geometric models of the objects for easy incorporation into manipulation and planning software platforms. In addition to describing the objects and models in the set along with how they were chosen and derived, we provide a framework and a number of example task protocols, laying out how the set can be used to quantitatively evaluate a range of manipulation approaches, including planning, learning, mechanical design, control, and many others. A comprehensive literature survey on the existing benchmarks and object data sets is also presented, and their scope and limitations are discussed. The YCB set will be freely distributed to research groups worldwide at a series of tutorials at robotics conferences. Subsequent sets will otherwise be available for purchase at a reasonable cost. It is our hope that the ready availability of this set along with the ground laid in terms of protocol templates will enable the community of manipulation researchers to more easily compare approaches as well as continually evolve standardized benchmarking tests and metrics as the field matures.
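The protocol framework amounts to a declared object list, a fixed trial procedure, and a scoring rule, so that results from different labs are directly comparable. A minimal sketch of such a protocol runner, with hypothetical object names and a simple mean-success-rate score rather than any of the paper's actual protocols:

```python
from dataclasses import dataclass

@dataclass
class ProtocolResult:
    """Trial tally for one object under a given task protocol."""
    object_name: str
    successes: int = 0
    trials: int = 0

    def record(self, success: bool) -> None:
        self.trials += 1
        self.successes += int(success)

def protocol_score(results):
    """Aggregate score: mean per-object success rate (an assumed rule)."""
    rates = [r.successes / r.trials for r in results if r.trials > 0]
    return sum(rates) / len(rates) if rates else 0.0

# Hypothetical YCB-style objects and placeholder trial outcomes.
results = [ProtocolResult("cracker_box"), ProtocolResult("mustard_bottle")]
for r in results:
    for outcome in (True, True, False):
        r.record(outcome)
print(f"protocol score: {protocol_score(results):.2f}")
```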


International Conference on Advanced Robotics | 2015

The YCB Object and Model Set: Towards Common Benchmarks for Manipulation Research

Berk Calli; Arjun Singh; Aaron Walsman; Siddhartha S. Srinivasa; Pieter Abbeel; Aaron M. Dollar

In this paper, we present the Yale-CMU-Berkeley (YCB) Object and Model Set, intended for benchmarking in robotic grasping and manipulation research. The objects in the set are designed to cover various aspects of the manipulation problem; the set includes objects of daily life with different shapes, sizes, textures, weights, and rigidities, as well as some widely used manipulation tests. The associated database provides high-resolution RGB-D scans, physical properties, and geometric models of the objects for easy incorporation into manipulation and planning software platforms. A comprehensive literature survey on existing benchmarks and object datasets is also presented, and their scope and limitations are discussed. The set will be freely distributed to research groups worldwide at a series of tutorials at robotics conferences, and will be otherwise available at a reasonable purchase cost.


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2015

Unplanned, model-free, single grasp object classification with underactuated hands and force sensors

Minas V. Liarokapis; Berk Calli; Adam Spiers; Aaron M. Dollar

In this paper we present a methodology for discriminating between different objects using only a single force-closure grasp with an underactuated robot hand equipped with force sensors. The technique combines the benefits of simple, adaptive robot grippers (which can grasp successfully without prior knowledge of the hand or the object model) with an advanced machine learning technique (random forests). Unlike prior work in the literature, the proposed methodology does not require object exploration, release, or re-grasping, and works for arbitrary object positions and orientations within the reach of a grasp. A two-fingered, compliant, underactuated robot hand is controlled in an open-loop fashion to grasp objects of various shapes, sizes, and stiffnesses. The random forests classification technique is used to discriminate between the object classes. The feature space consists only of the actuator positions and the force sensor measurements at two specific time instances of the grasping process. A feature-importance calculation procedure identifies the most informative features and, from these, the minimum number of sensors required. The efficiency of the proposed method is validated in two experimental paradigms involving two sets of fabricated model objects with different shapes, sizes, and stiffnesses, and a set of everyday objects.
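The classifier here operates on a small, fixed feature vector: actuator positions and force readings sampled at two instants of the grasp. A minimal scikit-learn sketch of this pipeline, with made-up sample counts and sensor dimensions rather than the paper's exact configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical setup: a handful of actuator positions and force readings,
# each sampled at two instants of the grasp -> 20 features per example.
n_samples, n_features = 120, 20
X = rng.normal(size=(n_samples, n_features))   # stand-in sensor data
y = rng.integers(0, 4, size=n_samples)         # 4 object classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Feature-importance ranking, playing the role of the paper's procedure
# for identifying the most informative features (and hence sensors).
order = np.argsort(clf.feature_importances_)[::-1]
print("most informative features:", order[:5])
print("predicted class of a new grasp:", clf.predict(X[:1])[0])
```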


IEEE Transactions on Haptics | 2016

Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors

Adam Spiers; Minas V. Liarokapis; Berk Calli; Aaron M. Dollar

Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a ‘haptic glance’). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology uses two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes can work independently or collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp release, or force modulation, and works for arbitrary object start positions and orientations. These factors allow the technique to be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
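Alongside the random-forest classifier, the parametric scheme estimates physical properties directly from the same raw signals. A hedged illustration of one such estimate, treating object stiffness as the least-squares slope of force versus actuator position during the squeeze (an assumed formulation, not the paper's exact estimator):

```python
import numpy as np

def estimate_stiffness(positions: np.ndarray, forces: np.ndarray) -> float:
    """Fit force = k * position + b by least squares after contact;
    the slope k serves as a crude stiffness estimate (assumed model)."""
    A = np.vstack([positions, np.ones_like(positions)]).T
    k, _b = np.linalg.lstsq(A, forces, rcond=None)[0]
    return float(k)

# Placeholder squeeze data: actuator travel (rad) vs. fingertip force (N).
pos = np.linspace(0.0, 0.3, 30)
force = 12.0 * pos + np.random.default_rng(1).normal(0, 0.05, 30)
print(f"estimated stiffness: {estimate_stiffness(pos, force):.1f} N/rad")
```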


The International Journal of Robotics Research | 2017

Yale-CMU-Berkeley dataset for robotic manipulation research

Berk Calli; Arjun Singh; James R. Bruce; Aaron Walsman; Kurt Konolige; Siddhartha S. Srinivasa; Pieter Abbeel; Aaron M. Dollar

In this paper, we present an image and model dataset of the real-life objects from the Yale-CMU-Berkeley Object Set, which is specifically designed for benchmarking in manipulation research. For each object, the dataset presents 600 high-resolution RGB images, 600 RGB-D images and five sets of textured three-dimensional geometric models. Segmentation masks and calibration information for each image are also provided. These data are acquired using the BigBIRD Object Scanning Rig and Google Scanners. Together with the dataset, Python scripts and a Robot Operating System node are provided to download the data, generate point clouds and create Unified Robot Description Files. The dataset is also supported by our website, www.ycbbenchmarks.org, which serves as a portal for publishing and discussing test results along with proposing task protocols and benchmarks.
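The provided Python scripts' core point-cloud step is, in essence, a standard pinhole back-projection of each depth image using the per-image calibration. A minimal sketch of that step, with hypothetical names and toy intrinsics standing in for the dataset's actual calibration files:

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project a depth image (meters) to an N x 3 point cloud
    using pinhole intrinsics K = [[fx,0,cx],[0,fy,cy],[0,0,1]]."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape)   # pixel rows (v) and columns (u)
    z = depth.ravel()
    valid = z > 0                    # drop missing depth readings
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

# Toy 4x4 depth image and intrinsics, standing in for a YCB RGB-D frame.
depth = np.full((4, 4), 0.6)
K = np.array([[500.0, 0.0, 2.0], [0.0, 500.0, 2.0], [0.0, 0.0, 1.0]])
print(depth_to_point_cloud(depth, K).shape)   # (16, 3)
```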


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2016

Vision-based precision manipulation with underactuated hands: Simple and effective solutions for dexterity

Berk Calli; Aaron M. Dollar

In this paper, a method is proposed for vision-based within-hand precision manipulation with underactuated grippers. The method combines the advantages of adaptive underactuation with the robustness of visual servoing algorithms by employing simple action sets in actuator space, called precision manipulation primitives (PMPs). It is shown that, with this approach, reliable precision manipulation is possible even without joint and force sensors, using only minimal gripper kinematics information. An adaptation method in the vision loop further enhances the system's transient performance. The proposed methods are analyzed in experiments with various target objects and reference signals. The results indicate that underactuated hands, even with minimalistic sensing and control via visual servoing, can provide a simple and inexpensive solution for low-fidelity precision manipulation.
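Conceptually, each PMP is a simple actuator-space action whose rough image-space effect is known from minimal kinematics; the servo loop repeatedly picks the primitive best aligned with the current image error and scales it with an adaptive gain. A toy sketch of this idea, with an invented primitive set and gain-adaptation rule, not the paper's controller:

```python
import numpy as np

# Hypothetical primitives: actuator-space actions and their (rough)
# image-space effect directions, as if derived from gripper kinematics.
PRIMITIVES = {
    "slide_left":  np.array([-1.0, 0.0]),
    "slide_right": np.array([1.0, 0.0]),
    "lift":        np.array([0.0, 1.0]),
    "lower":       np.array([0.0, -1.0]),
}

def servo_step(error_px: np.ndarray, gain: float):
    """Pick the primitive best aligned with the image error and scale it."""
    name = max(PRIMITIVES, key=lambda n: PRIMITIVES[n] @ error_px)
    return name, gain * float(np.linalg.norm(error_px))

# Toy loop: adapt the gain from the observed error reduction (assumed rule).
target, obj, gain = np.array([320.0, 240.0]), np.array([300.0, 260.0]), 0.01
for _ in range(3):
    err = target - obj
    primitive, magnitude = servo_step(err, gain)
    obj = obj + magnitude * PRIMITIVES[primitive] * 100.0  # fake plant response
    gain *= 1.1 if np.linalg.norm(target - obj) < np.linalg.norm(err) else 0.9
    print(primitive, round(magnitude, 3))
```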


IEEE International Conference on Robotics and Automation (ICRA) | 2017

Vision-based model predictive control for within-hand precision manipulation with underactuated grippers

Berk Calli; Aaron M. Dollar

Precision manipulation with underactuated hands is a challenging problem due to difficulties in obtaining precise gripper, object, and contact models. Vision feedback provides a degree of robustness to modeling inaccuracies, but conventional visual servoing schemes may suffer performance degradation if the inaccuracies are large and/or unmodeled phenomena (e.g., friction) have a significant effect on the system. In this paper, we propose using a model predictive control (MPC) framework within a visual servoing scheme to achieve high-performance precision manipulation even with very rough models of the manipulation process. In experiments with step and periodic reference signals (204 experiments in total), we show that MPC provides superior accuracy and efficiency compared to conventional visual servoing methods.
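With a rough linear model of the feature dynamics, s_{k+1} = s_k + J u_k for some crude interaction matrix J, each MPC step reduces to a finite-horizon regularized least-squares problem whose first solution block is applied in receding-horizon fashion. A minimal unconstrained sketch under that assumed model (a real formulation would add input and state constraints):

```python
import numpy as np

def mpc_step(s: np.ndarray, s_ref: np.ndarray, J: np.ndarray,
             horizon: int = 5, lam: float = 0.1) -> np.ndarray:
    """One MPC step for s_{k+1} = s_k + J u_k: minimize the sum of
    predicted tracking errors plus lam * ||u||^2 over the horizon."""
    m, n = J.shape
    e0 = s - s_ref
    # Predicted error after step i is e0 + J * (u_0 + ... + u_i), so the
    # batch system A u = b stacks one block row per horizon step.
    A_blocks, b_blocks = [], []
    for i in range(horizon):
        row = np.hstack([J if j <= i else np.zeros((m, n))
                         for j in range(horizon)])
        A_blocks.append(row)
        b_blocks.append(-e0)
    A = np.vstack(A_blocks + [np.sqrt(lam) * np.eye(n * horizon)])
    b = np.concatenate(b_blocks + [np.zeros(n * horizon)])
    u = np.linalg.lstsq(A, b, rcond=None)[0]
    return u[:n]                     # receding horizon: apply only u_0

# Toy example: 2 image features, 2 actuators, rough Jacobian.
J = np.array([[1.0, 0.2], [0.1, 0.8]])
s, s_ref = np.array([10.0, -5.0]), np.zeros(2)
print("first command:", mpc_step(s, s_ref, J))
```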


IEEE International Conference on Robotics and Automation (ICRA) | 2018

Variable-Friction Finger Surfaces to Enable Within-Hand Manipulation via Gripping and Sliding

Adam Spiers; Berk Calli; Aaron M. Dollar


IEEE International Conference on Robotics and Automation (ICRA) | 2018

Learning Modes of Within-Hand Manipulation

Berk Calli; Krishnan Srinivasan; Andrew Morgan; Aaron M. Dollar


IEEE Transactions on Automation Science and Engineering | 2018

Active Vision via Extremum Seeking for Robots in Unstructured Environments: Applications in Object Recognition and Manipulation

Berk Calli; Wouter Caarls; Martijn Wisse; Pieter P. Jonker

Collaboration


Berk Calli's top co-authors and their affiliations.

Top Co-Authors

Aaron Walsman | Carnegie Mellon University

Arjun Singh | University of California

Pieter Abbeel | University of California

Minas V. Liarokapis | National Technical University of Athens

Wouter Caarls | Pontifical Catholic University of Rio de Janeiro