Peter K. Allen
Columbia University
Publications
Featured research published by Peter K. Allen.
IEEE Robotics & Automation Magazine | 2004
Andrew T. Miller; Peter K. Allen
A robotic grasping simulator, called Graspit!, is presented as a versatile tool for the grasping community. The grasp analysis focuses on force-closure grasps, which are useful for pick-and-place tasks. This work discusses the different types of world elements and the general robot definition, and presents the robot library. The paper also describes the user interface of Graspit! and presents the collision detection and contact determination system. Grasp analysis and visualization methods are also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system is also provided.
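The force-closure analysis at the heart of such a simulator can be illustrated in miniature. The sketch below is an assumption on my part, not Graspit!'s actual code: it computes a Ferrari-Canny-style epsilon quality in 3-D force space for frictionless point contacts, whereas a full analysis works in 6-D wrench space with friction cones.

```python
import numpy as np
from scipy.spatial import ConvexHull

def epsilon_quality(wrenches):
    """Ferrari-Canny style quality: radius of the largest origin-centered
    ball inside the convex hull of the contact wrenches.
    Returns 0.0 if the grasp is not force-closure (origin outside hull)."""
    hull = ConvexHull(wrenches)
    # hull.equations rows are [n, b] with n.x + b <= 0 for interior points;
    # qhull normalizes n, so -b is the origin's distance to each facet.
    offsets = -hull.equations[:, -1]
    return float(max(0.0, offsets.min()))

# Toy example in 3-D force space (frictionless point contacts): four contact
# normals forming a regular tetrahedron surround the origin.
forces = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
forces /= np.linalg.norm(forces, axis=1, keepdims=True)
print(epsilon_quality(forces) > 0)   # True: origin is strictly inside the hull
```

A grasp is force-closure exactly when the origin lies strictly inside the hull, i.e. when the returned quality is positive.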
international conference on robotics and automation | 1995
Konstantinos A. Tarabanis; Peter K. Allen; Roger Y. Tsai
A survey of research in the area of vision sensor planning is presented. The problem can be summarized as follows: given information about the environment as well as information about the task that the vision system is to accomplish, develop strategies to automatically determine sensor parameter values that achieve this task with a certain degree of satisfaction. With such strategies, sensor parameter values can be selected and purposefully changed in order to effectively perform the task at hand. The focus here is on vision sensor planning for the task of robustly detecting object features. For this task, camera and illumination parameters such as position, orientation, and optical settings are determined so that object features are, for example, visible, in focus, within the sensor field of view, magnified as required, and imaged with sufficient contrast. References to, and a brief description of, representative sensing strategies for the tasks of object recognition and scene reconstruction are also presented. For these tasks, sensor configurations are sought that will prove most useful when trying to identify an object or reconstruct a scene.
The International Journal of Robotics Research | 2009
Matei T. Ciocarlie; Peter K. Allen
In this paper we focus on the concept of low-dimensional posture subspaces for artificial hands. We begin by discussing the applicability of a hand configuration subspace to the problem of automated grasp synthesis; our results show that low-dimensional optimization can be instrumental in deriving effective pre-grasp shapes for a number of complex robotic hands. We then show that the computational advantages of using a reduced dimensionality framework enable it to serve as an interface between the human and automated components of an interactive grasping system. We present an on-line grasp planner that allows a human operator to perform dexterous grasping tasks using an artificial hand. In order to achieve the computational rates required for effective user interaction, grasp planning is performed in a hand posture subspace of highly reduced dimensionality. The system also uses real-time input provided by the operator, further simplifying the search for stable grasps to the point where solutions can be found at interactive rates. We demonstrate our approach on a number of different hand models and target objects, in both real and virtual environments.
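The low-dimensional posture subspace idea can be sketched with plain PCA on joint-angle data. The function names and the synthetic 16-DOF data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def posture_subspace(postures, d=2):
    """Fit a d-dimensional linear subspace to recorded hand joint-angle
    vectors via PCA; return projection and reconstruction maps."""
    mean = postures.mean(axis=0)
    # Principal directions = right singular vectors of the centered data.
    _, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
    basis = vt[:d]                                   # d x n_joints
    project = lambda q: (q - mean) @ basis.T         # joint space -> subspace
    reconstruct = lambda a: mean + a @ basis         # subspace -> joint space
    return project, reconstruct

# Synthetic 16-DOF postures that genuinely lie on a 2-D affine subspace.
rng = np.random.default_rng(0)
postures = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 16)) + 0.5
project, reconstruct = posture_subspace(postures, d=2)
err = np.abs(reconstruct(project(postures)) - postures).max()
print(err < 1e-6)  # True: the 2-D model reconstructs the 16-DOF data
```

Planning in the d-dimensional coordinates instead of the full joint space is what makes the interactive-rate search tractable.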
computer vision and pattern recognition | 2000
Ioannis Stamos; Peter K. Allen
This paper deals with the automated creation of geometrically and photometrically correct 3-D models of the world. These models can be used for virtual reality, tele-presence, digital cinematography and urban planning applications. The combination of range (dense depth estimates) and image sensing (color information) provides data sets which allow us to create geometrically correct, photorealistic models of high quality. The 3-D models are first built from range data using a volumetric set intersection method previously developed by us. Photometry can be mapped onto these models by registering features from both the 3-D and 2-D data sets. Range data segmentation algorithms have been developed to identify planar regions, determine linear features from planar intersections that can serve as features for registration with lines in the 2-D imagery, and reduce the overall complexity of the models. Results are shown for building models of large buildings on our campus using real data acquired from multiple sensors.
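Planar range segmentation of the kind described can be sketched with a minimal RANSAC plane fit. The point counts, noise levels, and helper names below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def fit_plane_ransac(points, n_iter=200, tol=0.01, rng=None):
    """Fit a dominant plane to 3-D range points with RANSAC.
    Returns (unit_normal, offset) with n.p = offset on the plane,
    plus the inlier mask."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), bool)
    best = (None, None)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = n @ sample[0]
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers

# A noisy z = 1 plane plus scattered outliers.
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(-1, 1, (300, 2)), np.full(300, 1.0)]
plane[:, 2] += rng.normal(0, 0.002, 300)
outliers = rng.uniform(-1, 1, (60, 3))
n, d, mask = fit_plane_ransac(np.vstack([plane, outliers]))
print(abs(abs(n[2]) - 1) < 0.05, mask[:300].mean() > 0.9)
```

Intersecting two such fitted planes yields the 3-D linear features the abstract mentions as registration landmarks.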
international conference on robotics and automation | 2009
Corey Goldfeder; Matei T. Ciocarlie; Hao Dang; Peter K. Allen
Collecting grasp data for learning and benchmarking purposes is very expensive. It would be helpful to have a standard database of graspable objects, along with a set of stable grasps for each object, but no such database exists. In this work we show how to automate the construction of a database consisting of several hands, thousands of objects, and hundreds of thousands of grasps. Using this database, we demonstrate a novel grasp planning algorithm that exploits geometric similarity between a 3D model and the objects in the database to synthesize form closure grasps. Our contributions are this algorithm, and the database itself, which we are releasing to the community as a tool for both grasp planning and benchmarking.
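One way geometric similarity between a query model and database objects could be measured is with a shape distribution descriptor. The D2 histogram below (distances between random point pairs) is a stand-in assumption, not necessarily the signature the authors index their database with.

```python
import numpy as np

def d2_descriptor(points, bins=32, n_pairs=2000, rng=None):
    """D2 shape distribution: histogram of distances between random point
    pairs, normalized for scale, as a simple geometric signature."""
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()

def nearest_model(query_desc, database):
    """Key of the database model whose descriptor is closest (L1)."""
    return min(database, key=lambda k: np.abs(database[k] - query_desc).sum())

rng = np.random.default_rng(2)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
box = rng.uniform(-1, 1, (500, 3))
db = {"sphere": d2_descriptor(sphere), "box": d2_descriptor(box)}
query = rng.normal(size=(500, 3))          # another sphere-like point cloud
query /= np.linalg.norm(query, axis=1, keepdims=True)
print(nearest_model(d2_descriptor(query), db))
```

Once the nearest neighbors are found, their precomputed stable grasps become candidate grasps for the query object.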
international conference on robotics and automation | 1991
Peter K. Allen; Billibon H. Yoshimi; Aleksandar Timcenko
A real-time tracking algorithm is described that works in conjunction with a predictive filter to allow real-time visual servoing of a robotic arm tracking a moving object. The system consists of two calibrated (but unregistered) cameras that provide images to a real-time, pipeline-parallel optic-flow algorithm that can robustly compute optic flow and calculate the 3-D position of a moving object at approximately 5-Hz rates. These 3-D positions serve as input to a predictive kinematic control algorithm that uses an alpha-beta-gamma filter to update the position of a robotic arm tracking the moving object. Experimental results are presented for the tracking of a moving model train along a variety of different trajectories.
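An alpha-beta-gamma filter of the kind used here is a fixed-gain predictor over position, velocity, and acceleration. The gains below come from the standard fading-memory parameterization; the specific discount factor and noise levels are assumptions, since the abstract does not state the paper's gains.

```python
import numpy as np

class AlphaBetaGammaFilter:
    """Fixed-gain tracker estimating position, velocity, and acceleration
    from noisy position measurements of a moving target."""
    def __init__(self, g, h, k, dt):
        self.g, self.h, self.k, self.dt = g, h, k, dt
        self.x = self.v = self.a = 0.0

    def update(self, z):
        dt = self.dt
        # Predict one step ahead with a constant-acceleration model.
        x_pred = self.x + self.v * dt + 0.5 * self.a * dt**2
        v_pred = self.v + self.a * dt
        r = z - x_pred                        # innovation (residual)
        # Correct each state component with its fixed gain.
        self.x = x_pred + self.g * r
        self.v = v_pred + (self.h / dt) * r
        self.a = self.a + (2 * self.k / dt**2) * r
        return self.x

# Standard fading-memory gains for discount factor theta (an assumption).
theta = 0.7
g, h, k = 1 - theta**3, 1.5 * (1 + theta) * (1 - theta)**2, 0.5 * (1 - theta)**3

f = AlphaBetaGammaFilter(g, h, k, dt=0.2)     # ~5 Hz, as in the paper
rng = np.random.default_rng(3)
t = np.arange(0, 20, 0.2)
truth = 2.0 * t                               # target moving at 2 m/s
for z in truth + rng.normal(0, 0.05, t.size):
    est = f.update(z)
print(abs(est - truth[-1]) < 0.5, abs(f.v - 2.0) < 0.5)
```

The predicted position one step ahead (`x_pred`) is what lets the arm lead the target rather than lag behind it.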
IEEE Transactions on Robotics | 2004
Atanas Georgiev; Peter K. Allen
This paper addresses the problems of building a functional mobile robot for urban site navigation and modeling with focus on keeping track of the robot location. We have developed a localization system that employs two methods. The first method uses odometry, a compass and tilt sensor, and a global positioning sensor. An extended Kalman filter integrates the sensor data and keeps track of the uncertainty associated with it. The second method is based on camera pose estimation. It is used when the uncertainty from the first method becomes very large. The pose estimation is done by matching linear features in the image with a simple and compact environmental model. We have demonstrated the functionality of the robot and the localization methods with real-world experiments.
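The sensor-fusion idea behind the first localization method can be sketched in one dimension: a Kalman filter integrates odometry in the predict step and corrects with an absolute (GPS-like) fix in the update step. The noise levels below are illustrative assumptions, not the robot's actual sensor characteristics.

```python
import numpy as np

def kalman_step(x, p, u, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter.
    x, p: state estimate and its variance
    u, q: odometry displacement and the variance it adds
    z, r: absolute position fix (e.g. GPS) and its variance."""
    x, p = x + u, p + q                   # predict: integrate odometry
    k = p / (p + r)                       # Kalman gain
    return x + k * (z - x), (1 - k) * p   # update: blend in the fix

rng = np.random.default_rng(4)
truth, x, p = 0.0, 0.0, 1.0
for _ in range(200):
    truth += 0.1                                # robot advances 0.1 m per step
    u = 0.1 + rng.normal(0, 0.02)               # noisy odometry reading
    z = truth + rng.normal(0, 0.5)              # noisy absolute fix
    x, p = kalman_step(x, p, u, z, q=0.02**2, r=0.5**2)
print(abs(x - truth) < 0.5, p < 0.05)
```

The tracked variance `p` is the scalar analogue of the uncertainty the paper monitors to decide when to fall back on camera pose estimation.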
Computer Vision and Image Understanding | 2002
Ioannis Stamos; Peter K. Allen
This paper presents a systematic approach to the problem of photorealistic 3-D model acquisition from the combination of range and image sensing. The input is a sequence of unregistered range scans of the scene and a sequence of unregistered 2-D photographs of the same scene. The output is a true texture-mapped geometric model of the scene. We believe that the developed modules are of vital importance for a flexible photorealistic 3-D model acquisition system. Segmentation algorithms simplify the dense datasets and provide stable features of interest which can be used for registration purposes. Solid modeling provides geometrically correct 3-D models. Finally, the automated range-to-image registration algorithm can increase the flexibility of the system by decoupling the slow geometry recovery process from the image acquisition process; the camera does not have to be precalibrated and rigidly attached to the range sensor. The system is comprehensive in that it addresses all phases of the modeling problem, with a particular emphasis on automating the entire process.
international conference on robotics and automation | 1995
Konstantinos A. Tarabanis; Roger Y. Tsai; Peter K. Allen
The MVP (machine vision planner) model-based sensor planning system for robotic vision is presented. MVP automatically synthesizes desirable camera views of a scene based on geometric models of the environment, optical models of the vision sensors, and models of the task to be achieved. The generic task of feature detectability has been chosen since it is applicable to many robot-controlled vision systems. For such a task, features of interest in the environment are required to simultaneously be visible, inside the field of view, in focus, and magnified as required. In this paper, we present a technique that poses the vision sensor planning problem in an optimization setting and determines viewpoints that satisfy all previous requirements simultaneously and with a margin. In addition, we present experimental results of this technique when applied to a robotic vision system that consists of a camera mounted on a robot manipulator in a hand-eye configuration.
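Satisfying the requirements "simultaneously and with a margin" can be sketched as maximizing the worst-case normalized constraint margin over candidate camera placements. The three constraints, helper names, and all numeric values below are simplified assumptions, not MVP's actual formulation.

```python
import numpy as np

def constraint_margins(cam_pos, feature, fov_half_angle, focus_range, look_dir):
    """Normalized margins (> 0 means satisfied) for three requirements:
    field of view, near focus limit, and far focus limit."""
    v = feature - cam_pos
    dist = np.linalg.norm(v)
    angle = np.arccos(np.clip(v @ look_dir / dist, -1, 1))
    near, far = focus_range
    return np.array([
        (fov_half_angle - angle) / fov_half_angle,   # inside the view cone
        (dist - near) / near,                        # not closer than near focus
        (far - dist) / far,                          # not farther than far focus
    ])

def best_viewpoint(candidates, feature, **kw):
    """Maximize the worst-case margin: satisfy every constraint with a margin."""
    scores = [constraint_margins(c, feature, **kw).min() for c in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

feature = np.array([0.0, 0.0, 0.0])
look_dir = np.array([0.0, 0.0, -1.0])        # all candidate cameras look down -z
candidates = [np.array([0.0, 0.0, z]) for z in (0.3, 1.0, 3.0)]
vp, score = best_viewpoint(candidates, feature, fov_half_angle=np.radians(20),
                           focus_range=(0.5, 2.0), look_dir=look_dir)
print(vp, score)
```

The camera at height 1.0 wins: the nearer candidate violates the near-focus limit and the farther one the far-focus limit, so the max-min score selects the viewpoint with slack on every constraint.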
international conference on robotics and automation | 1999
Andrew T. Miller; Peter K. Allen
Previous grasp quality research is mainly theoretical, and has assumed that contact types and positions are given, in order to preserve the generality of the proposed quality measures. The example results provided by these works either ignore hand geometry and kinematics entirely or involve only the simplest of grippers. We present a unique grasp analysis system that, when given a 3D object, hand, and pose for the hand, can accurately determine the types of contacts that will occur between the links of the hand and the object, and compute two measures of quality for the grasp. Using models of two articulated robotic hands, we analyze several grasps of a polyhedral model of a telephone handset, and we use a novel technique to visualize the 6D space used in these computations. In addition, we demonstrate the possibility of using this system for synthesizing high quality grasps by performing a search over a subset of possible hand configurations.