Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Joseph M. Romano is active.

Publication


Featured research published by Joseph M. Romano.


IEEE Transactions on Robotics | 2009

Mechanics of Precurved-Tube Continuum Robots

Robert J. Webster; Joseph M. Romano; Noah J. Cowan

This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on "guiding" environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.
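As a compact illustration of the elastic tube interaction described above (a simplified planar, torsion-free case, not the paper's full model), the equilibrium curvature of overlapping precurved tubes is the stiffness-weighted average of their individual precurvatures:

\kappa_{\mathrm{eq}} = \frac{\sum_i E_i I_i \,\kappa_i}{\sum_i E_i I_i}

where E_i I_i is the bending stiffness and \kappa_i the precurvature of tube i. The paper goes beyond this picture by including torsional effects, which its experiments show are essential.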


IEEE Transactions on Robotics | 2011

Human-Inspired Robotic Grasp Control With Tactile Sensing

Joseph M. Romano; Kaijen Hsiao; Günter Niemeyer; Sachin Chitta; Katherine J. Kuchenbecker

We present a novel robotic grasp controller that allows a sensorized parallel jaw gripper to gently pick up and set down unknown objects once a grasp location has been selected. Our approach is inspired by the control scheme that humans employ for such actions, which is known to centrally depend on tactile sensation rather than vision or proprioception. Our controller processes measurements from the gripper's fingertip pressure arrays and hand-mounted accelerometer in real time to generate robotic tactile signals that are designed to mimic human SA-I, FA-I, and FA-II channels. These signals are combined into tactile event cues that drive the transitions between six discrete states in the grasp controller: Close, Load, Lift and Hold, Replace, Unload, and Open. The controller selects an appropriate initial grasping force, detects when an object is slipping from the grasp, increases the grasp force as needed, and judges when to release an object to set it down. We demonstrate the promise of our approach through implementation on the PR2 robotic platform, including grasp testing on a large number of real-world objects.
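The six-state structure described in this abstract lends itself to a small event-driven state machine. The Python sketch below is an illustrative reconstruction, not the authors' code: the tactile event names and transition conditions are assumptions, and the grip-force adjustment is left to the caller.

from enum import Enum, auto

class GraspState(Enum):
    # The six controller states named in the abstract.
    CLOSE = auto()
    LOAD = auto()
    LIFT_AND_HOLD = auto()
    REPLACE = auto()
    UNLOAD = auto()
    OPEN = auto()

def step(state, events):
    # `events` is a set of hypothetical tactile cue names derived from the
    # SA-I / FA-I / FA-II style signals, e.g. {"contact", "slip", ...}.
    if state is GraspState.CLOSE and "contact" in events:
        return GraspState.LOAD            # fingertips have touched the object
    if state is GraspState.LOAD and "grip_force_reached" in events:
        return GraspState.LIFT_AND_HOLD   # initial grasp force achieved
    if state is GraspState.LIFT_AND_HOLD and "place_requested" in events:
        return GraspState.REPLACE         # caller wants to set the object down
    if state is GraspState.REPLACE and "surface_contact" in events:
        return GraspState.UNLOAD          # object has met the support surface
    if state is GraspState.UNLOAD and "load_removed" in events:
        return GraspState.OPEN            # safe to release and open fully
    return state                          # e.g. slip events keep the state but
                                          # should trigger a grip-force increase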


IEEE Transactions on Haptics | 2012

Creating Realistic Virtual Textures from Contact Acceleration Data

Joseph M. Romano; Katherine J. Kuchenbecker

Modern haptic interfaces are adept at conveying the large-scale shape of virtual objects, but they often provide unrealistic or no feedback when it comes to the microscopic details of surface texture. Direct texture-rendering challenges the state of the art in haptics because it requires a finely detailed model of the surface's properties, real-time dynamic simulation of complex interactions, and high-bandwidth haptic output to enable the user to feel the resulting contacts. This paper presents a new, fully realized solution for creating realistic virtual textures. Our system employs a sensorized handheld tool to capture the feel of a given texture, recording three-dimensional tool acceleration, tool position, and contact force over time. We reduce the three-dimensional acceleration signals to a perceptually equivalent one-dimensional signal, and then we use linear predictive coding to distill this raw haptic information into a database of frequency-domain texture models. Finally, we render these texture models in real time on a Wacom tablet using a stylus augmented with small voice coil actuators. The resulting virtual textures provide a compelling simulation of contact with the real surfaces, which we verify through a human subject study.
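A rough sketch of the capture pipeline summarized above is shown below, under simplifying assumptions: the three-axis signal is collapsed with a plain vector magnitude rather than the paper's perceptually motivated reduction, and the model order is arbitrary.

import numpy as np
from scipy.linalg import solve_toeplitz

def reduce_to_1d(acc_xyz):
    # Collapse an (N, 3) acceleration recording to one channel (simplified:
    # vector magnitude with the DC component removed).
    mag = np.linalg.norm(acc_xyz, axis=1)
    return mag - mag.mean()

def fit_lpc(signal, order=30):
    # Autocorrelation (Yule-Walker) fit of an all-pole LPC model.
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return a  # x[n] is predicted from the previous `order` samples via a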


International Symposium on Experimental Robotics | 2009

Closed-Form Differential Kinematics for Concentric-Tube Continuum Robots with Application to Visual Servoing

Robert J. Webster; John P. Swensen; Joseph M. Romano; Noah J. Cowan

Active cannulas, so named because of their potential medical applications, are a new class of continuum robots consisting of precurved, telescoping, elastic tubes. As individual component tubes are actuated at the base relative to one another, an active cannula changes shape to minimize stored elastic energy. Here, we derive the differential kinematics of a general n-tube active cannula while accounting for torsional compliance. We experimentally validate the Jacobian using a three-link prototype in a simple stereo visual servoing scheme.
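The paper derives the Jacobian in closed form; purely as an illustration of how such a Jacobian drives a visual servoing loop, the sketch below approximates it numerically around a forward-kinematics function tip_position(q), which is a placeholder assumption rather than the paper's model.

import numpy as np

def numerical_jacobian(tip_position, q, eps=1e-6):
    # Finite-difference approximation of the 3 x n tip Jacobian.
    q = np.asarray(q, dtype=float)
    p0 = tip_position(q)
    J = np.zeros((3, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (tip_position(q + dq) - p0) / eps
    return J

def servo_step(tip_position, q, target, gain=0.5):
    # One resolved-rate step toward the 3D target seen by the stereo cameras.
    error = target - tip_position(q)
    J = numerical_jacobian(tip_position, q)
    return q + gain * (np.linalg.pinv(J) @ error)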


International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2010

Dimensional reduction of high-frequency accelerations for haptic rendering

Nils Landin; Joseph M. Romano; William McMahan; Katherine J. Kuchenbecker

Haptics research has seen several recent efforts at understanding and recreating real vibrations to improve the quality of haptic feedback in both virtual environments and teleoperation. To simplify the modeling process and enable the use of single-axis actuators, these previous efforts have used just one axis of a three-dimensional vibration signal, even though the main vibration mechanoreceptors in the hand are known to detect vibrations in all directions. Furthermore, the fact that these mechanoreceptors are largely insensitive to the direction of high-frequency vibrations points to the existence of a transformation that can reduce three-dimensional high-frequency vibration signals to a one-dimensional signal without appreciable perceptual degradation. After formalizing the requirements for this transformation, this paper describes and compares several candidate methods of varying degrees of sophistication, culminating in a novel frequency-domain solution that performs very well on our chosen metrics.
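One simple frequency-domain reduction in the spirit of this abstract builds a one-dimensional signal whose spectral magnitude in each bin matches the combined magnitude of the three axes; the phase choice below (phase of the summed axes) is an assumption and may differ from the paper's final method.

import numpy as np

def reduce_freq_domain(acc_xyz):
    # acc_xyz: (N, 3) array of high-frequency acceleration samples.
    X = np.fft.rfft(acc_xyz, axis=0)                     # per-axis spectra
    magnitude = np.sqrt(np.sum(np.abs(X) ** 2, axis=1))  # combined energy per bin
    phase = np.angle(np.sum(X, axis=1))                  # phase of the axis sum
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=acc_xyz.shape[0])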


International Conference on Robotics and Automation | 2010

Automatic filter design for synthesis of haptic textures from recorded acceleration data

Joseph M. Romano; Takashi Yoshioka; Katherine J. Kuchenbecker

Sliding a probe over a textured surface generates a rich collection of vibrations that one can easily use to create a mental model of the surface. Haptic virtual environments attempt to mimic these real interactions, but common haptic rendering techniques typically fail to reproduce the sensations that are encountered during texture exploration. Past approaches have focused on building a representation of textures using a priori ideas about surface properties. Instead, this paper describes a process of synthesizing probe-surface interactions from data recorded from real interactions. We explain how to apply the mathematical principles of Linear Predictive Coding (LPC) to develop a discrete transfer function that represents the acceleration response under specific probe-surface interaction conditions. We then use this predictive transfer function to generate unique acceleration signals of arbitrary length. In order to move between transfer functions from different probe-surface interaction conditions, we develop a method for interpolating the variables involved in the texture synthesis process. Finally, we compare the results of this process with real recorded acceleration signals, and we show that the two correlate strongly in the frequency domain.
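The synthesis step can be sketched as exciting the fitted all-pole (LPC) transfer function with white noise to produce an acceleration signal of arbitrary length. The excitation variance here is an assumed free parameter, and lpc_coeffs follows the convention of the fitting sketch given earlier.

import numpy as np
from scipy.signal import lfilter

def synthesize_texture(lpc_coeffs, n_samples, noise_std=1.0, seed=None):
    # Drive the all-pole filter 1 / (1 - a1 z^-1 - ... - ap z^-p) with noise.
    rng = np.random.default_rng(seed)
    excitation = rng.normal(0.0, noise_std, n_samples)
    denom = np.concatenate(([1.0], -np.asarray(lpc_coeffs)))
    return lfilter([1.0], denom, excitation)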


International Conference on Robotics and Automation | 2007

Teleoperation of Steerable Needles

Joseph M. Romano; Robert J. Webster; Allison M. Okamura

Needles are commonly used in medical practice as a minimally invasive means to reach subsurface targets for diagnosis or therapy delivery. Recent results indicate that steerable needles may enhance targeting accuracy and allow needles to avoid obstacles along the path to the target. This work considers teleoperation of needles made of a superelastic alloy that steer through tissue using forces generated by the standard asymmetric bevel tip. The needle may be modeled as a nonholonomic system, with inputs of insertion along and spin about the needle axis. A teleoperation system consisting of a commercial master haptic device, a custom needle-steering robot slave, and visual feedback to the operator was assembled. Human subjects experiments were performed to evaluate targeting accuracy in phantom tissue for three needle control methods: teleoperation of both insertion and spin, teleoperation of insertion with open-loop-controlled spin, and open-loop control of both insertion and spin. Targeting accuracy improved with increasing degrees of freedom of human (teleoperation) control, primarily because tissue deformation and modeling limitations result in open-loop control errors. Subjects typically performed multiple spins of the needle during insertion in order to fine tune the needle path. In addition, position, rate, and a nonlinear hybrid control were compared during teleoperation of the insertion degree of freedom. The hybrid method resulted in significantly better targeting accuracy.
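The nonholonomic model the abstract refers to is commonly written as a unicycle-like kinematic system on SE(3); one standard simplified form from the needle-steering literature is

\dot{g} = g\,\hat{\xi}, \qquad \xi = u_1 \begin{bmatrix} e_3 \\ \kappa\, e_1 \end{bmatrix} + u_2 \begin{bmatrix} 0 \\ e_3 \end{bmatrix},

where g \in SE(3) is the needle-tip frame, u_1 is the insertion speed, u_2 is the spin rate about the needle axis, and \kappa is the constant path curvature produced by the bevel tip.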


International Conference on Robotics and Automation | 2010

Visual sensing of continuum robot shape using self-organizing maps

Jordan M. Croom; D. Caleb Rucker; Joseph M. Romano; Robert J. Webster

Shape control of continuum robots requires a means of sensing the curved shape of the robot. Since continuum robots are deformable, they take on shapes that are general curves in space, which are not fully defined by actuator positions. Vision-based shape-estimation provides a promising avenue for shape-sensing. While this is often facilitated by fiducial markers, sometimes fiducials are not feasible due to either the robot's application or its size. To address this, we present a robust and efficient stereo-vision-based, shape-sensing algorithm for continuum robots that does not rely on fiducials or assume orthogonal camera placement. The algorithm employs self-organizing maps to triangulate three-dimensional backbone curves. Experiments with an object with a known shape demonstrate an average accuracy of 1.53 mm on a 239 mm arc length curve.
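The core self-organizing-map idea can be sketched as a one-dimensional Kohonen chain pulled onto a cloud of 3D samples near the backbone. This is only the curve-fitting core: the stereo correspondence and triangulation handling described in the paper are omitted, and the schedule parameters below are assumptions.

import numpy as np

def fit_backbone_som(points, n_nodes=30, n_iters=2000, seed=0):
    # points: (N, 3) array of 3D samples near the robot backbone.
    rng = np.random.default_rng(seed)
    lo, hi = points.min(axis=0), points.max(axis=0)
    nodes = np.linspace(lo, hi, n_nodes)          # initialize nodes on a segment
    for t in range(n_iters):
        lr = 0.5 * (1.0 - t / n_iters)            # decaying learning rate
        sigma = max(1.0, (n_nodes / 4) * (1.0 - t / n_iters))  # neighborhood width
        p = points[rng.integers(len(points))]
        bmu = np.argmin(np.linalg.norm(nodes - p, axis=1))     # best-matching unit
        dist = np.abs(np.arange(n_nodes) - bmu)
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))            # chain neighborhood
        nodes += lr * h[:, None] * (p - nodes)
    return nodes  # ordered nodes approximating the backbone curve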


IEEE Haptics Symposium | 2012

Refined methods for creating realistic haptic virtual textures from tool-mediated contact acceleration data

Heather Culbertson; Joseph M. Romano; Pablo Castillo; Max Mintz; Katherine J. Kuchenbecker

Dragging a tool across a textured object creates rich high-frequency vibrations that distinctly convey the physical interaction between the tool tip and the object surface. Varying one's scanning speed and normal force alters these vibrations, but it does not change the perceived identity of the tool or the surface. Previous research developed a promising data-driven approach to embedding this natural complexity in a haptic virtual environment: the approach centers on recording and modeling the tool contact accelerations that occur during real texture interactions at a limited set of force-speed combinations. This paper aims to optimize these prior methods of texture modeling and rendering to improve system performance and enable potentially higher levels of haptic realism. The key elements of our approach are drawn from time series analysis, speech processing, and discrete-time control. We represent each recorded texture vibration with a low-order auto-regressive moving-average (ARMA) model, and we optimize this set of models for a specific tool-surface pairing (plastic stylus and textured ABS plastic) using metrics that depend on spectral match, final prediction error, and model order. For rendering, we stably resample the texture models at the desired output rate, and we derive a new texture model at each time step using bilinear interpolation on the line spectral frequencies of the resampled models adjacent to the user's current force and speed. These refined processes enable our TexturePad system to generate a stable and spectrally accurate vibration waveform in real time, moving us closer to the goal of virtual textures that are indistinguishable from their real counterparts.
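The rendering-time interpolation can be illustrated with a small bilinear blend over the force-speed grid of stored model parameter vectors (e.g., line spectral frequencies). The grid layout and variable names below are illustrative assumptions, not the paper's implementation.

import numpy as np

def bilinear_blend(force, speed, f_grid, s_grid, params):
    # params[i, j] is the parameter vector of the model recorded at
    # force f_grid[i] and speed s_grid[j]; grids are sorted ascending.
    i = np.clip(np.searchsorted(f_grid, force) - 1, 0, len(f_grid) - 2)
    j = np.clip(np.searchsorted(s_grid, speed) - 1, 0, len(s_grid) - 2)
    tf = (force - f_grid[i]) / (f_grid[i + 1] - f_grid[i])
    ts = (speed - s_grid[j]) / (s_grid[j + 1] - s_grid[j])
    return ((1 - tf) * (1 - ts) * params[i, j] +
            tf * (1 - ts) * params[i + 1, j] +
            (1 - tf) * ts * params[i, j + 1] +
            tf * ts * params[i + 1, j + 1])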


ISRR | 2011

Haptography: Capturing and Recreating the Rich Feel of Real Surfaces

Katherine J. Kuchenbecker; Joseph M. Romano; William McMahan

Haptic interfaces, which allow a user to touch virtual and remote environments through a hand-held tool, have opened up exciting new possibilities for applications such as computer-aided design and robot-assisted surgery. Unfortunately, the haptic renderings produced by these systems seldom feel like authentic re-creations of the richly varied surfaces one encounters in the real world. We have thus envisioned the new approach of haptography, or haptic photography, in which an individual quickly records a physical interaction with a real surface and then recreates that experience for a user at a different time and/or place. This paper presents an overview of the goals and methods of haptography, emphasizing the importance of accurately capturing and recreating the high frequency accelerations that occur during tool-mediated interactions. In the capturing domain, we introduce a new texture modeling and synthesis method based on linear prediction applied to acceleration signals recorded from real tool interactions. For recreating, we show a new haptography handle prototype that enables the user of a Phantom Omni to feel fine surface features and textures.

Collaboration


Dive into Joseph M. Romano's collaboration.

Top Co-Authors

William McMahan
University of Pennsylvania

Noah J. Cowan
Johns Hopkins University

Jordan Brindza
University of Pennsylvania

Steven R. Gray
University of Pennsylvania

Alla Safonova
University of Pennsylvania