Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David J. Cannon is active.

Publication


Featured research published by David J. Cannon.


International Conference on Robotics and Automation | 1999

Dynamic analysis of the cable array robotic crane

Wei-Jung Shiang; David J. Cannon; Jason J. Gorman

Offshore loading and unloading of cargo vessels and on board cargo relocation during conditions of Sea State 3 or greater have been found to be difficult with existing crane technology due to oscillation of the payload. A new type of crane which uses four actuated cables to control the motion of the payload is presented. The closed chain configuration will intuitively provide more stability with respect to the motion of the sea compared to existing cranes. The kinematics and dynamics are derived using cable coordinates. Since there are four cables and three degrees of freedom, the system is redundant. This problem is solved by applying a geometric constraint to the equations of motion such that the reduced number of equations equals the degrees of freedom. The force distribution method is applied using linear programming to solve for the required cable tensions. Simulation results showing cable tensions and cable lengths during a typical crane operation are presented.
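The redundancy-resolution step described above, minimizing total cable tension subject to a wrench balance and taut-cable limits, can be sketched numerically. The structure matrix, payload weight, and tension floor below are illustrative values only, not the paper's crane geometry:

```python
import numpy as np

# Planar toy model: payload held by 4 cables, 3 DOF (x, y, rotation).
# A (3x4) maps cable tensions t to the net wrench on the payload.
# Both A and the gravity wrench w are invented for illustration.
A = np.array([
    [-0.6,  0.6, -0.8,  0.8],   # x components of unit cable directions
    [ 0.8,  0.8,  0.6,  0.6],   # y components
    [ 0.3, -0.3, -0.4,  0.4],   # moments about the payload centroid
])
w = np.array([0.0, 981.0, 0.0])  # wrench to balance (~100 kg payload, N)
T_MIN = 10.0                     # cables must stay taut (N)

# One redundant cable => null(A) is one-dimensional: t = t_p + lam * n.
t_p, *_ = np.linalg.lstsq(A, w, rcond=None)   # particular solution
_, _, Vt = np.linalg.svd(A)
n = Vt[-1]                                    # null-space direction

# Feasible interval for lam from t_p + lam*n >= T_MIN (per cable).
lo, hi = -np.inf, np.inf
for tp_i, n_i in zip(t_p, n):
    if n_i > 1e-12:
        lo = max(lo, (T_MIN - tp_i) / n_i)
    elif n_i < -1e-12:
        hi = min(hi, (T_MIN - tp_i) / n_i)

# Total tension is linear in lam, so the minimum sits at an endpoint.
lam = lo if np.sum(n) > 0 else hi
t = t_p + lam * n
print(np.round(t, 1))            # optimal taut-cable tensions
```

With four cables and three degrees of freedom the null space is one-dimensional, so the linear program collapses to choosing one endpoint of an interval; the full problem in the paper uses linear programming over the general case.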


International Conference on Robotics and Automation | 2001

The cable array robot: theory and experiment

Jason J. Gorman; David J. Cannon

Cable array robots are a class of robotic mechanisms which utilize multiple actuated cables to manipulate objects. This paper discusses the defining characteristics and the important issues relating to this class of robots. One particular type of cable array robot with three cables is presented in detail including kinematic relations and dynamic modeling. The resulting models are used to design a sliding mode control system for the robot, providing robustness to uncertainty in the mass of manipulated objects while tracking a given trajectory. Experimental results are provided for the system, showing that the robot is capable of tracking a trajectory within several centimeters, which is reasonable performance for many industrial applications. These results are discussed in detail and future research plans are presented.
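The robustness-to-mass-uncertainty claim can be illustrated with a minimal one-degree-of-freedom sliding mode controller; a point mass stands in for the manipulated object, and all masses, gains, and the reference trajectory below are made-up values, not the paper's design:

```python
import math

def sat(z):
    """Boundary-layer saturation, used instead of sign() to limit chattering."""
    return max(-1.0, min(1.0, z))

M_TRUE, M_NOM = 8.0, 5.0        # true payload mass vs. controller's nominal guess (kg)
LAM, K, PHI = 4.0, 60.0, 0.05   # surface slope, switching gain, boundary-layer width
DT, T_END = 1e-3, 3.0

x, xd, t = 0.0, 0.0, 0.0        # position, velocity, time
while t < T_END:
    # Reference trajectory and its derivatives (a smooth sinusoid).
    xr, xrd, xrdd = 0.3*math.sin(t), 0.3*math.cos(t), -0.3*math.sin(t)
    e, ed = x - xr, xd - xrd
    s = ed + LAM * e                              # sliding variable
    u = M_NOM * (xrdd - LAM * ed) - K * sat(s / PHI)
    xdd = u / M_TRUE                              # plant uses the *true* mass
    x, xd, t = x + xd*DT, xd + xdd*DT, t + DT     # Euler integration

err = abs(x - 0.3*math.sin(t))
print(f"final tracking error: {err*100:.3f} cm")
```

Despite a 60% error in the assumed mass, the switching term drives the state onto the sliding surface and the tracking error settles well under a centimeter, which is the flavor of robustness the abstract reports at the scale of a real cable robot.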


International Conference on Robotics and Automation | 2000

Optimal force distribution applied to a robotic crane with flexible cables

Wei-Jung Shiang; David J. Cannon; Jason J. Gorman

A multiple cable robotic crane designed to provide improved cargo handling is investigated. The equations of motion are derived for the cargo and flexible cables using Lagrange's equations and the assumed modes method. The resulting equations are kinematically redundant because the cargo has fewer degrees of freedom than there are cables. A nonlinear transformation is used to reduce the number of variables. An optimal force distribution method is then applied to the equations to solve for a set of cable tensions that will cause the system to track a desired trajectory. These tensions are tested on the dynamic model using computer simulation. The results are compared against the desired cable lengths and against results obtained in previous research using a rigid cable model.
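The modeling recipe named above, Lagrange's equations plus the assumed modes method, has a standard generic form; the symbols here are generic, not the paper's notation:

```latex
% Transverse deflection of each flexible cable, expanded in n admissible
% shape functions \phi_i(x) with generalized coordinates q_i(t):
w(x,t) \approx \sum_{i=1}^{n} \phi_i(x)\, q_i(t)

% Lagrange's equations for each generalized coordinate q_j, with
% Lagrangian L = T - V and generalized (cable-tension) forces Q_j:
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_j}\right)
  - \frac{\partial L}{\partial q_j} = Q_j
```

Substituting the modal expansion into the kinetic and potential energies turns the cable PDEs into a finite set of ODEs in the $q_j$, which is what makes the force-distribution step tractable.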


International Conference on Robotics and Automation | 1993

A virtual end-effector pointing system in point-and-direct robotics for inspection of surface flaws using a neural network based skeleton transform

Collin Wang; David J. Cannon

The development of a system for interweaving virtual reality tools with live video scenes to direct robots using video graphic gestures is described. This virtual point-and-direct (V-PAD) concept is applied to workpiece inspection. A normalized coding method is employed for surface flaw identification using a neural network based skeleton transform. The method is scale, location, and orientation invariant, so robotic placement of workpieces for inspection requires no precise fixturing.
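The invariance idea can be illustrated with a toy normalized feature, though this is a hypothetical stand-in, not the paper's skeleton-transform coding: centering the shape removes location, dividing by the mean radius removes scale, and a radial histogram discards orientation.

```python
import math

def invariant_signature(pixels, bins=4):
    """Toy location/scale/rotation-invariant coding of a binary shape,
    given as a list of (x, y) foreground pixel coordinates."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n           # centroid: kills location
    cy = sum(y for _, y in pixels) / n
    radii = [math.hypot(x - cx, y - cy) for x, y in pixels]
    mean_r = sum(radii) / n                      # mean radius: kills scale
    hist = [0] * bins                            # radial histogram: kills rotation
    for r in radii:
        hist[min(int(r / mean_r / 2 * bins), bins - 1)] += 1
    return [h / n for h in hist]                 # normalized counts, NN-ready

shape = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]
scaled = [(3 * x + 10, 3 * y - 5) for x, y in shape]   # shifted and scaled copy
print(invariant_signature(shape) == invariant_signature(scaled))
```

The paper's actual inputs are skeleton pixel counts rather than radial histograms, but the normalization principle, features unchanged under translation, scaling, and rotation of the workpiece, is the same.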


International Conference on Robotics and Automation | 1997

Virtual collaborative control to improve intelligent robotic system efficiency and quality

Michael J. McDonald; Daniel E. Small; Charles C. Graves; David J. Cannon

Virtual collaborative environments allow designers to interact over a network to discuss technical issues and solve problems in shared virtual scenes. This paper describes applications of this concept in virtual collaborative engineering (VCE) and virtual collaborative visualization (VCV). It then introduces virtual collaborative control (VCC), a new systems approach that uses virtual collaborative environments to improve equipment utilization in unstructured and dynamic environments. A working VCC system, developed by Sandia National Laboratories and the Pennsylvania State University, that extends the techniques of collaborative design and visualization to accomplish collaborative control of robots and machines is described. New VCC utilization and quality-improving system design principles are proposed. The new principles are verified through a series of pilot tests conducted on four VCC configurations. Utilization improvements of a factor of two or better, along with better decision-making quality, are achieved. The new VCC capability will enable geographically dispersed experts to participate more fully in custom operations such as manufacturing, hazardous waste remediation, and space telerobotics.


Presence: Teleoperators & Virtual Environments | 1997

Virtual tools for supervisory and collaborative control of robots

David J. Cannon; Geb W. Thomas

Often, robotics has failed to meet industry expectations because programming robots is tedious, requires specialists, and often does not provide enough real flexibility to be worth the investment. In order to advance beyond a possible robotics plateau, an integrating technology will need to emerge that can take advantage of complex new robotic capabilities while making systems easier for nonrobotics people to use. This research introduces virtual tools with robotic attributes, and collaborative control concepts, that enable experts in areas other than robotics to simply point and direct sophisticated robots and machines to do new tasks. A system of robots that are directed using such virtual tools is now in place at the Pennsylvania State University (Penn State) and has been replicated at Sandia National Laboratories. (Mpeg movies from the Penn State Virtual Tools and Robotics Laboratory are at http://virtuoso.psu.edu/mpeg_page.html.) Virtual tools, which appear as graphic representations of robot end-effectors interwoven into live video, carry robotic attributes that define trajectory details and determine how to interpret sensor readings for a particular type of task. An operator, or team of experts, directs robot tasks by virtually placing these tool icons in the scene. The operator(s) direct tasks involving attributes in the same natural way that supervisors direct human subordinates to, for example, "put that there," "dig there," "cut there," and "grind there." In this human-machine interface, operators do not teach entire tasks via virtual telemanipulation. They define key action points. The virtual tool attributes allow operators to stay at a supervisory level, doing what humans can do best in terms of task perceptualization, while robots plan appropriate trajectories and a variety of tool-dependent executions.
Neither the task experts (e.g., in hazardous environments) nor the plant supervisors (e.g., in remote manufacturing applications) must turn over control to specialized robot technicians for long periods. Within this concept, shutting down a plant to reprogram robots to produce a new product, for example, is no longer required. Further, even though several key collaborators may be in different cities for a particular application, they may work with other experts over a project net that is formed for a particular mission. (We link simply by sending video frames over Netscape.) Using a shared set of virtual tools displayed simultaneously on each collaborator's workstation, experts virtually enter a common videographic scene to direct portions of a task while graphically and verbally discussing alternatives with the other experts. In the process of achieving collaborative consensus, the robots are automatically programmed as a byproduct of using the virtual tools to decide what should be done and where. The robots can immediately execute the task for all to see once consensus is reached. Virtual tools and their attributes achieve robotic flexibility without requiring specialized programming or telemanipulation on the part of in situ operators. By sharing the virtual tools over project nets, noncollocated experts may now contribute to robot and intelligent machine tasks. To date, we have used virtual tools to direct a large gantry robot at Sandia National Laboratories from Penn State. We will soon have multiple collaborators sharing the virtual tools remotely, with a protocol for participants to take turns placing and moving virtual tools to define portions of complex tasks in other industrial, space-telerobotic, and educational environments.
Attributes from each area of robotics research are envisioned with virtual tools as a repository for combining these independently developed robotic capabilities into integrated entities that are easy for an operator to understand, use, and modify.


International Journal of Computer Integrated Manufacturing | 1995

A dynamic point specification approach to sequencing robot moves for PCB assembly

Yi-Chen Su; Collin Wang; Pius J. Egbelu; David J. Cannon

Reducing the time it takes to place components onto printed circuit boards (PCBs) is a major robotic assembly objective. This paper explores a robot control approach featuring dynamic choice of pick-and-place points for retrieving and inserting components. In this concept, the component magazine and PCB tray both move to calculated points that minimize the overall robotic assembly time. Compared with another approach that employs fixed positions for retrieval and insertion, the new approach avoids robot waiting time. The same assembly sequence and component magazine assignment were used to test the difference between the two approaches. In both models, the PCB worktables and magazine are assumed to be mobile and to move at constant velocity between points. Experimental results showed that the proposed dynamic pick-and-place (DPP) approach is superior to the fixed pick-and-place (FPP) approach in nearly all cases.
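The dynamic pick-and-place idea, where both the arm and the worktable travel to a computed meeting point instead of a fixed station, reduces in one dimension to simple meeting-time arithmetic. The speeds and positions below are invented for illustration; the paper's model is richer:

```python
def meeting_point(x_r, v_r, x_m, v_m):
    """1-D rendezvous: arm (at x_r, speed v_r) and worktable slot (at x_m,
    speed v_m) start moving at once; the transfer can happen as soon as both
    reach the same point. Minimizing max(arm_time, table_time) equalizes
    the two travel times, which splits the gap in proportion to speed."""
    gap = x_m - x_r
    x = x_r + gap * v_r / (v_r + v_m)     # equal-time meeting point
    t = abs(gap) / (v_r + v_m)            # time until transfer
    return x, t

# Arm at 0 mm doing 400 mm/s, table slot at 300 mm doing 200 mm/s:
x, t = meeting_point(0.0, 400.0, 300.0, 200.0)
print(x, t)   # meet at 200.0 mm after 0.5 s

# A fixed pick point at the table's start position would instead cost
# 300/400 = 0.75 s of arm travel while the table waits.
```

This is why DPP beats FPP in the simulations: the slower device never forces the faster one to wait at a fixed station.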


Computers & Industrial Engineering | 1998

Heuristics for assembly sequencing and relative magazine assignment for robotic assembly

Collin Wang; Li-Shing Ho; David J. Cannon

There are three factors directly affecting robotic assembly efficiency: (i) robot motion control, (ii) the sequence in which individual components are placed on the assembly board, and (iii) the corresponding magazine slot locations from which the components are selected. This paper describes off-line heuristics that sequence the insertion order and assign components to magazine slots so as to improve robotic assembly efficiency. The algorithms are developed for a Cartesian robot, which allows dynamic allocation of pick-and-place locations. The paper also demonstrates the operational procedures for an on-line implementation of this approach. The approach could be extended to assembly tasks for printed circuit boards.
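The paper's specific heuristics are not reproduced here, but a nearest-neighbour ordering is the kind of simple baseline such sequencing heuristics are typically compared against; the board coordinates below are invented:

```python
import math

def greedy_sequence(points, start=(0.0, 0.0)):
    """Nearest-neighbour insertion ordering: from the current position,
    always place the closest remaining component next. A common baseline
    for assembly-sequencing heuristics, not the paper's algorithm."""
    remaining = list(points)
    order, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

board = [(4, 4), (1, 0), (0, 1), (5, 5)]   # insertion points on the board
print(greedy_sequence(board))
```

Real heuristics must also couple the sequence to the magazine slot assignment, factor (iii) above, which is what makes the joint problem hard.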


International Conference on Robotics and Automation | 1996

Virtual-reality-based point-and-direct robotic inspection in manufacturing

Collin Wang; David J. Cannon

This paper explores a flexible manufacturing paradigm in which robot grasping is interactively specified and skeletal images are used in combination to allow rapid setup of surface flaw identification tasks in small-quantity/large-variety manufacturing. Two complementary technologies are combined to make implementation of inspection as rapid as possible. First, a novel material handling approach is described for robotic picking and placing of parts onto an inspection table using virtual tools. This allows an operator to point and give directives to set up robotic inspection tasks. Second, since specification may be approximate using this method, a fast and flexible means of identifying images of perfect and flawed parts is explored that avoids rotational or translational restrictions on workpiece placement. This is accomplished by using skeleton pixel counts as neural network inputs. The total system, including material handling and skeleton-based inspection, features flexibility during manufacturing set-up and reduces the process time and memory requirements for workpiece inspection.


International Conference on Robotics and Automation | 1995

A scheme integrating neural networks for real-time robotic collision detection

Heng Ma; David J. Cannon; Soundar R. T. Kumara

We present a scheme incorporating neural network mappings for geometric modeling and interference determination in robotic collision detection. The scheme promises to greatly reduce the computational time associated with calculating collision points, which makes real-time obstacle avoidance more achievable. The scheme includes three modules: a geometric modeling module, a collision detection module, and a decision support module. The geometric modeling module employs the restricted Coulomb energy (RCE) paradigm to describe the spatial occupancy of a 3-D object by a number of overlapping spheres. The collision detection module receives the geometric pattern in the robot's environment and updates the spherical representation to perform geometric computation for the existence of interference. The decision support module, using neural networks, provides online information for the collision detection module. A CAD model of a PUMA 560 robot was built to test the scheme. The performance of the scheme was compared with that of the CAD model.
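The sphere-covering idea makes the interference test itself cheap: once each body is a set of spheres, collision checking is pairwise center-distance arithmetic. A minimal sketch follows; the sphere sets are invented, and the RCE network that would produce them is not modeled:

```python
import math

def collide(body_a, body_b):
    """Sphere-covered solids: each body is a list of ((x, y, z), radius)
    spheres; interference exists iff any sphere pair overlaps, i.e. the
    center distance is less than the sum of the radii."""
    return any(
        math.dist(ca, cb) < ra + rb
        for ca, ra in body_a
        for cb, rb in body_b
    )

link = [((0, 0, 0), 1.0), ((0, 0, 2), 1.0)]   # a robot link as two spheres
box_far = [((5, 0, 0), 1.0)]                  # obstacle clear of the link
box_near = [((1.5, 0, 2), 1.0)]               # obstacle overlapping the link

print(collide(link, box_far), collide(link, box_near))   # False True
```

Each pair test is a handful of multiplications, which is why replacing exact CAD geometry with overlapping spheres buys the real-time speedup the abstract claims.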

Collaboration


Dive into David J. Cannon's collaborations.

Top Co-Authors

Collin Wang, Pennsylvania State University
Andris Freivalds, Pennsylvania State University
Heng Ma, Pennsylvania State University
Jason J. Gorman, National Institute of Standards and Technology
Michael J. McDonald, Sandia National Laboratories
Soundar R. T. Kumara, Pennsylvania State University
Wei-Jung Shiang, Pennsylvania State University