Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael J. Seelinger is active.

Publication


Featured research published by Michael J. Seelinger.


Robotics and Autonomous Systems | 2006

Automatic visual guidance of a forklift engaging a pallet

Michael J. Seelinger; John-David Yoder

This paper presents the development of a prototype vision-guided forklift system for the automatic engagement of pallets. The system is controlled using the visual guidance method of mobile camera-space manipulation, which is capable of achieving a high level of precision in positioning and orienting mobile manipulator robots without relying on camera calibration. The paper covers the development of the method and of a prototype forklift, as well as experimental results in actual pallet engagement tasks. The technology could be added to AGV systems, enabling them to engage arbitrarily located pallets. It could also be added to standard forklifts as an operator-assist capability.


international conference on robotics and automation | 2002

High-precision visual control of mobile manipulators

Michael J. Seelinger; John-David Yoder; Eric T. Baumgartner; Steven B. Skaar

In this paper, we present a high-precision visual control method for mobile manipulators called mobile camera-space manipulation (MCSM). Development of MCSM was inspired by the unique challenges presented in conducting unmanned planetary exploration using rovers. In order to increase the efficacy of such missions, the amount of human interaction must be minimized due to the large time delay and high cost of transmissions between Earth and other planets. Using MCSM, the rover can maneuver itself into position, engage a target rock, and perform any of a variety of manipulation tasks, all with one round-trip transmission of instructions. MCSM also achieves a high level of precision in positioning the onboard manipulator relative to its target. Experimental results are presented in which a rover positions a tool mounted in its manipulator to within 1 mm of the desired target feature on a rock. MCSM makes efficient use of all of the system's degrees of freedom (DOF), which reduces the required number of actuators for the manipulator. This reduction in manipulator DOF decreases overall system weight, power consumption, and complexity while increasing reliability. MCSM does not rely on a calibrated camera system. Its excellent positioning precision is robust to model errors and uncertainties in measurements, a great strength for systems operating in harsh environments.


Robotics and Computer-integrated Manufacturing | 2003

An efficient multi-camera, multi-target scheme for the three-dimensional control of robots using uncalibrated vision

Emilio J. González-Galván; Sergio R. Cruz-Ramı́rez; Michael J. Seelinger; J. Jesús Cervantes-Sánchez

A vision-based control methodology is presented in this paper that can perform accurate three-dimensional (3D) positioning and path-tracking tasks. Tested with the challenging manufacturing task of welding in an unstructured environment, the proposed methodology has proven to be highly reliable, consistently achieving terminal precision of 1 mm. A key limiting factor for this high precision is camera-space resolution per unit physical space. This paper also presents a means of preserving and even increasing this ratio over a large region of the robot's workspace by using data from multiple vision sensors. In the experiments reported in this paper, a laser is used to facilitate the image processing aspect of the vision-based control strategy. The laser projects "laser spots" over the workpiece in order to gather information about the workpiece geometry. Previous applications of the control method were limited to considering only local geometric information of the workpiece, close to the region where the robot's tool is going to be placed. This paper presents a methodology to consider all available information about the geometry of the workpiece. This data is represented in a compact matrix format that is used within the algorithm to evaluate an optimal robot configuration. The proposed strategy processes and stores the information that comes from various vision sensors in an efficient manner. An important goal of the proposed methodology is to facilitate the use of industrial robots in unstructured environments. A graphical user interface (GUI) has been developed that simplifies the use of the robot/vision system. With this GUI, complex tasks such as welding can be successfully performed by users with limited experience in the control of robots and welding techniques.


IEEE Robotics & Automation Magazine | 1998

Towards a robotic plasma spraying operation using vision

Michael J. Seelinger; Emilio J. González-Galván; Matthew L. Robinson; Steven B. Skaar

An uncalibrated, vision-guided robotic system, based on the method of camera-space manipulation, has been developed to reduce the time and cost associated with teaching a robot a suitable trajectory for plasma coating. The system achieves a high level of precision in both position and orientation control of a 6-DOF robotic arm.


international conference on robotics and automation | 2005

Automatic Pallet Engagement by a Vision-Guided Forklift

Michael J. Seelinger; John-David Yoder

This paper presents a vision-guided control method called mobile camera-space manipulation (MCSM) that enables a robotic forklift vehicle to engage pallets based on a pallet’s actual current location by using feedback from vision sensors that are part of the robotic forklift. MCSM is capable of high precision mobile manipulation control without relying on strict camera calibration. The paper contains development of the method as well as experimental results with a forklift prototype in actual pallet engagement tasks. The technology could be added to AGV (automatically guided vehicle) systems enabling them to engage arbitrarily located pallets. It also could be added to standard forklifts as an operator assist capability.


The International Journal of Robotics Research | 1999

Efficient Camera-Space Target Disposition in a Matrix of Moments Structure Using Camera-Space Manipulation

Emilio J. González-Galván; Michael J. Seelinger

This paper introduces a new estimation approach for determining the sequence of internal manipulator configurations required to perform a task on an arbitrarily positioned and oriented workpiece, in the context of the method of camera-space manipulation, a robust and precise means of controlling three-dimensional robot maneuvers using vision. Despite a nonlinear estimation model, a recursive scheme is developed. This approach reduces the computational and memory burden required by the "batch" estimation approach while retaining identical results. The same formalism that permits this result is used to condense to a minimum the visual information required to create "camera-space objectives." The discussion includes actual experimental results wherein robust, millimeter-level six-axis positioning precision on a three-dimensional, rigid-body task is achieved using a very large GMF S-400 robot.
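The recursive-versus-batch trade-off the abstract describes can be illustrated with a minimal sketch. The code below is not the paper's camera-space estimation model; it uses a plain linear measurement model and standard recursive least squares (an assumption made purely for illustration) to show the general principle: processing one measurement at a time with a fixed-size state can reproduce the batch least-squares estimate without storing all past measurements.

```python
import numpy as np

# Illustrative recursive least squares (RLS), assuming a linearized
# measurement model y = A x + noise.  This demonstrates the principle
# claimed in the abstract: a recursive scheme whose result matches the
# batch estimate while keeping only a fixed-size estimate and matrix.

rng = np.random.default_rng(0)
n_params, n_meas = 3, 50
A = rng.normal(size=(n_meas, n_params))      # stacked measurement rows
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

# Batch estimate: requires keeping all rows of A and y in memory.
x_batch = np.linalg.lstsq(A, y, rcond=None)[0]

# Recursive estimate: process one measurement at a time, keeping only
# the current estimate x and the (inverse information) matrix P.
x = np.zeros(n_params)
P = 1e6 * np.eye(n_params)                   # large, diffuse prior
for a_row, y_k in zip(A, y):
    a = a_row.reshape(-1, 1)
    K = P @ a / (1.0 + (a.T @ P @ a))        # gain vector
    x = x + (K * (y_k - a_row @ x)).ravel()  # innovation update
    P = P - K @ a.T @ P                      # covariance update

print(np.allclose(x, x_batch, atol=1e-3))    # recursive matches batch
```

With a sufficiently diffuse prior, the recursive pass converges to the same estimate as the batch solve, which is the memory-saving property the paper exploits (there for a nonlinear camera-space model rather than this linear stand-in).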


international symposium on experimental robotics | 2013

Experiments Comparing Precision of Stereo-Vision Approaches for Control of an Industrial Manipulator

John-David Yoder; Jeffrey West; Eric T. Baumgartner; Mathias Perrollaz; Michael J. Seelinger; Matthew Robinson

Despite years of research in the area of robotics, the vast majority of industrial robots are still used in "teach-repeat" mode. This requires that the workpiece be in exactly the same position and orientation every time. In many high-volume robotics applications, this is not a problem, since the parts are likely to be fixtured anyway. However, in small- to medium-lot applications, this can be a significant limitation. The motivation for this project was a corporation that wanted to explore the use of visual control of a manipulator to allow for automated teaching of robot tasks for parts that are run in small lot sizes.


Journal of Field Robotics | 2012

Autonomous Go-and-Touch Exploration (AGATE)

Michael J. Seelinger; John-David Yoder; Eric T. Baumgartner

This paper presents work done to enable a mobile manipulator to autonomously place its tool, with high accuracy and reliability, relative to a visually distinctive target. The work is novel in that the cameras are not calibrated a priori; rather, the system calibrates the cameras by moving the manipulator through the field of view, and the algorithm combines motion of the mobile base and the manipulator in order to achieve the task. Although not producing a globally improved camera calibration, the method provides very high precision in positioning a mobile manipulator relative to a visually selected target. The work was motivated by a desire to increase the precision and efficiency of the Mars Exploration Rovers (MER), allowing more science to be carried out in the same span of time. In addition to the algorithm, the paper describes a large number of experiments used to show the effectiveness of the method. For the experiments described in this paper, the starting distance of the rover relative to the point of interest ranged from about 2 to 8 m. Depending on the distance of traverse required, the rover had to use one to three sets of stereo cameras. Over a large range of distances, and many experiments, the system was shown to be robust and accurate. The paper further breaks down the sources of error and examines their importance based on a large number of experiments.
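The idea of "calibrating the cameras by moving the manipulator through the field of view" can be sketched in miniature. The model and all names below are assumptions for illustration, not the paper's algorithm: nominal 3-D tool positions are taken from the arm's kinematics, the tool is observed in the image, and an affine camera model is fit by least squares, after which the image location of a newly commanded pose can be predicted.

```python
import numpy as np

# Hypothetical sketch of self-calibration from arm motion: fit an
# affine camera model u = P x + t from pairs of nominal 3-D tool
# positions (forward kinematics) and observed image points.

rng = np.random.default_rng(1)

# Nominal tool positions (meters) for 12 sampled arm poses.
X = rng.uniform(-0.5, 0.5, size=(12, 3))

# "Observed" image points: a hidden affine camera plus pixel noise.
P_true = np.array([[800.0, 20.0, -5.0],
                   [-10.0, 790.0, 15.0]])
t_true = np.array([320.0, 240.0])
U = X @ P_true.T + t_true + 0.5 * rng.normal(size=(12, 2))

# Fit the stacked parameters:  u = [x 1] @ [P | t]^T
Xh = np.hstack([X, np.ones((12, 1))])
M, *_ = np.linalg.lstsq(Xh, U, rcond=None)   # shape (4, 2)

# Predict where the tool will appear for a new commanded pose, so the
# controller can drive the image-space error toward the target.
x_new = np.array([0.1, -0.2, 0.3])
u_pred = np.append(x_new, 1.0) @ M
```

The appeal of such view models, as in the papers above, is that they are estimated from the robot's own motion rather than from a separate calibration procedure, so positioning precision near the target does not depend on a globally accurate camera model.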


international symposium on experimental robotics | 2008

Long-Range Autonomous Instrument Placement

John-David Yoder; Michael J. Seelinger

This paper presents work done to enable a mobile manipulator to place the tool tip of its end effector accurately relative to a visually distinctive target. The mobile manipulator began the approach from approximately eight meters. Because of the changing distance to the target, a single set of fixed-focal-length cameras could not be used. The application required three sets of cameras, which means that the target had to be transferred between sets of cameras twice. Two approaches were used to transfer the targets, and results are discussed. The system has shown the ability to repeatedly position the tool tip to within a few millimeters of the target in the direction perpendicular to the object of interest.


Archive | 1999

Means and method of robot control relative to an arbitrary surface using camera-space manipulation

Steven B. Skaar; Michael J. Seelinger; Matthew L. Robinson; Emilio J. Gonzalez Galvan

Collaboration


Dive into Michael J. Seelinger's collaborations.

Top Co-Authors


Emilio J. González-Galván

Universidad Autónoma de San Luis Potosí


Eric T. Baumgartner

California Institute of Technology


Bill Goodwine

University of Notre Dame


Qun Ma

University of Notre Dame


Zacarias Dieck

University of Notre Dame


Jeffrey West

Ohio Northern University
