William P. Shackleford
National Institute of Standards and Technology
Publications
Featured research published by William P. Shackleford.
Proceedings of the SPIE Sensors and Controls for Intelligent Manufacturing II | 2001
Frederick M. Proctor; William P. Shackleford
General-purpose microprocessors are increasingly being used for control applications due to their widespread availability and software support for non-control functions like networking and operator interfaces. Two classes of real-time operating systems (RTOS) exist for these systems. The traditional RTOS serves as the sole operating system and provides all OS services. Examples include ETS, LynxOS, QNX, Windows CE and VxWorks. RTOS extensions add real-time scheduling capabilities to non-real-time OSes, and provide the minimal services needed for the time-critical portions of an application. Examples include RTAI and RTL for Linux, and HyperKernel, OnTime and RTX for Windows NT. Timing jitter is an issue in these systems, due to hardware effects such as bus locking, caches and pipelines, and software effects from mutual exclusion resource locks, non-preemptible critical sections, disabled interrupts, and multiple code paths in the scheduler. Jitter is typically on the order of a microsecond to a few tens of microseconds for hard real-time operating systems, and ranges from milliseconds to seconds in the worst case for soft real-time operating systems. This raises the question of how significant jitter is to controller performance. Naturally, the smaller the scheduling period required for a control task, the more significant the impact of timing jitter. Beyond this intuitive relationship, timing matters more for open-loop control, such as for stepper motors, than for closed-loop control, such as for servo motors. Techniques for measuring timing jitter are discussed, and comparisons between various platforms are presented. Techniques to reduce jitter or mitigate its effects are presented. The impact of jitter on stepper motor control is analyzed.
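The jitter measurement the abstract describes amounts to recording a periodic task's actual wake-up times and computing how far each interval deviates from the nominal period. A minimal sketch of that post-processing step follows; the function name and the sample data are illustrative assumptions, not the instrument used in the paper.

```python
import statistics

def jitter_stats(timestamps, period):
    """Given wake-up timestamps of a nominally periodic task and its
    nominal period (same time units), return the worst-case and mean
    absolute deviation of each interval from the ideal period."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    deviations = [abs(i - period) for i in intervals]
    return max(deviations), statistics.mean(deviations)

# Hypothetical trace: a 1000 us period task that sometimes wakes early or late.
ts = [0, 1000, 2005, 3000, 3990, 5000]
worst, avg = jitter_stats(ts, 1000)  # worst = 10 us, avg = 6.0 us
```

On a hard RTOS the worst-case figure would be the number of interest, since a single late wake-up can miss a stepper pulse even when the average looks fine.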
Autonomous Robots | 2008
Michael O. Shneier; Tommy Chang; Tsai Hong; William P. Shackleford; Roger V. Bostelman; James S. Albus
Autonomous mobile robots need to adapt their behavior to the terrain over which they drive, and to predict the traversability of the terrain so that they can effectively plan their paths. Such robots usually make use of a set of sensors to investigate the terrain around them and build up an internal representation that enables them to navigate. This paper addresses the question of how to use sensor data to learn properties of the environment and use this knowledge to predict which regions of the environment are traversable. The approach makes use of sensed information from range sensors (stereo or ladar), color cameras, and the vehicle’s navigation sensors. Models of terrain regions are learned from subsets of pixels that are selected by projection into a local occupancy grid. The models include color and texture as well as traversability information obtained from an analysis of the range data associated with the pixels. The models are learned without supervision, deriving their properties from the geometry and the appearance of the scene. The models are used to classify color images and assign traversability costs to regions. The classification does not use the range or position information, but only color images. Traversability determined during the model-building phase is stored in the models. This enables classification of regions beyond the range of stereo or ladar using the information in the color images. The paper describes how the models are constructed and maintained, how they are used to classify image regions, and how the system adapts to changing environments. Examples are shown from the implementation of this algorithm in the DARPA Learning Applied to Ground Robots (LAGR) program, and an evaluation of the algorithm against human-provided ground truth is presented.
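The core idea above — attach traversability labels (derived from range data) to appearance models during training, then classify by appearance alone at longer ranges — can be sketched with a toy per-color model. The quantized-color keys and function names are assumptions for illustration; the paper's actual models also include texture and use an occupancy-grid projection.

```python
def learn_models(samples):
    """samples: (color, traversable) pairs gathered where range data is
    available. Estimate per-color traversability as the fraction of
    traversable observations seen for each quantized color."""
    totals = {}
    for color, traversable in samples:
        n, s = totals.get(color, (0, 0))
        totals[color] = (n + 1, s + (1 if traversable else 0))
    return {c: s / n for c, (n, s) in totals.items()}

def classify(models, color, threshold=0.5):
    """Classify a pixel by color alone, using the learned models; colors
    never seen during training are conservatively non-traversable."""
    return models.get(color, 0.0) >= threshold

models = learn_models([("green", True), ("green", True),
                       ("green", False), ("gray", False), ("gray", False)])
```

Because `classify` needs no range input, it can label regions beyond stereo or ladar range, which is exactly the extrapolation the abstract describes.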
Proceedings of SPIE | 1995
Frederick M. Proctor; William P. Shackleford; Charles Yang; Tony Barbera; M L. Fitzgerald; Nat Frampton; Keith Bradford; Dwight Koogle; Mark Bankard
The National Institute of Standards and Technology has developed a modular definition of components for machine control, and a specification to their interfaces, with broad application to robots, machine tools, and coordinate measuring machines. These components include individual axis control, coordinate trajectory generation, discrete input/output, language interpretation, and task planning and execution. The intent of the specification is to support interoperability of components provided by independent vendors. NIST has installed a machine tool controller based on these interfaces on a 4-axis horizontal machining center at the Pontiac Powertrain Division of General Motors. The intent of this system is to validate that the interfaces are comprehensive enough to serve a demanding application, and to demonstrate several key concepts of open architecture controllers: component interoperability, controller scalability, and function extension. In particular, the GM-NIST Enhanced Machine Controller demonstrates interoperability of motion control hardware, scalability across computing platforms, and extensibility via user-defined graphical user interfaces. An important benefit of platform scalability is the ease with which the developers could test the controller in simulation before site installation. The EMC specifications are serving a larger goal of driving the development of true industry standards that will ultimately benefit users of machine tools, robots, and coordinate measuring machines. To this end, a consortium has been established and cooperative participation with the Department of Energy TEAM program and the US Air Force Title III program has been undertaken.
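Component interoperability of the kind described — independent vendors supplying axis control, trajectory generation, and so on behind common interfaces — is essentially programming to abstract interfaces with swappable implementations. The sketch below is hypothetical (these class and method names are not the actual EMC interface specification); it only illustrates how a simulated axis can stand in for real hardware, the property that let the developers test in simulation before site installation.

```python
from abc import ABC, abstractmethod

class AxisController(ABC):
    """One motion axis; a hypothetical interface, not the EMC spec."""
    @abstractmethod
    def set_position(self, target: float) -> None: ...
    @abstractmethod
    def position(self) -> float: ...

class SimulatedAxis(AxisController):
    """Drop-in simulated axis, interchangeable with a hardware-backed one."""
    def __init__(self):
        self._pos = 0.0
    def set_position(self, target):
        self._pos = target
    def position(self):
        return self._pos

class SimpleTrajectory:
    """Coordinated move: commands all axes to a target point. Written
    against the AxisController interface, so it cannot tell whether the
    axes are simulated or real."""
    def __init__(self, axes):
        self.axes = axes
    def move_to(self, point):
        for axis, coord in zip(self.axes, point):
            axis.set_position(coord)

axes = [SimulatedAxis() for _ in range(4)]  # e.g. a 4-axis machining center
SimpleTrajectory(axes).move_to([1.0, 2.0, 3.0, 0.5])
```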
Journal of Field Robotics | 2006
James S. Albus; Roger V. Bostelman; Tommy Chang; Tsai Hong Hong; William P. Shackleford; Michael O. Shneier
The Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Vehicles (LAGR) program aims to develop algorithms for autonomous vehicle navigation that learn how to operate in complex terrain. Over many years, the National Institute of Standards and Technology (NIST) has developed a reference model control system architecture called 4D/RCS that has been applied to many kinds of robot control, including autonomous vehicle control. For the LAGR program, NIST has embedded learning into a 4D/RCS controller to enable the small robot used in the program to learn to navigate through a range of terrain types. The vehicle learns in several ways. These include learning by example, learning by experience, and learning how to optimize traversal. Learning takes place in the sensory processing, world modeling, and behavior generation parts of the control system. The 4D/RCS architecture is explained in the paper, its application to LAGR is described, and the learning algorithms are discussed. Results are shown of the performance of the NIST control system on independently conducted tests. Further work on the system and its learning capabilities is discussed.
Journal of Computing and Information Science in Engineering | 2001
Harry H. Cheng; Frederick M. Proctor; John L. Michaloski; William P. Shackleford
This article reviews various mechanisms in languages and operating systems for deterministic real-time computing. Open architecture systems will be defined and their applications in manufacturing will be addressed. Market directions for open architecture manufacturing systems will be surveyed. Performance issues based on real-time, reliability, and safety will be discussed relating to manufacturing factory automation designed and implemented with component-based, plug-and-play open architecture. DOI: 10.1115/1.1351819
Performance Metrics for Intelligent Systems | 2009
Roger V. Bostelman; William P. Shackleford
In this paper, we describe the current 2D (two dimensional) sensor used for industrial vehicles and ideal sensor configurations for mounting 3D imagers on manufacturing vehicles in an attempt to make them safer. In a search for the ideal sensor configuration, three experiments were performed using an advanced 3D imager and a color camera. The experiments are intended to be useful to the standards community and manned and unmanned forklift and automated guided vehicle industries. The imager that was used was a 3D Flash LIDAR (Light Detection and Ranging) camera with 7.5 m range and rapid detection. It was selected because it shows promise for use on forklifts and other industrial vehicles. Experiments included: 1) detection of standard sized obstacles, 2) detection of obstacles with highly reflective surfaces within detection range, and 3) detection of forklift tines above the floor. We briefly describe these experiments and reference their detailed reports.
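A first step in obstacle-detection experiments like those listed is deciding, from a frame of per-pixel ranges, whether enough valid returns fall inside the imager's detection envelope to report an obstacle. The sketch below is an illustrative assumption, not the evaluation procedure from the referenced reports; the 7.5 m figure is the imager range stated in the abstract.

```python
def obstacles_in_range(range_image, max_range=7.5, min_pixels=4):
    """range_image: 2D list of per-pixel distances in meters (None for
    no return, as with highly reflective surfaces). Report True if at
    least min_pixels valid returns fall within the detection range."""
    hits = sum(1 for row in range_image
                 for d in row
                 if d is not None and d <= max_range)
    return hits >= min_pixels

# A cluster of returns near 3 m (obstacle) vs. returns beyond range.
near = [[3.0, None, 9.9], [2.5, 3.1, None], [None, 2.8, 3.2]]
far = [[9.0, None], [8.2, 10.0]]
```

The `None` entries stand in for dropouts, the failure mode probed by the reflective-surface experiment.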
ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference | 2002
Frederick M. Proctor; William P. Shackleford
Linux is a version of the Unix operating system distributed according to the open source model. Programmers are free to adapt the source code for their purposes, but are required to make their modifications or enhancements available as open source software as well. This model has fostered the widespread adoption of Linux for typical Unix server and workstation roles, and also in more arcane applications such as embedded or real-time computing. Embedded applications typically run in small physical and computing footprints, usually without fragile peripherals like hard disk drives. Special configurations are required to support these limited environments. Real-time applications require guarantees that tasks will execute within their deadlines, something not possible in general with the normal Linux scheduler. Real-time extensions to Linux enable deterministic scheduling, with task periods down to tens of microseconds. This paper describes embedded and real-time Linux, and an application for distributed control of a Stewart Platform cable robot. Special Linux configuration requirements are detailed, and the architecture for teleoperated control of the cable robot is presented, with emphasis on the resolved-rate control of the suspended platform.
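Resolved-rate control, mentioned at the end of the abstract, maps a commanded Cartesian velocity to joint (here, cable) velocities through the inverse Jacobian and integrates one step per control period. The two-degree-of-freedom sketch below uses a toy Jacobian, not the actual cable-robot kinematics, and is only meant to show the shape of the computation that runs inside each real-time cycle.

```python
def resolved_rate_step(jacobian_inv, cart_vel, joint_pos, dt):
    """One resolved-rate update for a 2-DOF toy system:
    joint velocities = J^-1 * desired Cartesian velocity,
    then integrate joint positions over one control period dt."""
    joint_vel = [sum(jacobian_inv[i][j] * cart_vel[j] for j in range(2))
                 for i in range(2)]
    return [p + v * dt for p, v in zip(joint_pos, joint_vel)]

# Identity inverse Jacobian, 1 unit/s commanded in x, 10 ms period.
new_pos = resolved_rate_step([[1.0, 0.0], [0.0, 1.0]],
                             [1.0, 0.0], [0.0, 0.0], 0.01)
```

Because the step runs once per scheduling period, the deterministic-scheduling guarantees discussed in the paper translate directly into bounded integration error.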
Computer Vision and Pattern Recognition | 2013
Afzal Godil; Roger V. Bostelman; Kamel S. Saidi; William P. Shackleford; Geraldine S. Cheok; Michael O. Shneier; Tsai Hong
We have been researching three dimensional (3D) ground-truth systems for performance evaluation of vision and perception systems in the fields of smart manufacturing and robot safety. In this paper we first present an overview of different systems that have been used to provide ground-truth (GT) measurements and then we discuss the advantages of physically-sensed ground-truth systems for our applications. Then we discuss in detail the three ground-truth systems that we have used in our experiments: ultra wide-band, indoor GPS, and a camera-based motion capture system. Finally, we discuss three different perception-evaluation experiments where we have used these GT systems.
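However the ground truth is sensed, the evaluation step reduces to comparing a perception system's estimated positions against time-aligned GT positions. A common metric for that comparison is root-mean-square Euclidean error, sketched below; the specific metric used in the three experiments is an assumption here, not stated in the abstract.

```python
import math

def rmse_vs_ground_truth(estimated, ground_truth):
    """Root-mean-square Euclidean error between an estimated 3D track
    and time-aligned ground-truth positions (same length, same units)."""
    assert len(estimated) == len(ground_truth)
    sq_errors = [sum((e - g) ** 2 for e, g in zip(ep, gp))
                 for ep, gp in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

err = rmse_vs_ground_truth([(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)],
                           [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)])
```

The usable accuracy of the GT system itself (ultra wide-band vs. motion capture, for instance) sets a floor on how small an error this comparison can meaningfully resolve.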
World Automation Congress | 2002
Frederick M. Proctor; John L. Michaloski; William P. Shackleford
While simulation has been successful in tying design and process planning into an iterative loop, machining has traditionally been a downstream terminus of the manufacturing cycle. Simulation of machining has proven difficult due to the highly dynamic and nonlinear nature of the material removal process. Consequently, actual machining must be performed to determine the suitability of designs and process plans. The nature of the programming interface to machine tools has been an obstacle to information flow in both directions. Toward the machine tool, useful design and process information is reduced to primitive tool path motion statements. From the machine tool, no automatic data paths exist, so machinists must translate results into recommendations for design or process changes manually. The advent of STEP (ISO 10303) data for numerical control, or STEP-NC, promises to rectify these problems by providing full product and process data to the machine tool controller at run time and an automatic path back to these upstream activities using technologies such as XML. This paper describes STEP-NC and early experiences that show how these benefits can be achieved.
Industrial Robot-an International Journal | 2016
Frederick M. Proctor; Stephen B. Balakirsky; Zeid Kootbally; Thomas R. Kramer; Craig I. Schlenoff; William P. Shackleford
Industrial robots can perform motion with sub-millimeter repeatability when programmed using the teach-and-playback method. While effective, this method requires significant up-front time, tying up the robot and a person during the teaching phase. Off-line programming can be used to generate robot programs, but the accuracy of this method is poor unless supplemented with good calibration to remove systematic errors, feed-forward models to anticipate robot response to loads, and sensing to compensate for unmodeled errors. These increase the complexity and up-front cost of the system, but the payback in the reduction of recurring teach programming time can be worth the effort. This payback especially benefits small-batch, short-turnaround applications typical of small-to-medium enterprises, which need the agility afforded by off-line application development to be competitive against low-cost manual labor. To fully benefit from this agile application tasking model, a common representation of tasks should be used that is understood by all of the resources required for the job: robots, tooling, sensors, and people. This paper describes an information model, the Canonical Robot Command Language (CRCL), which provides a high-level description of robot tasks and associated control and status information.
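The value of a shared task representation like CRCL is that any resource — a robot, a simulator, a visualizer — can consume the same command sequence. The sketch below illustrates that dispatch pattern only; CRCL itself is an XML-based information model, and these class names are illustrative assumptions, not the actual CRCL schema.

```python
from dataclasses import dataclass, field

@dataclass
class MoveTo:
    """Hypothetical high-level command: move the end effector to a pose."""
    x: float
    y: float
    z: float

@dataclass
class CloseGripper:
    """Hypothetical end-effector command."""

@dataclass
class TaskProgram:
    """An ordered task description shared by every consumer."""
    commands: list = field(default_factory=list)
    def add(self, cmd):
        self.commands.append(cmd)
        return self

def execute(program, target):
    """Dispatch each command to whatever resource understands the
    shared representation (real robot, simulator, logger, ...)."""
    for cmd in program.commands:
        target.handle(cmd)

class LogOnly:
    """Toy executor that records command types, standing in for a robot."""
    def __init__(self):
        self.log = []
    def handle(self, cmd):
        self.log.append(type(cmd).__name__)

prog = TaskProgram().add(MoveTo(0.1, 0.2, 0.3)).add(CloseGripper())
target = LogOnly()
execute(prog, target)
```

Swapping `LogOnly` for a hardware-backed executor changes nothing upstream, which is the agility argument the abstract makes.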