Randy S. Roberts
Lawrence Livermore National Laboratory
Publications
Featured research published by Randy S. Roberts.
international conference on robotics and automation | 2001
Christopher T. Cunningham; Randy S. Roberts
An adaptive path planning algorithm is presented for cooperating unmanned air vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples of the generated paths and of path adaptation are provided.
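The abstract above gives no implementation details; as a rough illustration of the general idea, the sketch below greedily grows UAV paths to minimize a single global cost (travel distance plus a penalty for unvisited deployment sites) and re-plans when an exception occurs. The cost weights and helper names are assumptions for this sketch, not the authors' algorithm.

```python
# Illustrative sketch (not the paper's algorithm): greedy assignment of sensor
# deployment waypoints to UAVs by minimizing a single global cost function,
# with re-planning when an exception (e.g., a failed deployment) occurs.
import math

def global_cost(paths, waypoints):
    """Total travel distance plus a large penalty for any unvisited waypoint."""
    visited = {w for path in paths.values() for w in path}
    travel = sum(
        math.dist(path[i], path[i + 1])
        for path in paths.values()
        for i in range(len(path) - 1)
    )
    penalty = 1e3 * sum(1 for w in waypoints if w not in visited)
    return travel + penalty

def plan_paths(uav_positions, waypoints):
    """Greedily extend paths: each step adds the waypoint whose marginal increase
    in the global cost (here, added travel distance) is smallest."""
    paths = {uav: [pos] for uav, pos in uav_positions.items()}
    remaining = set(waypoints)
    while remaining:
        uav, w = min(
            ((u, wp) for u in paths for wp in remaining),
            key=lambda uw: math.dist(paths[uw[0]][-1], uw[1]),
        )
        paths[uav].append(w)
        remaining.remove(w)
    return paths

def replan_on_exception(paths, failed_waypoint):
    """On an exception (e.g., a failed deployment), restart planning from each
    UAV's current position with the failed waypoint added back in."""
    starts = {uav: path[-1] for uav, path in paths.items()}
    return plan_paths(starts, [failed_waypoint])

if __name__ == "__main__":
    uavs = {"uav1": (0.0, 0.0), "uav2": (10.0, 0.0)}
    sites = [(2.0, 3.0), (8.0, 1.0), (5.0, 6.0)]
    plan = plan_paths(uavs, sites)
    print(plan, "cost:", round(global_cost(plan, sites), 2))
```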
international conference on robotics and automation | 2003
E.D. Jones; Randy S. Roberts; T.C.S. Hsia
This paper presents the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture and framework for simulating, controlling, and communicating with unmanned air vehicles (UAVs) servicing large distributed sensor networks. STOMP provides hardware-in-the-loop capability, enabling real UAVs and sensors to feed back state information, route data, and receive command and control requests while interacting with other real or virtual objects, thereby enhancing support for simulation of dynamic and complex events.
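As a hedged illustration of the hardware-in-the-loop idea described above, the sketch below runs real and simulated entities through one event loop that exchanges identical state and command messages. The class and message names are invented for this sketch and are not STOMP interfaces.

```python
# Illustrative sketch only: a hardware-in-the-loop style event loop in which real
# and simulated entities exchange the same state/command messages.
from dataclasses import dataclass, field

@dataclass
class StateMsg:
    entity_id: str
    position: tuple
    payload: dict = field(default_factory=dict)

class SimulatedUAV:
    def __init__(self, uid, position):
        self.uid, self.position = uid, position
    def step(self, command):
        # A virtual UAV advances its state according to the commanded waypoint.
        if command:
            self.position = command["goto"]
        return StateMsg(self.uid, self.position)

class RealUAVProxy:
    """Stands in for a radio/network link to a physical UAV; here it just echoes
    a canned position so the loop runs without hardware."""
    def __init__(self, uid):
        self.uid = uid
    def step(self, command):
        telemetry_position = (0.0, 0.0)  # would be read from the real vehicle
        return StateMsg(self.uid, telemetry_position, {"cmd_ack": command})

def run_mission(entities, commands, steps=3):
    """Single loop that treats real and virtual entities identically."""
    for _ in range(steps):
        for e in entities:
            print(e.step(commands.get(e.uid)))

run_mission([SimulatedUAV("sim-1", (0, 0)), RealUAVProxy("real-1")],
            {"sim-1": {"goto": (5, 5)}})
```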
international conference on robotics and automation | 1998
Jason P. Luck; Charles Q. Little; Randy S. Roberts
A three-dimensional world model is crucial for many robotic tasks. Modeling techniques tend to be either fully manual or autonomous. Manual methods are extremely time consuming but also highly accurate and flexible. Autonomous techniques are fast but inflexible and, with real-world data, often inaccurate. The method presented in this paper combines the two, yielding a highly efficient, flexible, and accurate mapping tool. The segmentation and modeling algorithms that compose the method are specifically designed for industrial environments, and are described in detail. A mapping system based on these algorithms has been designed. It enables a human supervisor to quickly construct a fully defined world model from unfiltered and unsegmented real-world range imagery. Examples of how industrial scenes are modeled with the mapping system are provided.
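A toy sketch of the combined manual/autonomous workflow: an automatic pass segments range data and proposes primitive fits, and a human supervisor accepts or rejects each proposal. The height-slab clustering and least-squares plane fit below are placeholders, not the paper's segmentation and modeling algorithms.

```python
# Toy sketch of the semi-automated idea: autonomous proposals filtered by a
# human-in-the-loop decision callback. Clustering and fitting are placeholders.
import numpy as np

def segment_by_height(points, bin_size=0.5):
    """Crude automatic segmentation: group 3-D points into horizontal slabs."""
    bins = np.floor(points[:, 2] / bin_size)
    return [points[bins == b] for b in np.unique(bins)]

def fit_plane(cluster):
    """Least-squares plane z = ax + by + c as a stand-in for primitive fitting."""
    A = np.c_[cluster[:, 0], cluster[:, 1], np.ones(len(cluster))]
    coeffs, *_ = np.linalg.lstsq(A, cluster[:, 2], rcond=None)
    return coeffs

def build_world_model(points, supervisor_accepts):
    """Autonomous proposals pass through a manual accept/reject step."""
    model = []
    for cluster in segment_by_height(points):
        if len(cluster) < 10:
            continue
        plane = fit_plane(cluster)
        if supervisor_accepts(cluster, plane):   # manual step: accept or reject
            model.append({"plane": plane, "n_points": len(cluster)})
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    floor = np.c_[rng.uniform(0, 5, 200), rng.uniform(0, 5, 200), rng.normal(0, 0.02, 200)]
    table = np.c_[rng.uniform(1, 2, 100), rng.uniform(1, 2, 100), 0.8 + rng.normal(0, 0.02, 100)]
    cloud = np.vstack([floor, table])
    print(build_world_model(cloud, supervisor_accepts=lambda c, p: True))
```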
Proceedings of SPIE | 2014
Abdul A. S. Awwal; Richard R. Leach; Randy S. Roberts; Karl Wilhelmsen; David McGuigan; Jeff Jarboe
The National Ignition Facility (NIF) utilizes 192 beams, four of which are diverted to create the Advanced Radiographic Capability (ARC) by generating a sequence of short laser pulses. The ARC beam, after being converted to X-rays, will act as a backlighter to create a radiographic movie, providing unprecedented insight into implosion dynamics and serving as a diagnostic for tuning experimental parameters to achieve fusion. One such beam is the centering beam of the pre-amplifier module, in which the split beam path obstructs the central square alignment fiducial. This fiducial is used for alignment and also as a reference for the programmable spatial shaper (PSS) system. Image processing algorithms are used to process the images and calculate the positions of the various fiducials in the beam path. We discuss the algorithm for processing ARC split beam injector (SBI) centering images with partial fiducial information.
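One plausible way to localize a fiducial when part of it is obstructed is to correlate the image against a fiducial template while masking out the obstructed region. The sketch below illustrates that idea with invented image sizes and masks; it is not the SBI algorithm described in the paper.

```python
# Hedged sketch: locate a partially obstructed fiducial by correlating against a
# template whose obstructed half is masked out. Sizes and shapes are illustrative.
import numpy as np

def masked_correlation_peak(image, template, mask):
    """Slide a template over the image, scoring only unmasked template pixels,
    and return the (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    valid = mask.astype(bool)
    t = template[valid] - template[valid].mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw][valid]
            score = np.dot(patch - patch.mean(), t)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

if __name__ == "__main__":
    image = np.zeros((40, 40))
    image[10:20, 15:25] = 1.0                    # a square "fiducial" in the beam image
    image[10:20, 20:25] = 0.0                    # half of it obstructed by the split path
    template = np.zeros((14, 14))
    template[2:12, 2:12] = 1.0                   # nominal square fiducial with dark border
    mask = np.ones((14, 14)); mask[:, 7:] = 0    # ignore the obstructed half
    print(masked_correlation_peak(image, template, mask))
```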
Proceedings of SPIE | 2014
Randy S. Roberts; Erlan S. Bliss; Michael C. Rushford; John M. Halpin; Abdul A. S. Awwal; Richard R. Leach
The Advanced Radiographic Capability (ARC) at the National Ignition Facility (NIF) is a laser system designed to produce a sequence of short pulses used to backlight imploding fuel capsules. Laser pulses from a short-pulse oscillator are dispersed in wavelength into long, low-power pulses, injected into the NIF main laser for amplification, and then compressed into high-power pulses before being directed into the NIF target chamber. In the target chamber, the laser pulses hit targets which produce x-rays used to backlight imploding fuel capsules. Compression of the ARC laser pulses is accomplished with a set of precision-surveyed optical gratings mounted inside vacuum vessels. The tilt of each grating is monitored by a measurement system consisting of a laser diode, camera, and crosshair, all mounted in a pedestal outside the vacuum vessel, and a mirror mounted on the back of a grating inside the vacuum vessel. The crosshair is mounted in front of the camera, and a diffraction pattern is formed when it is illuminated with the laser diode beam reflected from the mirror. This diffraction pattern contains information related to relative movements between the grating and the pedestal. Image analysis algorithms have been developed to determine the relative movements between the gratings and the pedestal. In this paper we elaborate on features in the diffraction pattern and describe the image analysis algorithms used to monitor grating tilt changes. Experimental results are provided which indicate the high degree of sensitivity achieved by the tilt sensor and image analysis algorithms.
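As a schematic illustration of pattern-shift monitoring of this general kind, the sketch below tracks the intensity centroid of a diffraction pattern between a reference image and a current image and maps the pixel shift to a tilt change through an assumed calibration factor. It is not the NIF image analysis algorithm.

```python
# Illustrative sketch (not the NIF algorithm): estimate a grating tilt change from
# the shift of the diffraction-pattern centroid, using a placeholder calibration.
import numpy as np

def intensity_centroid(image):
    """Intensity-weighted centroid (row, col) of an image."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

def tilt_change(reference, current, microrad_per_pixel=5.0):
    """Pixel shift of the pattern centroid mapped to a tilt change in microradians.
    The calibration factor is a placeholder value."""
    r0, c0 = intensity_centroid(reference)
    r1, c1 = intensity_centroid(current)
    return (r1 - r0) * microrad_per_pixel, (c1 - c0) * microrad_per_pixel

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    ref = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)     # reference pattern
    cur = np.exp(-((x - 33.5) ** 2 + (y - 32) ** 2) / 50.0)   # pattern shifted by a small tilt
    print(tilt_change(ref, cur))
```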
international geoscience and remote sensing symposium | 2010
Randy S. Roberts; Timothy G. Trucano; Paul A. Pope; Cecilia R. Aragon; Ming Jiang; Thomas Y. C. Wei; Lawrence K. Chilton; Alan Bakel
Verification and validation (V&V) of geospatial image analysis algorithms is a difficult task and is becoming increasingly important. While there are many types of image analysis algorithms, we focus on developing V&V methodologies for algorithms designed to provide textual descriptions of geospatial imagery. In this paper, we present a novel methodological basis for V&V that employs a domain-specific ontology, which provides a naming convention for a domain-bounded set of objects and a set of named relationships between these objects. We describe a validation process that proceeds through objectively comparing benchmark imagery, produced using the ontology, with algorithm results. As an example, we describe how the proposed V&V methodology would be applied to algorithms designed to provide textual descriptions of facilities.
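One way the objective comparison step could look in practice is scoring an algorithm's textual description against ontology-derived ground truth using precision and recall over named objects and relations. The sketch below uses invented ontology terms, not the actual facility ontology.

```python
# Hedged sketch of an objective comparison: precision/recall of named objects and
# relations extracted from an algorithm's description versus ontology-based truth.
def precision_recall(predicted, truth):
    predicted, truth = set(predicted), set(truth)
    tp = len(predicted & truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# Ground truth generated from the ontology-based benchmark image (placeholder terms).
truth_objects = {"cooling_tower", "storage_tank", "stack", "pipe_rack"}
truth_relations = {("pipe_rack", "adjacent_to", "storage_tank")}

# Objects/relations extracted from the algorithm's textual description.
algo_objects = {"cooling_tower", "storage_tank", "office_building"}
algo_relations = {("pipe_rack", "adjacent_to", "storage_tank")}

print("objects   P/R:", precision_recall(algo_objects, truth_objects))
print("relations P/R:", precision_recall(algo_relations, truth_relations))
```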
Presented at: SPIE Defense and Security Symposium, Orlando, FL, United States, Apr 17 - Apr 21, 2006 | 2006
Stephen Snarski; Karl F. Scheibner; Scott Shaw; Randy S. Roberts; Andy LaRow; Eric F. Breitfeller; Jasper Lupo; Darron Nielson; Bill Judge; Jim Forren
This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept, in which an autonomous UAV-based sensor exploitation and decision support capability is proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic fusion process to reliably locate and map the array of urban firing events and the firepower movements and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/supersonic weapons (2 AK-47, 2 M16, 1 Beretta, 1 mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400 m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. The sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results demonstrate convincingly the ability of the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 s) in a high acoustic and optical clutter environment with very low false alarms. Preliminary fusion processing was also examined and demonstrated an ability to distinguish co-located shooters (shooter density), determine range to <0.5 m accuracy at 400 m, and identify weapon type. The combined results of the high-intensity firefight data collect and a detailed systems study demonstrate the readiness of the FightSight concept for full system development and integration.
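As a toy illustration of a fusion step of this general kind, the sketch below associates IR and acoustic shot detections that fall within a spatial gate and combines their confidences. The gate size and confidence model are placeholders, not FightSight's multi-level fusion process.

```python
# Toy sketch: pair IR and acoustic detections inside a spatial gate and combine
# their confidences. Gate size and confidence model are placeholders.
import math

def fuse_detections(ir_dets, acoustic_dets, gate_m=10.0):
    """Pair each IR detection with the nearest acoustic detection inside the gate."""
    fused = []
    for ir in ir_dets:
        nearest = min(acoustic_dets, key=lambda a: math.dist(ir["pos"], a["pos"]))
        if math.dist(ir["pos"], nearest["pos"]) <= gate_m:
            fused.append({
                "pos": tuple((i + a) / 2 for i, a in zip(ir["pos"], nearest["pos"])),
                # Combine confidences as if independent: 1 - (1-p1)(1-p2).
                "conf": 1 - (1 - ir["conf"]) * (1 - nearest["conf"]),
            })
    return fused

ir = [{"pos": (120.0, 45.0), "conf": 0.7}]
acoustic = [{"pos": (123.0, 44.0), "conf": 0.8}, {"pos": (300.0, 10.0), "conf": 0.6}]
print(fuse_detections(ir, acoustic))
```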
Proceedings of SPIE | 2015
Abdul A. S. Awwal; Karl Wilhelmsen; Randy S. Roberts; Richard R. Leach; Victoria Miller Kamm; Tony Ngo; Roger Lowe-Webb
The current automation of image-based alignment of NIF high-energy laser beams provides the capability of executing multiple target shots per day. An important aspect of performing multiple shots in a day is reducing the additional time spent aligning specific beams due to perturbations in those beam images. One such alignment is beam centration through the second- and third-harmonic generating crystals in the final optics assembly (FOA), which employs two retro-reflecting corner cubes to represent the beam center. The FOA houses the frequency conversion crystals for third harmonic generation as the beams enter the target chamber. Beam-to-beam variations and systematic beam changes over time in the FOA corner-cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based centroid detector. This work presents a systematic approach to maintaining FOA corner-cube centroid templates so that stable position estimation is achieved, leading to fast convergence of the alignment control loops. In the matched filtering approach, a template is designed based on the most recent images taken in the last 60 days. The results show that the new filter reduces the divergence of position estimates for FOA images.
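The sketch below illustrates the matched-filtering idea in schematic form: a template is built by averaging recent corner-cube images, and the beam position is taken as the correlation peak of the current image against that template. The image model, window, and correlation code are assumptions for this sketch, not the production NIF alignment software.

```python
# Schematic sketch: rolling-average template plus brute-force correlation peak.
import numpy as np

def build_template(recent_images):
    """Average the most recent images (e.g., a rolling 60-day window) into a template."""
    return np.mean(recent_images, axis=0)

def correlation_peak(image, template):
    """Brute-force mean-subtracted correlation; returns the best (row, col) offset."""
    ih, iw = image.shape
    th, tw = template.shape
    t = (template - template.mean()).ravel()
    best, best_rc = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].ravel()
            score = np.dot(patch - patch.mean(), t)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    def corner_cube(center, noise=0.05):
        y, x = np.mgrid[0:48, 0:48]
        blob = np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / 18.0)
        return blob + rng.normal(0, noise, (48, 48))
    history = [corner_cube((24, 24)) for _ in range(10)]   # recent image history
    template = build_template(history)[18:30, 18:30]       # crop around the feature
    current = corner_cube((26, 22))                        # today's slightly shifted image
    print(correlation_peak(current, template))             # top-left corner of best match
```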
Proceedings of SPIE | 2015
Richard R. Leach; Abdul A. S. Awwal; Simon J. Cohen; Roger Lowe-Webb; Randy S. Roberts; Thad Salmon; David A. Smauley; Karl Wilhelmsen
The Advanced Radiographic Capability (ARC) at the National Ignition Facility (NIF) is a laser system that employs up to four petawatt (PW) lasers to produce a sequence of short pulses that generate X-rays which backlight high-density inertial confinement fusion (ICF) targets. ARC is designed to produce multiple, sequential X-ray images by using up to eight backlighters. The images will be used to examine the compression and ignition of a cryogenic deuterium-tritium target with tens-of-picosecond temporal resolution during the critical phases of an ICF shot. Multi-frame, hard-X-ray radiography of imploding NIF capsules is a capability critical to the success of NIF's missions. As in the NIF system, ARC requires an optical alignment mask that can be inserted and removed as needed for precise positioning of the beam. Due to ARC's split-beam design, inserting the nominal NIF main laser alignment mask in ARC produced a partial blockage of the mask pattern, necessitating a new mask design. In this paper we describe the ARC mask requirements, the resulting mask design pattern, and the image analysis algorithms used to detect and identify the beam and reference centers required for ARC alignment.
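As a minimal illustration of the kind of measurements such algorithms produce, the sketch below computes a beam center from a thresholded intensity centroid and a reference center from the mean of fiducial-mark positions. The threshold and mark layout are assumptions, not the ARC mask design or its detection algorithms.

```python
# Minimal sketch: beam center from a thresholded intensity centroid, reference
# center from detected fiducial-mark positions. Values are illustrative only.
import numpy as np

def beam_center(image, threshold_frac=0.5):
    """Centroid of pixels above a fraction of the peak intensity."""
    mask = image >= threshold_frac * image.max()
    rows, cols = np.nonzero(mask)
    weights = image[rows, cols]
    return (rows * weights).sum() / weights.sum(), (cols * weights).sum() / weights.sum()

def reference_center(mark_positions):
    """Reference center as the mean of the detected mask-fiducial positions."""
    marks = np.asarray(mark_positions, dtype=float)
    return tuple(marks.mean(axis=0))

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    beam = np.exp(-((x - 30.0) ** 2 + (y - 34.0) ** 2) / 40.0)   # synthetic beam spot
    print("beam center:", beam_center(beam))
    print("reference center:", reference_center([(10, 10), (10, 54), (54, 10), (54, 54)]))
```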
international geoscience and remote sensing symposium | 2011
Randy S. Roberts; Paul A. Pope; Ranga Raju Vatsavai; Ming Jiang; Lloyd F. Arrowood; Timothy G. Trucano; Shaun S. Gleason; Anil M. Cheriyadat; Alex Sorokine; Aggelos K. Katsaggelos; Thrasyvoulos N. Pappas; Lucinda R. Gaines; Lawrence K. Chilton
The design of benchmark imagery for validation of image annotation algorithms is considered. Emphasis is placed on imagery that contains industrial facilities, such as chemical refineries. An application-level facility ontology is used as a means to define salient objects in the benchmark imagery. Intrinsic and extrinsic scene factors important for comprehensive validation are listed, and variability in the benchmarks is discussed. Finally, the pros and cons of three forms of benchmark imagery (real, composite, and synthetic) are delineated.
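A small sketch of how benchmark variability might be enumerated: benchmark-image specifications generated as the cross product of intrinsic and extrinsic scene factors, with salient objects named by a facility ontology. The factor names and values below are placeholders, not the actual ontology.

```python
# Illustrative sketch only: enumerate benchmark-image specifications as the cross
# product of intrinsic (facility) and extrinsic (imaging) factors.
from itertools import product

intrinsic = {"facility_type": ["chemical_refinery", "power_plant"],
             "salient_objects": [("cooling_tower", "storage_tank"), ("stack", "pipe_rack")]}
extrinsic = {"sun_elevation_deg": [30, 60],
             "ground_sample_distance_m": [0.5, 1.0],
             "imagery_form": ["real", "composite", "synthetic"]}

factors = {**intrinsic, **extrinsic}
specs = [dict(zip(factors, values)) for values in product(*factors.values())]
print(len(specs), "benchmark specifications, e.g.:")
print(specs[0])
```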