Publication


Featured research published by Harald Ruda.


Proceedings of SPIE | 2001

Distributed course-of-action planning using genetic algorithms, XML, and JMS

Harald Ruda; Janet E. Burge; Peter Aykroyd; Jeffrey Sander; Dennis Okon; Greg L. Zacharias

Future command and control (C2) systems must be constructed so that they are extensible, both in the kinds of scenarios they can handle and in the types of manipulation they support. This paper presents an open architecture that uses commercial standards and implementations where appropriate. The discussion is framed by our ongoing work on FOX, a course-of-action (COA) planner and generator that uses genetic algorithms together with an abstract wargamer to suggest a small number of possible COAs.
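The abstract names the ingredients (a GA-driven planner scored by an abstract wargamer) but not FOX's internals. Below is a minimal, illustrative GA sketch under those assumptions; the task vocabulary, the COA encoding, and the `wargame_score` fitness callback are hypothetical stand-ins, not FOX's API.

```python
import random

# Illustrative GA for COA generation: a COA is encoded as a fixed-length
# sequence of task choices; the abstract wargamer is stood in for by a
# user-supplied wargame_score function. All names here are assumptions.
TASKS = ["advance", "flank", "hold", "suppress", "recon"]

def random_coa(length=6):
    return [random.choice(TASKS) for _ in range(length)]

def crossover(a, b):
    cut = random.randrange(1, len(a))      # single-point crossover
    return a[:cut] + b[cut:]

def mutate(coa, rate=0.1):
    return [random.choice(TASKS) if random.random() < rate else t for t in coa]

def evolve(wargame_score, pop_size=50, generations=100):
    pop = [random_coa() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=wargame_score, reverse=True)
        parents = pop[: pop_size // 2]     # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    # return a small number of candidate COAs, as FOX is described as doing
    return sorted(pop, key=wargame_score, reverse=True)[:3]
```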


International Conference on Acoustics, Speech, and Signal Processing | 1995

Automatic target recognition in laser radar imagery

Magnus Snorrason; Harald Ruda; Alper K. Caglayan

This paper presents an automatic target recognition (ATR) system for laser radar (LADAR) imagery, designed to classify objects at multiple levels of discrimination (target detection, classification, and recognition) from single LADAR images. Segmentation is performed in both the range and non-range LADAR channels, and the results are combined to increase the object detection rate or decrease the false-positive detection rate. Using the range data, object subimages are projected and rotated to canonical orientations, providing invariance to translation, scale, and rotation in 3-D. Global features are extracted for rapid target detection, and local receptive-field features are computed for target recognition. 100% detection and recognition rates are shown for a small set of real LADAR data.
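As a small illustration of the channel-fusion idea in the abstract, the sketch below combines per-channel detection masks: the OR/AND trade-off (more detections versus fewer false positives) follows directly from the text, while the boolean-mask representation is an assumption.

```python
import numpy as np

# Fuse detections from the range and non-range (intensity) LADAR channels.
# Union raises the detection rate; intersection suppresses false alarms.
# Masks are assumed to be boolean arrays produced by per-channel segmentation.
def fuse_detections(range_mask: np.ndarray, intensity_mask: np.ndarray,
                    favor_detection: bool = True) -> np.ndarray:
    if favor_detection:
        return range_mask | intensity_mask   # more detections, more false alarms
    return range_mask & intensity_mask       # fewer false alarms, fewer detections
```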


Automatic Target Recognition Conference | 2002

Feature-based target classification in laser radar

Mark R. Stevens; Magnus Snorrason; Harald Ruda; Sengvieng A. Amphay

Numerous feature detectors have been defined for detecting military vehicles in natural scenes. These features can be computed for a given image chip containing a known target and used to train a classifier. The classifier can then assign a label to an unlabeled image chip; its performance depends on the quality of the feature set used. In this paper, we first describe a set of features commonly used by the Automatic Target Recognition (ATR) community. We then analyze feature performance on a vehicle identification task in laser radar (LADAR) imagery. Our features are computed over both the range and reflectance channels. In addition, we perform feature subset selection using two different methods and compare the results. The goal of this analysis is to determine which subset of features to choose in order to optimize performance in LADAR Autonomous Target Acquisition (ATA).
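The abstract compares two subset-selection methods without naming them here. As an illustrative stand-in, the sketch below implements greedy forward selection scored by cross-validated accuracy; scikit-learn and the k-NN classifier are my choices for the sketch, not the paper's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_select(X, y, max_features=10):
    """Greedily add the feature that most improves cross-validated accuracy.

    X: (n_samples, n_features) array of per-chip features (range + reflectance);
    y: vehicle class labels. Both are assumed inputs for this sketch.
    """
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(KNeighborsClassifier(),
                                     X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f, score = max(scores.items(), key=lambda kv: kv[1])
        if score <= best_score:
            break                       # no remaining feature helps; stop early
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score
```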


Proceedings of SPIE | 2001

Automated obstacle mapping and navigability analysis for rover mission planning

Thomas G. Goodsell; Magnus Snorrason; Harald Ruda; Vitaly Ablavsky

One of NASA's goals for the Mars Rover missions of 2003 and 2005 is to have a distributed team of mission scientists. Since these scientists are not experts on rover mobility, we have developed the Rover Obstacle Visualizer and Navigability Expert (ROVANE), a combined obstacle-detection and path-planning software suite that assists in distributed mission planning. ROVANE uses terrain data, in the form of panoramic stereo images captured by the rover, to detect obstacles in the rover's vicinity. These obstacles are combined into a traversability map, which is used to provide path-planning assistance for mission scientists. A corresponding visual representation is also generated, allowing human operators to easily identify hazardous regions and to understand ROVANE's path selection. Since the terrain data often contain uncertain regions, the ROVANE obstacle detector generates a probability distribution describing the likely cost of a given obstacle or region. ROVANE then allows the user to plan for best-, worst-, and intermediate-case scenarios, so non-experts can examine scenarios and plan missions with a high probability of success. ROVANE is capable of stand-alone operation, but it is designed to work with JPL's Web Interface for Telescience, an Internet-based tool for collaborative command-sequence generation.
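A minimal sketch of the uncertainty handling described above, assuming a per-cell lognormal cost distribution (an illustrative choice; the abstract does not specify ROVANE's distribution): picking a percentile turns the per-cell distributions into best-, intermediate-, or worst-case cost maps to plan against.

```python
import numpy as np
from scipy.stats import lognorm

# Each map cell carries a distribution over traversal cost; evaluating its
# quantile function at a chosen percentile yields a deterministic cost map.
def cost_map_at_percentile(median_cost, sigma, percentile):
    """median_cost, sigma: per-cell arrays; percentile in (0, 1)."""
    return lognorm.ppf(percentile, s=sigma, scale=median_cost)

median = np.array([[1.0, 4.0], [2.0, 1.5]])   # nominal per-cell costs (toy data)
spread = np.full_like(median, 0.5)            # per-cell uncertainty (toy data)
best_case  = cost_map_at_percentile(median, spread, 0.05)  # optimistic plan
worst_case = cost_map_at_percentile(median, spread, 0.95)  # conservative plan
```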


Mobile Robots XV and Telemanipulator and Telepresence Technologies VII | 2001

Rover obstacle visualizer and navigability evaluator

Thomas G. Goodsell; Magnus Snorrason; Harald Ruda; Vitaly Ablavsky

The primary data used in ground-based, global path planning for NASA's planetary rovers are stereo images down-linked from the rover and range data derived from those images. The range data are often incomplete: the sensors are inherently noisy, and sections of the landscape are occluded. This missing data complicates the path-planning process and necessitates the help of human experts. We present the Rover Obstacle Visualizer and Navigability Evaluator (ROVANE), which assists these human experts and allows non-experts to plan missions without expert help. ROVANE generates a hazard map identifying slow, impassable, or dangerous regions with varying degrees of certainty. This map is used to create possible paths, which are assigned variable costs based on possible hazards. A hazard visualization is also produced, allowing the user to visually identify hazards and understand the system's path selection. As target locations are entered by the user, the system finds appropriate paths using a variation of the A* algorithm. A found path can be further modified by the user and output in a format suitable for commanding an actual rover. The system is capable of stand-alone operation, but is designed to be integrated into the Jet Propulsion Laboratory's Web Interface for Telescience.
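Since the abstract names "a variation of the A* algorithm," here is a textbook A* over a 4-connected hazard grid for reference; ROVANE's actual variant and cost model are not given in the abstract.

```python
import heapq
from itertools import count

# Textbook A* on a grid of per-cell hazard costs (entering a cell pays its cost).
def astar(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible if costs >= 1
    tie = count()                                # tie-breaker keeps heap entries comparable
    frontier = [(h(start), next(tie), 0.0, start, None)]
    came_from, g = {}, {start: 0.0}
    while frontier:
        _, _, gc, cur, parent = heapq.heappop(frontier)
        if cur in came_from:                     # already expanded via a cheaper path
            continue
        came_from[cur] = parent
        if cur == goal:                          # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols:
                ng = gc + cost[nbr[0]][nbr[1]]   # hazard cost to enter the cell
                if ng < g.get(nbr, float("inf")):
                    g[nbr] = ng
                    heapq.heappush(frontier, (ng + h(nbr), next(tie), ng, nbr, cur))
    return None  # goal unreachable
```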


Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000

Analysis and modeling of fixation point selection for visual search in cluttered backgrounds

Magnus Snorrason; James E. Hoffman; Harald Ruda

Hard-to-see targets are generally only detected by human observers once they have been fixated. Hence, understanding how the human visual system allocates fixation locations is necessary for predicting target detectability. Visual search experiments were conducted in which observers searched for military vehicles in cluttered terrain. Instantaneous eye-position measurements were collected using an eye tracker. The resulting data were partitioned into fixations and saccades, and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. This model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene-understanding effects, a separate cognitive bias map is generated. The combination of these two maps provides a fixation probability map, from which sequences of fixation points were generated.
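A minimal sketch of the two-map scheme described above: a bottom-up saliency map is combined with a top-down cognitive bias map and normalized into a fixation probability map, which can then be sampled for fixation points. The multiplicative combination is an assumption; the abstract says only that the maps are combined.

```python
import numpy as np

def fixation_probability_map(saliency: np.ndarray, bias: np.ndarray) -> np.ndarray:
    # Combine bottom-up saliency with the top-down cognitive bias map
    # (multiplicative combination is an assumption), then normalize.
    fpm = saliency * bias
    return fpm / fpm.sum()

def sample_fixations(fpm: np.ndarray, n: int, rng=None):
    # Draw n fixation points with probability given by the map.
    rng = rng or np.random.default_rng()
    flat_idx = rng.choice(fpm.size, size=n, p=fpm.ravel())
    return np.unravel_index(flat_idx, fpm.shape)  # (rows, cols) of fixation points
```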


Targets and Backgrounds: Characterization and Representation V | 1999

Modeling time to detection for observers searching for targets in cluttered backgrounds

Harald Ruda; Magnus Snorrason

The purpose of this work is to provide a model for the average time to detection for observers searching for targets in photo-realistic images of cluttered scenes. The proposed model builds on previous work that constructs a fixation probability map (FPM) from the image. This FPM is constructed from bottom-up features, such as local contrast, but also includes top-down cognitive effects, such as the location of the horizon. The FPM is used to generate a set of conspicuous points that are likely to be fixation points, along with initial probabilities of fixation. These points are used to assemble fixation sequences. The order of these fixations is clearly crucial for determining the time to fixation. Recognizing that different observers (unconsciously) choose different orderings of the conspicuous points, the present model performs a Monte-Carlo simulation to find the probability of fixating each conspicuous point at each position in the sequence. The three main assumptions of this model are: the observer can only attend to the area of the image being fixated, each fixation has an approximately constant duration, and there is a short-term memory for the locations of previous fixation points. This fixation-point memory is an essential feature of the model, and the memory decay constant is a parameter of the model. Simulations show that the average time to fixation for a given conspicuous point in the image depends on the distribution of other conspicuous points. This is true even if the initial probability of fixation for a given point is the same across distributions, and only the initial probability of fixation of the other points is distributed differently.
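A Monte-Carlo sketch of the sequence model as described: each step draws a conspicuous point with probability proportional to its initial fixation probability, suppressed by a decaying memory of recent fixations. The multiplicative inhibition form and the parameter values are illustrative assumptions; the abstract specifies only a constant fixation duration and a decaying fixation-point memory.

```python
import numpy as np

def simulate_sequences(p_init, n_steps=30, n_runs=10000,
                       memory_strength=0.95, decay=0.8, rng=None):
    """Sample fixation sequences over conspicuous points with fixation memory.

    p_init: initial fixation probabilities per conspicuous point.
    memory_strength, decay: illustrative memory parameters (the decay constant
    is the model parameter named in the abstract).
    """
    rng = rng or np.random.default_rng()
    p_init = np.asarray(p_init, dtype=float)
    n_pts = len(p_init)
    first_fix = np.full((n_runs, n_pts), -1, dtype=int)  # step of first fixation
    for run in range(n_runs):
        memory = np.zeros(n_pts)
        for step in range(n_steps):
            w = p_init * (1.0 - memory)          # inhibit remembered points
            k = rng.choice(n_pts, p=w / w.sum())
            if first_fix[run, k] < 0:
                first_fix[run, k] = step
            memory *= decay                      # memory fades over time
            memory[k] = memory_strength          # refresh memory of fixated point
    return first_fix  # averaging over runs gives mean time to fixation per point
```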


Proceedings of SPIE | 2001

Refined time-to-detection model using shunting neural networks

Harald Ruda; Magnus Snorrason

The purpose of this work is to provide a model for the average time to detection for observers searching for targets in photo-realistic images of cluttered scenes. The current work extends previous time-to-detection modeling that used a simple decaying fixation memory. While those results were encouraging in showing a strong effect of fixation memory, there were also discrepancies, chiefly a tendency toward immediate refixation that the original model did not account for at all. The present paper describes how the original fixation memory model is extended using a shunting neural network. Shunting neural networks are neurally plausible mechanisms for modeling various brain functions. Furthermore, this shunting neural network can be extended in a simple manner to incorporate effects of spatial relationships, which were completely ignored in the original model. The model described is testable on experimental data, and is being calibrated using both analytical and experimental methods.
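The abstract does not give the network's equations. For reference, the standard Grossberg shunting form, which the term conventionally denotes, is:

```latex
% Standard shunting (Grossberg) membrane equation; whether the paper uses
% exactly this variant is not stated in the abstract.
\frac{dx_i}{dt} = -A\,x_i + (B - x_i)\,E_i - (x_i + C)\,I_i
```

Here A is a passive decay rate, B and C bound the activity x_i to the interval [-C, B], and E_i and I_i are excitatory and inhibitory inputs; the bounded, self-decaying dynamics are what make this form a natural replacement for a simple decaying fixation memory.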


Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000

Calibration of a time-to-detection model using data from visual search experiments

Harald Ruda; James E. Hoffman; Magnus Snorrason

Using a model of visual search that predicts fixation probabilities for hard-to-see targets in naturalistic images, it is possible to stochastically generate fixation sequences and times to detection for targets in these images. The purpose of the current work is to calibrate some of the parameters of a time-to-detection model; in particular, to elucidate the strength and decay parameters of the proposed fixation memory model. The calibration consists chiefly of comparing the stochastic model with both experimental data and a theoretical analysis of a simplified scenario. The experimental data were collected from ten observers performing a visual search experiment. During the experiment, eye fixations were tracked with an ISCAN infrared camera system. The visual search stimuli required fixation on a target for detection (i.e., hard-to-detect stimuli). The experiment studied re-fixations of previously fixated targets, i.e., cases where the fixation memory failed. The theoretical analysis is based on a simplified scenario that parallels the experimental setup, with a fixed number, N, of equally probable objects. It is possible to derive analytical expressions for the re-fixation probability in this case. The results of the analysis can be used in three ways: (1) to verify the implementation of the stochastic model, (2) to estimate the stochastic parameters of the model (i.e., the number of fixation sequences to generate), and (3) to calibrate the fixation memory parameters by fitting the experimental data.
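To make the simplified scenario concrete, here is one analytical expression of the kind the abstract refers to, derived for N equally probable objects under a single-step memory assumption of my own (the paper's actual derivation is not given in the abstract):

```latex
% With no fixation memory, fixations are i.i.d. uniform over the N objects:
P(\text{immediate re-fixation}) = \frac{1}{N}.
% If memory suppresses the just-fixated object's weight by a factor
% 0 \le m \le 1 (m = 1 is perfect memory), renormalizing the weights gives
P(\text{immediate re-fixation}) = \frac{1 - m}{N - m},
% which recovers 1/N at m = 0 and 0 at m = 1.
```

Expressions of this form let the re-fixation rates observed in the experiment pin down the memory strength parameter directly.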


Archive | 1996

Automated Construction of a Hierarchy of Self-Organized Neural Network Classifiers

Harald Ruda; Magnus Snorrason

Collaboration


Dive into Harald Ruda's collaborations.

Top Co-Authors

Magnus Snorrason, Charles River Laboratories
Alper K. Caglayan, Charles River Laboratories
Dennis Okon, Charles River Laboratories
Greg L. Zacharias, Charles River Laboratories
Jeffrey Sander, Charles River Laboratories
Mark R. Stevens, Charles River Laboratories