Publication


Featured research published by Lilly Spirkovska.


IEEE Transactions on Neural Networks | 1993

Coarse-coded higher-order neural networks for PSRI object recognition

Lilly Spirkovska; Max B. Reid

The authors describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096x4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, the authors empirically determine the limits of the coarse coding technique in the position, scale, and rotation invariant (PSRI) object recognition domain.
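
The coarse-coding idea can be illustrated with a small sketch: a position in a large input field is represented by the conjunction of the cells it activates in several offset, low-resolution fields, so a higher-order network can work with the small fields while still resolving positions in the large one. The field size, cell counts, and offsets below are illustrative assumptions, not the exact parameters used in the paper.

    # Illustrative coarse coding of a 1-D coordinate (the 2-D case applies the
    # same idea per axis). All parameters are made up for this sketch.
    FIELD_SIZE = 4096        # fine-grained input field width (pixels)
    COARSE_CELLS = 16        # cells per coarse field
    NUM_FIELDS = 8           # number of mutually offset coarse fields

    CELL_SIZE = FIELD_SIZE // COARSE_CELLS     # 256 pixels per coarse cell
    OFFSET_STEP = CELL_SIZE // NUM_FIELDS      # 32-pixel shift between fields

    def coarse_code(x):
        """For each coarse field, the index of the cell containing position x."""
        return tuple((x + f * OFFSET_STEP) // CELL_SIZE for f in range(NUM_FIELDS))

    # 8 fields x 16 cells = 128 coarse cells stand in for 4096 fine positions,
    # yet the combined code still resolves position to within OFFSET_STEP pixels.
    print(coarse_code(1000) == coarse_code(1023))   # True: same 32-pixel bucket
    print(coarse_code(1000) == coarse_code(1032))   # False: resolved as distinct

The trade-off this sketch makes visible, lost fine resolution in exchange for far fewer coarse cells, is the limitation the paper quantifies empirically for the PSRI recognition domain.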


Pattern Recognition | 1992

Robust position, scale, and rotation invariant object recognition using higher-order neural networks

Lilly Spirkovska; Max B. Reid

For object recognition invariant to changes in the object's position, size, and in-plane rotation, higher-order neural networks (HONNs) have numerous advantages over other neural network approaches. Because distortion invariance can be built into the architecture of the network, HONNs need to be trained on just one view of each object, not numerous distorted views, reducing the training time significantly. Further, 100% accuracy can be guaranteed for noise-free test images characterized by the built-in distortions. Specifically, a third-order neural network trained on just one view of an SR-71 aircraft and a U-2 aircraft in a 127 × 127 pixel input field successfully recognized all views of both aircraft larger than 70% of the original size, regardless of orientation or position of the test image. Training required just six passes. In contrast, other neural network approaches require thousands of passes through a training set consisting of a much larger number of training images and typically achieve only 80–90% accuracy on novel views of the objects. The above results assume a noise-free environment. The performance of HONNs is explored with non-ideal test images characterized by white Gaussian noise or partial occlusion. With white noise added to images with an ideal separation of background vs. foreground gray levels, it is shown that HONNs achieve 100% recognition accuracy for the test set for a standard deviation up to ∼10% of the maximum gray value and continue to show good performance (defined as better than 75% accuracy) up to a standard deviation of ∼14%. HONNs are also robust with respect to partial occlusion. For the test set of training images with very similar profiles, HONNs achieve 100% recognition accuracy for one occlusion of ∼13% of the input field size and four occlusions of ∼70% of the input field size. They show good performance for one occlusion of ∼23% of the input field size or four occlusions of ∼15% of the input field size each. For training images with very different profiles, HONNs achieve 100% recognition accuracy for the test set for up to four occlusions of ∼2% of the input field size and continue to show good performance for up to four occlusions of ∼23% of the input field size each.
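
One way to see how the distortion invariance is "built into the architecture" is through the weight-sharing rule of a third-order network: every triple of input pixels addresses a weight indexed only by the interior angles of the triangle the pixels form, so similar triangles share a single trainable weight. The sketch below shows that indexing step; the 10-degree angle binning and the helper names are illustrative assumptions, not parameters taken from the paper.

    import math

    def interior_angles(p1, p2, p3):
        """Interior angles (degrees) of the triangle with vertices p1, p2, p3."""
        def angle_at(a, b, c):
            v1 = (b[0] - a[0], b[1] - a[1])
            v2 = (c[0] - a[0], c[1] - a[1])
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            n1 = math.sqrt(v1[0] ** 2 + v1[1] ** 2)
            n2 = math.sqrt(v2[0] ** 2 + v2[1] ** 2)
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
        return (angle_at(p1, p2, p3), angle_at(p2, p3, p1), angle_at(p3, p1, p2))

    def weight_key(p1, p2, p3, bin_deg=10):
        """Shared-weight index: sorted, binned interior angles of the pixel triple."""
        return tuple(sorted(round(a / bin_deg) for a in interior_angles(p1, p2, p3)))

    # A pixel triple and a translated, rotated, and scaled copy of it address
    # the same weight, which is what makes the invariance architectural.
    print(weight_key((0, 0), (4, 0), (0, 3)))
    print(weight_key((10, 10), (10, 18), (16, 10)))   # shifted, 2x scale, rotated 90 deg

Because translation, in-plane rotation, and uniform scaling preserve those angles, a transformed view of an object excites exactly the same weights as the single training view.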


International Symposium on Neural Networks | 1990

Connectivity strategies for higher-order neural networks applied to pattern recognition

Lilly Spirkovska; Max B. Reid

Different strategies for non-fully connected HONNs (higher-order neural networks) are discussed, showing that by using such strategies an input field of 128×128 pixels can be attained while still achieving in-plane rotation and translation-invariant recognition. These techniques allow HONNs to be used with the larger input scenes required for practical pattern-recognition applications. The number of interconnections that must be stored has been reduced by a factor of approximately 200000 in a T/C case and ~2000 in a Space Shuttle/F-18 case by using regional connectivity. Third-order networks have been simulated using several connection strategies.
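
The scale of the savings is easy to reproduce with a rough count: a fully connected third-order network forms an interconnection for every pixel triple, and restricting triples to local regions removes most of them. The non-overlapping 16×16 regional scheme below is an illustrative stand-in, not one of the specific connectivity strategies evaluated in the paper.

    from math import comb

    # Rough count of third-order interconnections (pixel triples) for a fully
    # connected network vs. a simple regional scheme in which triples are only
    # formed within non-overlapping R x R regions.
    N_SIDE = 128
    N_PIXELS = N_SIDE * N_SIDE                      # 16384 pixels

    full = comb(N_PIXELS, 3)                        # every possible pixel triple

    R = 16                                          # illustrative region size
    num_regions = (N_SIDE // R) ** 2
    regional = num_regions * comb(R * R, 3)

    print(f"fully connected : {full:.3e} triples")
    print(f"regional (16x16): {regional:.3e} triples")
    print(f"reduction factor: {full / regional:.0f}x")

The exact reduction factor depends on the region size and on how regions overlap, which is why different strategies and test cases yield the different factors quoted above.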


Computers & Graphics | 2002

AWE: Aviation Weather Data Visualization Environment

Lilly Spirkovska; Suresh K. Lodha

Weather is one of the major causes of aviation accidents. General aviation (GA) flights account for 92% of all aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and “flight” modems, and improving weather data dissemination (often available only in textual format) and visualization techniques. We approach the problem from the improved dissemination perspective and propose weather visualization methods tailored for general aviation pilots. Although some aviation weather data, such as possible icing (Airmen's Meteorological Information (AIRMETs)) or turbulence conditions (Significant Meteorological Information (SIGMETs)), or information about precipitation intensity and movement, has already been presented well by existing systems, there is still an urgent need for visualizing several critical weather elements neglected so far. Our system, Aviation Weather Data Visualization Environment (AWE), focuses on graphical displays of these weather elements, namely, meteorological observations, terminal area forecasts, and winds aloft forecasts, and maps them onto a cartographic grid specific to the pilot's area of interest. Additional weather graphics such as icing (AIRMETs) or turbulence conditions (SIGMETs) can easily be added to our system to provide a pilot with a more complete visual weather briefing. Decisions regarding the graphical display and design are made based on careful consideration of user needs. Integral visual display of these elements of weather reports is designed for the use of GA pilots as a weather briefing and route selection tool. AWE provides linking of the weather information to the flight's path and schedule. The pilot can interact with the system to obtain aviation-specific weather for the entire area or for his specific route, to explore what-if scenarios including the selection of alternates, and to make “go/no-go” decisions. AWE, as evaluated by some pilots at National Aeronautics and Space Administration Ames Research Center, was found to be useful.
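
As one concrete example of the kind of graphical mapping such a system performs, the sketch below converts an observed ceiling and visibility into the standard U.S. flight category and a conventional display color that could be plotted at the reporting station's position on the route map. The thresholds are the usual FAA flight-category definitions, the station identifiers are arbitrary examples, and the paper does not spell out AWE's exact encoding, so treat this purely as an illustration.

    # Map an observation to a flight category and a conventional display color.
    def flight_category(ceiling_ft, visibility_sm):
        """Ceiling in feet AGL (use a large value for unlimited), visibility in statute miles."""
        if ceiling_ft < 500 or visibility_sm < 1:
            return "LIFR"
        if ceiling_ft < 1000 or visibility_sm < 3:
            return "IFR"
        if ceiling_ft <= 3000 or visibility_sm <= 5:
            return "MVFR"
        return "VFR"

    CATEGORY_COLOR = {"VFR": "green", "MVFR": "blue", "IFR": "red", "LIFR": "magenta"}

    # Illustrative stations along a hypothetical route.
    for station, ceiling, vis in [("KSJC", 12000, 10), ("KMRY", 800, 2), ("KSFO", 300, 0.5)]:
        cat = flight_category(ceiling, vis)
        print(station, cat, CATEGORY_COLOR[cat])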


AIAA Infotech@Aerospace 2007 Conference and Exhibit | 2007

Evaluation, Selection, and Application of Model-Based Diagnosis Tools and Approaches

Scott Poll; Ann Patterson-Hine; Joe Camisa; David Nishikawa; Lilly Spirkovska; David Garcia; David N. Hall; Christian Neukom; Adam Sweet; Serge Yentus; Charles Lee; John Ossenfort; Ole J. Mengshoel; Indranil Roychoudhury; Matthew Daigle; Gautam Biswas; Xenofon D. Koutsoukos; Robyn R. Lutz

Model-based approaches have proven fruitful in the design and implementation of intelligent systems that provide automated diagnostic functions. A wide variety of models are used in these approaches to represent the particular domain knowledge, including analytic state-based models, input-output transfer function models, fault propagation models, and qualitative and quantitative physics-based models. Diagnostic applications are built around three main steps: observation, comparison, and diagnosis. If the modeling begins in the early stages of system development, engineering models such as fault propagation models can be used for testability analysis to aid definition and evaluation of instrumentation suites for observation of system behavior. Analytical models can be used in the design of monitoring algorithms that process observations to provide information for the second step in the process, comparison of expected behavior of the system to actual measured behavior. In the final diagnostic step, reasoning about the results of the comparison can be performed in a variety of ways, such as dependency matrices, graph propagation, constraint propagation, and state estimation. Realistic empirical evaluation and comparison of these approaches is often hampered by a lack of standard data sets and suitable testbeds. In this paper we describe the Advanced Diagnostics and Prognostics Testbed (ADAPT) at NASA Ames Research Center. The purpose of the testbed is to measure, evaluate, and mature diagnostic and prognostic health management technologies. This paper describes the testbed’s hardware, software architecture, and concept of operations. A simulation testbed that
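
As a small illustration of one of the reasoning styles listed above, the sketch below performs dependency-matrix diagnosis: each candidate fault carries a signature of the monitors it is expected to trip, and diagnosis keeps the faults whose signatures match the observed pattern. The component and monitor names are invented for the example and are not drawn from the ADAPT models, and real dependency-matrix reasoners handle ambiguity and partial matches rather than the exact matching used here.

    # Toy dependency matrix: fault -> monitors expected to fire.
    D_MATRIX = {
        "battery_low"     : {"bus_voltage_low", "inverter_output_low"},
        "inverter_failed" : {"inverter_output_low"},
        "breaker_open"    : {"load_current_zero", "inverter_output_low"},
    }

    def diagnose(fired_monitors):
        """Return faults whose expected signature equals the observed monitor set."""
        return [fault for fault, signature in D_MATRIX.items()
                if signature == fired_monitors]

    print(diagnose({"inverter_output_low"}))                      # ['inverter_failed']
    print(diagnose({"bus_voltage_low", "inverter_output_low"}))   # ['battery_low']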


Machine Learning | 1994

Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

Lilly Spirkovska; Max B. Reid

A higher-order neural network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
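
The claim that the invariances "do not need to be learned" can be checked numerically: if third-order weights are indexed only by the interior angles of each pixel triple, as sketched earlier, then a translated, rotated, or uniformly scaled copy of an object produces exactly the same net input. The sketch below repeats that illustrative angle-keyed weighting with random stand-in weights and verifies the equality on a toy point set; it is a demonstration of the general principle, not the paper's implementation.

    import math
    from itertools import combinations
    from random import Random

    def weight_key(p1, p2, p3, bin_deg=10):
        """Illustrative shared-weight index: sorted, binned interior angles."""
        def angle_at(a, b, c):
            v1 = (b[0] - a[0], b[1] - a[1])
            v2 = (c[0] - a[0], c[1] - a[1])
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            norm = math.sqrt(v1[0] ** 2 + v1[1] ** 2) * math.sqrt(v2[0] ** 2 + v2[1] ** 2)
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        angles = (angle_at(p1, p2, p3), angle_at(p2, p3, p1), angle_at(p3, p1, p2))
        return tuple(sorted(round(a / bin_deg) for a in angles))

    rng = Random(0)
    weights = {}   # shared third-order weights, filled lazily with random stand-ins

    def net_input(on_pixels):
        """Sum of shared weights over every triple of 'on' pixels."""
        return sum(weights.setdefault(weight_key(*tri), rng.uniform(-1, 1))
                   for tri in combinations(on_pixels, 3))

    shape = [(0, 0), (4, 0), (0, 3), (2, 1)]                    # toy "object"
    shifted = [(x + 7, y + 5) for x, y in shape]                # translated copy
    rot_scaled = [(20 - 2 * y, 20 + 2 * x) for x, y in shape]   # rotated 90 deg, 2x scale

    print(net_input(shape) == net_input(shifted) == net_input(rot_scaled))   # True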


Journal of Aerospace Computing, Information, and Communication | 2012

General Purpose Data-Driven Monitoring for Space Operations

David L. Iverson; Rodney Martin; Mark Schwabacher; Lilly Spirkovska; William Taylor; Ryan Mackey; J. Patrick Castle; Vijayakumar Baskaran

As modern space propulsion and exploration systems improve in capability and efficiency, their designs are becoming increasingly sophisticated and complex. Determining the health state of these systems, using traditional parameter limit checking, model-based, or rule-based methods, is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults or failures. The Inductive Monitoring System (IMS) is a data-driven system health monitoring software tool that has been successfully applied to several aerospace applications. IMS uses a data mining technique called clustering to analyze archived system data and characterize normal interactions between parameters. The scope of IMS-based data-driven monitoring applications continues to expand with current development activities. Successful IMS deployment in the International Space Station (ISS) flight control room to monitor ISS attitude control systems has led to applications in other ISS flight control disciplines, such as thermal control. It has also generated interest in data-driven monitoring capability for Constellation, NASA's program to replace the Space Shuttle with new launch vehicles and spacecraft capable of returning astronauts to the moon, and then on to Mars. Several projects are currently underway to evaluate and mature the IMS technology and complementary tools for use in the Constellation program. These include an experiment on board the Air Force TacSat-3 satellite, and ground systems monitoring for NASA's Ares I-X and Ares I launch vehicles. The TacSat-3 Vehicle System Management (TVSM) project is a software experiment to integrate fault and anomaly detection algorithms and diagnosis tools with executive and adaptive planning functions contained in the flight software on-board the Air Force Research Laboratory TacSat-3 satellite. The TVSM software package will be uploaded after launch to monitor spacecraft subsystems such as power and guidance, navigation, and control (GN&C). It will analyze data in real-time to demonstrate detection of faults and unusual conditions, diagnose problems, and react to threats to spacecraft health and mission goals. The experiment will demonstrate the feasibility and effectiveness of integrated system health management (ISHM) technologies with both ground and on-board experiments.
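
A much-simplified sketch of the data-driven idea behind IMS is shown below: cluster archived nominal data, summarize each cluster as a per-parameter min/max box, and score new samples by their distance to the nearest box, with zero meaning "consistent with previously seen nominal behavior". The data, cluster count, initialization, and box representation are toy choices for illustration, not NASA's actual IMS implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Archived nominal data: two operating modes of a 3-parameter system.
    mode_a = rng.normal([10.0, 5.0, 0.1], 0.2, size=(500, 3))
    mode_b = rng.normal([20.0, 5.0, 0.9], 0.2, size=(500, 3))
    nominal = np.vstack([mode_a, mode_b])

    def build_clusters(data, k=2, iters=20):
        """Crude k-means, then a per-parameter min/max box around each cluster."""
        centers = data[[0, len(data) // 2]]          # one seed per block (toy init)
        for _ in range(iters):
            labels = np.argmin(np.linalg.norm(data[:, None] - centers, axis=2), axis=1)
            centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])
        return [(data[labels == j].min(axis=0), data[labels == j].max(axis=0))
                for j in range(k)]

    def deviation(x, boxes):
        """Distance from x to the nearest nominal box; 0 means 'looks nominal'."""
        return min(np.linalg.norm(np.clip(x, lo, hi) - x) for lo, hi in boxes)

    boxes = build_clusters(nominal)
    print(deviation(np.array([10.1, 5.0, 0.1]), boxes))   # 0.0: inside the mode A box
    print(deviation(np.array([15.0, 5.0, 0.5]), boxes))   # well outside both boxes: anomalous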


IEEE Transactions on Applications and Industry | 1990

An empirical comparison of ID3 and HONNs for distortion invariant object recognition

Lilly Spirkovska; Max B. Reid

The authors present results of experiments comparing the performance of the ID3 symbolic learning algorithm with a higher-order neural network (HONN) in the distortion invariant object recognition domain. In this domain, the classification algorithm needs to be able to distinguish between two objects regardless of their position in the input field, their in-plane rotation, or their scale. It is shown that HONNs are superior to ID3 with respect to recognition accuracy, whereas, on a sequential machine, ID3 classifies examples faster once trained. A further advantage of HONNs is the small training set required. HONNs can be trained on just one view of each object, whereas ID3 needs an exhaustive training set.
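
For readers unfamiliar with ID3, the heart of the algorithm is the attribute-selection step sketched below: at each node it splits on the attribute with the highest information gain. The four-example dataset (class labels that nod to the T/C task) is invented for illustration and is unrelated to the paper's actual training data.

    import math
    from collections import Counter

    def entropy(labels):
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def information_gain(rows, labels, attr_index):
        """Entropy reduction from splitting the examples on one attribute."""
        base = entropy(labels)
        remainder = 0.0
        for value in set(row[attr_index] for row in rows):
            subset = [lab for row, lab in zip(rows, labels) if row[attr_index] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return base - remainder

    rows   = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = ["T",    "T",    "C",    "C"]       # class follows attribute 0 exactly

    print(information_gain(rows, labels, 0))     # 1.0 bit: perfect split
    print(information_gain(rows, labels, 1))     # 0.0 bits: uninformative attribute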


Infotech@Aerospace | 2005

Inductive Learning Approaches for Improving Pilot Awareness of Aircraft Faults

Lilly Spirkovska; David L. Iverson; Scott Poll; Anna H. Pryor

Neural network flight controllers are able to accommodate a variety of aircraft control surface faults without detectable degradation of aircraft handling qualities. Under some faults, however, the effective flight envelope is reduced; this can lead to unexpected behavior if a pilot performs an action that exceeds the remaining control authority of the damaged aircraft. The goal of our work is to increase the pilot's situational awareness by informing him of the type of damage and resulting reduction in flight envelope. Our methodology integrates three inductive learning systems with novel visualization techniques. One learning system, the Inductive Monitoring System (IMS), learns to detect when a simulation includes faulty controls, while two others, the Inductive Classification System (INCLASS) and a multiple binary decision tree system (utilizing C4.5), determine the type of fault. In off-line training using only non-failure data, IMS constructs a characterization of nominal flight control performance based on control signals issued by the neural net flight controller. This characterization can be used to determine the degree of control augmentation required in the pitch, roll, and yaw command channels to counteract control surface failures. This derived information is typically sufficient to distinguish between the various control surface failures and is used to train both INCLASS and C4.5. Using data from failed control surface flight simulations, INCLASS and C4.5 independently discover and amplify features in IMS results that can be used to differentiate each distinct control surface failure situation. In real-time flight simulations, distinguishing features learned during training are used to classify control surface failures. Knowledge about the type of failure can be used by an additional automated system to alter its approach for planning tactical and strategic maneuvers. The knowledge can also be used directly to increase the pilot's situational awareness and inform manual maneuver decisions. Our multi-modal display of this information provides speech output to issue control surface failure warnings via a lesser-used communication channel and provides graphical displays with pilot-selectable levels of detail to issue additional information about the failure. We also describe a potential presentation for flight envelope reduction that can be viewed separately or integrated with an existing attitude indicator instrument. Preliminary results suggest that the inductive approach is capable of detecting that a control surface has failed and determining the type of fault. Furthermore, preliminary evaluations suggest that the interface discloses a concise summary of this information to the pilot.


Visualization and Data Analysis | 2003

Audio-visual situational awareness for general aviation pilots

Lilly Spirkovska; Suresh K. Lodha

Weather is one of the major causes of general aviation accidents. One possible cause is that the pilot may not absorb and retain all the weather information she is required to review prior to flight. A second cause is the inadequacy of in-flight weather updates: pilots are limited to verbal updates via aircraft radio contact with a ground-based weather specialist. We propose weather visualization and interaction methods tailored for general aviation pilots to improve understanding of pre-flight weather data and improve in-flight weather updates. Our system, Aviation Weather Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.

Collaboration


Lilly Spirkovska's top co-authors include Matthew Daigle (University of California).