
Publications


Featured research published by Michael J. Healy.


IEEE Transactions on Neural Networks | 1993

A neural architecture for pattern sequence verification through inferencing

Michael J. Healy; Thomas P. Caudell; Scott D. G. Smith

LAPART, a neural network architecture for logical inferencing and supervised learning, is discussed, with emphasis on its use in recognizing familiar sequences of patterns by verifying pattern pairs inferred from prior experience. It consists of interconnected adaptive resonance theory (ART) networks. The interconnects enable LAPART to learn to infer one pattern class from another, forming a predictive sequence: it predicts the next pattern class based upon recognition of the current pattern and tests the prediction as new data become available. A confirmed prediction aids verification of a familiar sequence, while a disconfirmation flags a novel pairing of patterns. A simulation of LAPART is applied to verification of a hypothetical, known target using a sequence of sensor images obtained along a predetermined approach path. Application issues are addressed with a simple strategy, and it is shown how they could be addressed more completely. Other topics, including a logical interpretation of ART and LAPART, are discussed.
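
The predict-and-test cycle described above can be sketched in a few lines. The code below is a minimal illustration of my own, assuming a toy ART-1-style matcher (Art1) and a plain dictionary standing in for LAPART's trained interconnects; it is not the paper's architecture.

```python
# A minimal, hypothetical sketch of the LAPART predict-and-test cycle.
# Art1 here is a toy stand-in for a full ART-1 module; all names and
# parameters are illustrative, not taken from the paper.
import numpy as np

class Art1:
    """Toy binary ART-1-style module: templates are bit vectors; match is
    |input AND template| / |input|, gated by a vigilance threshold."""
    def __init__(self, vigilance=0.75):
        self.rho = vigilance
        self.templates = []                  # learned class templates

    def classify(self, x):
        x = np.asarray(x, dtype=bool)
        for j, w in enumerate(self.templates):
            overlap = np.logical_and(x, w)
            if overlap.sum() / max(x.sum(), 1) >= self.rho:
                self.templates[j] = overlap  # fast learning: bitwise AND
                return j
        self.templates.append(x)             # commit a new class
        return len(self.templates) - 1

A, B = Art1(), Art1()
inference = {}                               # A-class -> predicted B-class

def lapart_step(pattern_now, pattern_next):
    """Recognize the current pattern, infer the next pattern's class,
    then test the inference when the next pattern arrives."""
    a = A.classify(pattern_now)
    b = B.classify(pattern_next)
    if a not in inference:
        inference[a] = b                     # learn a new A -> B inference
        return "learned"
    return "confirmed" if inference[a] == b else "disconfirmed"

pairs = [([1,1,0,0,1,0,0,0], [0,0,1,1,0,0,1,0]),
         ([1,1,0,0,1,0,0,0], [0,0,1,1,0,0,1,0]),   # familiar pairing
         ([1,1,0,0,1,0,0,0], [1,0,0,0,0,1,1,1])]   # novel pairing
for now, nxt in pairs:
    print(lapart_step(now, nxt))   # learned, confirmed, disconfirmed
```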


IEEE Transactions on Neural Networks | 1997

Acquiring rule sets as a product of learning in a logical neural architecture

Michael J. Healy; Thomas P. Caudell

Envisioning neural networks as systems that learn rules calls forth the verification issues already studied in knowledge-based systems engineering, and complicates them with neural-network concepts such as nonlinear dynamics and distributed memories. We show that the issues can be clarified, and the learned rules visualized symbolically, by formalizing the semantics of rule-learning in the mathematical language of two-valued predicate logic. We further show that this can, at least in some cases, be done with a fairly simple logical model. We illustrate this with a combination of two example neural-network architectures: LAPART, designed to learn rules as logical inferences from binary data patterns, and the stack interval network, which converts real-valued data into binary patterns that preserve the ordering of the real values. We discuss the significance of the formal model in facilitating the analysis of the underlying logic of rule-learning and numerical data representation. We provide examples to illustrate the formal model, with the combined stack interval/LAPART networks extracting rules from numerical data.
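
As a rough sketch of what an order-preserving binarization can look like, the thermometer code below maps reals to bit vectors so that x <= y exactly when the pattern for x is a bitwise subset of the pattern for y. The thresholds and the function name are illustrative assumptions, not the stack interval network's actual construction.

```python
# A hedged sketch of order-preserving binarization: a thermometer code.
def stack_encode(x, lo=0.0, hi=1.0, n_bits=8):
    """Set bit i when x exceeds the i-th of n evenly spaced thresholds."""
    step = (hi - lo) / n_bits
    return [1 if x >= lo + (i + 1) * step else 0 for i in range(n_bits)]

a, b = stack_encode(0.3), stack_encode(0.7)
print(a)   # [1, 1, 0, 0, 0, 0, 0, 0]
print(b)   # [1, 1, 1, 1, 1, 0, 0, 0]
# Ordering survives as bitwise subset: every set bit of a is set in b.
print(all(bi >= ai for ai, bi in zip(a, b)))   # True
```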


International Symposium on Neural Networks | 2000

Category theory applied to neural modeling and graphical representations

Michael J. Healy

Category theory can be applied to mathematically model the semantics of cognitive neural systems. Here, we employ colimits, functors and natural transformations to model the implementation of concept hierarchies in neural networks equipped with multiple sensors.
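
A toy example may help fix ideas. Below, concepts are modeled as feature sets and arrows as inclusions; for such a diagram the colimit is the union of features, glued along the shared base. This is an illustrative simplification of my own, not the paper's neural construction.

```python
# A toy colimit over a diagram of concepts-as-feature-sets, where every
# arrow is an inclusion. For inclusions, the colimit object is the union,
# with the shared base identified rather than duplicated.
base   = {"has_edges"}                    # shared subconcept
shape  = base | {"closed_contour"}        # one specialization
motion = base | {"moves_left_to_right"}   # another specialization

def colimit(*concepts):
    """Colimit of a diagram of feature-set inclusions: the union."""
    out = set()
    for c in concepts:
        out |= c
    return out

print(sorted(colimit(shape, motion)))
# ['closed_contour', 'has_edges', 'moves_left_to_right']
```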


International Symposium on Neural Networks | 2001

A categorical semantic analysis of ART architectures

Michael J. Healy; Thomas P. Caudell

We apply a new semantic model for neural networks to the analysis of learned concept representations in ART networks. The new model is based upon category theory, the mathematical theory of structure. It allows an unambiguous evaluation of how accurately an ART network captures the hierarchical structure of interrelated symbolic concepts within its connectionist structure. For inferential ART networks, such as LAPART and fuzzy ARTMAP, the analysis can go further, evaluating the coherence of a system of interconnected ART subnetworks: the connections across hierarchies must be consistent with the concept relations within each hierarchy. Categorical notions are key to the analysis of concept hierarchies and their coherence. The analysis shows that ART networks have a partial capability to represent learned concept relationships, but the representation is incomplete even when the network performs perfectly on data examples.
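
The flavor of the analysis can be suggested by a small check: treat learned templates as concepts ordered by feature inclusion and test whether a given symbolic is-a hierarchy is reflected in that order. The data and the test below are hypothetical, chosen only to show the kind of question the categorical model makes precise.

```python
# Hypothetical check: does the inclusion order on learned templates match
# a symbolic concept hierarchy? (Parent features must be inherited.)
templates = {
    "animal": frozenset({"alive"}),
    "bird":   frozenset({"alive", "wings"}),
    "robin":  frozenset({"alive", "wings", "red_breast"}),
}
hierarchy = [("robin", "bird"), ("bird", "animal")]   # child is-a parent

for child, parent in hierarchy:
    ok = templates[parent] <= templates[child]        # subset test
    print(f"{child} is-a {parent}: {'represented' if ok else 'missing'}")
```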


IEEE Transactions on Neural Networks | 1998

Guaranteed two-pass convergence for supervised and inferential learning

Michael J. Healy; Thomas P. Caudell

We present a theoretical analysis of a version of the LAPART adaptive inferencing neural network. Our main result is a proof that the new architecture, called LAPART 2, converges in two passes through a fixed training set of inputs. We also prove that it does not suffer from template proliferation. For comparison, Georgiopoulos et al. have proved the upper bound n-1 on the number of passes required for convergence for the ARTMAP architecture, where n is the size of the binary pattern input space. If the ARTMAP result is regarded as an n-pass, or finite-pass, convergence result, ours is then a two-pass, or fixed-pass, convergence result. Our results have added significance in that they apply to set-valued mappings, as opposed to the usual supervised learning model of affixing labels to classes.
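
Operationally, a fixed-pass convergence result says that cycling through a fixed training set stops changing the learned templates after a bounded number of passes. The loop below demonstrates that shape of claim on a toy ART-1-style learner; it is an illustration under my own simplifying assumptions, not the paper's proof or the LAPART 2 architecture.

```python
# Cycle through a fixed training set and detect the first pass that
# leaves the learned templates unchanged (convergence).
import numpy as np

def classify(templates, x, rho=0.6):
    """Toy ART-1-style step: match by overlap ratio, learn by bitwise AND."""
    for j, w in enumerate(templates):
        overlap = np.logical_and(x, w)
        if overlap.sum() / max(x.sum(), 1) >= rho:
            templates[j] = overlap
            return templates
    templates.append(x)
    return templates

data = [np.array(p, dtype=bool) for p in
        ([1,1,1,0,0,0], [1,1,0,0,0,0], [0,0,0,1,1,1], [0,0,0,1,1,0])]

templates, prev = [], None
for epoch in range(1, 10):
    for x in data:
        templates = classify(templates, x)
    snapshot = [tuple(int(b) for b in w) for w in templates]
    if snapshot == prev:
        print(f"pass {epoch} made no changes; training has converged")
        break
    prev = snapshot
```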


Neurocomputing | 2009

Applying category theory to improve the performance of a neural architecture

Michael J. Healy; Richard D. Olinger; Robert J. Young; Shawn E. Taylor; Thomas P. Caudell; Kurt W. Larson

A recently developed mathematical semantic theory explains the relationship between knowledge and its representation in connectionist systems. The semantic theory is based upon category theory, the mathematical theory of structure. A product of its explanatory capability is a set of principles to guide the design of future neural architectures and enhancements to existing designs. We claim that this mathematical semantic approach to network design is an effective basis for advancing the state of the art. We offer two experiments to support this claim. One of these involves multispectral imaging using data from a satellite camera.


International Symposium on Neural Networks | 2005

Modification of the ART-1 architecture based on category theoretic design principles

Michael J. Healy; Richard D. Olinger; Robert J. Young; Thomas P. Caudell; Kurt W. Larson

Many studies have addressed the knowledge representation capability of neural networks. A recently developed mathematical semantic theory explains the relationship between knowledge and its representation in connectionist systems. The theory yields design principles for neural networks whose behavioral repertoire expresses any desired capability that can be expressed logically. In this paper, we show how the design principle of limit formation can be applied to modify the ART-1 architecture, yielding a discrimination capability that goes beyond vigilance. Simulations of this new design illustrate the increased discrimination ability it provides for multi-spectral image analysis.
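
For binary templates ordered by bitwise inclusion, a limit (meet) of a family of templates is their bitwise AND: the features the whole family shares. The sketch below uses that meet as an extra discrimination test; this is my own illustrative reading of limit formation, not the paper's modified ART-1 circuit.

```python
# Form the meet (bitwise AND) of a family of binary templates and use it
# as a discrimination test finer than per-template vigilance alone.
import numpy as np

templates = [np.array(t, dtype=bool) for t in
             ([1,1,1,0,1,0], [1,1,0,0,1,1], [1,1,1,0,1,1])]

core = np.logical_and.reduce(templates)   # the meet: shared features
print(core.astype(int))                   # [1 1 0 0 1 0]

def passes_core_test(x):
    """Require the input to contain every feature in the shared core."""
    x = np.asarray(x, dtype=bool)
    return bool(np.logical_and(x, core).sum() == core.sum())

print(passes_core_test([1,1,0,0,1,0]))    # True: contains the core
print(passes_core_test([1,0,1,0,1,0]))    # False: missing a core bit
```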


International Symposium on Neural Networks | 2003

eLoom and Flatland: specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures

Thomas P. Caudell; Yunhai Xiao; Michael J. Healy

eLoom is an open-source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open-source virtual environments development tool, to provide real-time visualizations of network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks: through 3D animated pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom's and Flatland's capabilities.
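
eLoom's actual specification language is not reproduced here. As a hypothetical sketch of the underlying idea, the classes below describe a hierarchical network graph in which a node may itself contain a subgraph, so that a whole module can be reused as a single node of a larger network.

```python
# A hypothetical sketch of a hierarchical network description: a node may
# nest a whole subgraph, which a simulator could expand recursively.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    subgraph: "Graph | None" = None        # hierarchical nesting

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add(self, name, subgraph=None):
        self.nodes[name] = Node(name, subgraph)
        return self

    def connect(self, src, dst):
        self.edges.append((src, dst))
        return self

# An ART-like module described once as a subgraph, then reused as two
# nodes of a larger two-module network.
art = Graph().add("F1").add("F2").connect("F1", "F2").connect("F2", "F1")
net = (Graph().add("input").add("A", subgraph=art).add("B", subgraph=art)
              .connect("input", "A").connect("A", "B"))
print(list(net.nodes), net.edges)
```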


International Symposium on Neural Networks | 1999

Colimits in memory: category theory and neural systems

Michael J. Healy

We introduce a new kind of mathematics for neural network modeling and show its application in modeling a cognitive memory system. Category theory has found increasing use in formal semantics, the modeling of the concepts (or meaning) behind computations. Here, we apply it to derive a mathematical model of concept formation and recall in a neural network that serves as a cognitive memory system. A unique feature of this approach is that the mathematical model was used to derive the neural system architecture, using some general connectionist modeling principles. The system is a subnetwork of a larger neural network that includes subnetworks for sensor input processing, planning and generating outputs, such as motor commands for controlling a robot. Alternatively, it is proposed as a mathematical model of the process and organization of human memory. The model provides a possible formal base for investigations in the biological and cognitive sciences.


International Symposium on Neural Networks | 2002

Aphasic compressed representations: a functorial semantic design principle for coupled ART networks

Michael J. Healy; Thomas P. Caudell

Supervised ART networks consist of interconnected subnetworks, a, b, and possibly also a map field. The a and b subnetworks maintain separate representations for inputs and their associated outputs. A categorical semantic analysis suggests that each input field must have a compressed representation in the other subnetwork(s).

Collaboration


Dive into Michael J. Healy's collaborations.

Top Co-Authors

James D. Morrow

Sandia National Laboratories

Kurt W. Larson

Sandia National Laboratories

Stephen J. Verzi

Sandia National Laboratories
