Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where John E. Seiffertt IV is active.

Publication


Featured research published by John E. Seiffertt IV.


Neural Networks | 2009

2009 Special Issue: Coordinated machine learning and decision support for situation awareness

Nathan G. Brannon; John E. Seiffertt IV; Timothy J. Draelos; Donald C. Wunsch

Domains such as force protection require an effective decision maker to maintain a high level of situation awareness. A system that combines humans with neural networks is a desirable approach. Furthermore, it is advantageous for the calculation engine to operate in three learning modes: supervised for initial training and known updating, reinforcement for online operational improvement, and unsupervised in the absence of all external signaling. An Adaptive Resonance Theory based architecture capable of seamlessly switching among the three types of learning is discussed that can be used to help optimize the decision making of a human operator in such a scenario. This is followed by a situation assessment module.
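
A minimal sketch of the mode-switching idea described above, assuming nothing about the paper's actual ART architecture: the learner below simply routes each observation to a supervised, reinforcement, or unsupervised update depending on which external signal is present. All names and update rules are illustrative placeholders.

```python
import numpy as np

class MultiModeLearner:
    """Toy learner that switches among supervised, reinforcement, and
    unsupervised updates based on the feedback available for each input."""

    def __init__(self, n_features, n_classes, lr=0.1):
        self.W = np.zeros((n_classes, n_features))  # one prototype per class
        self.lr = lr

    def update(self, x, label=None, reward=None):
        pred = int(np.argmax(self.W @ x))
        if label is not None:
            # Supervised mode: pull the labeled class prototype toward the input.
            self.W[label] += self.lr * (x - self.W[label])
        elif reward is not None:
            # Reinforcement mode: scale the update by the signed reward.
            self.W[pred] += self.lr * reward * (x - self.W[pred])
        else:
            # Unsupervised mode: drift the winning prototype toward the input.
            self.W[pred] += self.lr * (x - self.W[pred])
        return pred
```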


IEEE Transactions on Neural Networks | 2010

Backpropagation and Ordered Derivatives in the Time Scales Calculus

John E. Seiffertt IV; Donald C. Wunsch

Backpropagation is the most widely used neural network learning technique. It is based on the mathematical notion of an ordered derivative. In this paper, we present a formulation of ordered derivatives and the backpropagation training algorithm using the important emerging area of mathematics known as the time scales calculus. This calculus, with its potential for application to a wide variety of inter-disciplinary problems, is becoming a key area of mathematics. It is capable of unifying continuous and discrete analysis within one coherent theoretical framework. Using this calculus, we present here a generalization of backpropagation which is appropriate for cases beyond the specifically continuous or discrete. We develop a new multivariate chain rule of this calculus, define ordered derivatives on time scales, prove a key theorem about them, and derive the backpropagation weight update equations for a feedforward multilayer neural network architecture. By drawing together the time scales calculus and the area of neural network learning, we present the first connection of two major fields of research.
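
For orientation, the standard delta derivative of the time scales calculus (textbook material, not the paper's own notation) shows how the calculus subsumes the continuous and discrete cases at once. Here \(\sigma(t)\) is the forward jump operator and \(\mu(t)\) the graininess of the time scale \(\mathbb{T}\):

\[
f^{\Delta}(t) =
\begin{cases}
\dfrac{f(\sigma(t)) - f(t)}{\mu(t)}, & \mu(t) > 0 \quad \text{(right-scattered point)},\\[1.2ex]
\displaystyle \lim_{s \to t} \frac{f(t) - f(s)}{t - s}, & \mu(t) = 0 \quad \text{(right-dense point)},
\end{cases}
\qquad
\sigma(t) = \inf\{\, s \in \mathbb{T} : s > t \,\}, \quad \mu(t) = \sigma(t) - t.
\]

On \(\mathbb{T} = \mathbb{R}\) this reduces to the ordinary derivative \(f'(t)\); on \(\mathbb{T} = \mathbb{Z}\) it reduces to the forward difference \(f(t+1) - f(t)\), which is why chain rules and weight-update equations proved in this calculus cover continuous-time and discrete-time networks simultaneously.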


International Symposium on Neural Networks | 2009

Neural networks and Markov models for the iterated prisoner's dilemma

John E. Seiffertt IV; Samuel A. Mulder; Rohit Dua; Donald C. Wunsch

The study of strategic interaction among a society of agents is often handled using the machinery of game theory. This research examines how a Markov Decision Process (MDP) model may be applied to an important element of repeated game theory: the iterated prisoner's dilemma. Our study uses a Markovian approach to the game to represent the problem in a computer simulation environment. A pure Markov approach is used on a simplified version of the iterated game, and then we formulate the general game as a partially observable Markov decision process (POMDP). Finally, we use a cellular structure as an environment for players to compete and adapt. We apply both a simple replacement strategy and a cellular neural network to the environment.
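
As a concrete illustration of the Markov structure of the game (not the paper's MDP/POMDP formulation), the sketch below plays the iterated prisoner's dilemma with memory-one strategies, whose next move depends only on the previous joint action. The standard payoff values are used; the strategies shown are illustrative.

```python
# Memory-one (Markov) strategies for the iterated prisoner's dilemma.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}  # standard T > R > P > S payoffs

def tit_for_tat(prev):
    return 'C' if prev is None else prev[1]        # copy the opponent's last move

def always_defect(prev):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    prev_a = prev_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(prev_a), strategy_b(prev_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        prev_a, prev_b = (a, b), (b, a)            # each player sees (own, opponent) order
    return score_a, score_b

print(play(tit_for_tat, always_defect))            # (99, 104): one exploited round, then mutual defection
```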


International Symposium on Neural Networks | 2009

An alpha derivative formulation of the Hamilton-Jacobi-Bellman equation of dynamic programming

John E. Seiffertt IV

The time scales calculus, which includes the study of the alpha derivative, is an emerging key area in mathematics. We extend this calculus to Approximate Dynamic Programming. In particular, we investigate the application of the alpha derivative, one of the fundamental dynamic derivatives of time scales. We present an alpha-derivative-based derivation and proof of the Hamilton-Jacobi-Bellman equation, whose solution is the fundamental problem in the field of dynamic programming. By drawing together the calculus of time scales and the applied area of stochastic control via Approximate Dynamic Programming, we connect two major fields of research.
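
For context, these are the two classical special cases that an alpha-derivative (time scales) formulation ties together; the exact statement proved in the paper may differ. In both, \(V\) is the optimal value function, \(r\) the reward, and \(f\) the system dynamics:

\[
V(x) = \max_{u}\,\bigl[\, r(x,u) + \gamma\, V\!\bigl(f(x,u)\bigr) \bigr] \qquad \text{(discrete-time Bellman equation)}
\]
\[
\rho\, V(x) = \max_{u}\,\bigl[\, r(x,u) + \nabla V(x) \cdot f(x,u) \bigr] \qquad \text{(continuous-time Hamilton-Jacobi-Bellman equation, deterministic, discounted at rate } \rho\text{)}
\]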


International Joint Conference on Neural Networks | 2006

Information Fusion and Situation Awareness using ARTMAP and Partially Observable Markov Decision Processes

Nathan G. Brannon; Gregory N. Conrad; Timothy J. Draelos; John E. Seiffertt IV; Donald C. Wunsch

For applications such as force protection, an effective decision maker needs to maintain an unambiguous grasp of the environment. Opportunities exist to leverage computational mechanisms for the adaptive fusion of diverse information sources. The current research involves the use of neural networks and Markov chains to process information from sources including sensors, weather data, and law enforcement. Furthermore, the system operator's input is used as a point of reference for the machine learning algorithms. More detailed features of the approach are provided along with an example scenario.


Archive | 2010

An Application of Unified Computational Intelligence

John E. Seiffertt IV; Donald C. Wunsch

The previous section described a unified computational intelligence learning architecture based on Adaptive Resonance Theory neural networks. In this chapter, this architecture is used in an application that was briefly introduced in Chapter 1.


Archive | 2010

Backpropagation on Time Scales

John E. Seiffertt IV; Donald C. Wunsch

This section extends the previous section’s focus on the unified computational intelligence goal of developing the capability to adapt. The dynamic programming algorithm typically utilizes neural networks as function approximation tools. Therefore, discussing how to train a neural network within the unified framework of the time scales calculus contributes directly to this goal.
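
A minimal sketch (not taken from the book) of the role described above: a small neural network used as a value-function approximator and trained toward bootstrapped Bellman targets. The dynamics, reward, and architecture are placeholders chosen only to make the example run.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (8, 1))   # hidden weights (1-D state, 8 hidden units)
W2 = rng.normal(0, 0.1, (1, 8))   # output weights
gamma, lr = 0.9, 0.05

def value(x):
    h = np.tanh(W1 @ x)           # hidden activations
    return (W2 @ h).item(), h

for _ in range(1000):
    x = rng.uniform(-1, 1, (1, 1))             # sample a state
    x_next = 0.8 * x                           # placeholder dynamics
    r = -float(x[0, 0] ** 2)                   # placeholder reward
    v_next, _ = value(x_next)
    target = r + gamma * v_next                # bootstrapped Bellman target
    v, h = value(x)
    err = target - v
    # Semi-gradient step on the squared TD error (target treated as a constant).
    W2 += lr * err * h.T
    W1 += lr * err * (W2.T * (1 - h ** 2)) @ x.T
```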


Archive | 2010

Unified Computational Intelligence for Complex Systems

John E. Seiffertt IV; Donald C. Wunsch

Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to erect suspension bridges, although both are products of technological advancement and reflect an increased understanding of our world. In this book, we show how to unify aspects of learning and adaptation within the computational intelligence framework. While a number of algorithms exist that fall under the umbrella of computational intelligence, with new ones added every year, all of them focus on the capabilities of learning, adapting, and helping us seek. So, the term unified computational intelligence relates not to the individual algorithms but to the underlying goals driving them. This book focuses on the computational intelligence areas of neural networks and dynamic programming, showing how to unify aspects of these areas to create new, more powerful, computational intelligence architectures to apply to new problem domains.


Archive | 2010

The Time Scales Calculus

John E. Seiffertt IV; Donald C. Wunsch

This chapter begins the second part of this book. The first part outlined a unified computational intelligence learning architecture based on neural networks. The design, theoretical underpinnings, and an application were presented to achieve the first goal of this book: to develop unified computational intelligence for learning.


Archive | 2010

The Unified ART Architecture

John E. Seiffertt IV; Donald C. Wunsch

The previous chapter introduced the idea of unified computational intelligence and discussed its implications for a learning machine. This chapter presents the design of the unified ART architecture, and Chapter 3 contains an application implementing this design.

Collaboration


Dive into John E. Seiffertt IV's collaborations.

Top Co-Authors

Donald C. Wunsch (Missouri University of Science and Technology)
Timothy J. Draelos (Sandia National Laboratories)
Nathan G. Brannon (Sandia National Laboratories)
Paul Robinette (Missouri University of Science and Technology)
Rohit Dua (Missouri University of Science and Technology)
Andrew Vanbrunt (Washington University in St. Louis)
Brian Blaha (University of Missouri)
Daryl G. Beetner (Missouri University of Science and Technology)
Gregory N. Conrad (Sandia National Laboratories)