
Publication


Featured research published by J.J. Choi.


IEEE Transactions on Neural Networks | 1991

Query-based learning applied to partially trained multilayer perceptrons

Jenq-Neng Hwang; J.J. Choi; Seho Oh; Robert J. Marks

An approach is presented for query-based neural network learning. A layered perceptron partially trained for binary classification is considered. The single-output neuron is trained to be either a zero or a one. A test decision is made by thresholding the output at, for example, one-half. The set of inputs that produce an output of one-half forms the classification boundary. The authors adopted an inversion algorithm for the neural network that allows generation of this boundary. For each boundary point, the classification gradient can be generated. The gradient provides a useful measure of the steepness of the multidimensional decision surfaces. Conjugate input pairs are generated using the boundary point and gradient information and presented to an oracle for proper classification. These data are used to refine further the classification boundary, thereby increasing the classification accuracy. The result can be a significant reduction in the training set cardinality in comparison with, for example, randomly generated data points. An application example to power system security assessment is given.
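
The core loop of the method can be pictured in a few lines. Below is a minimal illustration, not the authors' implementation: a toy single-output perceptron, inversion by gradient descent on the input until the output reaches the 0.5 threshold, and conjugate pair generation along the classification gradient. All names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # toy "partially trained" MLP
W2, b2 = rng.normal(size=4), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def f(x):
    """Single-output layered perceptron."""
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

def grad_f(x):
    """Classification gradient df/dx via the chain rule."""
    h = sigmoid(W1 @ x + b1)
    y = f(x)
    return y * (1 - y) * (W1.T @ (W2 * h * (1 - h)))

def invert_to_boundary(x0, target=0.5, lr=0.5, steps=2000):
    """Network inversion: gradient descent on the *input* until
    f(x) ~ 0.5, i.e. until x lies on the classification boundary."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * (f(x) - target) * grad_f(x)   # minimizes (f(x) - 0.5)^2
    return x

def conjugate_pair(xb, eps=0.1):
    """Step off the boundary along +/- the unit gradient; the resulting
    pair straddles the boundary and is sent to the oracle for labeling."""
    g = grad_f(xb)
    g = g / (np.linalg.norm(g) + 1e-12)
    return xb + eps * g, xb - eps * g

xb = invert_to_boundary(rng.normal(size=2))
x_plus, x_minus = conjugate_pair(xb)            # query these two points
```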


IEEE International Conference on Fuzzy Systems | 1993

Adaptive membership function fusion and annihilation in fuzzy if-then rules

B.G. Song; Robert J. Marks; Seho Oh; Payman Arabshahi; Thomas P. Caudell; J.J. Choi

The parameters of the input and output fuzzy membership functions for fuzzy if-then min-max inferencing may be adapted using supervised learning applied to training data. Under the assumption that the inference surface is in some sense smooth, the process of adaptation can reveal overdetermination of the fuzzy system in two ways. First, if two membership functions come sufficiently close to each other, they can be fused into a single membership function. Second, annihilation occurs when a membership function becomes sufficiently narrow. In both cases, the number of if-then rules is reduced. In certain cases, the overall performance of the fuzzy system can be improved by this adaptive pruning. The process of membership function fusion and annihilation is illustrated with two examples.
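
A rough sketch of the two pruning mechanisms, assuming Gaussian membership functions described by (center, width) pairs; the thresholds and the merge rule below are illustrative choices, not the paper's:

```python
def prune_memberships(params, fuse_dist=0.1, min_width=0.01):
    """Fuse membership functions whose centers drift close together,
    and annihilate those whose width shrinks toward zero.
    `params` is a list of (center, width) pairs for Gaussian MFs."""
    # Annihilation: drop degenerate (too-narrow) membership functions.
    params = [(c, w) for c, w in params if w >= min_width]
    # Fusion: merge any neighboring pair closer than fuse_dist.
    params.sort()
    fused = []
    for c, w in params:
        if fused and abs(c - fused[-1][0]) < fuse_dist:
            c0, w0 = fused[-1]
            fused[-1] = ((c0 + c) / 2, max(w0, w))  # simple merge rule
        else:
            fused.append((c, w))
    return fused

mfs = [(0.10, 0.20), (0.14, 0.18), (0.90, 0.005), (0.55, 0.25)]
print(prune_memberships(mfs))  # two close MFs fuse; the narrow one dies
```

Each fusion or annihilation removes a membership function, and with it every if-then rule that referenced it, which is where the rule-count reduction comes from.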


IEEE International Conference on Fuzzy Systems | 1992

Fuzzy control of backpropagation

Payman Arabshahi; J.J. Choi; Robert J. Marks; Thomas P. Caudell

The authors propose a fuzzy logic controlled implementation of the backpropagation training algorithm for layered perceptrons. Heuristics for adjusting the value of the learning rate eta are incorporated into a simple fuzzy control system, providing automatic tuning of the learning rate depending on the shape of the error surface. This straightforward procedure is shown to dramatically improve training time on some problems.
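
A minimal sketch of the idea, with an illustrative rule base in place of the paper's: triangular memberships over the per-epoch change in error, and rules that grow eta while the error keeps falling and shrink it when the error rises.

```python
def fuzzy_eta_update(eta, delta_err, eta_min=1e-4, eta_max=1.0):
    """Heuristic fuzzy control of the backprop learning rate:
    if the error keeps falling, grow eta; if it rises, shrink it."""
    # Crude triangular memberships over the change in error.
    neg  = max(0.0, min(1.0, -delta_err / 0.1))   # error decreasing
    pos  = max(0.0, min(1.0,  delta_err / 0.1))   # error increasing
    zero = max(0.0, 1.0 - abs(delta_err) / 0.1)   # error roughly flat
    # Rules: NEG -> increase, ZERO -> hold, POS -> decrease; defuzzify
    # by a weighted average of the consequent scale factors.
    factor = (neg * 1.1 + zero * 1.0 + pos * 0.5) / (neg + zero + pos + 1e-12)
    return min(eta_max, max(eta_min, eta * factor))

eta = 0.1
for delta in [-0.05, -0.02, 0.01, 0.08]:   # error changes per epoch
    eta = fuzzy_eta_update(eta, delta)
    print(round(eta, 4))
```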


International Symposium on Neural Networks | 2002

Set constraint discovery: missing sensor data restoration using autoassociative regression machines

S. Narayanan; Robert J. Marks; J.L. Vian; J.J. Choi; Mohamed A. El-Sharkawi; B.B. Thompson

A sensor array can generate interdependent readings among the sensors. If the dependence is sufficiently strong, the readings may contain enough redundancy that the readings from one or more lost sensors can be accurately estimated from those remaining. An autoassociative regression machine can learn these data interrelationships through inspection of historical data. Once trained, the autoassociative machine can be used to restore one or more arbitrary lost sensors when the data dependency is sufficiently strong. Recovery techniques include alternating projection onto convex sets (POCS) and iterative search algorithms.
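
A sketch of the POCS-style recovery loop under stated assumptions: `autoassoc` stands in for a trained autoassociative machine (here a linear projection onto a learned subspace), and `lost` is a boolean mask over the sensor vector. Names and data are illustrative.

```python
import numpy as np

def restore(autoassoc, reading, lost, iters=200):
    """POCS-style restoration: alternately project onto the manifold
    learned by the autoassociative machine and onto the set of vectors
    that agree with the surviving sensors."""
    x = reading.copy()
    x[lost] = x[~lost].mean()           # neutral initial guess
    for _ in range(iters):
        y = autoassoc(x)                # projection 1: learned manifold
        x[lost] = y[lost]               # projection 2: keep known readings
    return x

# Toy stand-in for a trained machine: sensors lie on a 1-D subspace.
u = np.array([1.0, 2.0, 3.0, 4.0])
u = u / np.linalg.norm(u)
autoassoc = lambda x: u * (u @ x)

truth = 2.5 * u
lost = np.array([False, True, False, False])
reading = np.where(lost, 0.0, truth)    # sensor 1 has failed
print(restore(autoassoc, reading, lost))  # lost reading is recovered
```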


International Symposium on Neural Networks | 1992

Fuzzy parameter adaptation in neural systems

J.J. Choi; Payman Arabshahi; Robert J. Marks; Thomas P. Caudell

The general structure of a neuro-fuzzy controller applicable to many diverse neural systems is presented. As an example, fuzzy control of the backpropagation training technique is considered for multilayer perceptrons, where significant speedup in training was observed. Fuzzy control of the number of classes in an ART 1 classifier is also considered. This can be advantageous in situations where there is prior knowledge of the number of classes into which one wishes to classify the input data.
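
For the ART 1 half of the paper, the natural control knob is the vigilance parameter rho, which governs how readily new categories are created. A toy controller in the same spirit (the rule base and memberships are ours, not the paper's):

```python
def adjust_vigilance(rho, n_classes, target, step=0.02):
    """Fuzzy-flavored control of ART 1 vigilance: too many classes ->
    lower rho (coarser categories); too few -> raise rho (finer ones)."""
    surplus = n_classes - target
    # Membership of 'surplus' in {too_many, about_right, too_few}.
    too_many = max(0.0, min(1.0, surplus / 3.0))
    too_few  = max(0.0, min(1.0, -surplus / 3.0))
    delta = -step * too_many + step * too_few
    return min(0.99, max(0.01, rho + delta))

rho = 0.7
for n in [12, 10, 8, 6]:               # observed class counts, target 5
    rho = adjust_vigilance(rho, n, target=5)
    print(round(rho, 3))               # vigilance drifts down toward target
```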


Systems, Man and Cybernetics | 1999

Dynamic fuzzy control of genetic algorithm parameter coding

Robert J. Streifel; Robert J. Marks; Russell Reed; J.J. Choi; Michael Healy

An algorithm for adaptively controlling genetic algorithm parameter (GAP) coding using fuzzy rules is presented. The fuzzy GAP coding algorithm is compared to the dynamic parameter encoding scheme proposed by Schraudolph and Belew. The performance of the algorithm on a hydraulic brake emulator parameter identification problem is investigated. Fuzzy GAP coding control is shown to dramatically increase the rate of convergence and accuracy of genetic algorithms.
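
The flavor of dynamic coding control can be sketched as follows; the crisp trigger conditions below are simple stand-ins for the paper's fuzzy rules. When the population converges within a small sub-interval of a parameter's coded range, the coding zooms in around it; when individuals crowd an edge, it zooms out.

```python
import numpy as np

def recode_range(pop_values, lo, hi, zoom=0.5):
    """Dynamic re-coding of one GA parameter's search interval.
    pop_values: decoded values of this gene across the population."""
    spread = (pop_values.max() - pop_values.min()) / (hi - lo)
    center = pop_values.mean()
    if spread < 0.2:                       # population has converged:
        half = zoom * (hi - lo) / 2        # zoom in around its center
        lo, hi = center - half, center + half
    elif pop_values.min() <= lo + 0.05 * (hi - lo) or \
         pop_values.max() >= hi - 0.05 * (hi - lo):
        half = hi - lo                     # crowding an edge: zoom out
        lo, hi = center - half, center + half
    return lo, hi

vals = np.array([0.42, 0.44, 0.45, 0.43, 0.46])
print(recode_range(vals, 0.0, 1.0))        # interval tightens around ~0.44
```

Zooming in lets a fixed-length bit coding spend all of its resolution on the region the population has settled into, which is where the convergence-rate gains come from.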


International Symposium on Neural Networks | 1990

Query learning based on boundary search and gradient computation of trained multilayer perceptrons

Jenq-Neng Hwang; J.J. Choi; Seho Oh

A novel approach to query-based neural network learning is presented. A layered perceptron partially trained for binary classification is considered. The single-output neuron is trained to be either a 0 or a 1. A test decision is made by thresholding the output at, for example, 1/2. The set of inputs that produce an output of 1/2 forms the classification boundary. For each boundary point, the classification gradient can be generated. The gradient provides a useful measure of the sharpness of the multidimensional decision surfaces. Conjugate input pair locations are generated using the boundary point and gradient information and are presented to the oracle for proper classification. These new data are used to further refine the classification boundary, thereby increasing the classification accuracy. The result can be a significant reduction in the training set cardinality in comparison with, for example, randomly generated data points. An application example to power system security assessment is given.
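
Boundary points can also be located without an explicit inversion algorithm when a pair of oppositely classified inputs is at hand: bisect the segment between them until the output crosses the threshold. A toy sketch, not from the paper:

```python
import numpy as np

def boundary_bisect(f, x_zero, x_one, thresh=0.5, iters=40):
    """Binary search along the segment between a 0-classified and a
    1-classified input; converges to a point on the f(x)=thresh boundary."""
    lo, hi = x_zero, x_one                 # f(lo) < thresh <= f(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < thresh:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy classifier standing in for a trained perceptron.
f = lambda x: 1.0 / (1.0 + np.exp(-(x @ np.array([2.0, -1.0]) - 0.5)))
xb = boundary_bisect(f, np.array([0.0, 1.0]), np.array([2.0, 0.0]))
print(f(xb))                               # ~0.5: a boundary point
```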


International Symposium on Neural Networks | 1991

Training layered perceptrons using low accuracy computation

J.J. Choi; Seho Oh; Robert J. Marks

It is demonstrated that the random search approach to training layered perceptrons can be performed with low-accuracy computational precision, and can therefore be implemented with analog computation. Despite their numerical stability, random search techniques suffer from ever-increasing search time as dimensionality grows. In response, the authors introduce a modified random search technique, improved bidirectional random optimization (IBRO), to improve the search accuracy per iteration. The proposed scheme should dramatically reduce the overall number of search iterations. The authors compare the performance of IBRO with that of the bidirectional random optimization method through simulations.
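
Random search trains by perturbing the weights directly and needs only forward-pass loss evaluations, which is why low-precision or analog computation suffices. A toy sketch of the bidirectional step that IBRO builds on (the "improved" acceleration is omitted; names are ours):

```python
import numpy as np

def bro_step(w, loss, rng, sigma=0.1):
    """One bidirectional random optimization (BRO) step: try a random
    weight perturbation; if it fails, try its negation; else keep w."""
    dw = rng.normal(scale=sigma, size=w.shape)
    if loss(w + dw) < loss(w):
        return w + dw
    if loss(w - dw) < loss(w):          # the 'bidirectional' second try
        return w - dw
    return w

# Toy use: only loss *evaluations* are needed per step, no gradients.
loss = lambda w: float(np.sum((w - 3.0) ** 2))
w, rng = np.zeros(2), np.random.default_rng(1)
for _ in range(2000):
    w = bro_step(w, loss, rng)
print(w)                                # approaches [3. 3.]
```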


International Symposium on Circuits and Systems | 1990

Classification boundaries and gradients of trained multilayer perceptrons

Jenq-Neng Hwang; J.J. Choi; Seho Oh; Robert J. Marks

An approach for query-based neural network learning is presented. Consider a layered perceptron partially trained for binary classification. The single output neuron is trained to be either a 0 or a 1. A test decision is made by thresholding the output at, for instance, 1/2. The set of inputs that produce an output of 1/2 forms the classification boundary. An inversion algorithm is adopted for the neural network that allows generation of this boundary. In addition, the classification gradient can be generated for each boundary point. The gradient provides a useful measure of the sharpness of the multidimensional decision surfaces. Using the boundary point and gradient information, conjugate input pair locations are generated and presented to an oracle for proper classification. These data are used to further refine the classification boundary, thereby increasing the classification accuracy. The result can be a significant reduction in the training set cardinality in comparison with, for example, randomly generated data points.
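
One concrete reading of the sharpness measure is the norm of the output gradient at a boundary point. A hedged sketch using central finite differences, so it applies to any black-box classifier:

```python
import numpy as np

def classification_gradient(f, xb, h=1e-5):
    """Numerical gradient of the network output at a boundary point xb.
    Its norm measures how steeply the decision surface rises there."""
    g = np.zeros_like(xb)
    for i in range(xb.size):
        e = np.zeros_like(xb)
        e[i] = h
        g[i] = (f(xb + e) - f(xb - e)) / (2 * h)
    return g

# Toy classifier; xb sits exactly on its f(x) = 0.5 boundary.
f = lambda x: 1.0 / (1.0 + np.exp(-(4.0 * x[0] - 2.0 * x[1])))
xb = np.array([0.0, 0.0])
g = classification_gradient(f, xb)
print(g, np.linalg.norm(g))            # direction and steepness at xb
```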


International Symposium on Neural Networks | 2003

Missing sensor data restoration for vibration sensors on a jet aircraft engine

S. Narayanan; J.L. Vian; J.J. Choi; Robert J. Marks; Mohamed A. El-Sharkawi; B.B. Thompson

Historical data from a sensor array may show that the readings contain sufficient redundancy for the readings from one or more lost sensors to be accurately estimated from those remaining. This interdependency can be established by a neural network encoder, which is also used in the restoration process. In this paper, we give examples of restoration for vibration sensors on a jet engine and for computer traffic data.
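
The encoder in question is a network trained to reproduce its input from historical records. A linear stand-in makes the redundancy argument concrete: if the array's readings are rank-deficient, an identity mapping through a bottleneck captures them exactly. The data and architecture below are illustrative, not the paper's.

```python
import numpy as np

# Historical sensor records, one row per time step (toy rank-2 data).
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 6))
history = rng.normal(size=(500, 2)) @ basis        # 6 sensors, rank 2

# Linear autoassociative encoder fit by SVD: encode to a k-wide
# bottleneck, then decode back to the full sensor vector.
U, s, Vt = np.linalg.svd(history, full_matrices=False)
k = 2                                              # bottleneck width
encode, decode = Vt[:k].T, Vt[:k]                  # 6 -> 2, then 2 -> 6
autoassoc = lambda x: (x @ encode) @ decode        # reproduces its input

x = history[0]
print(np.allclose(autoassoc(x), x, atol=1e-8))     # True: redundancy learned
```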

Collaboration


Dive into J.J. Choi's collaboration.

Top Co-Authors

Seho Oh (University of Washington)
B.B. Thompson (University of Washington)
J.L. Vian (University of Washington)
S. Narayanan (University of Washington)
B.G. Song (University of Washington)