Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter Geczy is active.

Publication


Featured research published by Peter Geczy.


International Symposium on Neural Networks | 1997

Learning performance measures for MLP networks

Peter Geczy; Shiro Usui

Training of MLP networks is mainly based on first order line search optimization techniques. The search direction is determined by an error matrix for the neural network. The error matrix contains essential information not only about the search direction but also about specific features of the error landscape. Analysis of the error matrix, based on the estimate of its spectral radius, provides a relative measure of the algorithm's movement in the multidimensional weight/error space. Furthermore, the estimate of the spectral radius forms a suitable reference ground for deriving performance measures. The article presents effective and computationally inexpensive performance measures for MLP networks. Such measures not only allow monitoring of a network's performance; on their basis, an individual performance measure for each structural element can also be derived, with direct applicability to pruning strategies for MLP networks. In addition, the proposed performance measures permit detection of specific shapes of error surfaces, such as flat regions and sharp slopes, a feature of essential importance for algorithms that dynamically modify the learning rate.
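
The abstract does not give the estimator itself, so the following is a minimal sketch under stated assumptions: the error matrix is taken to be a Jacobian-like matrix J of network errors with respect to the weights, the spectral-radius reference is approximated by power iteration on JᵀJ, and the per-element measure (a column norm relative to that estimate) is an illustrative stand-in for the paper's individual measures, not their exact formulation.

```python
import numpy as np

def spectral_radius_estimate(J, n_iter=50, seed=0):
    """Estimate the dominant eigenvalue of J^T J by power iteration.

    J plays the role of the error matrix; the square root of the
    dominant eigenvalue (J's largest singular value) serves here as
    the spectral-radius reference ground. Assumed formulation.
    """
    rng = np.random.default_rng(seed)
    A = J.T @ J                          # symmetric PSD companion matrix
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A @ v                        # power-iteration step
        v = w / np.linalg.norm(w)
    return np.sqrt(v @ A @ v)            # Rayleigh quotient, then sqrt

def element_performance(J, rho):
    # Illustrative per-element measure: each column's contribution
    # relative to the spectral-radius estimate rho. Columns with small
    # values would be candidates for pruning.
    return np.linalg.norm(J, axis=0) / rho
```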


International Symposium on Neural Networks | 1997

Effects of structural adjustments on the estimate of a spectral radius of error matrices

Peter Geczy; Shiro Usui

Successful practical applications of MLP networks trained by first order line search optimization approaches initiated a wave of interest in improving the speed of convergence and finding the optimum structure. Recently, training procedures have been implemented that incorporate dynamic structural changes of a network during the learning phase. The objective is to optimize the size of a network while maintaining good performance. An essential part of structure-modifying algorithms is the formulation of criteria for detecting irrelevant structural elements. Performance measures for MLP networks previously designed by the authors (1997) allow the derivation of the individual performance measure Ipm(u₁). The underlying reference ground for Ipm(u₁) is the estimate of the spectral radius of an error matrix for the neural network. This estimate can be affected by imposed modifications of structure. This paper presents the first deterministic analytical apparatus for observing the effects of structural modifications on the estimate of the spectral radius. The theoretical material has wide applicability; it is especially useful for developing training procedures that incorporate dynamic structural changes and for monitoring the performance of MLP networks.
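
The paper's apparatus is analytical, but its object can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's derivation: it models a structural element as one column of the error matrix and compares the spectral-radius estimate before and after its removal.

```python
import numpy as np

def spectral_norm(J):
    # Largest singular value of J, used as the spectral-radius estimate.
    return np.linalg.svd(J, compute_uv=False)[0]

def structural_adjustment_effect(J, element):
    """Effect of deleting one structural element (modelled here as a
    column of J) on the spectral-radius estimate. Purely illustrative."""
    before = spectral_norm(J)
    after = spectral_norm(np.delete(J, element, axis=1))
    return before, after, after - before
```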


International Symposium on Neural Networks | 1998

On design of superlinear first order automatic machine learning techniques

Peter Geczy; Shiro Usui

Due to the computational expense of second order methods, machine learning techniques in general, and neural network training techniques in particular, primarily employ first order line search optimization methods. The article presents a variation of first order line search techniques with superlinear convergence rates, i.e., the fastest convergence rates attainable by first order methods. The presented algorithm simplifies the line search subproblem to a single-step calculation of appropriate values of the step length and/or momentum term. This markedly reduces the implementation and computational complexity of the line search subproblem without harming the stability of the method. The algorithm is theoretically proven to be convergent with superlinear convergence rates, and it is exactly classified within the newly proposed classification framework for first order techniques. Performance of the proposed algorithm is evaluated on five data sets and compared to the relevant standard first order optimization techniques. The results indicate superior performance of the presented algorithm over the standard first order methods.
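
The abstract does not reproduce the single-step formula, so the sketch below substitutes a well-known closed-form step length, the Barzilai-Borwein step, to illustrate the general idea of collapsing the line search subproblem into one calculation per iteration; it is not the authors' algorithm, and the quadratic example data are hypothetical.

```python
import numpy as np

def first_order_single_step(f_grad, w0, n_iter=100, eps=1e-12):
    """Gradient descent whose step length is computed in closed form
    (Barzilai-Borwein), replacing an iterative line search."""
    w = w0.astype(float).copy()
    g = f_grad(w)
    step = 1e-3                            # bootstrap step for iteration 0
    for _ in range(n_iter):
        w_new = w - step * g
        g_new = f_grad(w_new)
        s, y = w_new - w, g_new - g
        step = (s @ s) / max(s @ y, eps)   # single-step length calculation
        w, g = w_new, g_new
    return w

# Example: minimize f(w) = 0.5 * ||A w - b||^2 (hypothetical data)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
w_min = first_order_single_step(lambda w: A.T @ (A @ w - b), np.zeros(2))
```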


Archive | 2000

First Order Dynamic Instance Selection

Peter Geczy; Shiro Usui; Ján Chmúrny

Training of adaptable systems such as neural networks depends indispensably on the training exemplar set. The most promising training algorithms utilize dynamic instance selection, a technique that selects instances dynamically at each iteration of the adaptation procedure. The adaptable system is thus presented, at each iteration, with an appropriately selected set of learning instances that can vary in size and content. The variability of the selected exemplar set contributes to the speed of learning and lowers its computational cost. Dynamic instance selection also benefits the properties of the trained adaptable system.
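
As a concrete reading of the idea, one can select at each iteration the instances whose current error exceeds a fraction of the worst error, so that the selected set naturally varies in size and content. The criterion below is an assumption for illustration, not the authors' rule.

```python
import numpy as np

def select_instances(errors, ratio=0.5):
    """Indices of instances whose per-sample error is at least `ratio`
    times the current maximum error. Illustrative criterion only."""
    return np.flatnonzero(errors >= ratio * errors.max())

# Example: only the hardest instances are kept for this iteration
print(select_instances(np.array([0.9, 0.1, 0.5, 0.05])))  # -> [0 2]
```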


International Conference on Machine Learning and Applications | 2004

Effective dynamic sample selection algorithm

Peter Geczy; Shiro Usui

Data/information overload is becoming an increasingly important issue. Training adaptable systems, or machine learning agents, on large data sets is computationally costly and often ineffective. Proper management of the data utilized for adaptation can speed up learning and decrease computational costs. The article presents a sample selection algorithm that is easily implementable in first order adaptable systems. It effectively selects an appropriate set of training exemplars at each iteration of adaptation; the selected exemplar set may vary in both size and content. The dynamic sample selection algorithm is computationally inexpensive and contributes to the increased convergence speed of first order learning methods. The presented dynamic sample selection is theoretically justified and practically demonstrated on neural network training tasks. The simulation results indicate satisfactory performance.
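
A minimal sketch of how such a selection step might sit inside a first order training loop, assuming hypothetical `errors_fn` and `grad_fn` interfaces and reusing an error-threshold criterion as a stand-in for the paper's selection rule:

```python
import numpy as np

def train_with_sample_selection(grad_fn, errors_fn, w, X, y,
                                lr=0.01, n_iter=100, ratio=0.5):
    """First order loop updating only on a dynamically selected subset.

    grad_fn(w, Xs, ys) returns the gradient on a subset; errors_fn(w, X, y)
    returns per-sample errors. Both interfaces are assumptions.
    """
    for _ in range(n_iter):
        e = errors_fn(w, X, y)                        # per-sample errors
        idx = np.flatnonzero(e >= ratio * e.max())    # subset varies per iteration
        w = w - lr * grad_fn(w, X[idx], y[idx])       # cheap update on subset
    return w
```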


International Conference on Neural Information Processing | 1999

Knowledge acquisition from networks of abstract bio-neurons

Peter Geczy; T. Hayasaka; Shiro Usui

The acquisition of knowledge from trained artificial neural networks and its representation in a logical formalism is a fast-growing subject that has attracted the attention of numerous researchers. This study focuses on the extraction of rules from artificial neural networks incorporating abstract bio-neurons. A model of an abstract bio-neuron represents a bridge between the wide applicability and the higher biological plausibility of neural networks. The issue of rule extraction is addressed at the theoretical level. The presented theoretical material introduces conditions that are generally applicable to rule extraction from nonlinear mappings. The results of the theoretical analysis are applied to three-layer network structures containing abstract bio-neuron computational elements. Appropriate conditions enabling the representation of the network's classification in the form of rules are formulated.
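
As a toy illustration of what a rule-level reading of a unit can look like (not the paper's bio-neuron formalism, whose validity conditions are precisely the subject of the theory), a hard-threshold unit admits a direct IF-THEN rendering:

```python
import numpy as np

def unit_to_rule(weights, bias, feature_names, unit_name="h"):
    """Render one hard-threshold unit as an IF-THEN rule string.

    Sound only under assumptions on the activation (a step function
    here); for soft nonlinearities, conditions of the kind formulated
    in the paper are needed before such a reading is valid.
    """
    terms = " + ".join(f"{w:.2f}*{n}" for w, n in zip(weights, feature_names))
    return f"IF {terms} > {-bias:.2f} THEN {unit_name} = 1"

print(unit_to_rule(np.array([1.5, -2.0]), -0.5, ["x1", "x2"]))
# IF 1.50*x1 + -2.00*x2 > 0.50 THEN h = 1
```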


Behaviormetrika | 1999

Rule Extraction from Trained Artificial Neural Networks

Peter Geczy; Shiro Usui


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 1998

Dynamic Sample Selection: Theory

Peter Geczy; Shiro Usui


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2000

Superlinear Conjugate Gradient Method with Adaptable Step Length and Constant Momentum Term

Peter Geczy; Shiro Usui


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2000

Novel First Order Optimization Classification Framework

Peter Geczy; Shiro Usui

Collaboration


Dive into Peter Geczy's collaborations.

Top Co-Authors

Shiro Usui
RIKEN Brain Science Institute

T. Hayasaka
Toyohashi University of Technology