Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nikolaos Ampazis is active.

Publication


Featured research published by Nikolaos Ampazis.


IEEE Transactions on Neural Networks | 2002

Two highly efficient second-order algorithms for training feedforward networks

Nikolaos Ampazis; Stavros J. Perantonis

We present two highly efficient second-order algorithms for the training of multilayer feedforward neural networks. The algorithms are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for nonlinear least squares problems, with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization problem. Their implementation requires minimal additional computations compared to a standard LM iteration. Simulations on large-scale classical neural-network benchmarks are presented which reveal the power of the two methods to obtain solutions in difficult problems in which other standard second-order techniques (including LM) fail to converge.
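
For illustration, a single step of this family of methods can be sketched as a standard Levenberg-Marquardt direction plus a momentum term along the previous update. The sketch below is a simplified, generic version: the function name and the fixed blending coefficient beta are assumptions made for illustration, whereas LMAM/OLMAM derive the momentum coefficient adaptively from the constrained formulation.

import numpy as np

def lm_step_with_momentum(J, e, w, prev_dw, lam=1e-2, beta=0.5):
    # Standard Levenberg-Marquardt direction: solve (J^T J + lam*I) d = -J^T e,
    # where J is the Jacobian of the residuals e with respect to the weights w.
    A = J.T @ J + lam * np.eye(J.shape[1])
    d_lm = np.linalg.solve(A, -J.T @ e)
    # Add a momentum term along the previous weight update; in LMAM/OLMAM the
    # coefficient is adapted at every iteration (beta here is a fixed
    # illustrative value, not the papers' adaptive rule).
    dw = d_lm + beta * prev_dw
    return w + dw, dw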


International Symposium on Neural Networks | 2000

Levenberg-Marquardt algorithm with adaptive momentum for the efficient training of feedforward networks

Nikolaos Ampazis; Stavros J. Perantonis

We present a highly efficient second-order algorithm for the training of feedforward neural networks. The algorithm is based on iterations of the form employed in the Levenberg-Marquardt (LM) method for nonlinear least squares problems, with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization problem. Its implementation requires minimal additional computations compared to a standard LM iteration, which are, however, compensated for by its excellent convergence properties. Simulations on large-scale classical neural network benchmarks are presented which reveal the power of the method to obtain solutions in difficult problems in which other standard second-order techniques (including LM) fail to converge.


Neural Networks | 1999

Dynamics of multilayer networks in the vicinity of temporary minima

Nikolaos Ampazis; Stavros J. Perantonis; John G. Taylor

A dynamical system model is derived for a single-output, two-layer neural network which learns according to the back-propagation algorithm. Particular emphasis is placed on the analysis of the occurrence of temporary minima. The Jacobian matrix of the system is derived, whose eigenvalues characterize the evolution of learning. Temporary minima correspond to critical points of the phase plane trajectories, and the bifurcation of the Jacobian matrix eigenvalues signifies their abandonment. Following this analysis, we show that the employment of constrained optimization methods can decrease the time spent in the vicinity of such minima. A number of numerical results illustrate the analytical conclusions.
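
As a rough numerical companion to this kind of analysis, one can treat a full-batch backprop update as a discrete dynamical system w_{t+1} = F(w_t) and inspect the eigenvalues of dF/dw around the current weights; eigenvalues close to one indicate the slow dynamics typical of the vicinity of a temporary minimum. The sketch below uses a toy two-input, two-hidden-unit, single-output network and finite differences throughout; the network size, learning rate, and function names are illustrative assumptions, not the paper's setup.

import numpy as np

def forward(w, X):
    # Toy single-output network: 2 inputs -> 2 tanh hidden units -> 1 output
    W1 = w[:4].reshape(2, 2)
    w2 = w[4:6]
    return np.tanh(np.tanh(X @ W1) @ w2)

def backprop_map(w, X, y, lr=0.1, eps=1e-6):
    # One full-batch gradient-descent step on the mean squared error,
    # with the gradient estimated by central finite differences for brevity.
    g = np.zeros_like(w)
    for i in range(w.size):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        g[i] = (np.mean((forward(wp, X) - y) ** 2) -
                np.mean((forward(wm, X) - y) ** 2)) / (2 * eps)
    return w - lr * g

def update_jacobian_eigenvalues(w, X, y, lr=0.1, eps=1e-5):
    # Finite-difference Jacobian of the update map F and its eigenvalues.
    n = w.size
    Jac = np.zeros((n, n))
    for i in range(n):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        Jac[:, i] = (backprop_map(wp, X, y, lr) - backprop_map(wm, X, y, lr)) / (2 * eps)
    return np.linalg.eigvals(Jac)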


Neural Processing Letters | 1998

Constrained Learning in Neural Networks: Application to Stable Factorization of 2-D Polynomials

Stavros J. Perantonis; Nikolaos Ampazis; S.J. Varoufakis; George E. Antoniou

Adaptive artificial neural network techniques are introduced and applied to the factorization of 2-D second-order polynomials. The proposed neural network is trained using a constrained learning algorithm that achieves minimization of the usual mean square error criterion along with simultaneous satisfaction of multiple equality and inequality constraints between the polynomial coefficients. Using this method, we are able to obtain good approximate solutions for non-factorable polynomials. By incorporating stability constraints into the formalism, our method can be successfully used for the realization of stable 2-D second-order IIR filters in cascade form.
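
For intuition, the factorization target can be written as matching the coefficients of a 2-D second-order polynomial with those of a product of two first-order factors; the parameterization below is only an assumed illustration of such a cascade form, not necessarily the exact one used in the paper:

a(z1, z2) = sum over i, j in {0, 1, 2} of a_ij * z1^(-i) * z2^(-j)
          ≈ (b00 + b10*z1^(-1) + b01*z2^(-1) + b11*z1^(-1)*z2^(-1))
            * (c00 + c10*z1^(-1) + c01*z2^(-1) + c11*z1^(-1)*z2^(-1))

The constrained learning algorithm then minimizes the mean square error between the coefficients of the expanded product and the target coefficients a_ij, while the stability constraints restrict the admissible b and c coefficients so that each first-order factor corresponds to a stable filter section. Since the product has fewer free parameters than the nine target coefficients, a non-factorable polynomial can only be matched approximately, which is why the method aims at good approximate solutions.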


Neural Processing Letters | 2004

LSISOM – A Latent Semantic Indexing Approach to Self-Organizing Maps of Document Collections

Nikolaos Ampazis; Stavros J. Perantonis

The Self-Organizing Map (SOM) algorithm has been utilized, with much success, in a variety of applications for the automatic organization of full-text document collections. A great advantage of the SOM method is that document collections can be ordered in such a way that documents with similar content are positioned at nearby locations of the 2-dimensional SOM lattice. The resulting ordered map thus presents a general view of the document collection which helps the exploration of the information contained in the whole document space. The most notable example of such an application is the WEBSOM method, where the document collection is ordered onto a map by utilizing word category histograms for representing the document data vectors. In this paper, we introduce the LSISOM method, which resembles WEBSOM in the sense that the document maps are generated from word category histograms rather than simple histograms of the words. However, a major difference between the two methods is that in WEBSOM the word category histograms are formed using statistical information of short word contexts, whereas in LSISOM these histograms are obtained from the SOM clustering of the Latent Semantic Indexing representation of document terms.
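
A rough sketch of that pipeline is given below, assuming scikit-learn and the third-party minisom package. The function name, grid sizes, tf-idf weighting, and iteration counts are illustrative assumptions rather than the paper's settings.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from minisom import MiniSom  # assumed third-party SOM implementation

def lsisom_document_map(docs, n_lsi=50, term_grid=(6, 6), doc_grid=(8, 8)):
    # Term weighting: documents x terms matrix (tf-idf assumed for simplicity).
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)
    # LSI representation of the *terms*: reduce the terms x documents matrix.
    # (n_lsi must be smaller than the number of documents.)
    term_vectors = TruncatedSVD(n_components=n_lsi).fit_transform(X.T)
    # First SOM: cluster the LSI term vectors into word categories.
    term_som = MiniSom(term_grid[0], term_grid[1], n_lsi)
    term_som.train_random(term_vectors, 5000)
    # Word category histogram for every document.
    n_cat = term_grid[0] * term_grid[1]
    H = np.zeros((X.shape[0], n_cat))
    weights = X.toarray()
    for t, term_vec in enumerate(term_vectors):
        i, j = term_som.winner(term_vec)
        H[:, i * term_grid[1] + j] += weights[:, t]
    # Second SOM: the document map itself, trained on the histograms.
    doc_som = MiniSom(doc_grid[0], doc_grid[1], n_cat)
    doc_som.train_random(H, 5000)
    return doc_som, H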


Computational Management Science | 2004

Design of cellular manufacturing systems using Latent Semantic Indexing and Self Organizing Maps

Nikolaos Ampazis; Ioannis Minis

A new, efficient clustering method for solving the cellular manufacturing problem is presented in this paper. The method uses the part-machine incidence matrix of the manufacturing system to form machine cells, each of which processes a family of parts. By doing so, the system is decomposed into smaller semi-independent subsystems that are managed more effectively, improving overall performance. The proposed method uses Self Organizing Maps (SOMs), a class of unsupervised learning neural networks, to perform direct clustering of machines into cells, without first resorting to grouping parts into families as done by previous approaches. In addition, Latent Semantic Indexing (LSI) is employed to significantly reduce the complexity of the problem, resulting in more effective training of the network, significantly improved computational efficiency, and, in many cases, improved solution quality. The robustness of the method and its computational efficiency have been investigated with respect to the dimension of the problem and the degree of dimensionality reduction. The effectiveness of grouping has been evaluated by comparing the results obtained with those of the classical k-means clustering algorithm.
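
For intuition, consider a small hypothetical part-machine incidence matrix (a 1 means the part is processed on the machine):

        p1 p2 p3 p4 p5 p6
  m1     1  1  0  0  1  0
  m2     1  1  0  0  1  0
  m3     0  0  1  1  0  1
  m4     0  0  1  1  0  0

Clustering the machine rows groups m1 and m2 into one cell serving the part family {p1, p2, p5}, and m3 and m4 into a second cell serving {p3, p4, p6}; rearranging rows and columns accordingly makes the incidence matrix nearly block diagonal. In the proposed method this clustering is carried out by a SOM on machine vectors whose dimensionality has first been reduced with LSI, rather than by first grouping the parts into families.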


Hellenic Conference on Artificial Intelligence | 2004

Pap-Smear Classification Using Efficient Second Order Neural Network Training Algorithms

Nikolaos Ampazis; Georgios Dounias; Jan Jantzen

In this paper we make use of two highly efficient second-order neural network training algorithms, namely LMAM (Levenberg-Marquardt with Adaptive Momentum) and OLMAM (Optimized Levenberg-Marquardt with Adaptive Momentum), for the construction of an efficient pap-smear test classifier. The algorithms are methodologically similar, and are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for nonlinear least squares problems with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization problem. The classification results obtained from the application of the algorithms on a standard benchmark pap-smear data set reveal the power of the two methods to obtain excellent solutions in difficult classification problems in which other standard computational intelligence techniques achieve inferior performance.


Neural Networks | 2001

A dynamical model for the analysis and acceleration of learning in feedforward networks

Nikolaos Ampazis; Stavros J. Perantonis; John G. Taylor

A dynamical system model is derived for feedforward neural networks with one layer of hidden nodes. The model is valid in the vicinity of flat minima of the cost function that arise due to the formation of clusters of redundant hidden nodes with nearly identical outputs. The derivation is carried out for networks with an arbitrary number of hidden and output nodes and is, therefore, a generalization of previous work valid for networks with only two hidden nodes and one output node. The Jacobian matrix of the system is obtained, whose eigenvalues characterize the evolution of learning. Flat minima correspond to critical points of the phase plane trajectories, and the bifurcation of the eigenvalues signifies their abandonment. Following the derivation of the dynamical model, we show that identification of the hidden node clusters using unsupervised learning techniques enables the application of a constrained learning algorithm (Dynamically Constrained Back Propagation, DCBP) whose purpose is to facilitate prompt bifurcation of the eigenvalues of the Jacobian matrix and, thus, accelerate learning. DCBP is applied to standard benchmark tasks, either autonomously or as an aid to other standard learning algorithms, in the vicinity of flat minima. Its application leads to a significant reduction in the number of epochs required for convergence.
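
The cluster-identification step mentioned above can be approximated, for illustration, by grouping hidden nodes whose activation profiles over the training set are nearly identical. The sketch below uses simple hierarchical clustering from SciPy as an assumed stand-in for the unsupervised technique; the function name and threshold are illustrative.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def redundant_hidden_clusters(hidden_activations, tol=0.05):
    # hidden_activations: (n_samples, n_hidden) hidden-layer outputs.
    # One row per hidden node: its activation profile across the data set.
    profiles = hidden_activations.T
    # Average-linkage clustering on Euclidean distances between profiles;
    # nodes closer than tol (scaled by sqrt(n_samples)) fall into one cluster.
    Z = linkage(profiles, method="average", metric="euclidean")
    return fcluster(Z, t=tol * np.sqrt(profiles.shape[1]), criterion="distance")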


Annals of Operations Research | 2000

A Learning Framework for Neural Networks Using Constrained Optimization Methods

Stavros J. Perantonis; Nikolaos Ampazis; Vassilis Virvilis

Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge in the training formalism. In this paper, two types of such additional knowledge are examined: network-specific knowledge (associated with the neural network irrespective of the problem whose solution is sought) and problem-specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improvement in the learning behaviour of neural networks using additional knowledge in the context of our constrained optimization framework. The two network-specific examples are designed to improve convergence and learning speed in the broad class of feedforward networks, while the third, problem-specific example is related to the efficient factorization of 2-D polynomials using suitably constructed sigma-pi networks.
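
As a minimal sketch of how such additional knowledge can enter training, an equality constraint g(w) = 0 expressing either kind of knowledge can be folded into the objective. The quadratic penalty below is only a simple stand-in for the constrained optimization machinery developed in the paper; all names are assumptions.

import numpy as np

def penalized_objective(w, X, y, predict, g, mu=10.0):
    # Usual supervised cost: mean squared error of the network predictions ...
    mse = np.mean((predict(w, X) - y) ** 2)
    # ... plus a quadratic penalty on violating the knowledge-derived
    # equality constraint g(w) = 0.
    return mse + mu * g(w) ** 2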


International Symposium on Neural Networks | 2000

Training feedforward neural networks with the Dogleg method and BFGS Hessian updates

Stavros J. Perantonis; Nikolaos Ampazis; S. Spirou

We introduce an advanced optimization algorithm for training feedforward neural networks. The algorithm combines the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Hessian update formula with a special case of trust region techniques, the Dogleg method, as an alternative to line search methods. Simulations on classification and function approximation problems are presented which reveal a clear improvement in both convergence and success rates over standard BFGS implementations.
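
For reference, the dogleg step itself (the trust-region piece combined here with BFGS Hessian approximations) can be sketched as follows; this is the textbook construction, not the authors' specific implementation.

import numpy as np

def dogleg_step(g, B, radius):
    # g: gradient; B: quasi-Newton (e.g. BFGS) Hessian approximation.
    p_newton = -np.linalg.solve(B, g)            # full quasi-Newton step
    if np.linalg.norm(p_newton) <= radius:
        return p_newton
    p_cauchy = -(g @ g) / (g @ B @ g) * g        # steepest-descent (Cauchy) point
    if np.linalg.norm(p_cauchy) >= radius:
        return -(radius / np.linalg.norm(g)) * g
    # Walk from the Cauchy point toward the Newton point until the
    # trust-region boundary is reached.
    d = p_newton - p_cauchy
    a = d @ d
    b = 2 * p_cauchy @ d
    c = p_cauchy @ p_cauchy - radius ** 2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d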

Collaboration


Dive into Nikolaos Ampazis's collaborations.

Top Co-Authors

Jan Jantzen (Technical University of Denmark)
Helen Iakovaki (University of the Aegean)
Ioannis A. Stathopulos (National Technical University of Athens)
Ioannis F. Gonos (National Technical University of Athens)
Ioannis Minis (University of the Aegean)