George S. Androulakis
University of Patras
Publications
Featured research published by George S. Androulakis.
Neural Networks | 1997
George D. Magoulas; Michael N. Vrahatis; George S. Androulakis
The issue of variable stepsize in the backpropagation training algorithm has been widely investigated, and several techniques employing heuristic factors have been suggested to improve training time and reduce convergence to local minima. In this contribution, backpropagation training is based on a modified steepest descent method which allows variable stepsize. It is computationally efficient and possesses interesting convergence properties, utilizing estimates of the Lipschitz constant without any additional computational cost. The algorithm has been implemented and tested on several problems and the results have been very satisfactory. Numerical evidence shows that the method is robust with good average performance on many classes of problems.
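The stepsize rule described here can be sketched compactly. Below is a minimal, illustrative Python sketch of gradient descent whose stepsize is derived from a local Lipschitz estimate built from the last two iterates; the names (`grad_f`, `eta0`) and the exact update rule are assumptions for illustration, not the paper's algorithm verbatim.

```python
import numpy as np

def steepest_descent_adaptive(grad_f, x0, iters=100, eta0=1e-3):
    # Gradient descent with a stepsize set from a local Lipschitz estimate
    # of the gradient, built from the last two iterates. The estimate reuses
    # gradients already computed for the descent steps, so it adds no extra
    # function or gradient evaluations.
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad_f(x_prev)
    x = x_prev - eta0 * g_prev                     # bootstrap with a small fixed step
    for _ in range(iters):
        g = grad_f(x)
        denom = np.linalg.norm(x - x_prev)
        if denom == 0:                             # iterates stopped moving
            break
        lam = np.linalg.norm(g - g_prev) / denom   # local Lipschitz estimate
        eta = 1.0 / (2.0 * lam) if lam > 0 else eta0
        x_prev, g_prev = x, g
        x = x - eta * g
    return x
```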
Neural Computation | 1999
George D. Magoulas; Michael N. Vrahatis; George S. Androulakis
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
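The line-search ingredient can be sketched generically. The following is a plain backtracking step along the negative gradient satisfying an Armijo-type sufficient-decrease condition; it is a textbook sketch under assumed names (`f` for the error function, `g` for its gradient at `x`), and the article's actual contribution, adapting the initial learning rate per weight from local Lipschitz estimates, is not reproduced here.

```python
import numpy as np

def armijo_step(f, x, g, eta_init=1.0, sigma=1e-4, beta=0.5, max_halvings=30):
    # Backtracking (Goldstein/Armijo-style) line search along the negative
    # gradient: shrink eta until the sufficient-decrease condition holds.
    fx = f(x)
    g_sq = np.dot(g, g)
    eta = eta_init
    for _ in range(max_halvings):
        if f(x - eta * g) <= fx - sigma * eta * g_sq:
            break
        eta *= beta                    # shrink the trial learning rate
    return x - eta * g
```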
Journal of Computational and Applied Mathematics | 2000
Michael N. Vrahatis; George S. Androulakis; J.N. Lambrinos; George D. Magoulas
In this paper the development, convergence theory and numerical testing of a class of gradient unconstrained minimization algorithms with adaptive stepsize are presented. The proposed class comprises four algorithms: the first two incorporate techniques for the adaptation of a common stepsize for all coordinate directions and the other two allow an individual adaptive stepsize along each coordinate direction. All the algorithms are computationally efficient and possess interesting convergence properties utilizing estimates of the Lipschitz constant that are obtained without additional function or gradient evaluations. The algorithms have been implemented and tested on some well-known test cases as well as on real-life artificial neural network applications and the results have been very satisfactory.
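One coordinate-wise variant of the idea can be sketched as follows: each coordinate direction gets its own stepsize from a one-dimensional Lipschitz estimate. This is an illustrative sketch only; the paper defines four related algorithms precisely, and the fallback value `eta0` is an assumption here.

```python
import numpy as np

def per_coordinate_descent(grad_f, x0, iters=100, eta0=1e-3):
    # Gradient descent with an individual adaptive stepsize per coordinate,
    # each set from a component-wise Lipschitz estimate of the gradient.
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad_f(x_prev)
    x = x_prev - eta0 * g_prev
    for _ in range(iters):
        g = grad_f(x)
        dx = np.abs(x - x_prev)
        dg = np.abs(g - g_prev)
        # Component-wise Lipschitz estimates; fall back to eta0 where the
        # estimate is undefined (no movement or no gradient change).
        with np.errstate(divide="ignore", invalid="ignore"):
            lam = np.where(dx > 0, dg / dx, 0.0)
        eta = np.where(lam > 0, 1.0 / (2.0 * lam), eta0)
        x_prev, g_prev = x, g
        x = x - eta * g
    return x
```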
Journal of Computational and Applied Mathematics | 1996
George S. Androulakis; Michael N. Vrahatis
A software package for analyzing and comparing optimization methods is presented. This package displays, using different colors, the regions of convergence to the minima of a given function for various optimization methods. It also displays their rate of convergence as well as the regions of divergence of these methods. Moreover, the package gives quantitative information regarding the total convergence area in a specific domain for the various minima. Using OPTAC (OPTimization Analysis and Comparisons) we are able to “see” in a picture the advantages and disadvantages of any optimization method, as well as to compare various methods in order to choose the proper method for a given class of problems. The OPTAC package is self-contained and conforms to the ANSI FORTRAN 77 standard.
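A rough re-creation of the kind of picture OPTAC produces is sketched below. OPTAC itself is a self-contained Fortran 77 package; this Python/matplotlib sketch only mimics the idea, with `minimize` standing in for any user-supplied optimization method and `minima` for the known minimizers of the test function.

```python
import numpy as np
import matplotlib.pyplot as plt

def basin_plot(minimize, minima, xlim, ylim, n=200, tol=1e-3):
    # Start the given optimizer from every point of an n-by-n grid and
    # color each point by the minimum it converges to (-1 = diverged).
    xs = np.linspace(*xlim, n)
    ys = np.linspace(*ylim, n)
    labels = np.full((n, n), -1)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            x_final = minimize(np.array([x, y]))   # user-supplied method
            for k, m in enumerate(minima):
                if np.linalg.norm(x_final - m) < tol:
                    labels[i, j] = k
                    break
    plt.imshow(labels, origin="lower", extent=(*xlim, *ylim))
    plt.title("Regions of convergence")
    plt.show()
```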
international conference on mathematics of neural networks: models, algorithms and applications | 1997
George D. Magoulas; Michael N. Vrahatis; T. N. Grapsa; George S. Androulakis
In this contribution a new method for supervised training is presented. This method is based on a recently proposed root-finding procedure for the numerical solution of systems of non-linear algebraic and/or transcendental equations in ℝⁿ. The new method reduces the dimensionality of the problem in such a way that it can lead to an iterative approximate formula for the computation of n−1 connection weights. The remaining connection weight is evaluated separately using the final approximations of the others. This reduced iterative formula generates a sequence of points in ℝⁿ⁻¹ which converges quadratically to the proper n−1 connection weights. Moreover, it requires neither a good initial guess for one connection weight nor accurate error function evaluations. The new method is applied to some test cases in order to evaluate its performance. Subject classification: AMS(MOS) 65K10, 49D10, 68T05, 68G05.
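The dimension-reducing scheme itself is involved; the toy sketch below only illustrates its flavour under stated assumptions, and is not the paper's procedure. One equation of the system ∇E(w) = 0 is used to eliminate one weight by one-dimensional bisection, and a standard root-finder runs on the remaining n−1 components; `grad_E` and the `bracket` (which must contain a sign change) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq, root

def train_by_reduction(grad_E, w0, bracket=(-10.0, 10.0)):
    # Toy illustration only: eliminate the last weight via the last
    # equation of grad_E(w) = 0, then solve the reduced (n-1)-dim system.
    w0 = np.asarray(w0, dtype=float)

    def eliminate_last(w_rest):
        # Requires a sign change of the last gradient component on `bracket`.
        g_last = lambda wn: grad_E(np.append(w_rest, wn))[-1]
        return brentq(g_last, *bracket)

    def reduced_system(w_rest):
        wn = eliminate_last(w_rest)
        return grad_E(np.append(w_rest, wn))[:-1]

    sol = root(reduced_system, w0[:-1])
    return np.append(sol.x, eliminate_last(sol.x))
```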
international conference on electronics circuits and systems | 1996
George D. Magoulas; Michael N. Vrahatis; George S. Androulakis
We propose a method that proceeds solely with minimal information about the error function and its gradient, namely their algebraic signs, and takes minimization steps in each weight direction. This approach seems to be practically useful especially when training is affected by technology imperfections and environmental changes that cause unpredictable deviations of parameter values from the designed configuration. In such settings it may be difficult or impossible to obtain very precise values for the error function and the gradient of the error during training.
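A sign-only minimization step is easy to state. The sketch below uses a constant per-weight step, which is an illustrative choice (the paper's step rule is not reproduced here; related schemes such as Rprop adapt the step per weight):

```python
import numpy as np

def sign_only_training(grad_f, w0, step=0.01, iters=1000):
    # Minimization consuming only the algebraic sign of each partial
    # derivative: imprecise gradient magnitudes (hardware noise, drift)
    # do not change the step that is taken.
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        w = w - step * np.sign(grad_f(w))
    return w
```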
international multiconference on computer science and information technology | 2008
Eleni G. Lisgara; George S. Androulakis
Recently, a backtrack technique was developed for the efficient approximation of a time series' future optima. Such an estimate is obtained from a selection of sequenced points produced by the repetitive process of continuous optima finding. Additionally, it is shown that if any time series is treated as an objective function subject to the factors affecting its future values, the use of any optimization technique eventually locates a local optimum and therefore enables accurate prediction. In this paper the backtrack technique is combined with a steepest descent methodology for the optimization.
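The idea of applying descent to a time series treated as an objective can be illustrated with a toy sketch: move along an interpolated series by a finite-difference gradient until a local minimum is approached. This is a sketch of the flavour under assumed names (`series`, `start`), not the backtrack technique itself.

```python
import numpy as np

def next_local_minimum(series, start, lr=0.5, iters=200, h=1.0):
    # Treat the series as an objective f(t) and run steepest descent on t
    # with a central-difference gradient, staying inside the observed range.
    t = np.arange(len(series))
    f = lambda x: np.interp(x, t, series)
    x = float(start)
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)        # central difference
        x_new = np.clip(x - lr * g, 0, len(series) - 1)
        if abs(x_new - x) < 1e-6:                  # no further progress
            break
        x = x_new
    return x, f(x)
```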
International Journal of Computer Mathematics | 2015
Christina D. Nikolakakou; T. N. Grapsa; I.A. Nikas; George S. Androulakis
In this paper, an alternative optimization strategy incorporating ideas from lexicographic optimization and evolutionary algorithms is presented. The given optimization problem is approximated by other problems to which priorities are assigned. Under the sequential optimization method, these are optimized, though not exhaustively, in order to produce an initial point for the given problem. The way the involved problems are generated and the priorities assigned to them play an important role in the proposed approach. General principles for producing the objective functions of the involved problems are proposed. An algorithm named the LexOpt Algorithm, which implements the suggested process, is given. Numerical results via the LexOpt Algorithm on a set of widely used test problems show noticeably promising convergence behaviour of the proposed strategy in comparison with the utilized optimization methods.
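The skeleton of such a strategy can be sketched as follows: a sequence of approximating objectives is optimized loosely (not exhaustively), each solution warm-starting the next, and the final point initializes the original problem. How the approximating objectives and their priorities are built is the paper's contribution and is not reproduced here; the inner solver (BFGS) and iteration cap are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def lexicographic_warm_start(objectives, x0, inner_iters=5):
    # `objectives` is an ordered list by priority; the last entry is the
    # original problem. Earlier problems are only solved approximately.
    x = np.asarray(x0, dtype=float)
    for f in objectives[:-1]:
        # Loose optimization: a hard iteration cap, no convergence demand.
        x = minimize(f, x, method="BFGS", options={"maxiter": inner_iters}).x
    # Solve the original problem from the produced initial point.
    return minimize(objectives[-1], x, method="BFGS").x
```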
international multiconference on computer science and information technology | 2008
Costantinos Rougeris; George S. Androulakis
The increase in the number of services provided on the World Wide Web is lately driving more and more consumers to e-commerce. In recent years, owing to the vast increase in e-stores (B2C), electronic marketplaces and electronic auction sites have become more popular. Many researchers have examined how buyers interact with auction facts and sellers during the procedure. Questions such as "how do millions of users decide about their e-bidding" and "what are the factors affecting them and what is their order of importance" are amongst the most significant ones in current research. In this paper these factors are initially located using the auction literature and the eBay interface, and are expanded with the addition of a new factor (communication with seller). Their weights derive from the statistical analysis of the answers given in a questionnaire that was filled in electronically by eBay users during February to March 2007.
panhellenic conference on informatics | 2015
Christina D. Nikolakakou; T. N. Grapsa; George S. Androulakis
The LexOpt and TLSO algorithms have been proposed for unconstrained optimization. These algorithms transform the problem of minimizing the objective function into an equivalent Lexicographic Multiobjective Optimization (LMO) problem whose final (sub-)problem is identical to the given one, while the others are proper approximations of it. In the implementation of these algorithms, a preprocessing step produces an initial point that is then fed to the given optimization problem; the problems in the preprocessing step are not optimized exhaustively. The motivation for the new algorithm proposed in this paper, named TrustLex-Opt, is the reduction of the computational cost of the above algorithms. To this end, the number of problems in the preprocessing step is reduced, and a combination of trust region and line search methods substitutes the line search method utilized in the LexOpt algorithm. Furthermore, since a key point in the usage of trust region methods is the initial radius, a way of defining a proper initial radius is proposed. The convergence of the new algorithm is proved and promising preliminary numerical results on well-known test problems are presented.
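The trust-region ingredient can be illustrated with a single generic Cauchy-point step. This is a textbook sketch, not TrustLex-Opt itself, and the initial radius (the subject of the paper's proposal) is left as a caller-supplied parameter.

```python
import numpy as np

def cauchy_trust_region_step(f, grad_f, x, radius):
    # One Cauchy-point trust-region step: move to the trust-region boundary
    # along the negative gradient, then adjust the radius by comparing the
    # actual decrease with the decrease predicted by the linear model.
    g = grad_f(x)
    gnorm = np.linalg.norm(g)
    if gnorm == 0:
        return x, radius
    step = -(radius / gnorm) * g
    rho = (f(x) - f(x + step)) / (radius * gnorm)  # actual / predicted decrease
    if rho > 0.75:
        radius *= 2.0                              # model trusted: expand
    elif rho < 0.25:
        radius *= 0.25                             # model poor: shrink
    return (x + step, radius) if rho > 0 else (x, radius)
```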