T. N. Grapsa
University of Patras
Publications
Featured research published by T. N. Grapsa.
Neurocomputing | 2009
A.E. Kostopoulos; T. N. Grapsa
This article presents several efficient training algorithms based on conjugate gradient optimization methods. In addition to the existing conjugate gradient training algorithms, we introduce Perry's conjugate gradient method as a training algorithm [A. Perry, A modified conjugate gradient algorithm, Operations Research 26 (1978) 26-43]. Perry's method has proven to be very efficient in the context of unconstrained optimization, but it had never been used in MLP training. Furthermore, a new class of conjugate gradient (CG) methods is proposed, called self-scaled CG methods, which are derived from the principles of the Hestenes-Stiefel, Fletcher-Reeves, Polak-Ribière and Perry methods. This class is based on the spectral scaling parameter introduced in [J. Barzilai, J.M. Borwein, Two point step size gradient methods, IMA Journal of Numerical Analysis 8 (1988) 141-148]. The spectral scaling parameter contains second-order information without estimating the Hessian matrix. Furthermore, we incorporate into the CG training algorithms an efficient line search technique based on the Wolfe conditions and on safeguarded cubic interpolation [D.F. Shanno, K.H. Phua, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software 2 (1976) 87-94]. In addition, the initial learning rate parameter fed to the line search technique is automatically adapted at each iteration by a closed formula proposed in [D.F. Shanno, K.H. Phua, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software 2 (1976) 87-94; D.G. Sotiropoulos, A.E. Kostopoulos, T.N. Grapsa, A spectral version of Perry's conjugate gradient method for neural network training, in: D.T. Tsahalis (Ed.), Fourth GRACM Congress on Computational Mechanics, vol. 1, 2002, pp. 172-179]. Finally, an efficient restarting procedure is employed to further improve the effectiveness of the CG training algorithms. Experimental results show that, in general, the new class of methods can perform better, with a much lower computational cost and a higher success rate.
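As a rough illustration of the self-scaling idea, the sketch below combines a Polak-Ribière CG step with the Barzilai-Borwein spectral parameter and SciPy's Wolfe-condition line search; the function name sscg_minimize, the restart rules, and the exact update are illustrative assumptions, not the algorithm published in the article.

```python
# A minimal sketch, NOT the paper's implementation: a spectrally scaled
# Polak-Ribiere CG iteration with a Wolfe-condition line search, applied
# to a generic differentiable objective `f` with gradient `grad_f`.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def sscg_minimize(f, grad_f, x0, max_iter=500, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        alpha = line_search(f, grad_f, x, d, gfk=g)[0]   # Wolfe conditions
        if alpha is None:                   # line search failed: restart
            d = -g
            alpha = line_search(f, grad_f, x, d, gfk=g)[0] or 1e-4
        s = alpha * d
        x_new = x + s
        g_new = grad_f(x_new)
        y = g_new - g
        # Barzilai-Borwein spectral parameter: curvature information along
        # the step, with no Hessian evaluation or storage.
        eta = (s @ s) / (s @ y) if (s @ y) > 1e-12 else 1.0
        beta = (g_new @ y) / (g @ g)        # Polak-Ribiere coefficient
        d = -eta * g_new + beta * d         # self-scaled CG direction
        if d @ g_new >= 0:                  # not a descent direction: restart
            d = -g_new
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

print(sscg_minimize(rosen, rosen_der, [-1.2, 1.0]))   # ~ [1. 1.]
```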
Computer Physics Communications | 1995
Michael N. Vrahatis; O. Ragos; T. Skiniotis; F.A. Zafiropoulos; T. N. Grapsa
A portable software package, named RFSFNS, is presented for the localization and computation of the simple real zeros of the Bessel functions of the first and second kind, Jν(z) and Yν(z) respectively, and of their derivatives, where ν > 0 and z > 0. This package implements the topological degree theory for the localization portion and a modified bisection method for the computation portion. It localizes, isolates and computes with certainty all the desired zeros of the above functions in a predetermined interval within any accuracy (subject to relative machine precision). It has been implemented and tested on different machines, utilizing the above Bessel functions of various orders and several intervals of the argument.
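The package itself relies on topological degree for the zero counting; purely as a hedged illustration of the localize-then-bisect workflow, here is a SciPy-based sketch (the helper bessel_zeros and the scan density are my choices, not part of RFSFNS):

```python
# A minimal sketch of the localize-then-bisect idea for real Bessel zeros,
# using SciPy's jv in place of the package's own routines; the degree-based
# counting step of RFSFNS is replaced by a simple sign-change scan.
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

def bessel_zeros(v, a, b, n_scan=2000):
    """Zeros of J_v on (a, b): scan for sign changes, then bisect each."""
    z = np.linspace(a, b, n_scan)
    f = jv(v, z)
    zeros = []
    for i in range(n_scan - 1):
        if f[i] == 0.0:
            zeros.append(z[i])
        elif f[i] * f[i + 1] < 0:       # a sign change brackets a simple zero
            zeros.append(brentq(lambda x: jv(v, x), z[i], z[i + 1]))
    return zeros

print(bessel_zeros(0.0, 1.0, 20.0))     # ~ [2.4048, 5.5201, 8.6537, ...]
```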
International Conference on Mathematics of Neural Networks: Models, Algorithms and Applications | 1997
George D. Magoulas; Michael N. Vrahatis; T. N. Grapsa; George S. Androulakis
In this contribution a new method for supervised training is presented. This method is based on a recently proposed root-finding procedure for the numerical solution of systems of non-linear algebraic and/or transcendental equations in R^n. The new method reduces the dimensionality of the problem in such a way that it can lead to an iterative approximate formula for the computation of n−1 connection weights. The remaining connection weight is evaluated separately, using the final approximations of the others. This reduced iterative formula generates a sequence of points in R^{n−1} which converges quadratically to the proper n−1 connection weights. Moreover, it requires neither a good initial guess for one connection weight nor accurate error function evaluations. The new method is applied to some test cases in order to evaluate its performance. Subject classification: AMS(MOS) 65K10, 49D10, 68T05, 68G05.
International Journal of Computer Mathematics | 1990
T. N. Grapsa; Michael N. Vrahatis
A method for the numerical solution of systems of nonlinear algebraic and/or transcendental equations in R^n is presented. This method reduces the dimensionality of the system in such a way that it can lead to an iterative approximate formula for the computation of n−1 components of the solution, while the remaining component of the solution is evaluated separately using the final approximations of the other components. This (n−1)-dimensional iterative formula generates a sequence of points in R^{n−1} which converges quadratically to n−1 components of the solution. Moreover, it does not require a good initial guess for one component of the solution and it does not directly perform function evaluations, thus it can be applied to problems with imprecise function values. A proof of convergence is given and numerical applications are presented.
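A hedged sketch of the reduction idea follows (it is not the paper's iterative formula, which avoids direct function evaluations): each equation is solved one-dimensionally for the last component, and the reduced (n−1)-dimensional system requires those values to agree. The names, brackets, and example system are mine.

```python
# A toy illustration of dimension reduction for F(x, y, z) = 0 in R^3,
# NOT the paper's scheme: the inner solves use brentq and the reduced
# 2-D system is handed to fsolve, purely to make the reduction concrete.
from scipy.optimize import brentq, fsolve

# Example system with solution (1, 1, 1); the brackets below fit this example.
F = [lambda x, y, z: x**2 + y + z - 3.0,
     lambda x, y, z: x + y**2 + z - 3.0,
     lambda x, y, z: x + y + z**2 - 3.0]

def z_of(i, x, y):
    """Solve the i-th equation for z alone, with x and y held fixed."""
    return brentq(lambda z: F[i](x, y, z), 0.0, 10.0)

def reduced(xy):
    """Reduced system in R^2: the three implied z-values must coincide."""
    x, y = xy
    z0, z1, z2 = (z_of(i, x, y) for i in range(3))
    return [z0 - z2, z1 - z2]

x, y = fsolve(reduced, [0.5, 0.5])
z = z_of(2, x, y)                  # remaining component, recovered separately
print(x, y, z)                     # ~ 1.0 1.0 1.0
```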
International Journal of Computer Mathematics | 1989
T. N. Grapsa; Michael N. Vrahatis
A new method is presented for solving systems of two simultaneous nonlinear and/or transcendental equations in R^2, based on a reduction to simpler one-dimensional nonlinear equations. To approximate one component of the solution, the method does not require any information about the other component at each iteration. It generates a sequence of points in R which converges quadratically to one component of the solution, and afterwards it evaluates the other component by one simple computation. Moreover, it does not require a good initial guess for both components of the solution and it does not directly need function evaluations. A proof of convergence is given.
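As a hedged paraphrase of the underlying reduction (the notation is mine, not the paper's):

$$
f_1(x, y) = 0 \;\Rightarrow\; y = \varphi_1(x), \qquad
f_2(x, y) = 0 \;\Rightarrow\; y = \varphi_2(x),
$$

so any root $x^{*}$ of the one-dimensional equation $g(x) := \varphi_1(x) - \varphi_2(x) = 0$ delivers the first component, after which the second follows from the single computation $y^{*} = \varphi_1(x^{*})$.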
Journal of Computational and Applied Mathematics | 1996
T. N. Grapsa; Michael N. Vrahatis
A new method for unconstrained optimization in R^n is presented. This method reduces the dimension of the problem in such a way that it can lead to an iterative approximate formula for the computation of n−1 components of the optimum, while its remaining component is computed separately using the final approximations of the other components. It converges quadratically to a local optimum and requires storage of order (n−1) × (n−1). In addition, it does not require a good initial guess for one component of the optimum and it does not directly perform gradient evaluations; thus it can be applied to problems with imprecise gradient values. Moreover, a procedure for transforming the matrix formed by our method into a symmetric as well as into a diagonal one is presented. Furthermore, the proposed dimension-reducing scheme using finite-difference gradients and Hessians is presented. The methods have been implemented and tested. Performance information for well-known test functions is reported.
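In rough outline (my paraphrase, following the authors' dimension-reducing root-finding papers above), the optimization problem is attacked through its first-order conditions:

$$
\min_{x \in \mathbb{R}^n} f(x) \;\longrightarrow\; \frac{\partial f}{\partial x_i}(x_1, \dots, x_n) = 0, \quad i = 1, \dots, n,
$$

and the dimension-reducing scheme then iterates on $(x_1, \dots, x_{n-1})$ only, recovering $x_n$ separately once the other components have converged.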
Applied Mathematics and Computation | 2009
I.A. Nikas; T. N. Grapsa
In this paper we consider the problem of finding, reliably and with certainty, all zeros of a continuously differentiable real function within a given search interval. We propose a new formulation of interval arithmetic, which is treated in a theoretical manner in order to develop and prove a new method in the context of interval Newton methods. Some important theoretical aspects of the new method are stated and proved. Finally, an algorithmic realization of our method is proposed and applied to a set of test functions, where the promising theoretical results are verified.
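For orientation only, here is a toy version of the classical interval Newton contraction that the paper builds on; the Interval class, the test function, and the assumption that the derivative enclosure excludes zero are all mine, and none of the paper's extended interval arithmetic is reproduced.

```python
# A minimal interval Newton sketch illustrating the classical operator
# N(X) = m(X) - f(m(X)) / F'(X); certainty comes from interval inclusion.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def intersect(self, o):
        lo, hi = max(self.lo, o.lo), min(self.hi, o.hi)
        return Interval(lo, hi) if lo <= hi else None
    @property
    def mid(self):  return 0.5 * (self.lo + self.hi)
    @property
    def width(self):  return self.hi - self.lo

def f(x):   return x * x - 2.0                       # zero at sqrt(2)
def df(X):  return Interval(2.0 * X.lo, 2.0 * X.hi)  # F'(X) = 2X for X > 0

def interval_newton(X, tol=1e-12):
    """Contract X with N(X); an empty intersection proves there is no zero."""
    while X.width > tol:
        m, fm = X.mid, f(X.mid)
        D = df(X)                       # derivative enclosure, 0 not in D
        q = Interval(min(fm / D.lo, fm / D.hi), max(fm / D.lo, fm / D.hi))
        X = X.intersect(Interval(m, m) - q)
        if X is None:                   # empty: no zero in the box
            return None
    return X

print(interval_newton(Interval(1.0, 2.0)))   # ~ Interval(lo=1.4142..., hi=1.4142...)
```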
Numerical Functional Analysis and Optimization | 1997
Michael N. Vrahatis; O. Ragos; T. Skiniotis; F.A. Zafiropoulos; T. N. Grapsa
We study the complex zeros of Bessel functions of real order of the first and second kind and of their first derivatives. The notion of the topological degree is employed for the calculation of the exact number of these zeros within an open and bounded region of the complex plane, as well as for the localization of these zeros. First, we prove that the value of the topological degree provides the total number of complex roots within this region. Subsequently, these roots are computed by a generalized bisection method. The method presented here computes complex zeros of Bessel functions, requiring only the algebraic signs of the real and imaginary parts of these functions. It has been implemented and tested, and performance results are presented.
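For analytic functions the topological degree over a region equals the winding number supplied by the argument principle, so the counting step can be mimicked (very loosely, using full function values rather than the signs alone that the paper's method needs) as follows; the contour, SciPy's jv, and the helper names are my choices:

```python
# A hedged sketch of the zero-counting step only, NOT the paper's degree
# computation: the argument principle gives the number of zeros of J_v
# inside a circular contour as a winding number.
import numpy as np
from scipy.special import jv

def djv(v, z):
    """J_v'(z) via the standard recurrence identity."""
    return 0.5 * (jv(v - 1, z) - jv(v + 1, z))

def count_zeros(v, center, radius, n=4000):
    """Evaluate (1/2*pi*i) * contour integral of J_v'/J_v over a circle."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)
    integral = np.sum(djv(v, z) / jv(v, z) * dz)
    return int(round((integral / (2j * np.pi)).real))

# J_0 has real zeros near +-2.405 and +-5.520, so a circle of radius 6
# about the origin should enclose exactly four of them.
print(count_zeros(0.0, 0.0 + 0.0j, 6.0))   # -> 4
```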
International Journal of Computer Mathematics | 1990
T. N. Grapsa; Michael N. Vrahatis; Tassos Bountis
A procedure which accelerates the convergence of iterative methods for the numerical solution of systems of nonlinear algebraic and/or transcendental equations in R^n is introduced. This procedure uses a rotating hyperplane in R^n, whose rotation axis depends on the current approximations of n−1 components of the solution. The proposed procedure is applied here to the traditional Newton's method and to a recently proposed “dimension-reducing” method [5] which incorporates the advantages of nonlinear SOR and Newton's algorithms. In this way, two new modified schemes for solving nonlinear systems are correspondingly obtained. For both of these schemes proofs of convergence are given and numerical applications are presented.
Applied Mathematics and Computation | 2005
D. G. Sotiropoulos; T. N. Grapsa
We present an interval branch-and-prune algorithm for computing verified enclosures of the global minimum and of all global minimizers of univariate functions subject to bound constraints. The algorithm works within the branch-and-bound framework and uses first-order information about the objective function. In this context, we investigate valuable properties of the optimal center of a mean value form and prove its optimality. We also establish a selection criterion between the natural interval extension and an optimal mean value form as the inclusion function for the bounding process. Based on optimal centers, we introduce linear (inner and outer) pruning steps that drive the branching process. The proposed algorithm incorporates the above techniques in order to accelerate the search. The algorithm has been implemented, tested on a test set, and compared with three other methods; it shows a significant improvement over previous methods on the numerical examples solved.
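Stripped of the paper's optimal-center mean value forms and pruning steps, the surrounding branch-and-bound skeleton looks roughly like the sketch below, which bounds with the natural interval extension only; the interval class, test polynomial, and tolerance are mine.

```python
# A bare-bones interval branch-and-bound sketch, NOT the paper's algorithm:
# it bounds with the natural interval extension alone and omits the optimal
# mean value forms and the inner/outer pruning steps described above.
import heapq

class I:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def _mk(self, o): return o if isinstance(o, I) else I(o, o)
    def __add__(self, o): o = self._mk(o); return I(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): o = self._mk(o); return I(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = self._mk(o)
        ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
        return I(min(ps), max(ps))
    __radd__, __rmul__ = __add__, __mul__

def f(x):                              # test polynomial, global min ~ -24.06
    return x * x * x * x - 12 * (x * x * x) + 47 * (x * x) - 60 * x

def minimize(lo, hi, tol=1e-6):
    best = f(I(lo, lo)).hi             # upper bound from a point evaluation
    heap = [(f(I(lo, hi)).lo, lo, hi)] # boxes ordered by their lower bound
    while heap:
        lb, a, b = heapq.heappop(heap)
        if b - a < tol:                # lb <= f_min <= best: tight enclosure
            return lb, best, (a, b)
        m = 0.5 * (a + b)
        best = min(best, f(I(m, m)).hi)        # tighten the upper bound
        for u, v in ((a, m), (m, b)):
            Y = f(I(u, v))
            if Y.lo <= best:           # discard boxes that cannot hold the min
                heapq.heappush(heap, (Y.lo, u, v))

print(minimize(-1.0, 7.0))             # ~ (-24.06, -24.06, (0.94, 0.94))
```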