Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gábor Horváth is active.

Publications


Featured research published by Gábor Horváth.


International Symposium on Neural Networks | 2004

A sparse least squares support vector machine classifier

József Valyon; Gábor Horváth

Since the early 1990s, support vector machines (SVMs) have attracted more and more attention due to their applicability to a large number of problems. To overcome the high computational complexity of traditional support vector machines, the least squares SVM (LS-SVM) was introduced, but a very attractive feature of the SVM, its sparseness, was lost. The LS-SVM reduces the required computation to solving a set of linear equations, which embodies all available information about the learning process. By modifying this equation set, we present a least squares version of the least squares support vector machine (LS²-SVM). The proposed modification speeds up the calculations and provides better results, but most importantly it yields a sparse solution.
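
A minimal sketch of the LS-SVM training step the abstract builds on: the dual variables come from a single linear system. The RBF kernel, the hyperparameters, and the toy data are illustrative assumptions; the paper's LS²-SVM additionally prunes this system to regain sparseness, which is not reproduced here.

```python
# Minimal LS-SVM regression sketch (illustrative; not the authors' LS2-SVM code).
# The dual solution comes from one linear system in (b, alpha):
#   [ 0    1^T         ] [ b     ]   [ 0 ]
#   [ 1    K + I/gamma ] [ alpha ] = [ y ]
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Tiny usage example on a noisy sine
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, np.array([[0.5]])))  # ~ sin(0.5)
```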


Systems, Man and Cybernetics | 2007

Kernel CMAC With Improved Capability

Gábor Horváth; Tamás Szabó

The cerebellar model articulation controller (CMAC) neural network is a real alternative to the MLP and RBF networks, with some advantageous features: its training is fast and its architecture is especially suitable for digital hardware implementation. The price of these attractive features is its rather limited modeling capability: a CMAC may have significant approximation and generalization errors. The generalization error can be reduced significantly if a regularization term is applied during training, while the approximation capability can be improved by increasing the complexity of the network. This paper shows that, using a kernel interpretation, the approximation capability of the network can be improved without increasing its complexity. It also shows that regularization can be applied in the kernel representation, so a new version of the CMAC is proposed in which both the approximation and generalization capabilities are improved significantly.
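
As a rough illustration of the kernel view described above, the sketch below treats the overlap of the CMAC's binary receptive fields as a triangular (second-order B-spline) kernel in one dimension and adds the regularization as a ridge term. The kernel choice, ridge parameter, and data are assumptions, not the paper's exact construction.

```python
# Illustrative sketch of a kernelized CMAC-style regressor in 1-D.
# The overlap of CMAC's binary receptive fields acts like a triangular
# kernel; regularization enters as a ridge term in the linear solve.
import numpy as np

def cmac_kernel(x1, x2, width=0.5):
    # Triangular kernel: proportional to the number of shared
    # quantized receptive fields covering both points.
    return np.maximum(0.0, 1.0 - np.abs(x1[:, None] - x2[None, :]) / width)

def fit(x, y, lam=1e-2, width=0.5):
    K = cmac_kernel(x, x, width)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)  # regularized solve
    return alpha

def predict(x_train, alpha, x_new, width=0.5):
    return cmac_kernel(x_new, x_train, width) @ alpha

x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x) + 0.05 * np.random.default_rng(1).normal(size=40)
alpha = fit(x, y)
print(predict(x, alpha, np.array([np.pi / 2])))  # ~ 1.0
```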


IEEE Transactions on Instrumentation and Measurement | 2005

Nonlocal hysteresis function identification and compensation with neural networks

Péter Berényi; Gábor Horváth; Vincent Lampaert; Jan Swevers

This paper discusses the on-line identification of nonlocal static hysteresis functions, which are encountered in mechanical friction, magnetic materials, and piezoelectric actuators, and which cause problems in controller design. A new compensation method for friction in the presliding regime is introduced, based on the simplified Leuven friction model and on neural network techniques. We show how to identify the hysteresis caused by the friction and how to use the identified model to compensate for the friction effects. The solution can be used for on-line identification and compensation. Results from both simulations and experiments are presented.
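
A hedged sketch of the identification idea: because the hysteresis is nonlocal, the network is given not only the current input but also the input value at the last reversal point. The synthetic friction data, the reversal-tracking logic, and the network size below are illustrative assumptions, not the simplified Leuven model itself.

```python
# Sketch: learn a memory-dependent (nonlocal) hysteresis by augmenting
# the network input with the displacement at the last reversal point.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.05, 2000))      # wandering displacement
rev = np.zeros_like(x)                        # input at last reversal
last = x[0]
rev[0] = last
for i in range(1, len(x)):
    if (x[i] - x[i-1]) * (x[i-1] - x[i-2] if i > 1 else 1.0) < 0:
        last = x[i-1]                         # direction changed: new reversal
    rev[i] = last

# Toy hysteretic "friction force": saturates with distance from the reversal
f = np.tanh(3.0 * (x - rev))

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(np.column_stack([x, rev]), f)
print(net.score(np.column_stack([x, rev]), f))  # fit quality on training data
```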


International Symposium on Neural Networks | 2004

Kernel CMAC with improved capability

Gábor Horváth

The cerebellar model articulation controller (CMAC) neural network is a real alternative to the MLP and RBF networks, with some advantageous features: its training is fast and its architecture is especially suitable for digital hardware implementation. The price of these attractive features is its rather limited modeling capability: a CMAC may have significant approximation and generalization errors. The generalization error can be reduced significantly if a regularization term is applied during training, while the approximation capability can be improved by increasing the complexity of the network. This paper shows that, using a kernel interpretation, the approximation capability of the network can be improved without increasing its complexity. It also shows that regularization can be applied in the kernel representation, so a new version of the CMAC is proposed in which both the approximation and generalization capabilities are improved significantly.


Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications | 2003

Improved model order estimation for nonlinear dynamic systems

László Sragner; Gábor Horváth

In system modelling, the choice of a proper model structure is an essential task. The model structure is defined once both the model class and the size of the model within this class are determined. In dynamic system modelling, model size is mainly determined by model order. We deal with the question of model order estimation when neural networks are used for modelling nonlinear dynamic systems. One possible way of estimating the order of a neural model is the Lipschitz quotient method. Although the method is easy to use, its main drawback is its high sensitivity to noisy data. We propose a new way to reduce the effect of noise: combining the original Lipschitz method with the errors-in-variables (EIV) approach. We present the details of the proposed combined method and give the results of an extensive experimental study.
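
For orientation, the sketch below implements the baseline Lipschitz quotient index that the paper improves on: for each candidate order, regressors of lagged outputs are formed and the largest ratios |Δy| / ‖Δx‖ are aggregated; the index stops falling sharply once the true order is reached. The noise-robust EIV combination is not reproduced; the data and parameters are illustrative.

```python
# Sketch of model order estimation via Lipschitz quotients.
import numpy as np

def lipschitz_index(y, order, n_pairs=200, p=10, seed=0):
    # Regressors of `order` past outputs; quotient |dy| / ||dx||.
    X = np.column_stack([y[order - k - 1 : len(y) - k - 1] for k in range(order)])
    t = y[order:]
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(t), n_pairs)
    j = rng.integers(0, len(t), n_pairs)
    keep = i != j
    q = np.abs(t[i[keep]] - t[j[keep]]) / (
        np.linalg.norm(X[i[keep]] - X[j[keep]], axis=1) + 1e-12)
    return np.exp(np.mean(np.log(np.sort(q)[-p:])))  # geometric mean of largest

# Second-order toy system: the index should drop up to order 2, then flatten.
rng = np.random.default_rng(1)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6 * y[k-1] - 0.3 * y[k-2] + 0.1 * np.tanh(y[k-1]) + 0.05 * rng.normal()
for n in range(1, 5):
    print(n, lipschitz_index(y, n))
```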


IEEE Transactions on Instrumentation and Measurement | 1998

Finite word length computational effects of the principal component analysis networks

Tamás Szabó; Gábor Horváth

This paper deals with some of the effects of finite precision data representation and arithmetic in principal component analysis (PCA) neural networks. PCA networks are single-layer linear neural networks that use some version of Oja's learning rule. The paper concentrates on the effects of premature convergence or early termination of the learning process. It determines an approximate analytical expression for the lower limit of the learning rate parameter. If the learning rate is selected below this limit, which depends on the statistical properties of the input data and on the quantum size used in the finite precision arithmetic, convergence slows down significantly or the learning process stops before converging to the proper weight vector.
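
The following sketch illustrates the effect analyzed in the paper, under assumed data and word lengths: Oja's rule with weights stored at a fixed quantum stalls when the learning rate is so small that every update rounds to zero.

```python
# Oja's single-neuron PCA rule under quantized (finite word length) weight
# storage. With quantum q, updates smaller than q/2 round to zero, so a
# too-small learning rate freezes the weight short of the principal direction.
import numpy as np

def oja_quantized(X, eta, quantum, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.array([1.0, 0.0])
    for _ in range(steps):
        x = X[rng.integers(len(X))]
        y = w @ x
        dw = eta * y * (x - y * w)                  # Oja's rule
        w = quantum * np.round((w + dw) / quantum)  # finite word length store
    return w / np.linalg.norm(w)

# Correlated 2-D data; true principal direction is ~ [1, 1] / sqrt(2)
rng = np.random.default_rng(2)
z = rng.normal(size=5000)
X = np.column_stack([z + 0.1 * rng.normal(size=5000),
                     z + 0.1 * rng.normal(size=5000)])
print("eta=0.01:", oja_quantized(X, 0.01, quantum=2**-8))   # converges
print("eta=1e-5:", oja_quantized(X, 1e-5, quantum=2**-8))   # stalls at start
```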


Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2001

An Efficient Hardware Implementation of Feed-Forward Neural Networks

Tamás Szabó; Gábor Horváth

This paper proposes a new way of implementing the nonlinear activation functions of feed-forward neural networks in digital hardware. The basic idea of this realization is that the nonlinear functions can be implemented using a matrix-vector multiplication. Recently a new approach was proposed for the realization of matrix-vector multiplications, and this approach can also be applied to implement the nonlinear functions if they are approximated by simple basis functions. The paper proposes B-spline basis functions to approximate the nonlinear sigmoidal functions, shows that this approximation fulfills the general requirements on activation functions, presents the details of the proposed hardware implementation, and summarizes an extensive study of the effects of the B-spline nonlinear function realization on the size and trainability of feed-forward neural networks.
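
A minimal sketch of the core idea, with an assumed knot grid: approximate tanh by second-order (piecewise-linear) B-spline basis functions, so that evaluating the activation reduces to a dot product between a basis vector and a coefficient vector, i.e., a matrix-vector style operation.

```python
# Approximating a sigmoidal activation with B-spline basis functions so
# that evaluation becomes a basis-vector / coefficient-vector product.
import numpy as np

knots = np.linspace(-4.0, 4.0, 17)       # assumed uniform knot grid
coef = np.tanh(knots)                    # one coefficient per basis function

def hat_basis(x, knots):
    # Second-order B-splines on a uniform grid: triangular "hat" functions.
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x - knots) / h)

def spline_tanh(x):
    # Piecewise-linear interpolation of tanh via the hat basis.
    return hat_basis(np.clip(x, knots[0], knots[-1]), knots) @ coef

for x in (-2.0, -0.3, 0.0, 1.5):
    print(x, spline_tanh(x), np.tanh(x))  # approximation vs. exact
```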


Speech Communication | 1999

Real time vector quantization of LSP parameters

Balazs Kovesi; Samir Saoudi; Jean Marc Boucher; Gábor Horváth

The distance measure is of great importance in both the design and the coding stage of a vector quantizer. Due to its complexity, however, the spectral distance that best correlates with perceptual quality is seldom used. On the other hand, various weighted squared Euclidean distance measures give close or even accurate estimates of the meaningful spectral distance; since they are in general mathematically more tractable, they are more commonly used. Significant differences can be found in the performance of the distance measures suggested in the literature. In this paper, a complete study and comparison of weighted squared Euclidean distance measures is given. The paper also proposes a new weighted squared Euclidean distance measure for vector quantization of line spectrum pair (LSP) or cosine of LSP (CLSP) parameters, and presents an efficient adaptation scheme for using the proposed distance measure with split or multi-stage vector quantizers.
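
To make the distance concrete, the sketch below runs a codebook search with a weighted squared Euclidean distance; the inverse-gap weighting is one common heuristic for LSP parameters (emphasizing closely spaced lines, which mark spectral peaks) and is an illustrative assumption, not necessarily the paper's proposed measure.

```python
# Weighted squared Euclidean codebook search for LSP vector quantization.
import numpy as np

def lsp_weights(lsp):
    # Emphasize components whose neighbouring lines are close together.
    padded = np.concatenate(([0.0], lsp, [np.pi]))
    gaps = np.minimum(np.diff(padded)[:-1], np.diff(padded)[1:])
    return 1.0 / np.maximum(gaps, 1e-6)

def quantize(lsp, codebook):
    w = lsp_weights(lsp)
    d = ((codebook - lsp) ** 2 * w).sum(axis=1)   # weighted squared distance
    return int(np.argmin(d))

rng = np.random.default_rng(0)
codebook = np.sort(rng.uniform(0.05, np.pi - 0.05, (64, 10)), axis=1)
lsp = np.sort(rng.uniform(0.05, np.pi - 0.05, 10))
print("best codeword index:", quantize(lsp, codebook))
```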


Computational Intelligence | 2007

Extended Least Squares LS-SVM

József Valyon; Gábor Horváth


Periodica Polytechnica Electrical Engineering | 2003

A Weighted Generalized LS-SVM

József Valyon; Gábor Horváth

Collaboration


Dive into Gábor Horváth's collaborations.

Top Co-Authors

- József Valyon, Budapest University of Technology and Economics
- Tamás Szabó, Budapest University of Technology and Economics
- László Sragner, Budapest University of Technology and Economics
- Jan Swevers, Katholieke Universiteit Leuven
- Péter Berényi, Katholieke Universiteit Leuven
- Vincent Lampaert, Katholieke Universiteit Leuven