
Publications


Featured research published by Věra Kůrková.


Neural Networks | 1998

Representations and rates of approximation of real-valued Boolean functions by neural networks

Věra Kůrková; Petr Savický; K. Hlaváčková

We give upper bounds on rates of approximation of real-valued functions of d Boolean variables by one-hidden-layer perceptron networks. Our bounds are of the form c/n where c depends on certain norms of the function being approximated and n is the number of hidden units. We describe sets of functions where these norms grow either polynomially or exponentially with d.
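To make the shape of such a bound concrete, here is a small numeric illustration of how a c/n bound shrinks as hidden units are added (the constant c below is an arbitrary placeholder, not a value from the paper, where c depends on norms of the approximated function):

```python
# Decay of a c/n approximation bound as the number n of hidden units grows.
# c is a placeholder; in the paper it depends on norms of the target function.
c = 10.0
ns = (1, 10, 100, 1000)
bounds = [c / n for n in ns]
for n, b in zip(ns, bounds):
    print(f"n = {n:4d}  bound = {b}")
```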


Neural Computation | 1994

Functionally equivalent feedforward neural networks

Věra Kůrková; Paul C. Kainen

For a feedforward perceptron type architecture with a single hidden layer but with a quite general activation function, we characterize the relation between pairs of weight vectors determining networks with the same input-output function.


Neural Computation | 2009

An integral upper bound for neural network approximation

Paul C. Kainen; Věra Kůrková

Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of network units increases. These bounds are obtained for various norms using the framework of Bochner integration. Results are applied to perceptron networks.


Neurocomputing | 1999

Approximation by neural networks is not continuous

Paul C. Kainen; Věra Kůrková; Andrew Vogt

It is shown that in a Banach space X satisfying mild conditions, for an infinite, linearly independent subset G there is no continuous best approximation map from X to the n-span, span_n G. The hypotheses are satisfied when X is an Lp-space with 1 < p < ∞. When span_n G is not a subspace of X, it is also shown that there is no continuous map from X to span_n G within any positive constant of a best approximation.


Mathematics of Operations Research | 2008

Approximate Minimization of the Regularized Expected Error over Kernel Models

Věra Kůrková; Marcello Sanguineti

Learning from data under constraints on model complexity is studied in terms of rates of approximate minimization of the regularized expected error functional. For kernel models with an increasing number n of kernel functions, upper bounds on such rates are derived. The bounds are of the form a/n + b/√n, where a and b depend on the regularization parameter, on properties of the kernel, and on the probability measure defining the expected error. As a special case, estimates of rates of approximate minimization of the regularized empirical error are derived.
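As a quick sanity check on the shape of that bound, the sketch below evaluates a/n + b/√n with placeholder constants a and b (not values from the paper) and shows the b/√n term dominating for large n:

```python
import math

# Evaluate the a/n + b/sqrt(n) rate with illustrative constants; in the paper
# a and b depend on the regularization parameter, the kernel, and the
# probability measure defining the expected error.
a, b = 2.0, 1.0

def bound(n: int) -> float:
    return a / n + b / math.sqrt(n)

for n in (10, 100, 10_000):
    print(f"n = {n:6d}  a/n = {a/n:.4g}  b/sqrt(n) = {b/math.sqrt(n):.4g}  "
          f"total = {bound(n):.4g}")
```

For large n the b/√n term dominates the a/n term, so the constant b governs the asymptotic rate.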


Neural Computation | 2008

Minimization of error functionals over perceptron networks

Věra Kůrková

Supervised learning of perceptron networks is investigated as an optimization problem. It is shown that both the theoretical and the empirical error functionals achieve minima over sets of functions computable by networks with a given number n of perceptrons. Upper bounds on rates of convergence of these minima as n increases are derived. The bounds depend on a certain regularity of the training data, expressed in terms of variational norms of functions interpolating the data (in the case of the empirical error) and of the regression function (in the case of the expected error). The dependence of this type of regularity on dimensionality and on magnitudes of partial derivatives is investigated. Conditions on the data are derived that guarantee that a good approximation of global minima of error functionals can be achieved using networks of limited complexity. The conditions are stated in terms of the oscillatory behavior of the data, measured by the product of a function of the number of variables d, which decreases exponentially fast, and the maximum of the squared L1-norms of the iterated partial derivatives of order d of the regression function, or of some function interpolating the data sample. The results are illustrated by examples of data with low and high regularity constructed using Boolean functions and the Gaussian function.


Annals of Operations Research | 2001

Continuity of Approximation by Neural Networks in Lp Spaces

Paul C. Kainen; Věra Kůrková; Andrew Vogt

Devices such as neural networks typically approximate the elements of some function space X by elements of a nontrivial finite union M of finite-dimensional spaces. It is shown that if X=Lp(Ω) (1<p<∞ and Ω⊂Rd), then for any positive constant Γ and any continuous function φ from X to M, ‖f−φ(f)‖>‖f−M‖+Γ for some f in X. Thus, no continuous finite neural network approximation can be within any positive constant of a best approximation in the Lp-norm.


Journal of Approximation Theory | 2003

Best approximation by linear combinations of characteristic functions of half-spaces

Paul C. Kainen; Věra Kůrková; Andrew Vogt

It is shown that for any positive integer n and any function f in Lp([0,1]d) with p ∈ [1,∞), there exist n half-spaces such that f has a best approximation by a linear combination of their characteristic functions. Further, any sequence of linear combinations of n half-space characteristic functions whose distances to f converge to the best-approximation distance has a subsequence converging to a best approximation; i.e., the set of such n-fold linear combinations is an approximatively compact set.
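The flavor of such approximants can be sketched numerically. The least-squares fit below (an illustration only, not the paper's construction or its existence proof) approximates a function on [0,1] by a linear combination of n characteristic functions of half-lines, with thresholds fixed on a grid:

```python
import numpy as np

# Fit f(x) = sin(2*pi*x) on [0, 1] by a linear combination of n indicator
# functions 1[x >= t_k]; the thresholds t_k and grid are illustrative choices.
x = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * x)

n = 8
thresholds = np.linspace(0.0, 1.0, n, endpoint=False)
Phi = (x[:, None] >= thresholds[None, :]).astype(float)   # 200 x n design matrix

coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
rms_error = np.sqrt(np.mean((Phi @ coef - f) ** 2))
print(f"RMS error with n = {n} half-line indicators: {rms_error:.3f}")
```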


International Conference on Adaptive and Natural Computing Algorithms | 2007

Estimates of Approximation Rates by Gaussian Radial-Basis Functions

Paul C. Kainen; Věra Kůrková; Marcello Sanguineti

Rates of approximation by networks with Gaussian RBFs with varying widths are investigated. For certain smooth functions, upper bounds are derived in terms of a Sobolev-equivalent norm. Coefficients involved are exponentially decreasing in the dimension. The estimates are proven using Bessel potentials as auxiliary approximating functions.
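As a rough empirical counterpart (an illustration only; the centers, widths, grid, and target function are arbitrary choices, not the paper's construction), the sketch below fits a smooth function by least squares with a growing number of Gaussian RBFs and watches the error fall:

```python
import numpy as np

# Least-squares fit of a smooth target by n Gaussian RBFs centered on a grid;
# the width 1.5/n and the target sin(2*pi*x) are illustrative choices.
x = np.linspace(0.0, 1.0, 400)
f = np.sin(2 * np.pi * x)

errors = {}
for n in (4, 8, 16):
    centers = (np.arange(n) + 0.5) / n
    width = 1.5 / n
    Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
    errors[n] = np.sqrt(np.mean((Phi @ coef - f) ** 2))
    print(f"n = {n:2d}  RMS error = {errors[n]:.2e}")
```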


Logic Journal of the IGPL / Bulletin of the IGPL | 2005

Neural Network Learning as an Inverse Problem

Věra Kůrková

Capability of generalization in learning of neural networks from examples can be modelled using regularization, which has been developed as a tool for improving stability of solutions of inverse problems. Such problems are typically described by integral operators. It is shown that learning from examples can be reformulated as an inverse problem defined by an evaluation operator. This reformulation leads to an analytical description of an optimal input/output function of a network with kernel units, which can be employed to design a learning algorithm based on a numerical solution of a system of linear equations.
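A minimal sketch of the kind of algorithm the abstract points to, in the standard kernel ridge regression formulation (the Gaussian kernel, its width, the data, and the regularization parameter below are illustrative assumptions, not the paper's choices): the optimal input/output function is determined by coefficients that solve a single linear system in the kernel Gram matrix.

```python
import numpy as np

WIDTH = 0.5  # Gaussian kernel width (illustrative)

def gram(X: np.ndarray) -> np.ndarray:
    """Gram matrix of the Gaussian kernel on 1-D sample points X."""
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-d2 / (2 * WIDTH ** 2))

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-1.0, 1.0, size=40))
y = np.sin(3 * X)

lam = 1e-3
K = gram(X)
# Optimal input/output function: f(x) = sum_i c_i k(x, X_i), where the
# coefficients c solve the linear system (K + lam * n * I) c = y.
c = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

def predict(x: float) -> float:
    k = np.exp(-((x - X) ** 2) / (2 * WIDTH ** 2))
    return float(k @ c)

print("f(0.2) =", predict(0.2), " target sin(0.6) =", np.sin(0.6))
```

The learning step is a single call to a linear solver, which is the sense in which the analytical description yields an algorithm based on a numerical solution of a system of linear equations.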

Collaboration


An overview of Věra Kůrková's collaborations.

K. Hlaváčková

Academy of Sciences of the Czech Republic


Petr Savický

Academy of Sciences of the Czech Republic


Ongard Sirisaengtaksin

University of Houston–Downtown


Vladik Kreinovich

University of Texas at El Paso
