Publications


Featured research published by Jiří Šíma.


Neural Computation | 2003

General-purpose computation with neural networks: a survey of complexity theoretic results

Jiří Šíma; Pekka Orponen

We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important issue of learning.
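
To make the taxonomy concrete, here is a minimal sketch (not taken from the survey itself) of one cell of that classification: a finite, recurrent, discrete-time, binary-state, asymmetric, deterministic network of threshold units updated in fully parallel mode. The weights, thresholds, and initial state are illustrative assumptions.

```python
import numpy as np

# One cell of the taxonomy: finite, recurrent, discrete-time,
# binary-state, asymmetric, deterministic threshold network,
# updated in fully parallel (synchronous) mode.
def step(W, theta, y):
    """One synchronous update: y_i <- 1 iff sum_j W[i][j] * y[j] >= theta[i]."""
    return (W @ y >= theta).astype(int)

W = np.array([[ 0, 2, -1],    # illustrative asymmetric integer weights
              [ 1, 0,  1],
              [-2, 1,  0]])
theta = np.array([1, 1, 0])   # illustrative thresholds
y = np.array([1, 0, 1])       # initial binary state
for _ in range(4):            # iterate the dynamics
    y = step(W, theta, y)
    print(y)
```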


Neural Networks | 1995

Neural expert systems

Jiří Šíma

The advantages and disadvantages of classical rule-based and neural approaches to expert system design are complementary. We propose a strictly neural expert system architecture that enables the creation of the knowledge base automatically, by learning from example inferences. For this purpose, we employ a multilayered neural network, trained with generalized back propagation for interval training patterns, which also makes the learning of patterns with irrelevant inputs and outputs possible. We eliminate the disadvantages of the neural approach by enriching the system with heuristics to work with incomplete information and to explain the conclusions. The structure of the expert attributes is optional, and a user of the system can define the types of inputs and outputs (real, integer, scalar type, and set) and the manner of their coding (floating point, binary, and unary codes). We have tested our neural expert system on several nontrivial real-world problems (e.g., the diagnostics and progress prediction of hereditary muscular disease), and the results are very good.
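
As a toy illustration of the coding options mentioned above, the sketch below contrasts unary and binary codes for a scalar attribute. The function names and the particular encodings are assumptions for illustration, not the paper's implementation.

```python
def unary_code(value, levels):
    """Unary ('thermometer') code: one input neuron per level."""
    return [1 if i < value else 0 for i in range(levels)]

def binary_code(value, bits):
    """Plain binary code on a fixed number of input neurons."""
    return [(value >> i) & 1 for i in reversed(range(bits))]

print(unary_code(3, 5))   # [1, 1, 1, 0, 0]
print(binary_code(5, 4))  # [0, 1, 0, 1]
```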


Neural Networks | 1996

Back-propagation is not efficient

Jiří Šíma

The back-propagation learning algorithm for multi-layered neural networks, which is often successfully used in practice, appears very time-consuming even for small network architectures or training tasks. However, no results are yet known concerning the complexity of this algorithm. Blum and Rivest proved that training even a three-node network is NP-complete for the case when a neuron computes the discrete linear threshold function. We generalize the technique from their NP-hardness proof to a continuous sigmoidal function of the kind used in back-propagation. We show that training a three-node sigmoid network with an additional constraint on the output neuron function (e.g., zero threshold) is NP-hard. As a consequence, we find training sigmoid feedforward networks with a single hidden layer and zero threshold of the output neuron to be intractable. This implies that back-propagation is generally not an efficient algorithm, unless P = NP. We take advantage of these results by showing the NP-hardness of a special nonlinear programming problem.
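
For intuition, this sketch writes down the quadratic training error of a Blum-Rivest-style three-node network (two sigmoid hidden units feeding one output unit with zero threshold); the hardness result concerns minimizing exactly this kind of objective. The training task and weight initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    """Standard logistic sigmoid used by back-propagation."""
    return 1.0 / (1.0 + np.exp(-z))

def three_node_net(x, W, v):
    """Two sigmoid hidden units (rows of W) feeding one sigmoid
    output unit with weights v and zero threshold."""
    return sigmoid(v @ sigmoid(W @ x))

def training_error(W, v, X, targets):
    """Quadratic training error; finding weights minimizing it is NP-hard."""
    return sum((three_node_net(x, W, v) - t) ** 2 for x, t in zip(X, targets))

X = [np.array([0., 0.]), np.array([0., 1.]), np.array([1., 0.]), np.array([1., 1.])]
targets = [0., 1., 1., 0.]                      # an XOR-like training task
rng = np.random.default_rng(0)
W, v = rng.normal(size=(2, 2)), rng.normal(size=2)
print(training_error(W, v, X, targets))
```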


Neural Computation | 2002

Training a single sigmoidal neuron is hard

Jiří Šíma

We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by the proof that the simplest architecture containing only a single neuron that applies a sigmoidal activation function σ: ℝ → [α, β], satisfying certain natural axioms (e.g., the standard (logistic) sigmoid or saturated-linear function), to the weighted sum of n inputs is hard to train. In particular, the problem of finding the weights of such a unit that minimize the quadratic training error within (β - α)², or its average (over a training set) within 5(β - α)²/(12n), of its infimum proves to be NP-hard. Hence, the well-known backpropagation learning algorithm appears not to be efficient even for one neuron, which has negative consequences in constructive learning.
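
The objective in question fits in a few lines. The sketch below uses the saturated-linear function as one concrete σ satisfying the axioms (α = 0, β = 1 assumed for illustration) and evaluates the quadratic training error whose approximation within (β - α)² of the infimum is the NP-hard problem.

```python
import numpy as np

def sat_linear(z, alpha=0.0, beta=1.0):
    """Saturated-linear activation sigma: R -> [alpha, beta],
    one concrete function satisfying the paper's axioms."""
    return np.clip(z, alpha, beta)

def unit_error(w, b, X, targets):
    """Quadratic training error of a single sigmoidal unit; even
    approximating its infimum within (beta - alpha)^2 is NP-hard."""
    return sum((sat_linear(w @ x + b) - t) ** 2 for x, t in zip(X, targets))

X = [np.array([0., 0.]), np.array([0., 1.]), np.array([1., 0.]), np.array([1., 1.])]
targets = [0., 1., 1., 0.]
print(unit_error(np.array([1., -1.]), 0.0, X, targets))
```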


Neural Computation | 1994

Loading deep networks is hard

Jiří Šíma

The loading problem formulated by J. S. Judd seems to be a relevant model, from the complexity point of view, for supervised connectionist learning of feedforward networks. It is known that loading general network architectures is NP-complete (intractable) when the (training) tasks are also general. Many strong restrictions on architectural design and/or on the tasks do not help to avoid the intractability of loading. Judd concentrated on width-expanding architectures with constant depth and found a polynomial-time algorithm for loading restricted shallow architectures. This suppressed the effect of depth on loading complexity, and he left open a prototypical computational problem, the loading of easy regular triangular architectures, that might capture the crux of the difficulties with depth. We prove this problem to be NP-complete. This result does not give much hope for the existence of an efficient algorithm for loading deep networks.


Neural Computation | 2000

On the Computational Complexity of Binary and Analog Symmetric Hopfield Nets

Jiří Šíma; Pekka Orponen; Teemu Antti-Poika

We investigate the computational properties of finite binary- and analog-state discrete-time symmetric Hopfield nets. For binary networks, we obtain a simulation of convergent asymmetric networks by symmetric networks with only a linear increase in network size and computation time. We then analyze the convergence time of Hopfield nets in terms of the length of their bit representations, and construct an analog symmetric network whose convergence time exceeds the convergence time of any binary Hopfield net with the same representation length. Further, we prove that the MIN ENERGY problem for analog Hopfield nets is NP-hard and provide a polynomial-time approximation algorithm for this problem in the case of binary nets. Finally, we show that symmetric analog nets with an external clock are computationally Turing universal.
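
For reference, here is a minimal sketch of the quantities involved: the standard Hopfield energy function (whose minimization, MIN ENERGY, is the NP-hard problem above) and an asynchronous update that never increases it when the weight matrix is symmetric with zero diagonal, which is why binary symmetric nets converge. The weights and the ±1 state convention are illustrative assumptions.

```python
import numpy as np

def energy(W, theta, y):
    """Hopfield energy E(y) = -1/2 y^T W y + theta . y
    (W symmetric with zero diagonal, states y_i in {-1, +1})."""
    return -0.5 * y @ W @ y + theta @ y

def async_step(W, theta, y, i):
    """Asynchronous update of unit i; with symmetric W and zero
    diagonal this can never increase E."""
    y = y.copy()
    y[i] = 1.0 if W[i] @ y >= theta[i] else -1.0
    return y

W = np.array([[ 0., 1., -2.],
              [ 1., 0.,  1.],
              [-2., 1.,  0.]])    # illustrative symmetric weights
theta = np.zeros(3)
y = np.array([1., -1., 1.])
for i in [0, 1, 2, 0, 1, 2]:      # sweep the units twice
    y = async_step(W, theta, y, i)
    print(y, energy(W, theta, y)) # energy is non-increasing
```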


Neural Computation | 2005

On the Nonlearnability of a Single Spiking Neuron

Jiří Šíma; Jiří Sgall

We study the computational complexity of training a single spiking neuron N with binary-coded inputs and output that, in addition to adaptive weights and a threshold, has adjustable synaptic delays. A synchronization technique is introduced so that the results concerning the nonlearnability of spiking neurons with binary delays generalize to arbitrary real-valued delays. In particular, the consistency problem for N with programmable weights, a threshold, and delays, as well as its approximation version, are proven to be NP-complete. It follows that spiking neurons with arbitrary synaptic delays are not properly PAC-learnable and do not allow robust learning unless RP = NP. In addition, the representation problem for N, the question of whether an n-variable Boolean function given in DNF (or as a disjunction of O(n) threshold gates) can be computed by a spiking neuron, is shown to be coNP-hard.
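
A toy sketch of the unit being trained: a single spiking neuron whose binary inputs reach the soma through programmable synaptic delays. The rectangular response function used here is a simplifying assumption for illustration, not the paper's spiking-neuron model.

```python
def fires(x, w, d, theta, horizon=10):
    """Single spiking neuron with weights w, programmable delays d, and
    threshold theta, on binary inputs x (input i spikes at time 0 or not).
    Toy rectangular response: input i contributes w[i] from time d[i] on.
    The neuron fires iff its potential ever reaches theta."""
    for t in range(horizon):
        potential = sum(wi for xi, wi, di in zip(x, w, d) if xi and di <= t)
        if potential >= theta:
            return True
    return False

# The same weights realize different behaviour under different delays:
print(fires([1, 1], w=[1, -2], d=[0, 3], theta=1))  # True: inhibition arrives late
print(fires([1, 1], w=[1, -2], d=[3, 0], theta=1))  # False: inhibition arrives first
```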


Computer Science Symposium in Russia | 2011

Almost k-wise independent sets establish hitting sets for width-3 1-branching programs

Jiří Šíma; Stanislav Žák

Recently, interest in constructing pseudorandom and hitting set generators for restricted branching programs has increased, motivated by the fundamental problem of derandomizing space-bounded computations. Such constructions have been known only in the case of width 2 and in very restricted cases of bounded width. In our previous work, we introduced a so-called richness condition which is, in a certain sense, sufficient for a set to be a hitting set for read-once branching programs of width 3. In this paper, we prove that, for a suitable constant C, any almost C log n-wise independent set satisfies this richness condition. Hence, using the result due to Alon et al. (1992), we achieve an explicit polynomial-time construction of a hitting set for read-once branching programs of width 3 with acceptance probability greater than 12/13.
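
To fix terminology, the sketch below evaluates a width-3 read-once (1-)branching program and brute-forces its acceptance probability; a hitting set must contain an accepted input for every such program whose acceptance probability exceeds the stated bound. The encoding of levels as transition tables is an illustrative assumption.

```python
from itertools import product

def eval_width3_1bp(levels, accepting, x):
    """Evaluate a width-3 read-once branching program on input x.
    levels[j] = (i, t0, t1): level j tests variable x[i] and moves from
    state s to t0[s] (if x[i] = 0) or t1[s] (if x[i] = 1); read-once
    means each variable is tested at most once along the program."""
    s = 0
    for i, t0, t1 in levels:
        s = (t1 if x[i] else t0)[s]
    return s in accepting

def acceptance_probability(levels, accepting, n):
    """Fraction of all 2^n inputs accepted (brute force, small n only)."""
    inputs = list(product([0, 1], repeat=n))
    return sum(eval_width3_1bp(levels, accepting, x) for x in inputs) / len(inputs)

# x0 OR x1 as a width-3 1-branching program (the third state is unused here)
levels = [(0, [0, 1, 2], [1, 1, 2]),
          (1, [0, 1, 2], [1, 1, 2])]
print(acceptance_probability(levels, accepting={1}, n=2))   # 0.75
```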


Neural Computation | 2003

Continuous-time symmetric Hopfield nets are computationally universal

Jiří Šíma; Pekka Orponen

We establish a fundamental result in the theory of computation by continuous-time dynamical systems by showing that systems corresponding to so-called continuous-time symmetric Hopfield nets are capable of general computation. As is well known, such networks have very constrained, Lyapunov-function-controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent synchronous fully parallel computation by a recurrent network of n discrete-time binary neurons, with in general asymmetric coupling weights, can be simulated by a symmetric continuous-time Hopfield net containing only 18n + 7 units employing the saturated-linear activation function. Moreover, if the asymmetric network has maximum integer weight size wmax and converges in discrete time t, then the corresponding Hopfield net can be designed to operate in continuous time Θ(t/ε) for any ε > 0 such that wmax · 2^(12n) ≤ 2^(1/ε). In terms of standard discrete computation models, our result implies that any polynomially space-bounded Turing machine can be simulated by a family of polynomial-size continuous-time symmetric Hopfield nets.
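
A minimal sketch of the kind of dynamics involved, assuming the common form dy/dt = -y + σ(Wy + b) with the saturated-linear σ and symmetric W, integrated by Euler's method; the paper's actual simulating construction is far more elaborate.

```python
import numpy as np

def sat(z):
    """Saturated-linear activation on [0, 1]."""
    return np.clip(z, 0.0, 1.0)

def simulate(W, b, y, dt=1e-3, steps=5000):
    """Euler integration of dy/dt = -y + sat(W y + b) with symmetric W;
    a Lyapunov function constrains these dynamics to converge, yet the
    paper shows such nets can simulate arbitrary convergent
    discrete-time networks."""
    for _ in range(steps):
        y = y + dt * (-y + sat(W @ y + b))
    return y

W = np.array([[0., 2.],
              [2., 0.]])          # illustrative symmetric coupling
b = np.array([-1., -1.])
print(simulate(W, b, np.array([0.9, 0.1])))
```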


Neural Computation | 2014

Energy complexity of recurrent neural networks

Jiří Šíma

Recently, a new so-called energy complexity measure has been introduced and studied for feedforward perceptron networks. This measure is inspired by the fact that biological neurons require more energy to transmit a spike than not to fire, and the activity of neurons in the brain is quite sparse, with only about 1% of neurons firing. In this letter, we investigate the energy complexity of recurrent networks, which counts the number of active neurons at any time instant of a computation. We prove that any deterministic finite automaton with m states can be simulated by a neural network of optimal size s = Θ(√m) with time overhead τ = O(s/e) per one input bit, using energy O(e), for any e such that e = Ω(log s) and e = O(s), which shows a time-energy trade-off in recurrent networks. In addition, for a time overhead τ satisfying τ^τ = o(s), we obtain a lower bound of s^(c/τ) on the energy of such a simulation, for some constant c > 0 and for infinitely many s.
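
The measure itself is easy to state in code: run a discrete-time network and record the maximum number of neurons firing at any time instant. The tiny three-neuron cycle below is an illustrative assumption showing sparse (energy-1) activity.

```python
import numpy as np

def run_with_energy(W, theta, y, steps):
    """Run a discrete-time threshold network and report its energy
    complexity: the maximum number of neurons active (state 1)
    at any time instant of the computation."""
    energy = int(y.sum())
    for _ in range(steps):
        y = (W @ y >= theta).astype(int)
        energy = max(energy, int(y.sum()))
    return y, energy

W = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])         # a 3-cycle: one token circulates
theta = np.array([1, 1, 1])
y0 = np.array([1, 0, 0])
print(run_with_energy(W, theta, y0, steps=6))   # energy 1: sparse activity
```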

Collaboration


Dive into Jiří Šíma's collaborations.

Top Co-Authors

Stanislav Žák (Academy of Sciences of the Czech Republic)
Petr Savický (Academy of Sciences of the Czech Republic)
Jiří Sgall (Charles University in Prague)
Jiří Wiedermann (Academy of Sciences of the Czech Republic)
Michal Šorel (Academy of Sciences of the Czech Republic)
Radim Lněnička (Academy of Sciences of the Czech Republic)