
Publication


Featured research published by Ivo Bukovsky.


Fourth International Symposium on Uncertainty Modeling and Analysis (ISUMA) | 2003

Quadratic and cubic neural units for identification and fast state feedback control of unknown nonlinear dynamic systems

Ivo Bukovsky; Sanjeevakumar Redlapalli; Madan M. Gupta

The main goal is to introduce quadratic and cubic neural units (QNU, CNU) as appropriate neural units for fast state feedback control of unknown, unstable, nonlinear systems and to demonstrate their ability to provide a more robust and faster response than common linear state feedback controllers. The concepts of CNU and QNU are briefly introduced. Stability criteria of a nonlinear control loop including CNU and QNU are discussed. The universal structure of a stable and fast controller of an unknown linear or nonlinear second-order system with variable damping, realized by a subset of CNU, is proposed. Results of unknown nonlinear system identification and control are shown. A possible structure of a neural nonlinear state controller that uses a subset of CNU for parallel identification of an unknown, controlled nonlinear plant with varying parameter values and structure is considered.
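The quadratic neural unit mentioned above aggregates all second-order products of its inputs under a weight matrix. A minimal sketch of such a QNU output, with illustrative names and an upper-triangular weight matrix (an assumed convention, not the paper's exact notation):

```python
import numpy as np

# Hypothetical sketch of a quadratic neural unit (QNU): the output is a
# weighted sum of all second-order products of the bias-augmented inputs.
def qnu_output(x, W):
    """x: input vector (n,); W: upper-triangular weights ((n+1), (n+1))."""
    xa = np.concatenate(([1.0], x))   # augment with bias term x0 = 1
    return xa @ W @ xa                # y = sum over i<=j of w_ij * x_i * x_j

x = np.array([0.5, -1.0])
W = np.triu(np.full((3, 3), 0.1))     # example upper-triangular weights
y = qnu_output(x, W)                  # -> 0.125
```

A cubic unit (CNU) extends the same idea to third-order products; because the bias is included, the quadratic form also contains all linear terms as a special case.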


Entropy | 2013

Learning Entropy: Multiscale Measure for Incremental Learning

Ivo Bukovsky

First, this paper recalls a recently introduced method of adaptive monitoring of dynamical systems and presents its most recent extension with a multiscale-enhanced approach. Then, it is shown that this concept of real-time data monitoring establishes a novel non-Shannon and non-probabilistic concept of novelty quantification, i.e., the Entropy of Learning, or in short the Learning Entropy. This novel cognitive measure can be used for the evaluation of each newly measured sample of data, or even of whole intervals. The Learning Entropy is quantified with respect to the inconsistency of data with the temporary governing law of system behavior that is incrementally learned by adaptive models such as linear or polynomial adaptive filters or neural networks. The paper presents this novel concept using the example of the gradient descent learning technique with a normalized learning rate.
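The core idea can be sketched very roughly: a sample is flagged as novel when the adaptive model's weight increments are unusually large relative to their recent history, evaluated over several sensitivity scales. This is a simplified illustration under assumed names and scale values, not the paper's exact formulation:

```python
import numpy as np

# Rough sketch of a Learning Entropy-style measure: count weight increments
# |dw| that exceed a multiple (alpha) of their recent mean magnitude,
# averaged over several sensitivity scales.
def learning_entropy(dw_history, dw_now, alphas=(2, 3, 4)):
    """dw_history: (M, n) recent weight increments; dw_now: (n,) current ones."""
    mean_abs = np.mean(np.abs(dw_history), axis=0)  # recent mean |dw| per weight
    n = dw_now.size
    # fraction of unusually large increments, averaged over all scales
    return sum(np.sum(np.abs(dw_now) > a * mean_abs) for a in alphas) / (len(alphas) * n)

hist = np.full((10, 4), 0.1)                 # steady recent increments
dw = np.array([0.05, 0.5, 0.05, 0.05])       # one weight suddenly adapts hard
le = learning_entropy(hist, dw)              # -> 0.25
```

A value near zero means the data are consistent with the incrementally learned model; larger values indicate novelty without any probabilistic model of the data.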


Sensor Signal Processing for Defence (SSPD) | 2014

Learning entropy for novelty detection: a cognitive approach for adaptive filters

Ivo Bukovsky; Cyril Oswald; Matous Cejnek; Peter Mark Benes

This paper recalls the practical calculation of Learning Entropy (LE) for novelty detection, extends it to various gradient techniques, and discusses its use for multivariate dynamical systems with the ability to distinguish between data perturbations and system-function perturbations. LE has been recently introduced for novelty detection in time series via supervised incremental learning of polynomial filters, i.e., higher-order neural units (HONU). This paper demonstrates LE also on enhanced gradient descent adaptation techniques that are adopted and summarized for HONU. As an aside, LE is proposed as a new performance index of adaptive filters. Then, we discuss Principal Component Analysis and Kernel PCA for HONU as a potential method to suppress the detection of data-measurement perturbations and to enforce LE for system-perturbation novelties.


Archive | 2010

Adaptive Evaluation of Complex Dynamical Systems Using Low-Dimensional Neural Architectures

Ivo Bukovsky; Jiri Bila

A new methodology for adaptive monitoring and evaluation of complicated dynamic data is introduced. The major objectives are the monitoring and evaluation of both instantaneous and long-term attributes of complex dynamic behavior, such as that of chaotic systems and real-world dynamical systems. In the sense of monitoring, the methodology introduces a novel approach to quantification and visualization of cognitively observed system behavior in real time, without further processing of these observations. In the sense of evaluation, the methodology opens new possibilities for the consequent qualitative and quantitative processing of cognitively monitored system behavior. Techniques and enhancements are introduced to improve the stability of low-dimensional neural architectures and to improve their capability in approximating nonlinear dynamical systems that behave in a complex manner in high-dimensional state space. Low-dimensional dynamic quadratic neural units enhanced as forced dynamic oscillators are introduced to improve the approximation quality of higher-dimensional systems. However, the introduced methodology can be used universally for adaptive evaluation of dynamic behavior variability also with other neural architectures and adaptive models, and it can be applied to theoretical chaotic systems as well as to real-world dynamical systems. Simulation results on applications to deterministic, yet highly chaotic, time series are shown to explain the new methodology and to demonstrate its capability for sensitive and instantaneous detection of changing behavior; these detections serve for monitoring and evaluating the level of determinism (predictability) in complex signals. Results of this new methodology are shown also for real-world data, and its limitations are discussed.


IEEE International Conference on Cognitive Informatics | 2007

Foundation of Notation and Classification of Nonconventional Static and Dynamic Neural Units

Ivo Bukovsky; Zeng-Guang Hou; Jiri Bila; Madan M. Gupta

The paper introduces basic types of nonconventional artificial neural units and focuses on their notation and classification: namely, the notation and classification of dynamic higher-order nonlinear neural units, time-delay dynamic neural units, and time-delay higher-order nonlinear neural units are introduced. A brief introduction is made to the simplified parallel between the higher-order nonlinear aggregating function of artificial nonconventional neural units and the synaptic and somatic operations of biological neurons. Based on this still simplified mathematical notation, it is proposed that the nonlinear aggregating function of neural inputs should be understood as a composition of synaptic as well as partial somatic neural operations, also for static neural units. Thus it unravels a novel, simplified, yet universal insight into understanding more computationally powerful neurons. The classification of nonconventional artificial neural units is founded first according to the nonlinearity of the aggregating function, second according to the dynamic order, and third according to the time-delay implementation within neural units.


International Symposium on Neural Networks | 2010

Testing potentials of dynamic quadratic neural unit for prediction of lung motion during respiration for tracking radiation therapy

Ivo Bukovsky; Kei Ichiji; Noriyasu Homma; Makoto Yoshizawa; Ricardo Rodriguez

This paper presents a study of the dynamic (recurrent) quadratic neural unit (QNU), a class of higher-order network or polynomial neural network, as applied to the prediction of lung respiration dynamics. Human lung motion during respiration features nonlinear dynamics and displays quasiperiodic or even chaotic behavior. The attractive approximation capability of the recurrent QNU is demonstrated on long-term prediction of time series generated by the chaotic Mackey-Glass equation, by another highly nonlinear periodic time series, and on real lung motion measured during patients' respiration. The real-time recurrent learning (RTRL) rule is derived for the dynamic QNU in a matrix form that is also efficient for implementation. It is shown that the standalone QNU gives promising results for longer prediction times of the lung position compared to results in recent literature. In the end, we show even more precise results of two QNUs implemented as two local nonlinear predictive models, and thus we present and discuss a promising direction for high-precision prediction of lung motion.
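Because a QNU is linear in its weights, a one-step-ahead time-series predictor can be trained sample by sample with a simple normalized gradient step. This is a simplified static sketch under assumed names and parameters, not the paper's matrix RTRL derivation:

```python
import numpy as np

# Simplified sketch: one-step-ahead prediction of a time series with a QNU.
# Inputs are the last n samples; quad_terms flattens all quadratic products
# so the model output is linear in its weight vector w.
def quad_terms(x):
    xa = np.concatenate(([1.0], x))      # bias-augmented input
    i, j = np.triu_indices(xa.size)
    return xa[i] * xa[j]                 # all products x_i * x_j, i <= j

def train_qnu_predictor(series, n=4, mu=0.5, epochs=5):
    w = np.zeros(quad_terms(np.zeros(n)).size)
    for _ in range(epochs):
        for k in range(n, len(series)):
            colx = quad_terms(series[k - n:k])
            e = series[k] - w @ colx                   # prediction error
            w += mu * e * colx / (1.0 + colx @ colx)   # normalized GD step
    return w

s = np.sin(0.3 * np.arange(300))         # clean quasiperiodic test signal
w = train_qnu_predictor(s)
```

The normalization by the squared input norm keeps the step size well scaled regardless of input magnitude, which is one reason such sample-by-sample adaptation is practical for real-time retraining.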


International Symposium on Neural Networks | 2010

Quadratic neural unit and its network in validation of process data of steam turbine loop and energetic boiler

Ivo Bukovsky; Martin Lepold; Jiri Bila

This paper discusses the results and advantages of applying quadratic neural units and a novel quadratic neural network to the modeling of real data for the validation of measured data in energetic processes. A feedforward network of quadratic neural units (a class of higher-order neural network) with sequential learning is presented. This quadratic network with this learning technique reduces the computational time for models with a large number of inputs, sustains the optimization convexity of a quadratic model, and also displays sufficient nonlinear approximation capability for the real processes. A comparison of the performance of quadratic neural units, quadratic neural networks, and common multilayer feedforward neural networks, all trained by the Levenberg-Marquardt algorithm, is discussed.
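The "optimization convexity" mentioned above follows from the QNU being linear in its weights: batch training reduces to ordinary least squares with a single global optimum. A minimal sketch under assumed names (not the paper's sequential learning procedure):

```python
import numpy as np

# Sketch of convex batch training of a single QNU: expand each input row
# into its quadratic feature vector, then solve a linear least-squares
# problem, which has one global optimum.
def fit_qnu_batch(X, y):
    """X: (m, n) input rows; y: (m,) targets."""
    xa = np.hstack([np.ones((X.shape[0], 1)), X])  # add bias column
    i, j = np.triu_indices(xa.shape[1])
    colX = xa[:, i] * xa[:, j]                     # quadratic feature rows
    w, *_ = np.linalg.lstsq(colX, y, rcond=None)
    return w, colX

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 0] * X[:, 1]  # target is a quadratic form
w, colX = fit_qnu_batch(X, y)                      # recovers it exactly
```

For large input counts the feature expansion grows quadratically, which is presumably why a sequential scheme is preferred in the paper for reducing computational time.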


IEEE International Conference on Cognitive Informatics | 2010

Quadratic neural unit is a good compromise between linear models and neural networks for industrial applications

Ivo Bukovsky; Noriyasu Homma; Ladislav Smetana; Ricardo Rodriguez; Martina Mironovova; Stanislav Vrána

The paper discusses the quadratic neural unit (QNU) and highlights its attractiveness for industrial applications such as plant modeling, control, and time series prediction. Linear systems are still often preferred in industrial control applications for their solvable, single-solution nature and for their clarity to most application engineers. Artificial neural networks are powerful cognitive nonlinear tools, but their nonlinear strength is naturally repaid with the local-minima problem, overfitting, and high demands for an application-correct neural architecture and optimization technique that often require skilled users. The QNU is an important midpoint between linear systems and highly nonlinear neural networks: the QNU is relatively very strong in nonlinear approximation, yet its optimization and performance have a fast and convex-like nature, and its mathematical structure and the derivation of its learning rules are very comprehensible and efficient for implementation.


International Journal of Cognitive Informatics and Natural Intelligence | 2008

Foundations of Nonconventional Neural Units and their Classification

Ivo Bukovsky; Zeng-Guang Hou; Jiri Bila; Madan M. Gupta

This article introduces basic types of nonconventional neural units and focuses on their notation and classification. Namely, the notation and classification of higher-order nonlinear neural units, time-delay dynamic neural units, and time-delay higher-order nonlinear neural units are introduced. A brief introduction is made to the simplified parallels between the higher-order nonlinear aggregating function of higher-order neural units and both the synaptic and somatic neural operations of biological neurons. Based on the mathematical notation of neural input intercorrelations of higher-order neural units, it is shown that the higher-order polynomial aggregating function of neural inputs can be understood as a single-equation representation of the synaptic neural operation plus a partial somatic neural operation. Thus, it unravels a new, simplified, yet universal mathematical insight into understanding the higher computational power of neurons that also conforms to biological neuronal morphology. The classification of nonconventional neural units is founded first according to the nonlinearity of the aggregating function; second, according to the dynamic order; and third, according to time-delay implementation within neural units.


BioMed Research International | 2015

A Fast Neural Network Approach to Predict Lung Tumor Motion during Respiration for Radiation Therapy Applications

Ivo Bukovsky; Noriyasu Homma; Kei Ichiji; Matous Cejnek; Matous Slama; Peter Mark Benes; Jiri Bila

During radiotherapy treatment for thoracic and abdominal cancers, for example lung cancers, respiratory motion moves the target tumor and thus adversely affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. To compensate for this latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion with a classical linear model, a perceptron model, and a class of higher-order neural network models that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feedforward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved an accuracy of less than one millimeter of 3D mean absolute error over one hundred seconds of total treatment time.

Collaboration


Dive into Ivo Bukovsky's collaboration.

Top Co-Authors

Jiri Bila

Czech Technical University in Prague


Matous Cejnek

Czech Technical University in Prague


Peter Mark Benes

Czech Technical University in Prague


Madan M. Gupta

University of Saskatchewan


Zeng-Guang Hou

Chinese Academy of Sciences


Cyril Oswald

Czech Technical University in Prague
