
Publication


Featured research published by Martin Kotyrba.


Computers & Mathematics With Applications | 2013

Multi-classifier based on Elliott wave's recognition

Eva Volna; Martin Kotyrba; Robert Jarusek

This article deals with prediction by means of Elliott wave recognition. Our goal is to find and recognize important Elliott wave patterns which repeatedly appear in the market history, for the purpose of predicting subsequent trader actions. The pattern recognition approach is based on neural networks. We focus on the reliability of Elliott wave pattern recognition by the developed algorithms, which also reduces calculation costs.
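The abstract gives no implementation details; as a rough, hedged illustration of the general setting (classifying fixed-length price patterns with a neural network), the sketch below prepares normalized sliding windows of a price series that could serve as classifier inputs. The window length, function names, and random-walk data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sliding_windows(prices, window=20):
    """Cut a price series into overlapping fixed-length windows."""
    return np.array([prices[i:i + window] for i in range(len(prices) - window + 1)])

def normalize(windows):
    """Scale each window to [0, 1] so only its shape (the wave pattern) matters."""
    lo = windows.min(axis=1, keepdims=True)
    hi = windows.max(axis=1, keepdims=True)
    return (windows - lo) / np.maximum(hi - lo, 1e-9)

# Toy data: a random walk standing in for market prices; each normalized
# window is one input vector for a pattern classifier such as a feedforward net.
prices = np.cumsum(np.random.randn(500)) + 100.0
patterns = normalize(sliding_windows(prices, window=20))
print(patterns.shape)  # (481, 20)
```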


Cluster Computing | 2015

Knowledge discovery in dynamic data using neural networks

Michal Janosek; Eva Volna; Martin Kotyrba

The paper proposes a new approach to implementing common neural network algorithms in a network environment. In our experimental study we used three different types of neural networks, based on the Hebb, Adaline and backpropagation training rules. Our goal was to discover important market (Forex) patterns which repeatedly appear in the market history. The developed classifiers, based upon neural networks, should effectively look for the key characteristics of these patterns in dynamic data. We focus on the reliability of recognition by the described algorithms, with training patterns optimized to reduce calculation costs. To interpret the data from the analysis, we created a basic trading system and traded on all recommendations provided by the neural network.
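The abstract names Adaline among the training rules but includes no code; the following is a minimal sketch of Adaline's delta rule on a toy data set, assuming bipolar (+/-1) targets. All names, data, and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def adaline_train(X, y, lr=0.01, epochs=50):
    """Adaline (delta rule): adjust weights in proportion to the error
    between the linear output and the bipolar (+/-1) target."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            output = np.dot(xi, w) + b      # linear activation
            error = target - output
            w += lr * error * xi            # delta rule update
            b += lr * error
    return w, b

def predict(X, w, b):
    return np.where(np.dot(X, w) + b >= 0.0, 1, -1)

# Toy, linearly separable 2-D data standing in for extracted pattern features
X = np.array([[0.0, 0.2], [0.1, 0.9], [0.9, 0.1], [1.0, 0.8]])
y = np.array([-1, -1, 1, 1])
w, b = adaline_train(X, y)
print(predict(X, w, b))  # predictions for the toy data
```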


26th Conference on Modelling and Simulation | 2012

Cryptography Based On Neural Network.

Eva Volna; Martin Kotyrba; Vaclav Kocian; Michal Janosek

The goal of cryptography is to make it impossible to take a cipher and reproduce the original plain text without the corresponding key. With good cryptography, messages are encrypted in such a way that brute-force attacks against the algorithm or the key are all but impossible. Good cryptography gets its security from using incredibly long keys and encryption algorithms that are resistant to other forms of attack. Neural network applications represent a possible next step in the development of good cryptography. This paper deals with using neural networks in cryptography, i.e. designing a neural network that could be practically used in the area of cryptography. The paper also includes an experimental demonstration.

INTRODUCTION TO CRYPTOGRAPHY

Cryptography deals with building systems that secure messages against reading by an intruder. Systems for data privacy are called cipher systems, and the set of rules used to encrypt every message is called the cipher key. Encryption is the process in which the open (plain) text, i.e. the message, is transformed into cipher text according to these rules. Decryption is the inverse process, in which the receiver of the cipher transforms it back into the original text. The cipher key must have several important attributes, the most important being that encryption and decryption are unambiguous. The open text is usually composed of international-alphabet characters, digits and punctuation marks, and the cipher text has the same composition; very often only characters of the international alphabet or only digits are used, because this makes transport over media easier. In historical sequence, cipher systems comprise transposition ciphers, substitution ciphers, cipher tables and codes. Alongside the secrecy of information, the tendency to read cipher messages without knowing the cipher key also evolved, and cipher keys were guarded very closely. The main goal of cryptanalysis is to recover cipher messages and reconstruct the keys used through careful analysis of the cipher text; it makes use of mathematical statistics, algebra, mathematical linguistics, etc., as well as known mistakes made during enciphering. The regularities of the open text and of the applied cipher key are reflected in every cipher system, and improving the cipher key helps to decrease these regularities. The safety of a cipher system lies in its immunity against deciphering; the goal of cryptanalysis is to take a cipher text and reproduce the original plain text without the corresponding key.

Two major techniques used in encryption are symmetric and asymmetric encryption. In symmetric encryption, two parties share a single encryption-decryption key (Khaled, Noaman, Jalab 2005). The sender encrypts the original message (P), referred to as plain text, using a key (K) to generate apparently random nonsense, referred to as cipher text (C):

C = Encrypt(K, P) (1)

Once the cipher text is produced, it may be transmitted. Upon receipt, the cipher text can be transformed back to the original plain text by using a decryption algorithm and the same key that was used for encryption:

P = Decrypt(K, C) (2)

In asymmetric encryption, two keys are used: one key for encryption and another key for decryption. The length of a cryptographic key is almost always measured in bits. The more bits a particular cryptographic algorithm allows in the key, the more keys are possible and the more secure the algorithm becomes.
The following key size recommendations should be considered when reviewing protection (Ferguson, Schneier, Kohno 2010):

Symmetric keys:
• Key sizes of 128 bits (standard for SSL) are sufficient for most applications.
• Consider 168 or 256 bits for secure systems such as large financial transactions.

Asymmetric keys:
• Key sizes of 1280 bits are sufficient for most personal applications.
• 1536 bits should be acceptable today for most secure applications.
• 2048 bits should be considered for highly protected applications.

Hashes:
• Hash sizes of 128 bits (standard for SSL) are sufficient for most applications.
• Consider 168 or 256 bits for secure systems, as many hash functions are currently being revised (see above).

NIST and other standards bodies will provide up-to-date guidance on suggested key sizes.

BACKPROPAGATION NEURAL NETWORKS

An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the structure of the information-processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, and this is true of ANNs as well.

Figure 1: A general three-layer neural network

The backpropagation network is one of the most complex neural networks for supervised learning. Regarding topology, it belongs to the multilayer feedforward neural networks, see Fig. 1 (Volna 2000). Usually a fully connected variant is used, so that each neuron in the n-th layer is connected to all neurons in the (n+1)-th layer; this is not necessary, however, and in general some connections may be missing (see the dashed lines), but there are no connections between neurons of the same layer. A subset of units, the input units, has no input connections from other units; their states are fixed by the problem. Another subset of units is designated as output units; their states are considered the result of the computation. Units that are neither input nor output are known as hidden units.

Figure 2: A simple artificial neuron (http://encefalus.com/neurology-biology/neuralnetworks-real-neurons)

A basic computational element is often called a neuron (Fig. 2), node or unit (Fausett 1994). It receives input from some other units, or perhaps from an external source. Each input has an associated weight w, which can be modified so as to model synaptic learning. The unit computes some function f of the weighted sum of its inputs, y = f(Σᵢ wᵢ xᵢ) (3).
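As a small illustration of the unit described by equation (3), the sketch below computes f of the weighted sum of the inputs, assuming a sigmoid activation; the variable names and example values are illustrative only.

```python
import numpy as np

def sigmoid(s):
    """A common choice for the activation function f."""
    return 1.0 / (1.0 + np.exp(-s))

def neuron(inputs, weights, f=sigmoid):
    """Compute y = f(sum_i w_i * x_i), the unit of equation (3)."""
    return f(np.dot(weights, inputs))

x = np.array([0.5, -1.0, 0.25])   # example inputs
w = np.array([0.8, 0.2, -0.5])    # example weights
print(neuron(x, w))
```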


Swarm and evolutionary computation | 2015

Unconventional modelling of complex system via cellular automata and differential evolution

Martin Kotyrba; Eva Volna; Petr Bujok

The article deals with principles and utilization possibilities of cellular automata and differential evolution for task resolution and the simulation of an epidemic process. The modelling of the spread of epidemics is one of the most widespread and commonly used areas of modelling of complex systems. The origins of such complexity can be investigated through mathematical models termed ‘cellular automata’. Cellular automata consist of many identical components, each simple, but together capable of complex behaviour. They are analysed both as discrete dynamical systems and as information-processing systems. Cellular automata (CA) are well-known computational substrates for studying emergent collective behaviour, complexity, randomness and the interaction between order and chaotic systems. For the purpose of the article, cellular automata and differential evolution are recognized as an intuitive modelling paradigm for complex systems. The proposed cellular automaton supports finding the rules of the transition function that represent the model of a studied epidemic. The search for a model of a studied epidemic belongs to inverse problems, whose solution lies in finding local rules that guarantee a desired global behaviour. The epidemic models have control parameters, and their setting significantly influences the behaviour of the models. One way to obtain proper values of the control parameters is to use evolutionary algorithms, especially differential evolution (DE). Simulations of an illness lasting from one to ten days were performed using both described approaches. The aim of the paper is to show the course of simulations for different rules of the transition function, and how to find a suitable model of a studied epidemic in the case of inverse problems using a sufficient number of local rules of a transition function.
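The abstract does not list the local rules themselves; as a hedged sketch of the general idea (a cellular automaton whose control parameters could later be tuned, e.g. by differential evolution), the code below runs a simple probabilistic susceptible/infected/recovered automaton for ten steps. The states, neighbourhood, and probabilities are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Illustrative states of the epidemic cellular automaton
SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2
rng = np.random.default_rng(0)

def step(grid, p_infect=0.3, p_recover=0.1):
    """One synchronous update of the local rule: a susceptible cell may be
    infected by an infected 4-neighbour; an infected cell may recover."""
    new = grid.copy()
    n = grid.shape[0]
    for i in range(n):
        for j in range(n):
            state = grid[i, j]
            if state == SUSCEPTIBLE:
                neighbours = (grid[(i - 1) % n, j], grid[(i + 1) % n, j],
                              grid[i, (j - 1) % n], grid[i, (j + 1) % n])
                k = sum(1 for s in neighbours if s == INFECTED)
                if k and rng.random() < 1.0 - (1.0 - p_infect) ** k:
                    new[i, j] = INFECTED
            elif state == INFECTED and rng.random() < p_recover:
                new[i, j] = RECOVERED
    return new

# Start with a single infected cell and simulate ten days.
grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = INFECTED
for _ in range(10):
    grid = step(grid)
print("infected cells after 10 steps:", int((grid == INFECTED).sum()))
```

In the inverse-problem setting described above, differential evolution would then search for parameter values (or local rules) whose simulated epidemic best matches the observed one.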


28th Conference on Modelling and Simulation | 2014

A Comparative Study To Evolutionary Algorithms.

Eva Volna; Martin Kotyrba

Evolutionary algorithms are general iterative algorithms for combinatorial optimization. The term evolutionary algorithm is used to refer to any probabilistic algorithm whose design is inspired by evolutionary mechanisms found in biological species. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. In this paper we perform a comparative study of Genetic Algorithms (GA), Simulated Annealing (SA), Differential Evolution (DE), and Self-Organising Migrating Algorithms (SOMA). These algorithms have many similarities, but they also possess distinctive features, mainly in their strategies for searching the solution state space. The four heuristics are applied to the same optimization problem, the Travelling Salesman Problem (TSP), and compared with respect to (1) the quality of the best solution identified by each heuristic, and (2) the progress of the search from an initial solution until the stopping criteria are met.

INTRODUCTION TO EVOLUTIONARY ALGORITHMS

Evolutionary algorithms (EAs) have many interesting properties and have been widely used in various optimization problems, from combinatorial problems such as job-shop scheduling to real-valued parameter optimization (Back et al. 1997). In computer science, evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) that involves combinatorial optimization problems. Evolutionary computation uses iterative progress, such as growth or development in a population. This population is then selected in a guided random search using parallel processing to achieve the desired end. Such processes are often inspired by biological mechanisms of evolution. As evolution can produce highly optimised processes and networks, it has many applications in computer science. Problem solution using evolutionary algorithms is shown in Figure 1.

Figure 1: Problem solution using evolutionary algorithms (adapted from http://jpmc.sourceforge.net)
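As a minimal illustration of one of the four compared heuristics, the sketch below applies simulated annealing with 2-opt neighbourhood moves to a random TSP instance. The cooling schedule, move operator, and parameter values are illustrative assumptions; the paper's actual experimental setup is not reproduced here.

```python
import math, random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, temp=10.0, cooling=0.995, steps=20000, seed=1):
    """Minimal SA for the TSP: propose 2-opt reversals, accept worse tours
    with probability exp(-delta / temp), cool geometrically."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        cand_len = tour_length(cand, dist)
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / temp):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling
    return best, best_len

# Toy instance: 30 random cities in the unit square
pts = [(random.random(), random.random()) for _ in range(30)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
print("best tour length found:", simulated_annealing(dist)[1])
```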


NOSTRADAMUS | 2013

Prediction by Means of Elliott Waves Recognition

Eva Volna; Martin Kotyrba; Robert Jarusek

This article deals with prediction by means of Elliott wave recognition. The goal is to find and recognize important Elliott wave patterns which repeatedly appear in the market history, for the purpose of predicting subsequent trader actions. The pattern recognition approach is based on neural networks. The article focuses on the reliability of Elliott wave pattern recognition by the developed algorithms, which also reduces calculation costs.


NOSTRADAMUS | 2013

Pattern Recognition Algorithm Optimization

Eva Volna; Michal Janosek; Martin Kotyrba; Vaclav Kocian

In this article, a short introduction to the field of pattern recognition in time series is given. Our goal is to find and recognize important patterns which repeatedly appear in the market history. We focus on the reliability of recognition by the proposed algorithms with optimized patterns, based on artificial neural networks. The experimental study confirmed that, for the given class of tasks, a simple Hebb classifier is acceptable, together with a proposed modification of the active mode of the Hebb rule that has been designed, tested, and used. Finally, we present a comparison of trading results based on both kinds of recommendations: those of the proposed Hebb neural network implementation and those of a human expert.
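The proposed "active mode" modification is specific to the paper and is not reproduced here; as a hedged baseline sketch, the code below implements the classical Hebb rule for bipolar patterns. The data and names are illustrative.

```python
import numpy as np

def hebb_train(patterns, targets):
    """Classical Hebb rule: w += x * t for each bipolar (+/-1) training pair."""
    w = np.zeros(patterns.shape[1])
    b = 0.0
    for x, t in zip(patterns, targets):
        w += x * t
        b += t
    return w, b

def hebb_classify(x, w, b):
    """Threshold the net input to obtain a bipolar class label."""
    return 1 if np.dot(w, x) + b >= 0 else -1

# Toy bipolar patterns (e.g. thresholded, flattened time-series windows)
X = np.array([[1, 1, -1, -1], [1, -1, 1, -1], [-1, -1, 1, 1], [-1, 1, -1, 1]])
t = np.array([1, 1, -1, -1])
w, b = hebb_train(X, t)
print([hebb_classify(x, w, b) for x in X])  # [1, 1, -1, -1]
```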


Archive | 2012

Methodology for System Adaptation Based on Characteristic Patterns

Eva Volna; Michal Janosek; Vaclav Kocian; Martin Kotyrba; Zuzana Kominkova Oplatkova

This paper describes a methodology for system description and application so that the system can be managed using real-time system adaptation. The term system here can represent any structure regardless of its size or complexity (industrial robots, mobile robot navigation, the stock market, production systems, control systems, etc.). The methodology describes the whole development process, from system requirements to a software tool able to execute a specific system adaptation. In this work, we propose approaches relying on machine learning methods (Bishop, 2006), which enable key patterns to be characterized and detected in real time, even in altered form. Then, based on the recognized pattern, a suitable intervention can be applied to the system inputs so that the system responds in the desired way. Our aim is to develop and apply a hybrid approach based on machine learning methods, particularly soft-computing methods, to identify patterns successfully and to subsequently adapt the system. The main goal of the paper is to recognize important patterns and to adapt the system's behaviour in the desired way based on the recognized pattern. The paper is arranged as follows: Section 1 introduces the topic of the article. Section 2 details the feature extraction process used to optimize the patterns that serve as inputs to the experiments. The pattern recognition algorithms using machine learning methods are discussed in Section 3. Section 4 describes the data sets used and covers the experimental results, and a conclusion is given in Section 5. We focus on the reliability of recognition by the described algorithms, with patterns optimized to reduce calculation costs. All results are compared mutually.


26th Conference on Modelling and Simulation | 2012

Elliott Waves Recognition Via Neural Networks.

Martin Kotyrba; Eva Volna; David Brazina; Robert Jarusek

In this paper we introduce a method that is able to analyze and recognize Elliott waves in time series. The method uses an artificial neural network that is adapted by backpropagation. The neural network uses Elliott wave patterns in order to extract and recognize them. Artificial neural networks are suitable for pattern recognition in time series mainly because they learn only from examples; there is no need to add additional information that could bring more confusion than recognition benefit. Neural networks are able to generalize and are resistant to noise. On the other hand, it is generally not possible to determine exactly what a neural network has learned, and it is also hard to estimate the possible recognition error. They are ideal especially when we do not have any other description of the observed series. This paper also includes experimental results of Elliott wave recognition carried out with our method.
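The abstract does not give the network architecture or training data; the following is a minimal, self-contained sketch of backpropagation for a one-hidden-layer network on a toy non-linearly-separable problem, intended only to illustrate the training rule. Layer sizes, learning rate, and data are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, hidden=8, lr=1.0, epochs=20000):
    """Tiny one-hidden-layer network trained by backpropagation
    (batch gradient descent on squared error, sigmoid units)."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: propagate the error layer by layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

# Toy "pattern vs. no pattern" task: XOR, which is not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
params = train_mlp(X, y)
print(np.round(predict(X, *params), 2))  # should approach [[0], [1], [1], [0]]
```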


Archive | 2016

Possibilities of Control and Optimization of Traffic at Crossroads Using Petri Nets

Jakub Gaj; Martin Kotyrba; Eva Volna

This paper deals with the modelling and optimization of the movement of vehicles in a transport network consisting of the busiest crossroads in Ostrava (a town in the Czech Republic), including a proposal for logic control of the traffic lights using Petri nets. The introduction explains the importance of modelling a crossroad with the formalism of Petri nets for the final optimization. A theoretical analysis, including the basic concepts and classification of Petri nets, follows. The core of our contribution is a proposal for traffic optimization at the crossroads, with subsequent verification in experimental simulations. All results of the experimental study are summarized in the conclusion.
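The crossroads model itself is not included in the abstract; as a hedged illustration of the underlying formalism, the sketch below implements token-based firing in a tiny place/transition Petri net for a two-phase traffic light. The places, transitions, and marking are invented for illustration, not taken from the paper.

```python
# Minimal place/transition Petri net: a transition is enabled when every
# input place holds enough tokens; firing moves tokens from inputs to outputs.

def enabled(marking, transition):
    return all(marking[p] >= n for p, n in transition["in"].items())

def fire(marking, transition):
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Illustrative two-phase traffic-light net: one token moves from "NS_green"
# to "EW_green" through an intermediate "switching" place.
marking = {"NS_green": 1, "EW_green": 0, "switching": 0}
t_end_ns = {"in": {"NS_green": 1}, "out": {"switching": 1}}
t_start_ew = {"in": {"switching": 1}, "out": {"EW_green": 1}}

marking = fire(marking, t_end_ns)
marking = fire(marking, t_start_ew)
print(marking)  # {'NS_green': 0, 'EW_green': 1, 'switching': 0}
```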

Collaboration


Dive into Martin Kotyrba's collaboration.

Top Co-Authors


Eva Volna

University of Ostrava


Jakub Gaj

University of Ostrava
