Adrian Horzyk
AGH University of Science and Technology
Publications
Featured research published by Adrian Horzyk.
international symposium on neural networks | 2004
Adrian Horzyk; Ryszard Tadeusiewicz
This paper describes the efficient construction of a partially connected multilayer architecture and the computation of the weight parameters of the Self-Optimizing Neural Network 3 (SONN-3), which can be used as a universal classifier for real, integer, or binary input data, even when the data are highly non-separable. The SONN-3 consists of three types of neurons that extract and transform the important features of the input data in order to achieve correct classification results. The method collects and appropriately reinforces the values of the most important input features, so its generalization results can compete with those of other existing classification methods. Its most important property is that it neither loses nor rounds off any important feature values while computing and propagating partial results through the network, so the computed classification results are exact and accurate. The most important features, together with their most distinguishing ranges of values, are effectively compressed and transformed into an appropriate network architecture with weight values. The automatic construction process and all optimization algorithms are described in detail, and classification and generalization results are compared on several examples.
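SONN-3 derives its architecture and weights automatically; as a rough, purely hypothetical illustration of the underlying idea — scoring classes by whether a sample falls into class-characteristic, weighted feature ranges — consider this sketch (the ranges, weights, and scoring rule below are invented for illustration and are not taken from the paper):

```python
# Toy illustration (not the SONN-3 algorithm): classify a sample by
# rewarding features whose values fall into class-characteristic ranges.
# Ranges and importance weights are hypothetical; SONN-3 derives both.

def score(sample, class_ranges):
    """Sum a weight for every feature whose value lies in the class's range."""
    total = 0.0
    for feature, value in sample.items():
        lo, hi, weight = class_ranges[feature]
        if lo <= value <= hi:
            total += weight
    return total

def classify(sample, model):
    """Return the class whose characteristic ranges the sample matches best."""
    return max(model, key=lambda cls: score(sample, model[cls]))

# Hypothetical model: per class, per feature -> (low, high, importance weight)
model = {
    "setosa":     {"petal_len": (1.0, 1.9, 2.0), "petal_wid": (0.1, 0.6, 1.5)},
    "versicolor": {"petal_len": (3.0, 5.1, 2.0), "petal_wid": (1.0, 1.8, 1.5)},
}

print(classify({"petal_len": 1.4, "petal_wid": 0.2}, model))  # setosa
```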
Neurocomputing | 2009
Ewa Dudek-Dyduch; Ryszard Tadeusiewicz; Adrian Horzyk
The paper discusses and compares two different ways of adapting artificial intelligence systems. The first is founded on the well-known biological mechanism of gradual training of neurons or other parameters. The second exploits a significant extra feature of the training data that makes it possible to adapt an artificial intelligence system more effectively than nature does in biological systems: all training data are available before the adaptation process begins and remain constant until it ends. This feature makes it possible to analyze the training data globally and tune an artificial intelligence system to them very quickly. The paper focuses on this important difference between biological and artificial intelligence problems, because in most artificial intelligence problems the training data are gathered in advance, and are available and constant during the training process. Biological nervous systems, on the other hand, gather training data throughout life and have to change their inner model, so gradual training suits them well: it lets them adjust to changing training data. Artificial intelligence systems can also use the gradual training inherent in biological systems, but in most cases the solution can be found more quickly and effectively when the mentioned feature holds. This thesis is illustrated by means of several examples.
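The contrast the authors draw can be illustrated outside the paper's own framework: when all training data are fixed and available up front, a simple model such as least-squares regression can be fit in one global, closed-form step rather than by many gradual corrections. This example is mine, not the paper's:

```python
# Illustration of the paper's thesis (not its method): with all training
# data fixed and available up front, a least-squares line can be computed
# in one closed-form pass instead of by gradual, biologically-styled updates.

def fit_closed_form(xs, ys):
    """One global pass over the complete data set: exact least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def fit_gradual(xs, ys, lr=0.01, epochs=2000):
    """Incremental training: many small corrections, like biological learning."""
    a = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (a * x + b) - y
            a -= lr * err * x
            b -= lr * err
    return a, b

xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]     # exactly y = 2x + 1
print(fit_closed_form(xs, ys))           # (2.0, 1.0) in a single step
```

Both routes reach the same line here, but the closed-form fit needs the complete, constant data set — exactly the extra feature the paper points out.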
Neurocomputing | 2014
Adrian Horzyk
This paper explains and models selected associative processes that take place in biological associative neural systems. Such associative systems allow us to form, expand, and exploit knowledge in a human-like way. They trigger artificial associations for previously trained and even new contexts, taking into account the previous states of the neurons, which are necessary for associative knowledge formation. Associative systems can automatically generalize and be creative. This paper reveals and explains the important generalization mechanisms of biological associative systems that can be modelled in artificial neural associative systems and used for practical associative neurocomputations. The associative mechanisms even enable the generalization of rules or algorithms. These systems can apply generalization not only to training and classifying static objects but also to forming new sequences, which makes them creative. Because groups of neurons are activated in a specific order, associative conclusions are reached very quickly. Knowledge and active associations can also substitute for many of the laborious and time-consuming searching processes used in contemporary computer science.
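The final point — associations replacing explicit search — can be sketched loosely and hypothetically: a structure that links each item directly to the items it has co-occurred with answers a cue in one weighted lookup, instead of scanning all stored records (the class, names, and data below are invented, not the paper's model):

```python
from collections import defaultdict

# Hypothetical sketch of association-based recall (not the paper's model):
# co-occurring items become directly linked, so recall is a single weighted
# lookup rather than a search over every stored episode.

class AssociativeMemory:
    def __init__(self):
        self.links = defaultdict(lambda: defaultdict(int))

    def observe(self, items):
        """Strengthen pairwise links between items seen together."""
        for a in items:
            for b in items:
                if a != b:
                    self.links[a][b] += 1

    def recall(self, cue):
        """Return the most strongly associated item for a cue, if any."""
        assoc = self.links.get(cue)
        return max(assoc, key=assoc.get) if assoc else None

mem = AssociativeMemory()
mem.observe(["coffee", "milk"])
mem.observe(["coffee", "milk"])
mem.observe(["coffee", "sugar"])
print(mem.recall("coffee"))  # milk — the strongest co-occurrence
```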
international work-conference on the interplay between natural and artificial computation | 2005
Adrian Horzyk; Ryszard Tadeusiewicz
The paper relates the plasticity processes of natural neural networks [9] to Self-Optimizing Neural Networks (SONNs) [7]. Natural neural networks (NNNs) have a great capacity to adapt to their environment; this capacity is based on chemical processes that change synaptic plasticity and adapt the network topology during life. The described SONNs are able to adapt their topology to a given problem (i.e. the artificial neural network's environment) in a way functionally similar to natural neural networks. Both SONNs and NNNs jointly solve the two essential problems: optimizing the network topology and computing the weight parameters for the given environment. Ontogenic SONN development gradually adapts the network topology, specializing the network to the given problem. The fully automatic, deterministic self-adapting mechanism of SONNs uses no a priori configuration parameters and is free from many common training problems.
international conference on artificial intelligence and soft computing | 2015
Adrian Horzyk
This paper presents a new concept of representing data and their relations in neural networks which makes it possible to automatically associate, reproduce, and generalize about them. It demonstrates an innovative way of developing an emergent neural representation of knowledge using a new kind of neural network whose structure is automatically constructed and whose parameters are automatically computed on the basis of plastic mechanisms implemented in a new associative model of neurons, called as-neurons. Inspired by the plastic mechanisms commonly occurring in the human brain, this model can quickly create associations and establish weighted connections between neural representations of data, their classes, and sequences. As-neurons are able to automatically interconnect representations of similar or sequential data. This contribution describes generalized formulas for the quick analytical computation of the structure and parameters of ANAKG neural graphs for representing and recalling training sequences of objects.
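The ANAKG formulas themselves are in the paper; as a minimal, assumption-laden sketch of the general idea — training sequences stored as weighted directed connections between element representations, then replayed from a starting element — one might write (the simple counts below stand in for the paper's analytically computed weights):

```python
from collections import defaultdict

# Illustrative sketch only: store training sequences as weighted directed
# connections and replay the strongest path. The real ANAKG computes its
# connection weights analytically; raw transition counts stand in here.

class SequenceGraph:
    def __init__(self):
        self.weights = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        """Strengthen the connection between each element and its successor."""
        for prev, nxt in zip(sequence, sequence[1:]):
            self.weights[prev][nxt] += 1

    def recall(self, start, max_len=10):
        """Follow the strongest outgoing connection from each element."""
        out, current = [start], start
        while current in self.weights and len(out) < max_len:
            current = max(self.weights[current], key=self.weights[current].get)
            out.append(current)
        return out

g = SequenceGraph()
g.train(["the", "cat", "sat", "down"])
g.train(["the", "cat", "sat", "down"])
g.train(["the", "cat", "ran"])
print(g.recall("the"))  # ['the', 'cat', 'sat', 'down']
```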
ICMMI | 2009
Adrian Horzyk; Ryszard Tadeusiewicz
The fast development of internet services, together with the need to automate their maintenance in order to reduce expenses, calls for artificial intelligence solutions able to mediate between a man and a machine. Such interactions can be carried out using internet chatbots that communicate with a human in natural language, supplemented with voice synthesizers. The main drawback of today's systems is that they neither recognize nor understand human needs, and react to them only weakly. A conversation will satisfy a human if the implemented algorithms can passively recognize and classify human needs and adjust the conversation and reactions to those needs. This paper describes a new personality model that chatbots can successfully use to achieve this goal. The presented personality model identifies words, phrases, and sentence constructions that can be recognized in a conversation, describes personality needs, and defines suitable intelligent reactions to those needs in order to provide the human with satisfaction.
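The personality model itself is the paper's contribution; as a purely hypothetical sketch of the passive-recognition step it describes — matching the words of a conversation against need categories — one might count category cues (the categories and cue words below are invented, not the paper's typology):

```python
# Hypothetical sketch of passive need recognition (categories and cue
# words invented for illustration; the paper defines its own typology).

NEED_CUES = {
    "security":    {"safe", "guarantee", "certain", "risk"},
    "achievement": {"best", "win", "success", "goal"},
    "belonging":   {"together", "we", "friends", "share"},
}

def recognize_needs(utterance):
    """Score each need category by how many of its cue words appear."""
    words = set(utterance.lower().split())
    scores = {need: len(words & cues) for need, cues in NEED_CUES.items()}
    return max(scores, key=scores.get)

print(recognize_needs("I want a guarantee that it is safe"))  # security
```

A chatbot could then steer its phrasing toward the recognized need, which is the adjustment step the abstract describes.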
computer information systems and industrial management applications | 2014
Ryszard Tadeusiewicz; Adrian Horzyk
Creators of information systems frequently concentrate on the semantic aspects of communication when planning and designing man-machine interactions. Meanwhile, emotions and adequate reactions to needs play as essential a role as rational reasoning when the intelligence of a partner is judged. Intelligent behaviours and reactions usually deliberately address the partner's needs. A computer could therefore be accepted as an intelligent partner if it considers the human needs that affect emotions. This paper presents a new method of automatic human-needs recognition based on an extended personality typology described through characteristic verbal expressions. It enables automatic passive classification of personality by means of psycholinguistic analysis during typical substantive man-machine communication.
international conference on artificial intelligence and soft computing | 2012
Adrian Horzyk
Today, the majority of collected data and information is passively stored in databases and in various kinds of memory cells and storage media, which let it do nothing more than wait to be used by algorithms that read, write, or modify it. Most contemporary computational techniques do not allow pieces of information to associate with each other automatically. This paper introduces a novel theory that lets information be free and active: some pieces of information can automatically and autonomously associate with other pieces according to the introduced associative rules, which are also characteristic of biological information systems. As a result, each new piece of information automatically influences information processing in a brain-like artificial neural structure, which can enable machines to associate various pieces of information automatically and autonomously. It can also enable machines to actively perform some cognitive and thinking processes, and may constitute real artificial intelligence in the future.
international conference on artificial neural networks | 2005
Adrian Horzyk
The paper introduces a new extension of the ontogenic Self-Optimizing Neural Networks (SONNs) [4] that makes it possible to optimize a neural network (NN) topology for the whole training data (TD) set at once. The classical SONNs optimize the NN topology only for the subnetworks related to the trained classes; the described extension enables the topology to be optimized for all classes simultaneously. Moreover, the extension makes it possible to compute a minimal SONN topology for the given TD, although a minimal topology can sometimes be insufficient for generalization. The extension computes better discrimination coefficients and automatically develops a topology that reflects all well-discriminating data features, in order to achieve good generalization. Furthermore, the extended SONN can automatically reduce the input dimension of any TD and can automatically recognize and correctly classify inverted inputs (especially important for image classification). All extended SONN computations are fully automatic and deterministic; no user-supplied parameters are needed. SONNs are free from many training problems, e.g. initialization, convergence, and overfitting. The extended SONNs can also be used for unsupervised training.
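One part of the abstract — reducing the input dimension by keeping only features that discriminate between classes — can be illustrated with a deliberately simplified, invented coefficient (the extended SONN defines its own discrimination coefficients; nothing below is taken from the paper):

```python
# Illustrative sketch only: drop input features that barely discriminate
# between two classes. The coefficient here (gap between class means over
# the total range) is invented; the extended SONN defines its own.

def discrimination(values_a, values_b):
    """Hypothetical coefficient: distance between class means over range."""
    mean_a = sum(values_a) / len(values_a)
    mean_b = sum(values_b) / len(values_b)
    spread = max(values_a + values_b) - min(values_a + values_b)
    return abs(mean_a - mean_b) / spread if spread else 0.0

def keep_features(class_a, class_b, threshold=0.5):
    """Return indices of features whose coefficient exceeds the threshold."""
    n = len(class_a[0])
    return [i for i in range(n)
            if discrimination([s[i] for s in class_a],
                              [s[i] for s in class_b]) > threshold]

class_a = [(1.0, 5.0), (1.2, 4.8)]   # feature 0 separates the classes,
class_b = [(3.0, 5.1), (3.2, 4.9)]   # feature 1 barely differs
print(keep_features(class_a, class_b))  # [0]
```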
intelligent systems design and applications | 2005
Adrian Horzyk
Self-optimizing neural networks (SONNs) are very effective in solving various classification tasks and have been successfully applied to many different problems. The classical SONN adaptation process is defined as supervised. This paper introduces a new and very interesting SONN feature: the ability to cluster without supervision. Unsupervised SONNs (US-SONNs) are able to find the most differentiating features of the training data and recursively divide the data into subgroups; they can also characterize the importance of the features differentiating these groups. The division is performed recursively until the data within a subgroup differ only imperceptibly. SONN clustering is very fast in comparison to other unsupervised clustering methods.
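As a rough, invented illustration of the recursive divisive scheme the abstract describes — split on the most differentiating feature, recurse until subgroups differ imperceptibly — and not the US-SONN procedure itself:

```python
# Hypothetical sketch of recursive divisive clustering (not US-SONN itself):
# split the data on the feature with the largest spread, recurse until the
# remaining spread within a subgroup is imperceptible.

def split_recursively(samples, min_spread=0.5):
    """Return a list of leaf clusters (lists of samples)."""
    n = len(samples[0])
    spreads = [max(s[i] for s in samples) - min(s[i] for s in samples)
               for i in range(n)]
    best = max(range(n), key=lambda i: spreads[i])
    if spreads[best] <= min_spread:          # subgroup differs imperceptibly
        return [samples]
    cut = sum(s[best] for s in samples) / len(samples)   # split at the mean
    left = [s for s in samples if s[best] <= cut]
    right = [s for s in samples if s[best] > cut]
    return (split_recursively(left, min_spread)
            + split_recursively(right, min_spread))

data = [(0.0, 0.1), (0.2, 0.0), (5.0, 0.1), (5.2, 0.2)]
print(split_recursively(data))  # two clusters, split on feature 0
```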