Karol Gugała
Poznań University of Technology
Publications
Featured research published by Karol Gugała.
Applied Mathematics and Computation | 2015
Rafal Kapela; Karol Gugała; Pawel Sniatala; Aleksandra Swietlicka; Krzysztof Kolanowski
The article presents a novel hardware-accelerated image processing algorithm for embedded systems. The system is based on the well-known Fast Retina Keypoint (FREAK) local image description algorithm. The solution uses a Field Programmable Gate Array (FPGA) as a flexible module that implements hardware acceleration of a given part of the image processing algorithm. The approach presented in this paper is slightly different: since the FREAK descriptor itself is very fast, our purpose is not to implement the full feature extraction algorithm in hardware, but only its most time-consuming part, the brute-force matcher based on the Hamming distance. Moreover, our goal was to design a very flexible system, so that the feature detection and extraction algorithm can be replaced without any interruption of the hardware-accelerated part.
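For readers unfamiliar with the matching step that the paper offloads to the FPGA, the following is a minimal software sketch of a brute-force Hamming-distance matcher for binary descriptors such as FREAK. The 512-bit descriptor size, array shapes and function name are illustrative assumptions, not the authors' hardware interface.

# Illustrative software model of a brute-force Hamming matcher for binary
# descriptors such as FREAK (512 bits, stored as 64 uint8 values per keypoint).
# This only sketches the matching step the paper moves into the FPGA; the
# names and shapes are assumptions, not the authors' interface.
import numpy as np

def brute_force_hamming_match(query, train):
    """For every query descriptor, return the index of the closest train
    descriptor under the Hamming distance (number of differing bits)."""
    xor = query[:, None, :] ^ train[None, :, :]        # (Q, T, 64) differing bytes
    dist = np.unpackbits(xor, axis=2).sum(axis=2)      # (Q, T) differing-bit counts
    return dist.argmin(axis=1)                         # best match per query

# Example usage with random 512-bit descriptors.
rng = np.random.default_rng(0)
q = rng.integers(0, 256, size=(4, 64), dtype=np.uint8)
t = rng.integers(0, 256, size=(100, 64), dtype=np.uint8)
print(brute_force_hamming_match(q, t))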
Computing | 2013
Karol Gugała; Aleksandra Świetlicka; Michał Burdajewicz; Andrzej Rybarczyk
The purpose of this work is to speed up simulations of neural tissue based on the stochastic version of the Hodgkin–Huxley model. The authors achieve this by introducing a system that supplies random values with the desired distribution to the simulation process. The system consists of two parts. The first is a fast, high-entropy, parallel random number generator combining a hardware true random number generator with a graphics processing unit (GPU) implementation of a pseudorandom generation algorithm. The second part is a Gaussian distribution approximation algorithm based on a set of uniform-distribution generators. The authors present hardware implementation details of the system, test results for the two parts separately, and results for the whole system in a neural cell simulation task.
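As a rough illustration of the second part of the system, the sketch below shows the standard central-limit-theorem way of approximating a Gaussian distribution by summing the outputs of several uniform generators. The choice of 12 uniform sources and the software setting are assumptions made here for illustration; they do not reproduce the authors' hardware design.

# Sketch of the central-limit-theorem trick behind approximating a Gaussian
# value from several uniform generators: sum N independent uniform(0,1)
# outputs and standardize the sum. N = 12 is a common hardware choice (the sum
# then has unit variance); the value used in the authors' system is assumed.
import numpy as np

def approx_gaussian(n_samples, n_uniform=12, rng=None):
    rng = rng or np.random.default_rng()
    u = rng.random((n_samples, n_uniform))   # stand-in for the uniform generators
    s = u.sum(axis=1)                        # Irwin-Hall distributed sum
    mean = n_uniform / 2.0                   # E[sum] = N/2
    std = np.sqrt(n_uniform / 12.0)          # Var[sum] = N/12
    return (s - mean) / std                  # approximately N(0, 1)

samples = approx_gaussian(100_000)
print(samples.mean(), samples.std())         # both close to 0 and 1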
Solid State Phenomena | 2013
Rafal Dlugosz; Marta Kolasa; Tomasz Talaśka; Jolanta Pauk; Ryszard Wojtyna; Michal Szulc; Karol Gugała; Pierre Andre Farine
This paper presents a new distance calculation circuit (DCC), used in artificial neural networks to calculate distances between vectors of signals. The proposed circuit is a digital, fully parallel and asynchronous solution. The complexity of the circuit strongly depends on the type of distance measure. Considering two popular measures, the Euclidean (L2) and the Manhattan (L1) distance, it is shown that the L2 case requires as many as ten times more transistors than the L1 case. Investigations carried out at the system level show that the L1 measure is a good estimate of the L2 one. For the L1 measure, in an example case of 4 inputs and 10-bit signal resolution, the number of transistors is approximately 2500. As transistors of minimum size can be used, the chip area of a single DCC realized in a 180 nm CMOS technology is less than 0.015 mm².
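A quick system-level check of the claim that the L1 measure is a good estimate of the L2 one can be done in software by comparing which candidate vector each measure selects as the nearest, as in a winner-takes-all network. The 4-input, random-data setup below is an illustrative assumption, not the authors' test bench.

# Compare how often the Manhattan (L1) and Euclidean (L2) distances select the
# same nearest weight vector. Input dimensionality and data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.random((10_000, 4))     # 4-element input vectors
weights = rng.random((16, 4))        # 16 candidate weight vectors

diff = inputs[:, None, :] - weights[None, :, :]
l1 = np.abs(diff).sum(axis=2)                  # Manhattan distances
l2 = np.sqrt((diff ** 2).sum(axis=2))          # Euclidean distances

agreement = (l1.argmin(axis=1) == l2.argmin(axis=1)).mean()
print(f"same winner selected in {agreement:.1%} of cases")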
Solid State Phenomena | 2013
Aleksandra Świetlicka; Karol Gugała; Marta Kolasa; Jolanta Pauk; Andrzej Rybarczyk; Rafal Dlugosz
The paper presents a modification of the structure of a biological neural network (BNN) based on spiking neuron models. The proposed modification makes it possible to influence the level of the stimulus response of particular neurons in the BNN. We consider an extended, three-dimensional Hodgkin–Huxley model of the neural cell. A typical BNN composed of such neural cells has been expanded by adding a resistor at each branch point. The resistors can be treated as the weights of such a BNN. We demonstrate that adding these elements significantly affects the waveform of the membrane potential of the neuron, causing an uncontrolled excitation, and that it provides a better description of the processes that take place in the nerve cell. Such a BNN enables an easy adaptation of the learning rules used in artificial or spiking neural networks. The modified BNN has been implemented on a Graphics Processing Unit (GPU) in the CUDA C language, a platform that enables parallel data processing, which is an important feature in such applications.
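For context, the sketch below integrates the classical, deterministic Hodgkin–Huxley membrane equations with a simple Euler step. The paper's three-dimensional extension, the stochastic channel dynamics and the branch-point resistors acting as weights are not reproduced here; the parameter values are the textbook ones, not the authors'.

# Minimal Euler integration of the classical (deterministic) Hodgkin-Huxley
# membrane equations, as a point of reference for the extended model discussed
# above. Textbook parameters; not the authors' extended, stochastic model.
import numpy as np

C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387            # mV

def rates(v):
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return (a_m, b_m), (a_h, b_h), (a_n, b_n)

def simulate(i_ext=10.0, dt=0.01, t_max=50.0):
    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting state
    trace = []
    for _ in range(int(t_max / dt)):
        (am, bm), (ah, bh), (an, bn) = rates(v)
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C_M          # membrane equation
        m += dt * (am * (1.0 - m) - bm * m)      # gating variables
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        trace.append(v)
    return np.array(trace)

print(simulate().max())   # peak membrane potential during a spike, roughly 40 mV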
International Conference Mixed Design of Integrated Circuits and Systems | 2011
Karol Gugała; Aleksandra Figas; Agata Jurkowlaniec; Andrzej Rybarczyk
Neural Network World | 2015
Aleksandra Świetlicka; Karol Gugała; Agata Jurkowlaniec; Paweł Śniatała; Andrzej Rybarczyk
Archive | 2013
Aleksandra Świetlicka; Karol Gugała; Igor Karoń; Krzysztof Kolanowski; Mateusz Majchrzycki; Andrzej Rybarczyk
Biocybernetics and Biomedical Engineering | 2017
Aleksandra Świetlicka; Karol Gugała; Witold Pedrycz; Andrzej Rybarczyk
Archive | 2013
Krzysztof Kolanowski; Aleksandra Świetlicka; Mateusz Majchrzycki; Karol Gugała; Igor Karoń; Andrzej Rybarczyk
International Journal of Electronics and Telecommunications | 2012
Agata Jurkowlaniec; Michal Szulc; Tomasz Dybizbański; Slawomir Michalak; Aleksandra Świetlicka; Karol Gugała; Andrzej Rybarczyk