Uroš Lotrič
University of Ljubljana
Publications
Featured research published by Uroš Lotrič.
Neurocomputing | 2004
Uroš Lotrič
A denoising unit based on wavelet multiresolution analysis is added ahead of the multilayered perceptron. The cost function used in neural network learning also serves as the denoising criterion, so denoising itself is treated as part of the integrated model. By introducing a continuously differentiable generalized soft thresholding function and infinite thresholds, a gradient-based learning algorithm is derived that sets all free parameters of the model simultaneously. The proposed model outperforms both the classical multilayered perceptron and the multilayered perceptron with statistical denoising on noisy time series prediction problems.
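The key ingredient above is a thresholding function that is differentiable everywhere, so the thresholds can be trained by gradient descent together with the network weights. The paper's exact functional form is not reproduced here; the sketch below uses one common smooth surrogate, with a slack parameter `lam` (an assumption of this illustration) that recovers classical soft thresholding as `lam -> 0`:

```python
import numpy as np

def soft_threshold(x, t):
    """Classical soft thresholding: non-differentiable at |x| = t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smooth_soft_threshold(x, t, lam=1e-3):
    """A continuously differentiable surrogate (illustrative, not the
    paper's exact formulation).  As lam -> 0 it approaches the classical
    rule, while its gradient w.r.t. both x and the threshold t is
    defined everywhere, enabling gradient-based learning of t."""
    return x + 0.5 * (np.sqrt((x - t) ** 2 + lam) - np.sqrt((x + t) ** 2 + lam))

# The two agree closely once lam is small:
x = np.linspace(-2, 2, 9)
gap = np.max(np.abs(soft_threshold(x, 0.5) - smooth_soft_threshold(x, 0.5, lam=1e-8)))
```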
IEEE Transactions on Systems, Man, and Cybernetics | 2010
Catarina Silva; Uroš Lotrič; Bernardete Ribeiro; Andrej Dobnikar
Constructing a single text classifier that excels in any given application is a rather unattainable goal. As a result, ensemble systems are becoming an important resource, since they permit the use of simpler classifiers and the integration of different knowledge in the learning process. However, many text-classification ensemble approaches carry an extremely high computational burden, which limits their application in real environments. Moreover, state-of-the-art kernel-based classifiers, such as support vector machines and relevance vector machines, demand large resources when applied to large databases. We therefore propose a new systematic distributed ensemble framework to tackle these challenges, based on a generic deployment strategy in a distributed cluster environment. We employ a combination of task and data decomposition of the text-classification system, based on partitioning, communication, agglomeration, and mapping to define and optimize a graph of dependent tasks. Additionally, the framework includes an ensemble system in which we exploit diverse patterns of errors and gain from the synergies between the ensemble classifiers. The ensemble data-partitioning strategy is shown to improve the performance of baseline state-of-the-art kernel-based machines. The experimental results show that the proposed framework outperforms standard methods in both speed and classification performance.
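The data-decomposition idea above can be sketched in a few lines: each ensemble member trains on one disjoint partition of the data, and predictions are combined by majority vote. A hypothetical nearest-centroid learner stands in for the kernel machines used in the paper, and all names here are illustrative:

```python
import numpy as np

def train_centroids(X, y):
    """Toy base learner: nearest-centroid classifier (a stand-in for
    the SVM/RVM members used in the actual framework)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

def ensemble_fit_predict(X, y, X_test, n_parts=3, seed=0):
    """Data decomposition: each member sees only one disjoint partition
    (trainable in parallel on separate cluster nodes); the final label
    is a majority vote over the members."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    votes = []
    for part in np.array_split(idx, n_parts):
        model = train_centroids(X[part], y[part])
        votes.append(predict_centroids(model, X_test))
    votes = np.stack(votes)  # shape: (n_parts, n_test)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

In the real framework each partition's training job is one node of the task graph derived from the partitioning/communication/agglomeration/mapping methodology.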
Neural Computing and Applications | 2005
Uroš Lotrič; Andrej Dobnikar
To avoid the need to pre-process noisy data, two special denoising layers based on wavelet multiresolution analysis have been integrated into layered neural networks. A gradient-based learning algorithm has been developed that uses the same cost function to set both the neural network weights and the free parameters of the denoising layers. The denoising layers, integrated into feedforward and recurrent neural networks, were validated on three time series prediction problems: the logistic map, a rubber hardness time series, and annual average sunspot numbers. Use of the denoising layers improved prediction accuracy in all cases.
Neurocomputing | 2012
Uroš Lotrič; Patricio Bulić
In recent years there has been growing interest in hardware neural networks, which offer many benefits over conventional software models, mainly in applications where speed, cost, reliability, or energy efficiency are of great importance. These hardware neural networks require many resource-, power-, and time-consuming multiplication operations, so special care must be taken during their design. Since neural network processing can be performed in parallel, designs usually call for as many concurrent multiplication circuits as possible. One option to achieve this goal is to replace the complex exact multiplying circuits with simpler, approximate ones. The present work demonstrates the application of approximate multiplying circuits in the design of a feed-forward neural network model with on-chip learning ability. Experiments performed on the heterogeneous Proben1 benchmark dataset show that the adaptive nature of the neural network model successfully compensates for the calculation errors of the approximate multiplying circuits. At the same time, the proposed designs profit from greater computing power and increased energy efficiency.
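A representative approximate multiplier of this kind is Mitchell's logarithmic multiplier, which the iterative designs build on by adding correction stages. The basic (uncorrected) scheme can be modeled in software: find the leading-one position of each operand, add the "logarithms", and antilog the sum. It always underestimates, by at most about 11.1%:

```python
def mitchell_mul(a, b):
    """Basic Mitchell logarithmic multiplier (no correction stages):
    a = 2^ka * (1 + x1), b = 2^kb * (1 + x2), with 0 <= x < 1, and
    a*b is approximated as 2^(ka+kb) * (1 + x1 + x2)   if x1 + x2 < 1,
                           2^(ka+kb+1) * (x1 + x2)     otherwise.
    Result is always <= a*b, at most ~11.1% below it."""
    if a == 0 or b == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1
    xa, xb = a - (1 << ka), b - (1 << kb)   # mantissas as integers
    s = (xa << kb) + (xb << ka)             # (x1 + x2) scaled by 2^(ka+kb)
    if s < (1 << (ka + kb)):                # no carry: x1 + x2 < 1
        return (1 << (ka + kb)) + s
    return s << 1                           # carry into the characteristic

# e.g. mitchell_mul(3, 3) -> 8 (exact: 9, ~11% low)
```

In hardware this replaces a full multiplier array with leading-one detectors, shifters, and an adder, which is why many more such circuits fit on a chip; the network's training then absorbs the systematic underestimate.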
BMC Bioinformatics | 2014
Davor Sluga; Tomaz Curk; Blaz Zupan; Uroš Lotrič
Background: The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphics Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested.

Results: We have developed a heterogeneous, GPU- and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on the differences between these two modern massively parallel architectures and their software environments. Both implementations achieve execution times an order of magnitude shorter than the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as the MIC implementation on a Xeon Phi P5110 coprocessor, but also requires considerably more programming effort.

Conclusions: General-purpose GPUs are a mature platform with large amounts of computing power, capable of tackling inherently parallel problems, but can prove demanding for the programmer. The new MIC architecture, although it lags behind in performance, reduces the programming effort and compensates with a more general architecture suitable for a wider range of problems.
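The exhaustive scan being accelerated is an embarrassingly parallel loop over all C(m, 2) SNP pairs, each scored independently. SNPsyn's exact scoring function is not reproduced here; this illustrative sketch scores pairs by interaction information gain over a phenotype:

```python
import numpy as np
from itertools import combinations

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def interaction_gain(snp_a, snp_b, phenotype):
    """How much the SNP pair explains the phenotype beyond the two SNPs
    taken individually (illustrative score, not SNPsyn's exact one)."""
    joint = snp_a * 3 + snp_b            # genotypes 0/1/2 -> joint state 0..8
    def ig(x):                           # information gain I(X; phenotype)
        h = entropy(phenotype)
        return h - sum((np.sum(x == v) / len(x)) * entropy(phenotype[x == v])
                       for v in np.unique(x))
    return ig(joint) - ig(snp_a) - ig(snp_b)

def score_all_pairs(genotypes, phenotype):
    """Exhaustive scan over all C(m, 2) pairs: each pair is independent,
    which is exactly what the GPU/MIC kernels partition across threads."""
    m = genotypes.shape[0]
    return {(i, j): interaction_gain(genotypes[i], genotypes[j], phenotype)
            for i, j in combinations(range(m), 2)}
```

For a million SNPs this loop has about 5 * 10^11 iterations, which is why moving it from a single CPU thread to thousands of GPU or MIC threads changes run times from months to days.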
International Conference on Adaptive and Natural Computing Algorithms | 2009
Uroš Lotrič; Andrej Dobnikar
Neural networks have proved effective in solving a wide range of problems. As problems become more demanding, they require larger neural networks, and the time needed for learning grows accordingly. Parallel implementations of learning algorithms are therefore vital for practical applications. Implementation, however, depends strongly on the features of the learning algorithm and the underlying hardware architecture. For this experimental work a dynamic problem was chosen, which calls for recurrent neural networks and a learning algorithm based on the paradigm of learning automata. Two parallel implementations of the algorithm were developed: one on a computing cluster using the MPI and OpenMP libraries, and one on a graphics processing unit using the CUDA library. The performance of both parallel implementations justifies the development of parallel algorithms.
Archive | 2001
Uroš Lotrič; Andrej Dobnikar
To reduce the influence of noise in time series prediction, a neural network, the multilayered perceptron, is combined with smoothing units based on the wavelet multiresolution analysis. Two approaches are compared: smoothing based on the statistical criterion and smoothing which uses the prediction error as the criterion. For the latter an algorithm for simultaneous setting of free parameters of the smoothing unit and the multilayered perceptron is derived. Prediction of noisy time series is shown to be better with the model based on the prediction error.
International Conference on Adaptive and Natural Computing Algorithms | 2011
Uroš Lotrič; Patricio Bulić
Neural networks on chip have found niche areas of application, ranging from mass-market consumer products requiring low cost to real-time systems requiring fast response. Regarding the latter, iterative logarithmic multipliers show great potential for increasing the performance of hardware neural networks. By reducing the relative size of the multiplication circuit, the concurrency and consequently the speed of the model can be greatly improved. The proposed hardware implementation of the multilayer perceptron with on-chip learning ability confirms the potential of the concept. Experiments performed on the Proben1 benchmark dataset show that the adaptive nature of the proposed neural network model enables it to compensate for the errors caused by inexact calculations, while simultaneously increasing performance and reducing power consumption.
International Conference on Machine Learning and Applications | 2005
Catarina Silva; Bernardete Ribeiro; Uroš Lotrič
The amount of text available in digital form has dramatically increased, giving rise to the need for fast text classifiers. The tasks involved can be parallelized and distributed in a grid environment. This paper reports a study conducted on the Reuters-21578 corpus using an SVM learning machine, with the task of text categorization distributed across several platforms. The results achieved are very promising for speeding up text categorization tasks and are valid independently of the learning machine.
Entropy | 2017
Davor Sluga; Uroš Lotrič
We propose a novel feature selection method based on quadratic mutual information, which has its roots in the Cauchy–Schwarz divergence and Rényi entropy. The method estimates quadratic mutual information directly from data samples using Gaussian kernel functions and can detect second-order non-linear relations. Its main advantages are: (i) a unified analysis of discrete and continuous data, without any discretization; and (ii) its parameter-free design. The effectiveness of the proposed method is demonstrated through an extensive comparison with mutual information feature selection (MIFS), minimum redundancy maximum relevance (MRMR), and joint mutual information (JMI) on classification and regression problem domains. The experiments show that the proposed method performs comparably to the other methods on classification problems while being considerably faster; on regression problems it compares favourably to the others, but is slower.
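The Cauchy–Schwarz form of quadratic mutual information can be estimated directly from samples via Gaussian-kernel information potentials. Below is a minimal sketch for two 1-D variables; unlike the paper's parameter-free design, it assumes a fixed kernel width `sigma`, and the choice of that name and default is this illustration's, not the paper's:

```python
import numpy as np

def _ip(a, sigma):
    """Pairwise Gaussian kernel matrix; the convolution of two Parzen
    windows widens the effective kernel by sqrt(2), hence 4*sigma^2."""
    d2 = (a[:, None] - a[None, :]) ** 2
    return np.exp(-d2 / (4.0 * sigma ** 2))

def qmi_cs(x, y, sigma=1.0):
    """Cauchy-Schwarz quadratic mutual information between two 1-D
    samples, estimated directly from data (no discretization):
    QMI_CS = log(V_joint * V_marg / V_cross^2), which is ~0 for
    independent variables and grows with dependence."""
    kx = _ip(np.asarray(x, float), sigma)
    ky = _ip(np.asarray(y, float), sigma)
    v_joint = (kx * ky).mean()                             # ||p(x,y)||^2
    v_marg = kx.mean() * ky.mean()                         # ||p(x)p(y)||^2
    v_cross = (kx.mean(axis=1) * ky.mean(axis=1)).mean()   # <p(x,y), p(x)p(y)>
    return np.log(v_joint * v_marg / v_cross ** 2)
```

For feature selection, each candidate feature would be scored by `qmi_cs(feature, target)` and the top scorers retained; the O(n^2) kernel matrices are what makes fast, direct estimation the method's selling point over discretization-based scores.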