Igor R. Krcmar
University of Banja Luka
Publications
Featured research published by Igor R. Krcmar.
IEEE Transactions on Industrial Electronics | 2014
Darko P. Marcetic; Igor R. Krcmar; Marko Gecic; Petar Matic
In numerous motor drive applications, high rotor speed is a key factor for system cost, performance, and overall energy efficiency. As a result of energy crises and global market competition, the specified rotor speed and fundamental frequency of the induction motor (IM) in many drive applications have noticeably increased. For the same cost and efficiency reasons, this increase in the inverter fundamental output frequency cannot be matched by a corresponding increase in the pulsewidth modulation (PWM) frequency. Therefore, a very low ratio between the PWM and motor fundamental frequencies is to be expected in the near future. In this paper, shaft-sensorless drive performance is investigated at high speeds, with a very low sampling-to-fundamental frequency ratio. Two main problems with rotor flux estimators are identified: the integration problem in the current-based rotor flux model and the phase error in the voltage-based rotor flux model. Both problems are addressed, and a proper joint solution is suggested. The effectiveness of the proposed solution is tested in a model-reference-adaptive-system-based high-speed shaft-sensorless IM drive. Experimental results collected from a digitally controlled IM drive with a low frequency ratio validate the proposed solution.
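The phase error attributed to the voltage-based rotor flux model can be illustrated numerically. The sketch below is not the joint solution proposed in the paper; it only shows, under assumed parameter values and a simple rectangular-rule discrete integrator, how the phase of the estimated flux drifts from the ideal integral of the back-EMF as the sampling-to-fundamental frequency ratio drops.

```python
# Illustration only (not the paper's estimator): phase error of a discretized
# back-EMF integral at a low sampling-to-fundamental frequency ratio.
import numpy as np

f1 = 1000.0                               # fundamental frequency [Hz] (assumed)
w1 = 2 * np.pi * f1
for ratio in (100, 20, 10, 5):            # samples per fundamental period
    Ts = 1.0 / (ratio * f1)
    t = np.arange(20 * ratio) * Ts        # 20 fundamental periods
    emf = np.cos(w1 * t)                  # normalized back-EMF, u_s - R_s * i_s

    flux_disc = np.cumsum(emf) * Ts       # rectangular-rule discrete integrator
    flux_ideal = np.sin(w1 * t) / w1      # continuous-time integral of the same EMF

    tail = slice(-5 * ratio, None)        # last 5 periods, past the start-up transient
    demod = np.exp(-1j * w1 * t[tail])    # extract the fundamental phasor
    ph_err = (np.angle(np.sum(flux_disc[tail] * demod))
              - np.angle(np.sum(flux_ideal[tail] * demod)))
    print(f"samples/period = {ratio:3d}: flux phase error ≈ {np.degrees(ph_err):5.1f} deg")
```

In this toy setting the error grows as roughly half a sampling interval of phase, i.e. about 180/ratio degrees, which becomes significant at the low ratios discussed in the abstract.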
International Conference on Acoustics, Speech, and Signal Processing | 2001
Igor R. Krcmar; Danilo P. Mandic
A fully adaptive normalized nonlinear gradient descent (FANNGD) algorithm for neural adaptive filters employed for nonlinear system identification is proposed. This full adaptation is achieved using the instantaneous squared prediction error to adapt the free parameter of the NNGD algorithm. The convergence analysis of the proposed algorithm is undertaken using the contractivity property of the nonlinear activation function of a neuron. Simulation results show that a fully adaptive NNGD algorithm outperforms the standard NNGD algorithm for nonlinear system identification.
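A minimal sketch of the idea, assuming a single tanh neuron, a toy nonlinear system, and a generic gradient-adaptive rule for the regularization term in the normalized learning rate; the exact recursion derived in the paper may differ.

```python
# Hedged sketch of an NNGD-style normalized update for a single-neuron adaptive
# filter, with the regularization term eps adapted by a stochastic-gradient step
# on the instantaneous squared error (the "fully adaptive" idea).
import numpy as np

rng = np.random.default_rng(0)
N, p = 3000, 4

u = rng.uniform(-0.5, 0.5, N)                     # input to a toy nonlinear system
d = np.zeros(N)
for k in range(2, N):
    d[k] = 0.4 * np.tanh(d[k - 1]) + 0.3 * u[k] + 0.2 * u[k - 1] ** 2

w = np.zeros(p)
eps, rho = 1.0, 0.05                              # regularization term and its step size (assumed)
prev = None
errs = []
for k in range(p, N):
    x = u[k - p:k][::-1]
    y = np.tanh(x @ w)
    e = d[k] - y
    dphi = 1.0 - y ** 2                           # tanh'(net)

    if prev is not None:                          # gradient of 0.5*e(k)^2 w.r.t. the
        e1, dphi1, x1, eta1 = prev                # eps value used one step earlier
        eps = max(1e-3, eps - rho * e * e1 * dphi * dphi1 * eta1 ** 2 * (x @ x1))

    eta = 1.0 / (eps + dphi ** 2 * (x @ x))       # normalized learning rate
    w = w + eta * e * dphi * x
    prev = (e, dphi, x, eta)
    errs.append(e ** 2)

print("mean squared error over the last 200 samples:", round(float(np.mean(errs[-200:])), 6))
```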
International Conference on Acoustics, Speech, and Signal Processing | 2001
Robert J. Foxall; Igor R. Krcmar; Gavin C. Cawley; Stephen Dorling; Danilo P. Mandic
An analysis of the predictability of a nonlinear and nonstationary ozone time series is provided. For rigour, a deterministic versus stochastic (DVS) analysis is first undertaken to detect and measure the inherent nonlinearity of the data. Based upon this, neural and linear adaptive predictors are compared on the time series for various filter orders, thereby indicating the embedding dimension. Simulation results confirm the analysis and show that, for this class of air pollution data, neural predictors, especially recurrent ones, perform best.
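The predictor comparison across filter orders can be sketched as follows. The ozone measurements are not reproduced here, so a synthetic nonlinear series stands in, and the predictor structures (an NLMS linear filter and a single tanh neuron with a normalized gradient update) and step sizes are assumptions rather than the configurations used in the paper.

```python
# Illustrative comparison of linear and neural adaptive one-step predictors
# over several filter orders, reported as prediction gain Rp in dB.
import numpy as np

rng = np.random.default_rng(1)
N = 4000
x = np.zeros(N)                              # synthetic nonlinear series (assumption)
for k in range(1, N):
    x[k] = 0.8 * x[k - 1] - 0.5 * np.tanh(x[k - 1]) + 0.1 * rng.standard_normal()

def prediction_gain(err, sig):               # Rp = 10 log10(signal var / error var)
    return 10 * np.log10(np.var(sig) / np.var(err))

for p in (2, 4, 8, 12):                      # candidate filter orders / embedding dimensions
    w_lin = np.zeros(p)
    w_nl = np.zeros(p)
    e_lin, e_nl = [], []
    for k in range(p, N):
        tap = x[k - p:k][::-1]
        # linear NLMS predictor
        el = x[k] - tap @ w_lin
        w_lin += 0.5 * el * tap / (1e-3 + tap @ tap)
        # single tanh neuron with a normalized gradient update
        y = np.tanh(tap @ w_nl)
        en = x[k] - y
        g = 1 - y ** 2
        w_nl += 0.5 * en * g * tap / (1e-3 + g ** 2 * (tap @ tap))
        e_lin.append(el)
        e_nl.append(en)
    print(f"order {p:2d}: linear Rp = {prediction_gain(e_lin, x[p:]):5.2f} dB, "
          f"neural Rp = {prediction_gain(e_nl, x[p:]):5.2f} dB")
```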
Archive | 2001
Rob Foxall; Igor R. Krcmar; Gavin C. Cawley; Stephen Dorling; Danilo P. Mandic
Three methods (DVS plots, attractor reconstruction, and variance analysis of delay vectors) for detecting nonlinearities in time series are compared on an air pollution dataset. For rigour, each method is also applied to a surrogate dataset, based on a high-order linear fit to the original data. Finally, a comparison of a standard linear analysis with a neural network model analysis of the air pollution dataset is provided.
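One common reading of the surrogate construction (a high-order linear fit driven by Gaussian innovations) is sketched below; the toy series, the AR order, and the time-reversal asymmetry statistic are illustrative assumptions, not the paper's choices.

```python
# Sketch: fit a high-order linear (AR) model, generate a linear surrogate, and
# compare a simple nonlinearity statistic on the original and the surrogate.
import numpy as np

rng = np.random.default_rng(2)
N, order = 4000, 20

x = np.empty(N)                              # toy nonlinear series (logistic map)
x[0] = 0.3
for k in range(1, N):
    x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
xc = x - x.mean()                            # centre the series before the linear fit

# High-order linear (AR) fit by least squares
X = np.column_stack([xc[order - i - 1:N - i - 1] for i in range(order)])
y = xc[order:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_std = np.std(y - X @ a)

# Surrogate: same linear model, Gaussian innovations (destroys nonlinear structure)
s = np.zeros(N)
s[:order] = xc[:order]
for k in range(order, N):
    s[k] = s[k - 1::-1][:order] @ a + resid_std * rng.standard_normal()

def trev(z):                                 # simple time-reversal asymmetry statistic
    d = np.diff(z)
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

print("original  trev:", round(float(trev(xc)), 3))   # clearly non-zero -> nonlinearity
print("surrogate trev:", round(float(trev(s)), 3))    # near zero under the linear null
```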
International Conference on Acoustics, Speech, and Signal Processing | 2002
Warren Sherliker; Igor R. Krcmar; Milorad M. Bozic; Danilo P. Mandic
A sensitivity analysis of neural adaptive filters with respect to the slope parameter of the neuron activation function is performed. The analysis covers both a feedforward neural adaptive filter and a recurrent perceptron. The slope affects the stability and convergence characteristics of a filter through its inherent relationship with the learning rate parameter. In addition, it determines the character of the activation function, i.e., whether it is a contractive or an expansive mapping. The presented analysis shows that gradient-descent-based learning algorithms with an adaptive learning rate significantly reduce the sensitivity of a neural adaptive filter to the slope parameter, compared with learning algorithms that use a constant learning rate. Experimental results on test speech and HRV signals support the analysis.
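A small experiment in the spirit of this analysis, with all signals and step sizes assumed: the same single-neuron filter is trained with a fixed learning rate and with a normalized (adaptive) learning rate for several slope values, so the spread of the steady-state error across slopes can be compared.

```python
# Hedged sketch: steady-state MSE of a single tanh(beta*net) neuron for several
# slopes, with a fixed versus a normalized (adaptive) learning rate.
import numpy as np

rng = np.random.default_rng(8)
N, p = 4000, 3
u = rng.uniform(-1, 1, N)
d = np.tanh(0.5 * u + 0.3 * np.roll(u, 1) - 0.2 * np.roll(u, 2))   # teacher signal (assumed)

def run(beta, normalized):
    w = np.zeros(p)
    sq = []
    for k in range(p, N):
        x = u[k - p + 1:k + 1][::-1]             # current and two past inputs
        y = np.tanh(beta * (x @ w))
        e = d[k] - y
        g = beta * (1 - y ** 2)                  # derivative of tanh(beta*net) w.r.t. net
        if normalized:
            w += e * g * x / (0.1 + g ** 2 * (x @ x))
        else:
            w += 0.1 * e * g * x
        sq.append(e ** 2)
    return np.mean(sq[-1000:])

for beta in (0.5, 1.0, 2.0, 4.0):
    print(f"beta = {beta:3.1f}: fixed-rate MSE = {run(beta, False):.5f}, "
          f"normalized MSE = {run(beta, True):.5f}")
```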
Archive | 2001
Igor R. Krcmar; Danilo P. Mandic; Robert J. Foxall
Atmospheric pollution is a health hazard, so accurate prediction of atmospheric pollution time series is almost a necessity nowadays. The existence of missing data further complicates this challenging problem. The cubic spline interpolation method is applied to hourly measurements of nitrogen oxide (NO), nitrogen dioxide (NO2), ozone (O3), and dust particles (PM10). In order to assess the predictability of an air pollution time series, a class of gradient-descent-based neural adaptive filters is employed. Results indicate that, although simple, this class of neural adaptive filters provides a suitable solution.
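The gap-filling step can be sketched with a standard cubic spline routine; the series and gap pattern below are synthetic stand-ins for the hourly NO, NO2, O3, and PM10 measurements.

```python
# Minimal sketch of cubic spline interpolation over missing hourly readings.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(3)
hours = np.arange(24 * 14)                                   # two weeks of hourly samples
conc = 30 + 10 * np.sin(2 * np.pi * hours / 24) + 2 * rng.standard_normal(hours.size)

conc_missing = conc.copy()
gaps = rng.choice(hours.size, size=40, replace=False)        # simulate missing readings
conc_missing[gaps] = np.nan

observed = ~np.isnan(conc_missing)
spline = CubicSpline(hours[observed], conc_missing[observed])
filled = conc_missing.copy()
filled[~observed] = spline(hours[~observed])                 # interpolate the gaps

print("max abs fill error:", np.max(np.abs(filled[gaps] - conc[gaps])).round(2))
```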
Proceedings of the 5th Seminar on Neural Network Applications in Electrical Engineering (NEUREL 2000) | 2000
Danilo P. Mandic; Igor R. Krcmar
Relationships between the learning rate η and the slope β of the tanh activation function for a feedforward neural network (NN) are provided. The analysis establishes an equivalence, in both the static and dynamic sense, between a referent and an arbitrary feedforward NN, which helps to reduce the number of degrees of freedom in learning algorithms for NNs.
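For the simplest case of a single tanh neuron, such an equivalence can be checked numerically: a neuron with slope β, weights w, and learning rate η follows the same output trajectory as a slope-1 neuron with weights βw and learning rate β²η. The sketch below verifies this for assumed values; the paper itself treats general feedforward networks, so this restricted check is only an illustration.

```python
# Numerical check of a slope/learning-rate equivalence for a single tanh neuron.
import numpy as np

rng = np.random.default_rng(4)
p, N, beta, eta = 5, 300, 2.5, 0.05

w_a = rng.standard_normal(p) * 0.1        # network A: slope beta, learning rate eta
w_b = beta * w_a.copy()                   # network B: slope 1, weights scaled by beta,
eta_b = beta ** 2 * eta                   #            learning rate beta^2 * eta

max_gap = 0.0
for _ in range(N):
    x = rng.standard_normal(p)
    d = np.tanh(0.4 * x[0] - 0.2 * x[1])  # arbitrary teacher signal (an assumption)

    y_a = np.tanh(beta * (x @ w_a))
    w_a += eta * (d - y_a) * beta * (1 - y_a ** 2) * x

    y_b = np.tanh(x @ w_b)
    w_b += eta_b * (d - y_b) * (1 - y_b ** 2) * x

    max_gap = max(max_gap, abs(y_a - y_b))

print("largest output difference over training:", max_gap)   # round-off level only
```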
Proceedings of the 5th Seminar on Neural Network Applications in Electrical Engineering (NEUREL 2000) | 2000
Igor R. Krcmar; Milorad M. Bozic; Danilo P. Mandic
Conditions for the global asymptotic stability of a nonlinear relaxation process realized by a recurrent neural network with a hyperbolic tangent activation function are provided. The analysis is based upon the contraction mapping theorem and the corresponding fixed-point iteration. The derived results find application in the broad area of neural networks for optimization and signal processing.
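A brief sketch of the fixed-point-iteration view, under the assumption that the contraction condition is enforced by scaling the weight matrix so that its spectral norm is below one (tanh is 1-Lipschitz): relaxation runs started from different initial states then converge to the same equilibrium.

```python
# Sketch: the relaxation x(k+1) = tanh(W x(k) + b) with ||W||_2 < 1 is a contraction,
# so iterations from any initial state settle on the same fixed point.
import numpy as np

rng = np.random.default_rng(5)
n = 6
W = rng.standard_normal((n, n))
W *= 0.8 / np.linalg.norm(W, 2)          # enforce the contraction condition ||W||_2 < 1
b = rng.standard_normal(n)

limits = []
for _ in range(5):                       # several random initial states
    x = 5 * rng.standard_normal(n)
    for _ in range(200):                 # relaxation / fixed-point iteration
        x = np.tanh(W @ x + b)
    limits.append(x)

spread = max(np.linalg.norm(f - limits[0]) for f in limits)
print("spread of limits across initial states:", spread)    # ~0 -> unique equilibrium
```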
Symposium on Neural Network Applications in Electrical Engineering | 2010
Igor R. Krcmar; Petar S. Maric; Milorad M. Bozic
Load prediction is a necessity in the deregulated electrical energy sector, both financially and technically. To cope with the nonlinear and nonstationary character of a load signal, an efficient adaptive predictor should be employed. Moreover, power utilities manage load information as a complex-valued signal. To this end, the performance of a class of complex-valued gradient-descent (GD) neural adaptive finite impulse response (FIR) filters is analyzed. It is shown that fully complex nonlinear GD algorithms perform best in the load prediction task. To support the analysis, experiments are carried out on a test load signal metered on a medium-voltage feeder.
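Only a linear complex LMS baseline is sketched here, on an assumed synthetic load treated as P + jQ; the nonlinear complex-valued GD filters compared in the paper require more involved gradients and are omitted.

```python
# Sketch: complex LMS one-step predictor on a synthetic complex-valued load P + jQ.
import numpy as np

rng = np.random.default_rng(6)
hours = np.arange(24 * 60)                                   # 60 days, hourly (assumed)
P = 1.0 + 0.3 * np.sin(2 * np.pi * hours / 24) + 0.05 * rng.standard_normal(hours.size)
Q = 0.4 + 0.1 * np.cos(2 * np.pi * hours / 24) + 0.02 * rng.standard_normal(hours.size)
s = P + 1j * Q                                               # complex-valued load signal

p, mu = 8, 0.05
w = np.zeros(p, dtype=complex)
err = np.zeros(hours.size - p, dtype=complex)
for k in range(p, hours.size):
    x = s[k - p:k][::-1]
    y = np.dot(w, x)                  # one-step prediction, y = w^T x
    e = s[k] - y
    w += mu * e * np.conj(x)          # complex LMS update
    err[k - p] = e

Rp = 10 * np.log10(np.var(s[p:]) / np.var(err))              # prediction gain [dB]
print(f"CLMS prediction gain: {Rp:.1f} dB")
```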
Archive | 2001
Danilo P. Mandic; Igor R. Krcmar; Warren Sherliker; George D. Smith
A data-reusing stochastic approximation algorithm for the adaptation of a neural adaptive filter is derived. The proposed algorithm is of the gradient-descent (GD) type and incorporates the data-reusing technique and a learning-rate annealing schedule. The convergence analysis is undertaken using contraction mapping, and bounds on the learning rate parameter η are provided. The algorithm outperforms the linear LMS and NLMS algorithms for the prediction of speech.
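A hedged sketch of a data-reusing gradient update with learning-rate annealing for a single-neuron nonlinear predictor; the neuron model, the annealing schedule, the reuse counts, and the toy signal are assumptions, and the learning-rate bounds derived in the paper are not reproduced.

```python
# Sketch: each sample pair (tap vector, target) is reused L times within one
# update instant, with an annealed learning rate.
import numpy as np

rng = np.random.default_rng(7)
N, p = 3000, 6
x = np.zeros(N)                               # toy AR(2) signal standing in for speech
for k in range(2, N):
    x[k] = 1.2 * x[k - 1] - 0.5 * x[k - 2] + 0.1 * rng.standard_normal()

for reuse in (1, 3):                          # L = 1 is plain GD; L > 1 reuses each sample
    w = np.zeros(p)
    sq = []
    for k in range(p, N):
        tap = x[k - p:k][::-1]
        eta = 0.3 / (1.0 + k / 1000.0)        # annealing schedule (assumed form)
        for _ in range(reuse):                # data reusing: iterate on the same pair
            y = np.tanh(tap @ w)
            e = x[k] - y
            w += eta * e * (1 - y ** 2) * tap
        sq.append(e ** 2)                     # error after the final reuse pass
    print(f"L = {reuse}: mean squared prediction error = {np.mean(sq):.4f}")
```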