Publications


Featured research published by Dmytro Rusanovskyy.


Advanced Concepts for Intelligent Vision Systems | 2005

Video denoising algorithm in sliding 3D DCT domain

Dmytro Rusanovskyy; Karen O. Egiazarian

This paper considers the problem of denoising video signals corrupted by additive Gaussian noise. A novel 3D DCT-based video denoising algorithm is proposed. Video data are filtered locally in sliding (running) 3D windows (arrays) consisting of highly correlated spatial layers taken from consecutive frames of the video; the layers are selected using block matching or similar techniques. Denoising in each local window is performed by hard thresholding of the 3D DCT coefficients of the 3D array. Final estimates of the reconstructed pixels are obtained as a weighted average of the local estimates from all overlapping windows. Experimental results show that the proposed algorithm is competitive with state-of-the-art video denoising methods in terms of both PSNR and visual quality.
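As a rough illustration of the sliding 3D-DCT hard-thresholding idea described above, here is a minimal sketch. The specifics are assumptions, not taken from the paper: an 8x8x8 co-located window (block matching is omitted), a step of 4 pixels, a threshold of 2.7*sigma, and aggregation weights inversely proportional to the number of retained coefficients.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_3d_dct(video, sigma, block=8, step=4):
    """video: (T, H, W) float array corrupted by AWGN with standard deviation sigma."""
    T, H, W = video.shape
    acc = np.zeros_like(video, dtype=np.float64)
    wgt = np.zeros_like(acc)
    thr = 2.7 * sigma                          # hypothetical threshold level
    for t in range(0, T - block + 1, step):
        for y in range(0, H - block + 1, step):
            for x in range(0, W - block + 1, step):
                cube = video[t:t+block, y:y+block, x:x+block]
                coef = dctn(cube, norm='ortho')
                kept = np.abs(coef) > thr      # hard thresholding of 3D DCT coefficients
                est = idctn(coef * kept, norm='ortho')
                w = 1.0 / (1 + kept.sum())     # sparser local estimates get larger weight
                acc[t:t+block, y:y+block, x:x+block] += w * est
                wgt[t:t+block, y:y+block, x:x+block] += w
    return acc / np.maximum(wgt, 1e-12)        # weighted average over overlapping windows
```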


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Video Coding With Low-Complexity Directional Adaptive Interpolation Filters

Dmytro Rusanovskyy; Kemal Ugur; Antti Hallapuro; Jani Lainema; Moncef Gabbouj

A novel adaptive interpolation filter structure for video coding with motion-compensated prediction is presented in this letter. The proposed scheme uses an independent directional adaptive interpolation filter for each sub-pixel location. The Wiener interpolation filter coefficients are computed analytically for each inter-coded frame at the encoder side and transmitted to the decoder. Experimental results show that the proposed method achieves up to 1.1 dB coding gain and a 15% average bit-rate reduction for high-resolution video material compared to the standard non-adaptive interpolation scheme of H.264/AVC, while requiring 36% fewer arithmetic operations for interpolation. The proposed interpolation can be implemented entirely in 16-bit arithmetic, making it well suited to mobile multimedia environments where computational resources are severely constrained.
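The abstract's "computed analytically" step amounts to a least-squares (Wiener) fit per sub-pixel position. A hedged sketch follows; the 6-tap length, the way samples are gathered, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def wiener_filter_coeffs(ref_rows, targets):
    """ref_rows: (N, taps) integer-pixel samples used to predict each target pixel;
    targets: (N,) original pixel values that motion vectors map to one sub-pel
    position. Returns the filter minimizing the prediction error energy."""
    A = np.asarray(ref_rows, dtype=np.float64)
    b = np.asarray(targets, dtype=np.float64)
    R = A.T @ A                    # autocorrelation of the reference samples
    p = A.T @ b                    # cross-correlation with the original pixels
    return np.linalg.solve(R, p)   # normal equations: R h = p

# Toy check: recover a known 6-tap filter from noiseless synthetic data.
rng = np.random.default_rng(0)
h_true = np.array([1, -5, 20, 20, -5, 1]) / 32.0   # H.264-style half-pel filter
A = rng.normal(size=(500, 6))
b = A @ h_true
print(np.round(wiener_filter_coeffs(A, b), 4))     # recovers ~h_true
```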


International Symposium on Circuits and Systems | 2008

Video coding with pixel-aligned directional adaptive interpolation filters

Dmytro Rusanovskyy; Kemal Ugur; Moncef Gabbouj; Jani Lainema

In this paper, a novel adaptive interpolation filter structure is proposed to improve the coding efficiency of video coders. The proposed scheme uses a one-dimensional directional adaptive filter for each sub-pixel location, whose coefficients are calculated analytically for every frame by minimizing the prediction error energy. The direction of the interpolation filter differs for each sub-pixel position and is determined by the alignment of the corresponding sub-pixel with the integer-pixel samples. Experimental results show that the proposed method achieves up to 1.1 dB gain compared to the standard non-adaptive interpolation scheme of H.264/AVC while requiring fewer operations for interpolation. Compared to two-dimensional non-separable adaptive interpolation, the proposed scheme has practically the same coding efficiency with approximately three times lower complexity. Since a significant coding-efficiency gain is achieved without increasing complexity, the proposed method has important use cases in mobile multimedia environments where resources are severely constrained.


International Conference on Image Processing | 2009

Adaptive interpolation with flexible filter structures for video coding

Dmytro Rusanovskyy; Kemal Ugur; Moncef Gabbouj

Two novel algorithms are proposed for improving the coding efficiency of adaptive interpolation schemes for video codecs without increasing implementation complexity. The proposed algorithms use two different filter structures with equal tap length but complementary frequency responses. Depending on the content being coded, the encoder selects which of the two filter structures is optimal and signals this choice to the decoder. In addition, the filter symmetry is not pre-defined but flexible: the encoder selects the optimal symmetry depending on the coding rate and the content and signals this information to the decoder. Experimental results show that the proposed improvements bring up to 7% bit-rate reduction at high bit rates over conventional adaptive interpolation. Compared to H.264/AVC, the average gain over the test set is 11%. Coding efficiency is improved without increasing complexity, so the proposed algorithms are suitable for mobile multimedia use cases where computational resources are very limited.
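A minimal sketch of the encoder-side choice between candidate filter structures, under the assumption that the decision metric is plain prediction-error energy (a real codec would use a rate-distortion cost); how a "structure" is represented here is purely illustrative.

```python
import numpy as np

def choose_filter_structure(ref, targets, structures):
    """ref: raw integer-pixel neighbourhoods gathered by the encoder;
    targets: the original pixel values they should predict;
    structures: callables, each mapping ref to the design matrix that embodies
    one candidate filter structure (e.g. a particular tap pattern or symmetry).
    Returns (index, coefficients); the index is what gets signalled."""
    best = None
    for idx, build in enumerate(structures):
        A = build(ref)
        h, *_ = np.linalg.lstsq(A, targets, rcond=None)
        sse = float(np.sum((targets - A @ h) ** 2))
        if best is None or sse < best[0]:
            best = (sse, idx, h)
    return best[1], best[2]
```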


Multimedia Signal Processing | 2007

Spatial and Temporal Adaptation of Interpolation Filter For Low Complexity Encoding/Decoding

Dmytro Rusanovskyy; Moncef Gabbouj; Kemal Ugur

Compared to video coding with non-adaptive interpolation filtering, adaptive filters achieve higher compression ratios at the cost of increased encoding and decoding complexity. In our earlier work, we significantly reduced the decoding complexity of adaptive filtering schemes with minimal impact on coding efficiency by using different filters and adapting them spatially and temporally. However, our previous scheme required high encoder complexity, as several encoding passes per frame were needed to analyze the input image and optimize the selection of interpolation filters. In this paper, a novel algorithm that does not require multiple encoding passes, yet gives similar or better performance, is proposed. This is achieved with a modified decision-making function that does not require full reconstruction of the coded frame and uses motion and prediction information more efficiently. In addition, we generalize our previous scheme by introducing additional filters, so that better rate-distortion-complexity trade-offs are possible. Experimental results show that a 50-70% reduction in interpolation complexity is achieved, with less than a 0.13 dB penalty on coding efficiency.
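An illustrative single-pass decision rule in the spirit of the abstract: pick the heavier adaptive filter only when statistics from the one encoding pass suggest it will pay off. The chosen statistic (share of sub-pel motion vectors) and the threshold are assumptions for illustration, not the paper's actual decision function.

```python
def use_adaptive_filter(motion_vectors, threshold=0.4):
    """motion_vectors: iterable of (mvx, mvy) in quarter-pel units from the
    single encoding pass. Returns True when the share of sub-pel vectors is
    high enough to justify the interpolation-heavy adaptive filter."""
    mvs = list(motion_vectors)
    subpel = sum(1 for mvx, mvy in mvs if mvx % 4 or mvy % 4)
    return bool(mvs) and subpel / len(mvs) > threshold
```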


International Conference on Acoustics, Speech, and Signal Processing | 2008

Efficient calculation of adaptive interpolation filter with distortion modelling

Kemal Ugur; Dmytro Rusanovskyy; Moncef Gabbouj

A novel method is proposed to calculate the coefficients of the adaptive interpolation filter used in hybrid video coders to improve coding efficiency. The proposed algorithm first selects the motion blocks where the majority of the prediction error results from motion-estimation mismatches and from aliasing present in the signal. This is realized using a second-order distortion model that estimates the effect of quantization on the motion prediction error from the coding results of previous frames. The filter coefficients are then calculated analytically by minimizing the prediction error of the selected blocks. Experimental results show that the proposed method achieves up to 0.6 dB gain compared to standard H.264/AVC. Compared to methods that calculate the filter coefficients using all motion blocks of the frame, the proposed method has significantly lower encoding complexity (83% less on average) with practically no penalty on coding efficiency.
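A hedged sketch of the block-selection idea: keep only blocks whose measured prediction error clearly exceeds what quantization alone would explain, then fit the filter from those blocks. The Qstep**2/12 per-pixel estimate and the margin factor are crude illustrative stand-ins, not the paper's second-order distortion model.

```python
import numpy as np

def select_blocks(block_sse, block_size, qstep, margin=2.0):
    """Keep blocks whose first-pass prediction-error energy exceeds what
    quantization alone would explain. block_sse: per-block error energies."""
    d_quant = (qstep ** 2 / 12.0) * block_size * block_size  # crude per-block estimate
    return [i for i, e in enumerate(block_sse) if e > margin * d_quant]

def fit_filter_on_selected(A_blocks, b_blocks, selected):
    """Solve the usual least-squares problem using only the selected blocks."""
    A = np.vstack([A_blocks[i] for i in selected])
    b = np.concatenate([b_blocks[i] for i in selected])
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h
```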


Multimedia Signal Processing | 2008

Fast encoding algorithms for video coding with adaptive interpolation filters

Dmytro Rusanovskyy; Kemal Ugur; Moncef Gabbouj

To compensate for the temporally changing effect of aliasing and improve the coding efficiency of video coders, adaptive interpolation filtering schemes have recently been proposed. In such schemes, the encoder computes the interpolation filter coefficients for each frame and then re-encodes the frame with the new adaptive filter. However, the coding-efficiency benefit comes at the expense of increased encoding complexity due to this additional encoding pass. In this paper, we present two novel algorithms to reduce the encoding complexity of adaptive interpolation filtering schemes. The first algorithm reduces the complexity of the second encoding pass by using a very lightweight motion estimation algorithm that reuses data already computed in the first pass. The second algorithm eliminates the second coding pass altogether and reuses the filter coefficients already computed for previous frames. Experimental results show that the proposed methods achieve a 1.5 to 2 times reduction in encoding complexity with a practically negligible penalty on coding efficiency.


Electronic Imaging | 2007

Implementation of DCT-based video denoising algorithm with OMAP innovator development kit

Dmytro Rusanovskyy; Jari J. Koivusaari; Karen O. Egiazarian; Jarmo Takala

Recent developments in embedded systems have enabled mobile devices with significant computational power and long battery life. However, there is still only a limited number of video applications for such platforms: due to the high computational requirements of video processing algorithms, intensive assembler optimization or even hardware design is required to meet the resource constraints of mobile platforms. One example of such a challenging video processing problem is video denoising. In this paper, we present a software implementation of a state-of-the-art video denoising algorithm on a mobile computational platform. The chosen algorithm is based on the three-dimensional discrete cosine transform (3D DCT) and block matching. Besides its architectural simplicity, the algorithm allows computational scalability owing to its sliding-window processing. In addition, its main components, the 8-point DCT and block matching, can be computed efficiently with the hardware acceleration of a modern DSP. Our target platform is the OMAP Innovator development kit, a dual-processor environment comprising an ARM925 RISC general-purpose processor (GPP) and a TMS320C55x digital signal processor (DSP). The C55x DSP offers hardware acceleration support for the DCT and block-matching computations used intensively in the chosen denoising algorithm; this acceleration can offer a significant speed-up compared to assembler optimization of the source code alone. The results demonstrate the feasibility of implementing an efficient video denoising algorithm on a mobile computational platform with limited computational resources.


Electronic Imaging | 2004

Integer orthogonal transforms: design, fast algorithms, and applications

Karen O. Egiazarian; Dmytro Rusanovskyy; Jaakko Astola

The paper is devoted to the design, fast implementation, and applications of a family of 8-point integer orthogonal transforms based on a parametric matrix. A unified algorithm for their efficient computation is developed. The derived fast transforms have coding gain close to that of the optimal Karhunen-Loeve transform for a first-order Markov process; among them are transforms that closely approximate the DCT-II and, at the same time, have a larger coding gain. For particular sets of parameters, integer transforms with reduced computational complexity are obtained. A comparative analysis of these transforms and the DCT-II in the framework of image denoising and video coding is performed.
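As a small illustration (not from the paper) of the coding-gain criterion used in such comparisons: for a first-order Markov (AR(1)) source, the transform coding gain is the ratio of the arithmetic to the geometric mean of the transform-domain coefficient variances. The sketch below computes it for the 8-point DCT-II and the KLT; the correlation value 0.95 is an arbitrary choice.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import eigh

def coding_gain_db(T, R):
    """Transform coding gain in dB: arithmetic over geometric mean of the
    transform-domain coefficient variances diag(T R T^T)."""
    var = np.diag(T @ R @ T.T)
    return 10 * np.log10(var.mean() / np.exp(np.mean(np.log(var))))

N, rho = 8, 0.95
idx = np.arange(N)
R = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1) covariance matrix
D = dct(np.eye(N), norm='ortho', axis=0)         # orthonormal 8-point DCT-II matrix
K = eigh(R)[1].T                                 # KLT: rows are eigenvectors of R
print(f"DCT-II coding gain: {coding_gain_db(D, R):.3f} dB")
print(f"KLT    coding gain: {coding_gain_db(K, R):.3f} dB")
```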


Electronic Imaging | 2004

Feature extraction by best anisotropic Haar bases in an OCR system

Atanas P. Gotchev; Dmytro Rusanovskyy; Roumen Popov; Karen O. Egiazarian; Jaakko Astola

In this contribution, we explore the best-basis paradigm for feature extraction. According to this paradigm, a library of bases is built and the best basis is found for a given signal class with respect to some cost measure. We aim at constructing a library of anisotropic bases suitable for the class of 2-D binarized character images. We consider two generalization schemes of the Haar wavelet packets, one dyadic and one non-dyadic, that lead to anisotropic bases. For the non-dyadic case, generalized Fibonacci p-trees are used to derive the space-division structure of the transform. Both schemes allow for an efficient O(N log N) best-basis search algorithm. The extended library of anisotropic Haar bases built in this way is used for optical character recognition. A special case, namely recognition of characters from very low-resolution, noisy TV images, is investigated. The best Haar basis found is then used in the feature-extraction stage of a standard OCR system. We achieve very promising recognition rates on experimental databases of synthetic and real images separated into 59 classes.
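For intuition, here is a hedged one-dimensional sketch of the best-basis search over dyadic Haar wavelet packets with an additive l1 cost (Coifman-Wickerhauser style). The paper's library is two-dimensional and anisotropic, with dyadic and Fibonacci-p-tree splits; this only illustrates the bottom-up comparison of a parent node against its two children.

```python
import numpy as np

def best_basis(x, depth):
    """Return (cost, leaves): the cheapest representation of x found in the
    dyadic Haar wavelet-packet tree, using the additive l1 cost."""
    x = np.asarray(x, dtype=np.float64)
    cost = np.abs(x).sum()
    if depth == 0 or len(x) < 2 or len(x) % 2:
        return cost, [x]                        # keep this node as a basis leaf
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # Haar average (low-pass) channel
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # Haar difference (high-pass) channel
    ca, ba = best_basis(a, depth - 1)
    cd, bd = best_basis(d, depth - 1)
    if ca + cd < cost:                          # children represent x more cheaply
        return ca + cd, ba + bd
    return cost, [x]

sig = np.repeat([1.0, 4.0, -2.0, 3.0], 8)       # piecewise-constant toy signal
cost, leaves = best_basis(sig, depth=3)
print(cost, [len(leaf) for leaf in leaves])
```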

Collaboration


Dive into Dmytro Rusanovskyy's collaborations.

Top Co-Authors

Moncef Gabbouj (Tampere University of Technology)
Karen O. Egiazarian (Tampere University of Technology)
Jaakko Astola (Tampere University of Technology)
Atanas P. Gotchev (Tampere University of Technology)
Jari J. Koivusaari (Tampere University of Technology)
Jarmo Takala (Tampere University of Technology)
Kostadin Dabov (Tampere University of Technology)