E.A.B. da Silva
Federal University of Rio de Janeiro
Featured researches published by E.A.B. da Silva.
IEEE Transactions on Image Processing | 1996
E.A.B. da Silva; D.G. Sampson; Mohammed Ghanbari
A coding method for wavelet coefficients of images using vector quantization, called successive approximation vector quantization (SA-W-VQ), is proposed. In this method, each vector is coded by a series of vectors of decreasing magnitudes until a certain distortion level is reached. The successive approximation using vectors is analyzed, and conditions for convergence are derived. It is shown that lattice codebooks are an efficient tool for meeting these conditions without the need for very large codebooks. Regular lattices offer the extra advantage of fast encoding algorithms. In SA-W-VQ, distortion equalization of the wavelet coefficients can be achieved together with a high compression ratio and precise bit-rate control. The performance of SA-W-VQ for still image coding is compared against some of the most successful image coding systems reported in the literature. The comparison shows that SA-W-VQ performs remarkably well at several bit rates and on various test images.
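The successive-approximation loop of SA-W-VQ can be sketched in a few lines. This is a hypothetical toy in Python, not the paper's codec: here each stage magnitude is simply a fixed fraction alpha of the current residual norm, whereas the paper uses a predetermined geometric series of magnitudes (so only codeword indices need be transmitted), with the ratio chosen to satisfy the derived convergence conditions.

```python
import numpy as np

def sa_vq(x, codebook, alpha=0.5, tol=1e-6, max_stages=60):
    """Successively approximate x by a series of codebook directions of
    decreasing magnitude; returns the chosen (index, gain) pairs and the
    final residual."""
    residual = np.asarray(x, dtype=float).copy()
    stages = []
    for _ in range(max_stages):
        norm = np.linalg.norm(residual)
        if norm <= tol:
            break
        gain = alpha * norm                       # simplification (see lead-in)
        k = int(np.argmax(codebook @ residual))   # best-aligned codeword
        stages.append((k, gain))
        residual -= gain * codebook[k]
    return stages, residual

# toy "lattice" codebook: 8 unit-norm directions in the plane
dirs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1],
                 [1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
codebook = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
```

Because the codebook covers the plane densely enough for alpha = 0.5, each stage shrinks the residual by a guaranteed factor, mirroring the convergence argument in the paper; the decoder reconstructs the vector as the sum of the scaled codewords.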
IEEE Transactions on Image Processing | 1996
E.A.B. da Silva; Mohammed Ghanbari
The behavior of linear-phase wavelet transforms in low bit-rate image coding is investigated. The influence of certain characteristics of these transforms, such as regularity, number of vanishing moments, filter length, coding gain, frequency selectivity, and the shape of the wavelets, on the coding performance is analyzed. The performance of the wavelet transforms is assessed based on a first-order Markov source and on image quality, using subjective tests. A test image was coded with more than 20 wavelet transforms using a product-code lattice quantizer, and the image quality was rated by different viewers. The results show that, as long as the wavelet transforms perform reasonably well, features such as regularity and number of vanishing moments do not have any important impact on final image quality. The influence of the coding gain by itself is also small. On the other hand, the shape of the synthesis wavelet, which determines the visibility of coding errors in reconstructed images, is very important. Analysis of the data obtained strongly suggests that the design of good wavelet transforms for low bit-rate image coding should take into account chiefly the shape of the synthesis wavelet and, to a lesser extent, the coding gain.
IEEE Transactions on Image Processing | 2013
Andreas Ellmauthaler; Carla L. Pagliari; E.A.B. da Silva
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities; such spreading usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
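A one-level, 1-D sketch of the idea, under illustrative assumptions: a B-spline low-pass filter whose spectral factorization into two shorter filters is exact, a max-abs detail selection rule, and averaging of approximations. The paper works on 2-D images over a full multiscale UWT, completing the decomposition with the second factor after fusion.

```python
import numpy as np

# full analysis low-pass filter and its spectral factorization into
# two shorter filters: h_full equals the convolution of h1 with itself
h_full = np.array([1, 4, 6, 4, 1]) / 16.0
h1 = np.array([1, 2, 1]) / 4.0

def uwt_level(x, h):
    """One undecimated (a trous) level: low-pass approximation plus the
    residual detail, so that x = approx + detail exactly."""
    approx = np.convolve(x, h, mode="same")
    return approx, x - approx

def fuse(a, b):
    """Fuse after filtering with only the short first factor h1, whose
    small support limits spreading of coefficients around singularities."""
    approx_a, det_a = uwt_level(a, h1)
    approx_b, det_b = uwt_level(b, h1)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # max-abs rule
    approx = 0.5 * (approx_a + approx_b)       # average the approximations
    return approx + detail                      # additive reconstruction
```

Because the detail is defined as the residual of the low-pass output, reconstruction is exact, and fusing an image with itself returns the image unchanged.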
IEEE Transactions on Biomedical Engineering | 2008
Eddie Batista de Lima Filho; Nuno M. M. Rodrigues; E.A.B. da Silva; S.M.M. de Faria; V.M.M. da Silva; M.B. de Carvalho
In this brief, we present new preprocessing techniques for electrocardiogram signals, namely, DC equalization and complexity sorting, which, when applied, can improve current 2-D compression algorithms. The experimental results with signals from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) database outperform those of many state-of-the-art schemes described in the literature.
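The two preprocessing steps might look as follows in Python. The beat-matrix layout (one heartbeat per row) and the complexity proxy (total variation) are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def dc_equalize(beats):
    """Remove each heartbeat row's DC level so rows share a common
    baseline; the removed means must be sent as side information."""
    means = beats.mean(axis=1, keepdims=True)
    return beats - means, means

def complexity_sort(beats):
    """Reorder rows by a simple complexity proxy (total variation here)
    so that similar beats end up adjacent, improving 2-D redundancy.
    The permutation is sent so the decoder can undo the reordering."""
    complexity = np.abs(np.diff(beats, axis=1)).sum(axis=1)
    order = np.argsort(complexity)
    return beats[order], order
```

Both operations are lossless given the side information, so they can be prepended to any 2-D compressor.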
IEEE Transactions on Signal Processing | 2005
Lisandro Lovisolo; E.A.B. da Silva; M.A.M. Rodrigues; P.S.R. Diniz
This paper presents coherent representations of electric power system signals. These representations are obtained by employing adaptive signal decompositions, which provide a tool to identify the structures composing a signal and constitute an approach to represent a signal from its identified components. We use the matching pursuit algorithm, a greedy adaptive decomposition that has the potential to decompose a signal into coherent components. The dictionary employed is composed of damped sinusoids in order to obtain signal components closely related to power system phenomena. In addition, we present an effective method to suppress the pre-echo and post-echo artifacts that often appear when using matching pursuits. However, the use of a dictionary of damped sinusoids alone does not ensure that the decomposition will be meaningful in physical terms. To overcome this constraint, we develop a technique leading to efficient coherent damped-sinusoidal decompositions that are closely related to the physical phenomena being observed. The effectiveness of the proposed method for compression of synthetic and natural signals is tested, obtaining high compression ratios along with high signal-to-noise ratios.
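A minimal matching-pursuit sketch with a damped-sinusoid dictionary. The grid of damping factors, frequencies, and phases is an arbitrary illustration, and the paper's pre/post-echo suppression and coherence techniques are not reproduced here.

```python
import numpy as np

def damped_sinusoid(n, rho, omega, phi):
    """A unit-norm damped-sinusoid atom exp(-rho*t) * cos(omega*t + phi)."""
    t = np.arange(n)
    atom = np.exp(-rho * t) * np.cos(omega * t + phi)
    return atom / np.linalg.norm(atom)

def build_dictionary(n):
    """Small grid over damping, frequency, and phase (illustrative sizes)."""
    atoms = [damped_sinusoid(n, rho, omega, phi)
             for rho in (0.0, 0.02, 0.05)
             for omega in np.linspace(0.1, 1.5, 8)
             for phi in (0.0, np.pi / 2)]
    return np.array(atoms)

def matching_pursuit(x, atoms, n_iter=5):
    """Greedy decomposition: repeatedly project the residual onto the
    best-correlated atom and subtract the projection."""
    residual = np.asarray(x, dtype=float).copy()
    terms = []
    for _ in range(n_iter):
        corr = atoms @ residual
        k = int(np.argmax(np.abs(corr)))
        terms.append((k, corr[k]))      # atom index and coefficient
        residual -= corr[k] * atoms[k]
    return terms, residual
```

Each iteration removes the component most correlated with the residual, so a signal consisting of a single dictionary atom is captured in one step.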
IEEE Transactions on Image Processing | 2008
E.B. de Lima Filho; E.A.B. da Silva; M.B. de Carvalho; F. S. Pinage
In this work, we further develop the multidimensional multiscale parser (MMP) algorithm, a recently proposed universal lossy compression method which has been successfully applied to images as well as other types of data, such as video and ECG signals. The MMP is based on approximate multiscale pattern matching, encoding blocks of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using expanded and contracted versions of concatenations of previously encoded blocks. This implies that MMP builds its own dictionary while the input data is being encoded, using segments of the input itself, which lends it a universal flavor. It presents a flexible structure, which allows for easily adding data-specific extensions to the base algorithm. Often, the signals to be encoded belong to a narrow class, such as that of smooth images. In these cases, one expects that some improvement can be achieved by introducing some knowledge about the source to be encoded. In this paper, we use the assumption of smoothness of the source in order to create good context models for the probability of blocks in the dictionary. Such probability models are estimated by considering smoothness constraints around causal block boundaries. In addition, we refine the obtained probability models by also exploiting the existing knowledge about the original scale of the included blocks during the dictionary updating process. Simulation results have shown that these developments allow significant improvements over the original MMP for smooth images, while keeping its state-of-the-art performance for more complex, less smooth ones, thus improving MMP's universal character.
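The core MMP idea of matching blocks against rescaled dictionary patterns, while growing the dictionary from the encoded data itself, can be caricatured in 1-D. This is a deliberately simplified sketch: linear interpolation stands in for the scale transform, the update keeps only concatenations of the two most recent reconstructed blocks, and no rate-distortion optimization or entropy coding is modeled.

```python
import numpy as np

def rescale(pattern, length):
    """Scale transform: resample a dictionary pattern to the block length
    (plain linear interpolation stands in for the paper's operator)."""
    return np.interp(np.linspace(0, 1, length),
                     np.linspace(0, 1, len(pattern)), pattern)

def mmp_encode(signal, block_len=8):
    """Greedy MMP-style loop: match each block against every dictionary
    pattern rescaled to the block length, then grow the dictionary with
    the concatenation of the two most recent reconstructed blocks."""
    dictionary = [np.zeros(2), np.ones(2)]     # tiny seed dictionary
    indices, recon = [], []
    for start in range(0, len(signal), block_len):
        block = signal[start:start + block_len]
        candidates = [rescale(p, len(block)) for p in dictionary]
        idx = int(np.argmin([np.sum((block - c) ** 2) for c in candidates]))
        indices.append(idx)
        recon.append(candidates[idx])
        if len(recon) >= 2:                    # adaptive dictionary update
            dictionary.append(np.concatenate(recon[-2:]))
    return indices, np.concatenate(recon)
```

Since only pattern indices are emitted and the dictionary is rebuilt identically at the decoder from the reconstructed blocks, no dictionary needs to be transmitted, which is the source of MMP's universal flavor.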
IEEE Transactions on Circuits and Systems | 2005
E.B. de Lima Filho; E.A.B. da Silva; M.B. de Carvalho; W.S. da Silva Junior; J. Koiller
In this paper, we use the multidimensional multiscale parser (MMP) algorithm, a recently developed universal lossy compression method, to compress data from electrocardiogram (ECG) signals. The MMP is based on approximate multiscale pattern matching, encoding segments of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using concatenated and displaced versions of previously encoded segments; therefore, MMP builds its own dictionary while the input data is being encoded. The MMP can be easily adapted to compress signals of any number of dimensions, and has been successfully applied to compress two-dimensional (2-D) image data. The quasi-periodic nature of ECG signals makes them suitable for compression using recurrent patterns, as MMP does. However, in order for MMP to be able to efficiently compress ECG signals, several adaptations had to be performed, such as the use of a continuity criterion among segments and the adoption of a prune-join strategy for segmentation. The rate-distortion performance achieved was very good. We show simulation results where MMP performs as well as some of the best encoders in the literature, although at the expense of a high computational complexity.
IEEE Transactions on Biomedical Engineering | 2008
Eddie Batista de Lima Filho; E.A.B. da Silva; M.B. de Carvalho
In this paper, the multidimensional multiscale parser (MMP) is employed for encoding electromyographic signals. The experiments were carried out with real signals acquired in the laboratory and show that the proposed scheme is effective, outperforming even wavelet-based state-of-the-art schemes present in the literature in terms of percent root-mean-square difference times compression ratio.
IEEE Transactions on Smart Grid | 2012
Cristiano Augusto Gomes Marques; Moisés Vidal Ribeiro; Carlos A. Duque; Paulo F. Ribeiro; E.A.B. da Silva
The increasing use of power electronics in power systems causes a high injection of harmonic components, which can in turn interfere with utility equipment and customer loads. Therefore, the correct estimation and measurement of harmonics have become an important issue. If the power frequency of the signal is steady and near the nominal value, the discrete Fourier transform (DFT) can be used and good estimation performance is achieved. However, there are considerable power-frequency variations on isolated systems such as shipboard power systems, microgrids, and smart grids. When these variations occur, there may be significant errors in the estimates using the DFT. In order to deal with this problem, this work presents a novel technique for harmonic estimation based on demodulation of the power line signal and subsequent filtering. The main features of the proposed technique are precise and accurate estimation of harmonics at off-nominal frequencies and fast estimation of harmonics (within about two cycles of the fundamental component). Simulation results show that the proposed technique performs well in comparison with the DFT and can be a good candidate to replace it in cases where the power frequency is subject to considerable variations.
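The demodulation-plus-filtering estimator can be sketched as below. It assumes the (possibly off-nominal) fundamental frequency f1 has already been tracked, and a boxcar average over about two fundamental cycles stands in for the paper's low-pass filter.

```python
import numpy as np

def harmonic_estimate(x, fs, f1, h, n_cycles=2):
    """Estimate amplitude and phase of the h-th harmonic by demodulating
    at h*f1 and applying a boxcar low-pass (the mean) over roughly
    n_cycles periods of the fundamental."""
    n = int(round(n_cycles * fs / f1))        # window of ~2 fundamental cycles
    t = np.arange(n) / fs
    demod = x[:n] * np.exp(-2j * np.pi * h * f1 * t)
    phasor = 2.0 * demod.mean()               # average rejects the other tones
    return np.abs(phasor), np.angle(phasor)
```

Because demodulation shifts the harmonic of interest to DC regardless of how far f1 is from its nominal value, the estimate stays accurate under frequency drift, which is where the plain DFT degrades.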
International Conference on Image Processing | 2005
Nuno M. M. Rodrigues; E.A.B. da Silva; M.B. de Carvalho; S.M.M. de Faria; V.M.M. da Silva
In this paper, we present a new method for image coding that is able to achieve good results over a wide range of image types. This work is based on the multidimensional multiscale parser (MMP) algorithm (M. de Carvalho et al., 2002), allied with an intra-frame image predictive coding scheme. MMP has been shown to have, for a large class of image data, including text, graphics, mixed images, and textures, a compression efficiency comparable to (and, in several cases, well above) that of state-of-the-art encoders. However, for smooth grayscale images, its performance lags behind that of wavelet-based encoders such as JPEG2000. In this paper, we propose a novel encoder using MMP with intra predictive coding, similar to the one used in the H.264/AVC video coding standard. Experimental results show that this method closes the performance gap to JPEG2000 for smooth images, with PSNR gains of up to 1.5 dB, yet it maintains the excellent performance level of MMP for other types of image data, such as text, graphics, and compound images, lending it a useful universal character.
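The intra-prediction front end can be illustrated with a reduced set of H.264-style modes (vertical, horizontal, DC); the residual returned is what an MMP-like dictionary coder would then encode. The mode subset and function names are illustrative, not the paper's exact design.

```python
import numpy as np

def intra_predict(top, left, mode, shape):
    """Predict a block from its causal neighbours: the row above (top)
    and the column to the left (left), H.264-style."""
    h, w = shape
    if mode == "vertical":
        return np.tile(top, (h, 1))             # copy the row above downwards
    if mode == "horizontal":
        return np.tile(left[:, None], (1, w))   # copy the left column across
    return np.full(shape, (top.mean() + left.mean()) / 2.0)  # DC mode

def best_prediction(block, top, left):
    """Pick the mode with minimum residual energy and return the residual
    that the dictionary coder would subsequently encode."""
    best = min(("vertical", "horizontal", "dc"),
               key=lambda m: np.sum(
                   (block - intra_predict(top, left, m, block.shape)) ** 2))
    return best, block - intra_predict(top, left, best, block.shape)
```

For smooth images the residual is close to zero, which is exactly where plain MMP struggled and why the prediction step closes the gap to wavelet coders.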