M.B. de Carvalho
Federal Fluminense University
Publications
Featured research published by M.B. de Carvalho.
IEEE Transactions on Biomedical Engineering | 2008
Eddie Batista de Lima Filho; Nuno M. M. Rodrigues; E.A.B. da Silva; S.M.M. de Faria; V.M.M. da Silva; M.B. de Carvalho
In this brief, we present new preprocessing techniques for electrocardiogram signals, namely, DC equalization and complexity sorting, which, when applied, can improve current 2-D compression algorithms. Experimental results with signals from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) database outperform those of many state-of-the-art schemes described in the literature.
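The two preprocessing steps can be pictured with a short sketch. The fragment below is a minimal, hypothetical illustration in Python/NumPy, assuming the ECG has already been segmented into beats and aligned as rows of a 2-D array; the complexity measure (sum of absolute first differences) and the function names are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def dc_equalize(beats):
    """Subtract each beat's mean so all rows share a common DC level;
    the means must be sent as side information for reconstruction."""
    means = beats.mean(axis=1, keepdims=True)
    return beats - means, means

def complexity_sort(beats):
    """Reorder beats so rows with similar 'activity' become neighbours,
    increasing the vertical correlation seen by a 2-D image coder."""
    complexity = np.abs(np.diff(beats, axis=1)).sum(axis=1)
    order = np.argsort(complexity)
    return beats[order], order  # the permutation is also transmitted

# usage with a dummy (n_beats x beat_length) array
beats = np.random.randn(8, 200)
equalized, means = dc_equalize(beats)
sorted_beats, order = complexity_sort(equalized)
```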
IEEE Transactions on Image Processing | 2008
E.B. de Lima Filho; E.A.B. da Silva; M.B. de Carvalho; F. S. Pinage
In this work, we further develop the multidimensional multiscale parser (MMP) algorithm, a recently proposed universal lossy compression method that has been successfully applied to images as well as other types of data, such as video and ECG signals. MMP is based on approximate multiscale pattern matching, encoding blocks of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using expanded and contracted versions of concatenations of previously encoded blocks. This means that MMP builds its own dictionary while the input data is being encoded, using segments of the input itself, which lends it a universal flavor. It has a flexible structure, which allows data-specific extensions to be easily added to the base algorithm. Often, the signals to be encoded belong to a narrow class, such as that of smooth images. In these cases, one expects that some improvement can be achieved by introducing knowledge about the source to be encoded. In this paper, we use the assumption of source smoothness to create good context models for the probability of blocks in the dictionary. These probability models are estimated by considering smoothness constraints around causal block boundaries. In addition, we refine the obtained probability models by also exploiting knowledge about the original scale of the included blocks during the dictionary updating process. Simulation results show that these developments allow significant improvements over the original MMP for smooth images, while keeping its state-of-the-art performance for more complex, less smooth ones, thus improving MMP's universal character.
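As a rough illustration of the matching step described above, the sketch below approximates a 1-D block with the dictionary pattern that, after being expanded or contracted to the block's length, yields the smallest squared error. It is a simplified, hypothetical sketch: rate costs, the dictionary update rule, and the smoothness-based context models are all omitted.

```python
import numpy as np

def rescale(pattern, length):
    """Expand or contract a 1-D pattern to 'length' samples (linear interpolation)."""
    return np.interp(np.linspace(0, 1, length),
                     np.linspace(0, 1, len(pattern)), pattern)

def best_match(block, dictionary):
    """Return the index and squared error of the best scaled dictionary pattern."""
    best_idx, best_err = -1, np.inf
    for i, pattern in enumerate(dictionary):
        err = np.sum((block - rescale(pattern, len(block))) ** 2)
        if err < best_err:
            best_idx, best_err = i, err
    return best_idx, best_err
```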
IEEE Transactions on Circuits and Systems | 2005
E.B. de Lima Filho; E.A.B. da Silva; M.B. de Carvalho; W.S. da Silva Junior; J. Koiller
In this paper, we use the multidimensional multiscale parser (MMP) algorithm, a recently developed universal lossy compression method, to compress data from electrocardiogram (ECG) signals. MMP is based on approximate multiscale pattern matching, encoding segments of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using concatenated and displaced versions of previously encoded segments; therefore, MMP builds its own dictionary while the input data is being encoded. MMP can be easily adapted to compress signals of any number of dimensions and has been successfully applied to compress two-dimensional (2-D) image data. The quasi-periodic nature of ECG signals makes them suitable for compression based on recurrent patterns, as MMP does. However, in order for MMP to compress ECG signals efficiently, several adaptations had to be made, such as the use of a continuity criterion among segments and the adoption of a prune-join strategy for segmentation. The rate-distortion performance achieved is very good. We show simulation results where MMP performs as well as some of the best encoders in the literature, although at the expense of a high computational complexity.
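The dictionary update can be pictured as follows. The snippet is a hypothetical simplification: it only adds the concatenation of two reconstructed neighbouring segments at its own scale, whereas MMP also inserts scaled versions, and the ECG-specific continuity and prune-join rules are not shown.

```python
import numpy as np

def update_dictionary(dictionary, left_recon, right_recon, max_size=5000):
    """Append the concatenation of two previously encoded (reconstructed)
    segments; max_size is a naive stand-in for dictionary growth control."""
    if len(dictionary) < max_size:
        dictionary.append(np.concatenate([left_recon, right_recon]))
    return dictionary
```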
IEEE Transactions on Biomedical Engineering | 2008
Eddie Batista de Lima Filho; E.A.B. da Silva; M.B. de Carvalho
In this paper, the multidimensional multiscale parser (MMP) is employed for encoding electromyographic signals. The experiments were carried out with real signals acquired in the laboratory and show that the proposed scheme is effective, outperforming even wavelet-based state-of-the-art schemes present in the literature in terms of percent root mean square difference times compression ratio.
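The figure of merit mentioned above combines reconstruction fidelity and rate. The helper functions below compute the percent root-mean-square difference (PRD) and the compression ratio (CR); the exact PRD normalization and the way the two quantities are combined vary across papers, so this is only an illustrative definition.

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference (mean-removed normalization)."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum((original - original.mean()) ** 2)
    return 100.0 * np.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    """How many times smaller the compressed representation is."""
    return original_bits / compressed_bits
```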
International Conference on Image Processing | 2005
Nuno M. M. Rodrigues; E.A.B. da Silva; M.B. de Carvalho; S.M.M. de Faria; V.M.M. da Silva
In this paper we present a new method for image coding that is able to achieve good results over a wide range of image types. This work is based on the multidimensional multiscale parser (MMP) algorithm (M. de Carvalho et al., 2002), combined with an intra-frame predictive image coding scheme. MMP has been shown to have, for a large class of image data, including text, graphics, mixed images, and textures, a compression efficiency comparable to (and, in several cases, well above) that of state-of-the-art encoders. However, for smooth grayscale images, its performance lags behind that of wavelet-based encoders such as JPEG2000. In this paper we propose a novel encoder that uses MMP with intra predictive coding, similar to the one used in the H.264/AVC video coding standard. Experimental results show that this method closes the performance gap to JPEG2000 for smooth images, with PSNR gains of up to 1.5 dB. At the same time, it maintains the excellent performance of MMP for other types of image data, such as text, graphics, and compound images, lending it a useful universal character.
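The intra prediction idea can be sketched as follows: each block is predicted from its already-decoded top and left neighbours and only the residual is passed to the pattern-matching stage. Only three H.264/AVC-like modes are shown, and the mode set and selection rule are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

def intra_predict(top_row, left_col, mode, size):
    """Predict a size x size block from its causal neighbours."""
    if mode == "vertical":    # replicate the row above
        return np.tile(top_row, (size, 1))
    if mode == "horizontal":  # replicate the column to the left
        return np.tile(left_col[:, None], (1, size))
    if mode == "dc":          # flat block at the neighbours' mean
        return np.full((size, size), np.concatenate([top_row, left_col]).mean())
    raise ValueError(f"unknown mode: {mode}")

def best_mode(block, top_row, left_col):
    """Pick the mode with the smallest residual energy."""
    size = block.shape[0]
    errors = {m: np.sum((block - intra_predict(top_row, left_col, m, size)) ** 2)
              for m in ("vertical", "horizontal", "dc")}
    return min(errors, key=errors.get)
```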
International Conference on Image Processing | 2008
Nelson C. Francisco; Nuno M. M. Rodrigues; E.A.B. da Silva; M.B. de Carvalho; S.M.M. de Faria; V.M.M. da Silva; Michel Silva Reis
In this paper we present a new segmentation method for the multidimensional multiscale parser (MMP) algorithm. In previous works we have shown that, for text and compound images, MMP has better compression efficiency than state-of-the-art transform-based encoders like JPEG2000 and H.264/AVC; however, it is still inferior to them for smooth images. In this paper we improve the performance of MMP for smooth images by employing a more flexible block segmentation scheme than the one defined in the original algorithm. The new partition scheme allows MMP to exploit the image's structure in a much more adaptive and effective way, as sketched below. Experimental tests have shown consistent performance gains, mainly for smooth images. When the new block segmentation scheme is employed, MMP outperforms the state-of-the-art JPEG2000 and H.264/AVC intra-frame image coding algorithms for both smooth and non-smooth images, at low to medium compression ratios.
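A flexible segmentation decision of this kind can be illustrated with a recursive rate-distortion comparison: a block is either coded as a whole or split horizontally or vertically, whichever yields the lowest Lagrangian cost. The leaf cost below (mean approximation plus a fixed rate) is a stand-in for the actual MMP matching and rate estimation, and the parameters are hypothetical.

```python
import numpy as np

LAMBDA = 10.0  # illustrative rate-distortion trade-off

def leaf_cost(block):
    """Stand-in for coding the block as a single dictionary pattern:
    approximate it by its mean and charge a fixed rate."""
    return np.sum((block - block.mean()) ** 2) + LAMBDA * 16.0

def segment_cost(block, min_size=2):
    """Lowest cost among: keep whole, split top/bottom, split left/right."""
    best = leaf_cost(block)
    h, w = block.shape
    if h >= 2 * min_size:
        best = min(best, segment_cost(block[: h // 2], min_size)
                         + segment_cost(block[h // 2:], min_size))
    if w >= 2 * min_size:
        best = min(best, segment_cost(block[:, : w // 2], min_size)
                         + segment_cost(block[:, w // 2:], min_size))
    return best
```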
IEEE Transactions on Biomedical Engineering | 2009
Eddie Batista de Lima Filho; Nuno M. M. Rodrigues; E.A.B. da Silva; M.B. de Carvalho; S.M.M. de Faria; V.M.M. da Silva
This paper presents the results of a multiscale pattern-matching-based ECG encoder, which employs simple preprocessing techniques for adapting the input signal. Experiments carried out with records from the Massachusetts Institute of Technology-Beth Israel Hospital database show that the proposed scheme is effective, outperforming some state-of-the-art schemes described in the literature.
IEEE International Telecommunications Symposium | 2006
Nuno M. M. Rodrigues; E.A.B. da Silva; M.B. de Carvalho; S.M.M. de Faria; V.M.M. da Silva
The multidimensional multiscale parser (MMP) is a lossy multidimensional signal encoder that uses an adaptive dictionary to approximate the original signal through multiscale recurrent pattern matching. In previous work we have shown the efficiency of MMP for image coding and have described new techniques to improve its performance, using predictive coding (MMP-Intra) and innovative strategies for reducing dictionary redundancy. The combination of these methods achieves much better image coding results than the state-of-the-art JPEG2000 and H.264/AVC intra image encoders for text and compound images, but for smooth natural images it still presents small losses. In this work we present a new technique to improve the dictionary adaptation process of MMP-Intra, based on enhanced updating techniques. Experimental results show that, when combined with dictionary growth control methods, this technique achieves consistent image quality gains for all image types. Furthermore, we present methods that eliminate the substantial increase in computational complexity associated with more rapidly growing dictionaries, without compromising the final quality of the decoded image.
International Symposium on Circuits and Systems | 2006
Nuno M. M. Rodrigues; E.A.B. da Silva; M.B. de Carvalho; S.M.M. de Faria; V.M.M. da Silva; F. S. Pinage
MMP-Intra was recently proposed as a recurrent-pattern-based image encoder that combines the multidimensional multiscale parser (MMP) algorithm with intra prediction techniques. Our results show that this method is able to achieve considerable gains over state-of-the-art transform-based image encoders for a wide variety of image types, such as text, compound (text and graphics), and texture images, while having performance close to that of traditional algorithms for smooth images. Because of this universal character, MMP-Intra can be regarded as a viable alternative to transform-based image coding. MMP-Intra uses a multiscale adaptive dictionary, composed of dilations, contractions, and concatenations of previously encoded patterns, to approximate the original data blocks. In this work we present a new method for controlling the dictionary's adaptability, in which the dictionary is only updated if a certain distortion criterion is met by the block being encoded. Experimental results show that this scheme is able to consistently outperform the original method, while achieving relevant reductions in its computational complexity.
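The update control described above can be pictured with the toy rule below, where a candidate pattern is inserted only when the coding distortion of the current block exceeds a threshold (the intuition being that well-approximated blocks contribute little new information to the dictionary). Both the direction of the test and the threshold are assumptions made for illustration, not the paper's exact criterion.

```python
def maybe_update(dictionary, candidate_pattern, block_distortion, threshold):
    """Conditionally grow the dictionary based on how well the current
    block was approximated; skipping updates keeps the dictionary small
    and reduces the cost of future matching searches."""
    if block_distortion > threshold:
        dictionary.append(candidate_pattern)
        return True
    return False
```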
International Conference on Image Processing | 1999
M.B. de Carvalho; E.A.B. da Silva
A universal algorithm to compress multidimensional data is presented. Besides being able to exploit multidimensional correlations in the data, it incorporates two other fundamental innovations compared to its predecessors, the Lossy Lempel-Ziv (LLZ) and Hierarchical String Matching algorithms: first, instead of building a dictionary of strings to match the data, it builds a dictionary of basis functions in which the data is decomposed, in the spirit of Mallat's matching pursuits; second, any basis function in the dictionary can be dilated or contracted when used to match the data. Simulation results show that it has good coding performance for a large class of image data. With Gaussian sources it has shown good performance, outperforming LLZ algorithms at low data rates and coming close to the rate-distortion function R(D). With real image data, where LLZ fails at all rates, it has performed even better, showing a great improvement over LLZ.
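The "dictionary of basis functions" idea, in the spirit of matching pursuits, can be illustrated with the greedy sketch below: at each step every atom is dilated or contracted to the length of the block being approximated, the atom that removes the most residual energy is subtracted, and its index and coefficient would then be entropy coded. This is a conceptual, hypothetical illustration rather than the paper's exact procedure.

```python
import numpy as np

def rescale(atom, length):
    """Dilate or contract a 1-D atom to 'length' samples."""
    return np.interp(np.linspace(0, 1, length),
                     np.linspace(0, 1, len(atom)), atom)

def greedy_decompose(signal, atoms, n_terms=5):
    """Matching-pursuit-style decomposition over scaled dictionary atoms."""
    residual = signal.astype(float).copy()
    chosen = []
    for _ in range(n_terms):
        best = None
        for i, atom in enumerate(atoms):
            a = rescale(atom, len(signal))
            norm = np.dot(a, a)
            if norm == 0.0:
                continue
            coef = np.dot(residual, a) / norm
            gain = coef * np.dot(residual, a)  # energy removed by this atom
            if best is None or gain > best[0]:
                best = (gain, i, coef, a)
        _, i, coef, a = best
        residual -= coef * a
        chosen.append((i, coef))
    return chosen, residual
```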