Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Leonardo Vidal Batista is active.

Publication


Featured research published by Leonardo Vidal Batista.


Medical Engineering & Physics | 2001

Compression of ECG signals by optimized quantization of discrete cosine transform coefficients

Leonardo Vidal Batista; Elmar Uwe Kurt Melcher; Luis Carlos Carvalho

This paper presents an ECG compressor based on optimized quantization of Discrete Cosine Transform (DCT) coefficients. The ECG to be compressed is partitioned into blocks of fixed size, and each DCT block is quantized using a quantization vector and a threshold vector that are specifically defined for each signal. These vectors are defined, via Lagrange multipliers, so that the estimated entropy is minimized for a given distortion in the reconstructed signal. The optimization method presented in this paper is an adaptation for ECG of a technique previously used for image compression. In the last step of the proposed compressor, the quantized coefficients are coded by an arithmetic coder. The Percent Root-Mean-Square Difference (PRD) was adopted as a measure of the distortion introduced by the compressor. To assess the performance of the proposed compressor, 2-minute sections of all 96 records of the MIT-BIH Arrhythmia Database were compressed at different PRD values, and the corresponding compression ratios were computed. We also present traces of test signals before and after the compression/decompression process. The results show that the proposed method achieves good compression ratios (CR) with excellent reconstruction quality. An average CR of 9.3:1 is achieved for a PRD of 2.5%. Experiments with ECG records used in other studies from the literature revealed that the proposed method compares favorably with various classical and state-of-the-art ECG compressors.
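
A minimal sketch of this block-DCT pipeline, assuming a single fixed quantization step and threshold instead of the paper's Lagrange-optimized per-signal vectors, a synthetic ECG-like signal in place of the MIT-BIH records, and no arithmetic coding stage:

```python
# Hypothetical illustration only: fixed step/threshold and a synthetic signal;
# the paper optimizes these per signal via Lagrange multipliers.
import numpy as np
from scipy.fft import dct, idct

def compress_block(block, step=0.02, threshold=0.01):
    """DCT a block, zero small coefficients, quantize the rest uniformly."""
    coeffs = dct(block, norm="ortho")
    coeffs[np.abs(coeffs) < threshold] = 0.0            # thresholding
    return np.round(coeffs / step).astype(np.int32)     # uniform quantization

def decompress_block(quantized, step=0.02):
    """Dequantize and invert the DCT to reconstruct the block."""
    return idct(quantized * step, norm="ortho")

def prd(original, reconstructed):
    """Percent Root-mean-square Difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) /
                           np.sum(original ** 2))

if __name__ == "__main__":
    fs, block_size = 360, 256                            # MIT-BIH sampling rate
    t = np.arange(10 * fs) / fs
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.sin(2 * np.pi * 15 * t)
    blocks = ecg[:len(ecg) // block_size * block_size].reshape(-1, block_size)
    rec = np.concatenate([decompress_block(compress_block(b)) for b in blocks])
    print(f"PRD = {prd(blocks.ravel(), rec):.2f}%")
```

In the full method, the quantized coefficients would then be passed to an arithmetic coder, and the step and threshold would vary per coefficient and per signal.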


Brazilian Symposium on Computer Graphics and Image Processing | 2009

2D-DCT Distance Based Face Recognition Using a Reduced Number of Coefficients

Derzu Omaia; JanKees van der Poel; Leonardo Vidal Batista

Automatic face recognition is a challenging problem, since human faces have complex patterns. This paper presents a technique for the recognition of frontal human faces in grayscale images. In this technique, the distance between the Discrete Cosine Transform (DCT) of the face under evaluation and the DCTs of all faces in the database is computed. The faces with the shortest distances most likely belong to the same person, so the face under evaluation is attributed to that person. The distance is calculated as the sum of the differences between the magnitudes of the DCT coefficients. Only a few coefficients, selected from the low-frequency region of the DCT, are used in this computation. Experimental tests on the ORL database reach a recognition rate of 99.75%, with low computational cost and no preprocessing step. Additionally, the method achieved 100.0% recognition accuracy when a zoom normalization was applied to the ORL database.
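
A hedged sketch of this nearest-neighbour scheme, assuming an 8 x 8 block of low-frequency coefficient magnitudes and random ORL-sized arrays in place of actual face images:

```python
import numpy as np
from scipy.fft import dctn

def low_freq_dct(image, k=8):
    """Magnitudes of the k x k lowest-frequency 2D-DCT coefficients."""
    return np.abs(dctn(image.astype(float), norm="ortho")[:k, :k])

def recognize(probe, gallery, labels, k=8):
    """Nearest neighbour on the sum of absolute coefficient differences."""
    probe_feat = low_freq_dct(probe, k)
    dists = [np.sum(np.abs(probe_feat - low_freq_dct(g, k))) for g in gallery]
    return labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = [rng.random((112, 92)) for _ in range(4)]   # ORL image size
    labels = ["subject_1", "subject_1", "subject_2", "subject_2"]
    probe = gallery[2] + 0.01 * rng.random((112, 92))     # noisy copy of a face
    print(recognize(probe, gallery, labels))              # prints subject_2
```

The number of retained coefficients `k` trades recognition accuracy against computational cost; the paper's point is that only a few low-frequency coefficients are needed.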


International Conference on Hybrid Intelligent Systems | 2005

A new multiscale, curvature-based shape representation technique for image retrieval based on DSP techniques

J. van der Poel; C.W.D. de Almeida; Leonardo Vidal Batista

This work presents a new multiscale, curvature-based shape representation technique for planar curves. One limitation of the well-known curvature scale space (CSS) method is that it uses only curvature zero-crossings to characterize shapes and thus there is no CSS descriptor for convex shapes. The proposed method, on the other hand, uses bidimensional-unidimensional-bidimensional transformations together with resampling techniques to retain the full curvature information for shape characterization. It also employs the correlation coefficient as a measure of similarity. In the evaluation tests, the proposed method achieved a high correct classification rate (CCR), even when the shapes were severely corrupted by noise. Results clearly showed that the proposed method is more robust to noise than CSS.
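
As a rough single-scale sketch of the underlying idea (the multiscale analysis and the bidimensional-unidimensional-bidimensional transformations of the paper are not reproduced here), one can compute a resampled curvature signature for a planar curve and compare signatures with the correlation coefficient:

```python
import numpy as np

def curvature_signature(points, n_samples=128):
    """Curvature along a planar curve, resampled to a fixed length."""
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    src = np.linspace(0.0, 1.0, len(kappa))
    return np.interp(np.linspace(0.0, 1.0, n_samples), src, kappa)

def similarity(sig_a, sig_b):
    """Correlation coefficient between two curvature signatures."""
    return np.corrcoef(sig_a, sig_b)[0, 1]

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    ellipse = np.c_[2 * np.cos(t), np.sin(t)]              # a convex shape
    c, s = np.cos(0.6), np.sin(0.6)
    rotated = ellipse @ np.array([[c, -s], [s, c]]).T      # same shape, rotated
    r = 1 + 0.3 * np.cos(3 * t)
    lobed = np.c_[r * np.cos(t), r * np.sin(t)]            # a different shape
    print(similarity(curvature_signature(ellipse), curvature_signature(rotated)))
    print(similarity(curvature_signature(ellipse), curvature_signature(lobed)))
```

The first similarity should be close to 1, since rotation leaves curvature unchanged, while the second should be noticeably lower. Note that, unlike zero-crossing-based CSS, this signature remains informative for convex shapes, whose curvature never changes sign.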


International Conference on Document Analysis and Recognition | 2009

Author Identification Using Compression Models

Daniel Pavelec; Luiz S. Oliveira; Edson J. R. Justino; F. D. Nobre Neto; Leonardo Vidal Batista

In this paper we discuss the use of compression algorithms for author identification. We present the basic background on compression algorithms and introduce the Prediction by Partial Matching (PPM) algorithm, which has been used in our experiments. To better compare the results produced by the PPM algorithm, we present some experiments using stylometric features often used by forensic examiners. In this case the authors are modeled using Support Vector Machines. Comprehensive experiments performed on a database composed of 20 different authors show that the PPM algorithm is an interesting alternative for author identification, since the entire process of feature definition, extraction, and selection can be avoided.
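
PPM itself is not reproduced here; the sketch below uses Python's lzma module as a stand-in general-purpose compressor, with two tiny made-up corpora, just to illustrate the compression-based decision rule: attribute the disputed text to the author whose reference corpus yields the smallest increase in compressed size.

```python
import lzma

def compressed_size(data: bytes) -> int:
    """Compressed length in bytes (lzma stands in for PPM here)."""
    return len(lzma.compress(data))

def attribute(disputed: str, corpora: dict) -> str:
    """Choose the author whose corpus 'explains' the disputed text best,
    i.e. minimizes C(corpus + disputed) - C(corpus)."""
    probe = disputed.encode("utf-8")
    def extra_bytes(corpus: str) -> int:
        ref = corpus.encode("utf-8")
        return compressed_size(ref + probe) - compressed_size(ref)
    return min(corpora, key=lambda author: extra_bytes(corpora[author]))

if __name__ == "__main__":
    corpora = {
        "author_a": "the quick brown fox jumps over the lazy dog " * 50,
        "author_b": "lorem ipsum dolor sit amet consectetur adipiscing elit " * 50,
    }
    disputed = "the quick brown fox naps beside the lazy dog"
    print(attribute(disputed, corpora))   # author_a shares far more structure
```

No feature definition, extraction, or selection is needed; the compressor's statistical model plays that role.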


Archive | 2007

Near-Lossless Compression of ECG Signals using Perceptual Masks in the DCT Domain

Rodrigo Cartaxo Marques Duarte; Fabrizia M. Matos; Leonardo Vidal Batista

This paper describes a perceptual masking method to compress ECG signals. Defining the perceptual masks demands a visual assessment of the resulting quality, instead of relying on purely mathematical distortion measures. The proposed method uses thresholding and numerical masks for the DCT coefficients to obtain the maximum number of zeroed coefficients without perceptual distortions in the reconstructed signal. An average of 52.42% zeroed coefficients was achieved for an average PRD of 1.24.
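
A purely illustrative sketch of the mechanics: the paper's masks are derived from visual assessment of the reconstruction, whereas the frequency-dependent threshold below is a made-up stand-in, applied to a synthetic signal.

```python
import numpy as np
from scipy.fft import dct, idct

def apply_mask(block, base_threshold=0.005):
    """Zero DCT coefficients that fall below a frequency-dependent threshold."""
    coeffs = dct(block, norm="ortho")
    # hypothetical mask: higher-frequency coefficients get larger thresholds
    mask = base_threshold * (1.0 + np.arange(len(coeffs)) / 16.0)
    kept = np.where(np.abs(coeffs) >= mask, coeffs, 0.0)
    return kept, float(np.mean(kept == 0.0))

if __name__ == "__main__":
    t = np.arange(2048) / 360.0                        # synthetic "ECG"
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 25 * t)
    reconstructed, zeroed = [], []
    for block in ecg.reshape(-1, 256):
        kept, frac = apply_mask(block)
        reconstructed.append(idct(kept, norm="ortho"))
        zeroed.append(frac)
    reconstructed = np.concatenate(reconstructed)
    prd = 100 * np.sqrt(np.sum((ecg - reconstructed) ** 2) / np.sum(ecg ** 2))
    print(f"zeroed coefficients: {np.mean(zeroed):.1%}, PRD: {prd:.2f}%")
```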


ACM Symposium on Applied Computing | 2008

Author identification using writer-dependent and writer-independent strategies

Daniel Pavelec; Edson J. R. Justino; Leonardo Vidal Batista; Luiz S. Oliveira

In this work we discuss author identification for documents written in Portuguese. Two different approaches were compared. The first is the writer-independent model, which reduces the pattern recognition problem to a single model and two classes, hence making it possible to build a robust system even when few genuine samples per writer are available. The second is the personal (writer-dependent) model, which very often performs better but requires a larger number of samples per writer. We also introduce a stylometric feature set based on the conjunctions and adverbs of the Portuguese language. Experiments on a database composed of short articles from 30 different authors, with a Support Vector Machine (SVM) as classifier, demonstrate that the proposed strategy can produce results comparable to those in the literature.
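
A sketch of the stylometric SVM pipeline under stated assumptions: the marker-word list below is a small illustrative subset of Portuguese conjunctions and adverbs, not the paper's feature set, and the two "authors" are tiny synthetic text samples.

```python
import numpy as np
from sklearn.svm import SVC

MARKERS = ["e", "mas", "porque", "portanto", "também", "nunca", "sempre", "ou"]

def features(text: str) -> np.ndarray:
    """Relative frequency of each marker word in the text."""
    words = text.lower().split()
    counts = np.array([words.count(m) for m in MARKERS], dtype=float)
    return counts / max(len(words), 1)

if __name__ == "__main__":
    texts_a = ["eu sempre escrevo porque gosto e porque devo",
               "sempre penso e sempre escrevo porque sim"]
    texts_b = ["nunca saio mas talvez volte ou talvez não",
               "talvez chova mas nunca se sabe ou se espera"]
    X = np.array([features(t) for t in texts_a + texts_b])
    y = ["A", "A", "B", "B"]
    clf = SVC(kernel="linear").fit(X, y)       # writer-dependent toy model
    probe = "escrevo sempre porque quero e porque posso"
    print(clf.predict([features(probe)]))      # expected: ['A']
```

In the writer-independent setting, the same feature vectors would instead feed a single two-class model trained on pairs of samples (same author versus different author).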


International Symposium on Neural Networks | 2009

Compression and stylometry for author identification

Daniel Pavelec; Luiz S. Oliveira; Edson J. R. Justino; F. D. Nobre Neto; Leonardo Vidal Batista

In this paper we compare two different paradigms for author identification. The first one is based on compression algorithms, where the entire process of defining and extracting features and training a classifier is avoided. The second paradigm, on the other hand, follows the classical pattern recognition framework, where linguistic features proposed by forensic experts are used to train a Support Vector Machine classifier. Comprehensive experiments performed on a database composed of 20 writers show that both strategies achieve similar performance but with an interesting degree of complementarity, demonstrated through the confusion matrices. Advantages and drawbacks of both paradigms are also discussed.


PLOS ONE | 2016

A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.

Daniel Miranda de Brito; Vinicius Maracaja-Coutinho; Sávio Torres de Farias; Leonardo Vidal Batista; Thais G. Rêgo

Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms through horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Nevertheless, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism, so the identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic island prediction have been proposed. However, none of them is capable of precisely predicting the complete repertoire of GIs in a genome. The difficulties arise from the changes in performance of different algorithms in the face of the variety of nucleotide distributions across species. In this paper, we present a novel method to predict GIs built upon the mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP (Mean Shift Genomic Island Predictor). Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods, as well as novel islands not predicted before. A detailed investigation of the features related to typical GI elements in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions of this new methodology are available at http://msgip.integrativebioinformatics.me.
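
A hedged sketch of the clustering idea only: genome windows are described by a simple composition feature (GC content here; MSGIP's actual features and bandwidth heuristic are not reproduced, and scikit-learn's estimate_bandwidth stands in for it) and grouped with mean shift, so windows whose composition deviates from the bulk genome fall into separate clusters, which is the signature expected of horizontally transferred regions.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def gc_windows(sequence: str, window: int = 1000) -> np.ndarray:
    """GC content of consecutive non-overlapping windows."""
    gc = []
    for start in range(0, len(sequence) - window + 1, window):
        chunk = sequence[start:start + window]
        gc.append((chunk.count("G") + chunk.count("C")) / window)
    return np.array(gc).reshape(-1, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic host genome with a composition-biased "island" inserted
    host = "".join(rng.choice(list("ACGT"), p=[0.3, 0.2, 0.2, 0.3], size=50_000))
    island = "".join(rng.choice(list("ACGT"), p=[0.15, 0.35, 0.35, 0.15], size=5_000))
    genome = host[:25_000] + island + host[25_000:]
    X = gc_windows(genome)
    bandwidth = estimate_bandwidth(X, quantile=0.3)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
    print(labels)   # windows covering the inserted island get their own label
```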


GIScience & Remote Sensing | 2014

Multispectral image unsupervised segmentation using watershed transformation and cross-entropy minimization in different land use

Eduardo Freire Santana; Leonardo Vidal Batista; Richarde Marques da Silva; Celso Augusto Guimarães Santos

A general-purpose unsupervised segmentation algorithm based on cross-entropy minimization by pixel was developed. This algorithm, known as SCEMA (Segmentation Cross-Entropy Minimization Algorithm), starts from an initial segmentation and iteratively searches for the best statistical model, estimating the probability density of the image to reduce the cross-entropy with respect to the previous iteration. SCEMA was tested using satellite images from the Landsat 5 Thematic Mapper sensor for the Amazon region (12 images for testing and 15 for validation). The theme classes identified in the images were (1) water, (2) vegetation, and (3) agriculture. Using the Kappa index and other statistical parameters, the classifications were compared with the following segmentation methods: (1) cross-entropy minimization by pixel, (2) cross-entropy minimization by region, (3) K-means, and (4) maximum likelihood. The results indicate that cross-entropy minimization by pixel produces a consistent segmentation of the images. The algorithm also compares favorably with other well-known image segmentation methods, and the numerical test results illustrate the efficiency of the approach for image segmentation.
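
A simplified sketch of the iterative model-refinement loop the abstract describes. It is not SCEMA itself (no watershed initialization, a single band, and Gaussian class models are assumed): fit a statistical model per class, reassign each pixel to the class that minimizes its negative log-likelihood (its contribution to the cross-entropy), and repeat until the labeling stabilizes.

```python
import numpy as np

def segment(image, n_classes=3, n_iter=20):
    pixels = image.ravel().astype(float)
    # initial segmentation: equally spaced quantile thresholds
    thresholds = np.quantile(pixels, np.linspace(0, 1, n_classes + 1)[1:-1])
    labels = np.digitize(pixels, thresholds)
    for _ in range(n_iter):
        means = np.array([pixels[labels == c].mean() for c in range(n_classes)])
        stds = np.array([pixels[labels == c].std() + 1e-6 for c in range(n_classes)])
        # negative log of the Gaussian density of every pixel under each class
        nll = 0.5 * ((pixels[:, None] - means) / stds) ** 2 + np.log(stds)
        new_labels = np.argmin(nll, axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels.reshape(image.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    water = rng.normal(20, 5, (40, 40))     # synthetic reflectance values
    forest = rng.normal(90, 8, (40, 40))
    crops = rng.normal(160, 10, (40, 40))
    image = np.hstack([water, forest, crops])
    seg = segment(image)
    print(np.unique(seg, return_counts=True))   # three classes, ~1600 px each
```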


ISSNIP Biosignals and Biorobotics Conference (Biosignals and Robotics for Better and Safer Living) | 2011

Heart arrhythmia classification using the PPM algorithm

Thiago Fernandes Lins de Medeiros; Erick Vagner Cabral De Lima Borges; Berg Élisson Sampaio Cavalcante; Amanda Barreto Cavalvanti; Igor Lucena Peixoto Andrezza; Leonardo Vidal Batista

This paper describes a method for heart arrhythmia classification based on the heart rate variability (HRV) signal and the Prediction by Partial Matching (PPM) compression algorithm. The arrhythmias to be identified are Normal Sinus Rhythm, Premature Ventricular Contraction, 2nd Heart Block and Sinus Bradycardia. The HRV signal is extracted by analyzing the electrocardiogram to detect the R peak of the QRS complex of each heartbeat and then generating the signal. The classification of the arrhythmias is done in two steps. In the learning stage, the PPM algorithm builds statistical models for the extracted tachograms. In the classification stage, the tachograms are compressed with the obtained models and attributed to the class whose model results in the best compression ratio. The tests were performed with 1558 segments from the MIT-BIH Arrhythmia Database. The classifier was tested for several context sizes, k, and different training/classification sets. The performance of the classifier was measured in terms of sensitivity, specificity and accuracy. The best results were obtained with a context size of two (k=2), achieving 91.74% sensitivity, 99.37% specificity and 99.14% accuracy, results comparable to those of the best modern classifiers.
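
A sketch of the decision rule only, under stated assumptions: PPM is replaced by Python's lzma compressor, the tachograms are synthetic RR-interval sequences, and R-peak detection is omitted. The class whose reference data compresses the new tachogram with the fewest extra bytes is chosen.

```python
import lzma
import numpy as np

def symbolize(rr_intervals, step=0.05):
    """Quantize RR intervals (in seconds) into a byte string of symbols."""
    symbols = np.clip(np.round(np.asarray(rr_intervals) / step), 0, 255)
    return bytes(symbols.astype(np.uint8))

def classification_cost(reference: bytes, probe: bytes) -> int:
    """Extra bytes needed to compress the probe after the reference tachogram."""
    return len(lzma.compress(reference + probe)) - len(lzma.compress(reference))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    references = {                         # synthetic reference tachograms
        "normal_sinus": 0.80 + 0.03 * rng.standard_normal(600),
        "bradycardia": 1.30 + 0.04 * rng.standard_normal(600),
    }
    probe = symbolize(0.80 + 0.03 * rng.standard_normal(300))
    costs = {name: classification_cost(symbolize(rr), probe)
             for name, rr in references.items()}
    print(costs, "->", min(costs, key=costs.get))   # smallest cost wins
```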

Collaboration


Dive into Leonardo Vidal Batista's collaborations.

Top Co-Authors

JanKees van der Poel
Federal University of Paraíba

Daniel Pavelec
Pontifícia Universidade Católica do Paraná

Edson J. R. Justino
Pontifícia Universidade Católica do Paraná

Eduardo Freire Santana
Federal University of Paraíba

Luiz S. Oliveira
Federal University of Paraná

Nicomedes L. Cavalcanti
Federal University of Paraíba

Thais G. Rêgo
Federal University of Paraíba