Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Menotti is active.

Publication


Featured research published by David Menotti.


IEEE Transactions on Consumer Electronics | 2007

Multi-Histogram Equalization Methods for Contrast Enhancement and Brightness Preserving

David Menotti; Laurent Najman; J. Facon; A. A. de Araujo

Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, it tends to change the mean brightness of the image to the middle of the gray-level range, which is not desirable for images from consumer electronics products. In that case, preserving the input brightness of the image is required to avoid the generation of non-existent artifacts in the output image. To surmount this drawback, Bi-HE methods for brightness preservation and contrast enhancement have been proposed. Although these methods preserve the input brightness in the output image while significantly enhancing contrast, they may produce images that do not look as natural as the input ones. To overcome this drawback, this work proposes a novel technique called Multi-HE, which consists of decomposing the input image into several sub-images and then applying the classical HE process to each one. This methodology performs a less intensive contrast enhancement, so that the output image presents a more natural look. We propose two discrepancy functions for image decomposition, yielding two new Multi-HE methods. A cost function is also used to automatically decide into how many sub-images the input image will be decomposed. Experiments show that our methods better preserve brightness and produce more natural-looking images than other HE methods.
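The decompose-then-equalize idea can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it splits the gray range at intensity quantiles (a simple stand-in for the paper's discrepancy functions and cost-based automatic split) and applies classical HE inside each sub-range, so global brightness is disturbed less than with full-range HE.

```python
import numpy as np

def multi_histogram_equalization(img, n_sub=4):
    """Minimal Multi-HE sketch: equalize each sub-histogram within its own gray range."""
    img = np.asarray(img, dtype=np.uint8)
    # Split points chosen as intensity quantiles of the image (illustrative choice).
    bounds = np.quantile(img, np.linspace(0.0, 1.0, n_sub + 1)).astype(int)
    bounds[0], bounds[-1] = 0, 255
    out = np.zeros_like(img)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        if hi <= lo:
            continue
        mask = (img >= lo) & (img <= hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / hist.sum()
        # Map each gray level of the sub-range back into [lo, hi] (classical HE, locally).
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = lut[img[mask] - lo]
    return out
```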


IEEE Transactions on Information Forensics and Security | 2015

Deep Representations for Iris, Face, and Fingerprint Spoofing Detection

David Menotti; Giovani Chiachia; Allan da Silva Pinto; William Robson Schwartz; Helio Pedrini; Alexandre X. Falcão; Anderson Rocha

Biometric systems have significantly improved person identification and authentication, playing an important role in personal, national, and global security. However, these systems might be deceived (or spoofed) and, despite recent advances in spoofing detection, current solutions often rely on domain knowledge, specific biometric reading systems, and attack types. We assume very limited knowledge about biometric spoofing at the sensor to derive outstanding spoofing detection systems for the iris, face, and fingerprint modalities based on two deep learning approaches. The first approach consists of learning suitable convolutional network architectures for each domain, whereas the second approach focuses on learning the weights of the network via backpropagation. We consider nine biometric spoofing benchmarks - each one containing real and fake samples of a given biometric modality and attack type - and learn deep representations for each benchmark by combining and contrasting the two learning approaches. This strategy not only provides a better comprehension of how these approaches interplay, but also creates systems that exceed the best known results in eight out of the nine benchmarks. The results strongly indicate that spoofing detection systems based on convolutional networks can be robust to known attacks and possibly adapted, with little effort, to image-based attacks that are yet to come.
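As an illustration of the second approach (learning the network weights via backpropagation), the sketch below trains a small convolutional network on a dummy batch. The architecture, input size, and hyperparameters are assumptions made for the example and are not the architectures optimized in the paper; PyTorch is assumed as the framework.

```python
import torch
import torch.nn as nn

class SpoofNet(nn.Module):
    """Illustrative live-vs-spoof convolutional network (not the paper's searched architecture)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(4),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # assumes 3x128x128 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Weights are learned via backpropagation on labeled real/fake samples.
model = SpoofNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 128, 128)      # dummy batch standing in for a benchmark
labels = torch.randint(0, 2, (8,))        # 0 = live, 1 = spoof
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```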


Computer Methods and Programs in Biomedicine | 2016

ECG-based heartbeat classification for arrhythmia detection

Eduardo José da S. Luz; William Robson Schwartz; Guillermo Cámara-Chávez; David Menotti

An electrocardiogram (ECG) measures the electrical activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. Over the last decades, several works have been developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey the current state-of-the-art methods for automated ECG-based heartbeat classification of abnormalities, presenting the ECG signal preprocessing, the heartbeat segmentation techniques, the feature description methods, and the learning algorithms used. In addition, we describe some of the databases used for evaluation, as indicated by the well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss limitations and drawbacks of the methods in the literature, presenting concluding remarks and future challenges, and propose an evaluation process workflow to guide authors in future works.
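The surveyed pipeline (preprocessing, heartbeat segmentation, feature description, classification) can be sketched as follows. The filter band, window length, and SVM settings are illustrative assumptions, not recommendations from the survey, and the raw beat window stands in for the richer descriptors it covers.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
from sklearn.svm import SVC

FS = 360  # sampling rate in Hz (MIT-BIH style); an assumption for this sketch

def preprocess(ecg):
    """Band-pass filter (0.5-40 Hz) to suppress baseline wander and high-frequency noise."""
    b, a = butter(3, [0.5 / (FS / 2), 40.0 / (FS / 2)], btype="band")
    return filtfilt(b, a, ecg)

def segment_beats(ecg, half_window=90):
    """Detect R-peaks and cut a fixed-size window around each one."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * FS), prominence=np.std(ecg))
    beats = [ecg[p - half_window:p + half_window]
             for p in peaks if p - half_window >= 0 and p + half_window <= len(ecg)]
    return np.array(beats)

def classify(train_beats, train_labels, test_beats):
    """Feature description here is simply the raw beat window; the classifier is an RBF SVM."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(train_beats, train_labels)
    return clf.predict(test_beats)
```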


Expert Systems With Applications | 2013

ECG arrhythmia classification based on optimum-path forest

Eduardo José da S. Luz; Thiago M. Nunes; Victor Hugo C. de Albuquerque; João Paulo Papa; David Menotti

An important tool for heart disease diagnosis is the analysis of electrocardiogram (ECG) signals, given the non-invasive nature and simplicity of the ECG exam. Depending on the application, ECG data analysis consists of steps such as preprocessing, segmentation, feature extraction, and classification, aiming to detect cardiac arrhythmias (i.e., cardiac rhythm abnormalities). Aiming at a fast and accurate cardiac arrhythmia classification process, we apply and analyze a recent and robust supervised graph-based pattern recognition technique, the optimum-path forest (OPF) classifier. To the best of our knowledge, this is the first time the OPF classifier has been applied to the ECG heartbeat classification task. We then compare the performance (in terms of training and testing time, accuracy, specificity, and sensitivity) of the OPF classifier to those of three other well-known expert system classifiers, i.e., support vector machines (SVM), Bayesian, and multilayer artificial neural network (MLP) classifiers, using features extracted by six main approaches considered in the literature for ECG arrhythmia analysis. In our experiments, we use the MIT-BIH Arrhythmia Database and the evaluation protocol recommended by the Association for the Advancement of Medical Instrumentation. A discussion of the obtained results shows that the OPF classifier presents robust performance, i.e., it requires no parameter setup, as well as high accuracy at an extremely low computational cost. Moreover, on average, the OPF classifier outperformed the MLP and SVM classifiers in terms of classification time and accuracy, and performed quite similarly to the Bayesian classifier, showing itself to be a promising technique for ECG signal analysis.
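A minimal benchmarking harness in the spirit of this comparison is sketched below, with scikit-learn stand-ins for the SVM, MLP, and Bayesian classifiers and synthetic features in place of the extracted heartbeat descriptors. An OPF implementation would be timed through the same fit/predict interface; it is omitted here to keep the sketch self-contained.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Synthetic features standing in for heartbeat descriptors.
X, y = make_classification(n_samples=2000, n_features=30, n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=500),
    "Bayesian": GaussianNB(),
    # An OPF classifier (from a third-party package) would be added here and
    # benchmarked through the same fit/predict calls.
}

for name, clf in classifiers.items():
    t0 = time.perf_counter(); clf.fit(X_tr, y_tr); train_t = time.perf_counter() - t0
    t0 = time.perf_counter(); pred = clf.predict(X_te); test_t = time.perf_counter() - t0
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"train={train_t:.3f}s test={test_t:.3f}s")
```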


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2013

Combining Multiple Classification Methods for Hyperspectral Data Interpretation

A. B. Santos; Arnaldo de Albuquerque Araújo; David Menotti

In the past few years, hyperspectral image analysis has been used for many purposes in the field of remote sensing, notably for land cover classification. Land cover classification is a challenging task, and the production of accurate thematic maps is a common goal among researchers. A hyperspectral image is composed of hundreds of spectral channels, where each channel refers to a specific wavelength. Such a large amount of information allows a deeper investigation of the materials on the Earth's surface and, thus, a more precise interpretation of them. In this work, we aim to produce more accurate thematic maps by combining multiple classification methods. Three feature representations based on spectral and spatial data and two learning algorithms (Support Vector Machines (SVM) and Multilayer Perceptron Neural Network (MLP)) were used to produce the six classification methods to be combined. Our combining approach is based on Weighted Linear Combination (WLC), in which the weights are found using a Genetic Algorithm (GA); we refer to it as WLC-GA. Experiments were carried out with two well-known datasets: Indian Pines and Pavia University. In order to evaluate the robustness of the proposed combiner, experiments using different training set sizes were conducted. They show promising results for both datasets, with our WLC-GA proposal outperforming the widely used Majority Vote (MV) and Average rules in terms of accuracy. Using only 10% of the training samples, our proposal was able to find the best weights and overcome the drawbacks of the traditional combination rules.
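The combination scheme can be sketched as a weighted linear combination of the classifiers' class-probability outputs, with the weights searched by a small genetic algorithm on validation labels. The GA operators below (truncation selection, uniform crossover, Gaussian mutation) are generic illustrations, not the exact operators used in the paper.

```python
import numpy as np

def wlc_predict(prob_stack, weights):
    """Weighted linear combination of per-classifier class-probability outputs.

    prob_stack: (n_classifiers, n_samples, n_classes) array of soft outputs.
    """
    fused = np.tensordot(weights, prob_stack, axes=(0, 0))  # (n_samples, n_classes)
    return fused.argmax(axis=1)

def ga_weights(prob_stack, y_true, pop_size=30, generations=50, seed=None):
    """Tiny GA that searches combination weights maximizing validation accuracy (sketch)."""
    rng = np.random.default_rng(seed)
    n_clf = prob_stack.shape[0]
    pop = rng.random((pop_size, n_clf))

    def fitness(w):
        return np.mean(wlc_predict(prob_stack, w / w.sum()) == y_true)

    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]                # selection: keep best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_clf) < 0.5, a, b)  # uniform crossover
            child += rng.normal(0, 0.05, n_clf)              # Gaussian mutation
            children.append(np.clip(child, 1e-6, None))
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best / best.sum()
```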


International Conference on Systems, Signals and Image Processing | 2007

A Fast Hue-Preserving Histogram Equalization Method for Color Image Enhancement using a Bayesian Framework

David Menotti; Laurent Najman; A. De Albuquerque Araujo; Jacques Facon

In this paper, we introduce a new hue-preserving histogram equalization method based on the RGB color space for image enhancement. We use the R-red, G-green, and B-blue 1D histograms to estimate the histogram to be equalized using a naive Bayes rule. The histogram equalization is performed through hue-preserving shift transformations. Our method has linear time and space complexities, which complies with the requirements of real-time applications. A subjective assessment comparing our method with three others is performed. Experiments show that our method is more robust than the others, in the sense that it produces neither unrealistic colors nor over-enhancement.
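A rough sketch of the idea: under an independence (naive-Bayes-like) assumption, the histogram of the intensity R + G + B can be estimated as the convolution of the three 1D channel histograms, equalized, and applied back as an equal per-channel shift, which keeps the hue angle unchanged up to clipping. The paper's exact shift transformations and out-of-gamut handling are not reproduced here.

```python
import numpy as np

def hue_preserving_he(rgb):
    """Hue-preserving equalization sketch (not the paper's exact algorithm)."""
    rgb = np.asarray(rgb, dtype=np.int32)
    # 1D channel histograms; their convolution estimates the histogram of
    # I = R + G + B under an independence assumption.
    hists = [np.bincount(rgb[..., c].ravel(), minlength=256) for c in range(3)]
    h_sum = np.convolve(np.convolve(hists[0], hists[1]), hists[2])  # length 766, I in [0, 765]
    cdf = np.cumsum(h_sum) / h_sum.sum()

    intensity = rgb.sum(axis=-1)
    new_intensity = np.round(cdf[intensity] * 765).astype(np.int32)
    shift = (new_intensity - intensity) / 3.0   # equal shift of R, G, B preserves hue
    out = rgb + shift[..., None]
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```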


Brazilian Symposium on Computer Graphics and Image Processing | 2009

A Semi-Automatic Method for Segmentation of the Coronary Artery Tree from Angiography

Daniel S. D. Lara; Alexandre W. C. Faria; Arnaldo de Albuquerque Araújo; David Menotti

Nowadays, medical diagnosis using images is of considerable importance in many areas of medicine, as it promotes and eases the acquisition, transmission, and analysis of medical images. The use of digital images for disease evaluation or diagnosis keeps growing, and new application modalities are constantly appearing. This paper presents a methodology for the semi-automatic segmentation of the coronary artery tree in 2D X-ray angiographies. It combines a region-growing algorithm with a differential geometry approach. The proposed segmentation method identifies about 90% of the main coronary artery tree.
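The region-growing half of the method can be illustrated with a minimal intensity-threshold flood fill from a user-supplied seed; the differential-geometry step and the 90%-coverage result are not reproduced by this sketch.

```python
import numpy as np

def region_grow(image, seed, tolerance=20):
    """Simple intensity-based region growing from a user-given seed pixel (row, col)."""
    image = np.asarray(image, dtype=np.float64)
    visited = np.zeros(image.shape, dtype=bool)
    segmented = np.zeros(image.shape, dtype=bool)
    seed_value = image[seed]
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if visited[y, x]:
            continue
        visited[y, x] = True
        if abs(image[y, x] - seed_value) <= tolerance:
            segmented[y, x] = True
            # Grow to 4-connected neighbors whose intensity is close to the seed.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                    stack.append((ny, nx))
    return segmented
```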


Expert Systems With Applications | 2014

Evaluating the use of ECG signal in low frequencies as a biometry

Eduardo José da S. Luz; David Menotti; William Robson Schwartz

Traditional strategies, such as fingerprinting and face recognition, are becoming more and more susceptible to fraud. As a consequence, new and more fraud-resistant biometric modalities have been considered, one of them being the heartbeat pattern acquired by an electrocardiogram (ECG). While methods for subject identification based on the ECG signal work with signals sampled at high frequencies (>100 Hz), the main goal of this work is to evaluate the use of the ECG signal sampled at low frequencies for that purpose. In this work, the ECG signal is sampled at low frequencies (30 Hz and 60 Hz) and represented by four feature extraction methods available in the literature, which are then fed to a Support Vector Machine (SVM) classifier to perform the identification. In addition, a classification approach based on majority voting over multiple samples per subject is employed and compared to the traditional classification based on presenting a single sample per subject at a time. Considering a database of 193 subjects, the results show identification accuracies higher than 95% and close to optimal (i.e., 100%) when the ECG signal is sampled at 30 Hz and 60 Hz, respectively, the latter being very close to the results obtained when the signal is sampled at 360 Hz (the maximum frequency available in our database). We also evaluate the impact of: (1) the number of training and testing samples for learning and identification, respectively; (2) the scalability of the biometry (i.e., increasing the number of subjects); and (3) the use of multiple samples for person identification.
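The low-frequency identification setup can be sketched as decimating each beat window to the target rate, training an SVM with one class per enrolled subject, and taking a majority vote over several probe beats. The sampling rates and classifier settings below are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from scipy.signal import decimate
from sklearn.svm import SVC

def downsample_beats(beats, fs_in=360, fs_out=30):
    """Decimate each beat window from fs_in to fs_out (e.g., 360 Hz -> 30 Hz)."""
    factor = fs_in // fs_out
    return np.array([decimate(b, factor) for b in beats])

def identify_majority(clf, subject_beats):
    """Majority vote over several beats of the same (unknown) probe subject."""
    votes = clf.predict(subject_beats)
    values, counts = np.unique(votes, return_counts=True)
    return values[counts.argmax()]

# Sketch of the identification setup: one class (label) per enrolled subject.
# train_X (low-frequency beat features) and train_y (subject ids) are assumed inputs.
# clf = SVC(kernel="rbf", gamma="scale").fit(train_X, train_y)
# predicted_subject = identify_majority(clf, probe_beats)
```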


Neural Computing and Applications | 2018

Robust automated cardiac arrhythmia detection in ECG beat signals

Victor Hugo C. de Albuquerque; Thiago M. Nunes; Danillo Roberto Pereira; Eduardo José da S. Luz; David Menotti; João Paulo Papa; João Manuel R. S. Tavares

Nowadays, millions of people are affected by heart diseases worldwide, and a considerable number of them could be aided through an electrocardiogram (ECG) trace analysis, which involves the study of arrhythmia impacts on electrocardiogram patterns. In this work, we carried out the task of automatic arrhythmia detection in ECG patterns by means of supervised machine learning techniques, the main contribution of this paper being the introduction of the optimum-path forest (OPF) classifier to this context. We compared six distance metrics, six feature extraction algorithms, and three classifiers on two variations of the same dataset, with the performance of the techniques compared in terms of effectiveness and efficiency. Although OPF revealed a greater ability to generalize, the support vector machines (SVM)-based classifier presented the highest accuracy. However, OPF proved to be more efficient than SVM in terms of computational time for both the training and test phases.
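A compact way to organize such a grid comparison is an itertools.product loop over the factors under study. The sketch below uses synthetic data and a 1-NN stand-in to vary the distance metric (the role the metric plays inside OPF); it does not reproduce the paper's actual classifiers, feature extractors, or datasets.

```python
import itertools
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic beats standing in for the two dataset variations used in the paper.
X, y = make_classification(n_samples=1500, n_features=20, n_classes=2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Grid over distance metrics and one extra factor; feature extractors and other
# classifiers would simply add further axes to the same product.
for metric, k in itertools.product(["euclidean", "manhattan", "chebyshev"], [1, 3]):
    clf = KNeighborsClassifier(n_neighbors=k, metric=metric).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"metric={metric:10s} k={k} acc={acc:.3f}")
```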


Brazilian Symposium on Computer Graphics and Image Processing | 2015

An Approach to Iris Contact Lens Detection Based on Deep Image Representations

Pedro Silva; Eduardo José da S. Luz; Rafael Baeta; Helio Pedrini; Alexandre X. Falcão; David Menotti

Spoofing detection is a challenging task in biometric systems, which must differentiate illegitimate users from genuine ones. Although iris scans are far more inclusive than fingerprints, and also more precise for person authentication, iris recognition systems are vulnerable to spoofing via textured cosmetic contact lenses. Iris spoofing detection is also referred to as liveness detection (binary classification of fake and real images). In this work, we focus on a three-class detection problem: images with textured (colored) contact lenses, soft contact lenses, and no lenses. Our approach uses a convolutional network to build a deep image representation and an additional single fully connected layer with softmax regression for classification. Experiments are conducted in comparison with a state-of-the-art approach (SOTA) on two public iris image databases for contact lens detection: 2013 Notre Dame and IIIT-Delhi. Our approach can achieve a 30% performance gain over SOTA on the former database (from 80% to 86%) and comparable results on the latter. Since IIIT-Delhi does not provide segmented iris images and, unlike SOTA, our approach does not yet segment the iris, we consider these very promising results.
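The classification head described here (a single fully connected layer with softmax regression on top of deep representations) can be sketched as follows. The feature dimension and training schedule are assumptions, the features are random placeholders rather than outputs of the paper's convolutional network, and PyTorch is assumed as the framework.

```python
import torch
import torch.nn as nn

# Softmax regression head for the three-class setting (textured / soft / no lens).
N_CLASSES, FEAT_DIM = 3, 4096          # feature dimension is an assumption
head = nn.Linear(FEAT_DIM, N_CLASSES)
criterion = nn.CrossEntropyLoss()      # applies log-softmax + NLL internally
optimizer = torch.optim.SGD(head.parameters(), lr=1e-2)

deep_features = torch.randn(16, FEAT_DIM)     # dummy batch of deep representations
labels = torch.randint(0, N_CLASSES, (16,))   # 0 = textured, 1 = soft, 2 = no lens
for _ in range(100):                          # softmax-regression training loop
    optimizer.zero_grad()
    loss = criterion(head(deep_features), labels)
    loss.backward()
    optimizer.step()
probabilities = torch.softmax(head(deep_features), dim=1)
```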

Collaboration


Dive into David Menotti's collaborations.

Top Co-Authors

Eduardo José da S. Luz (Universidade Federal de Ouro Preto)
William Robson Schwartz (Universidade Federal de Minas Gerais)
Gladston J. P. Moreira (Universidade Federal de Ouro Preto)
Arnaldo de Albuquerque Araújo (Universidade Federal de Minas Gerais)
Alexandre W. C. Faria (Universidade Federal de Minas Gerais)
Daniel S. D. Lara (Universidade Federal de Minas Gerais)
A. B. Santos (Universidade Federal de Minas Gerais)
A. De Albuquerque Araujo (Universidade Federal de Minas Gerais)
Gabriel Resende Gonçalves (Universidade Federal de Minas Gerais)