Pedro de Azevedo Berger
University of Brasília
Publications
Featured research published by Pedro de Azevedo Berger.
Physiological Measurement | 2006
Pedro de Azevedo Berger; Francisco Assis de Oliveira Nascimento; Jake do Carmo; Adson Ferreira da Rocha
This paper presents a hybrid adaptive algorithm for the compression of surface electromyographic (S-EMG) signals recorded during isometric and/or isotonic contractions. The technique is useful for minimizing data storage and transmission requirements in applications where multiple channels of high-bandwidth data are digitized, such as telemedicine. The proposed compression algorithm uses a discrete wavelet transform for spectral decomposition and an intelligent dynamic bit allocation scheme implemented with a Kohonen layer, which improves the bit allocation for sections of the S-EMG with different characteristics. Finally, data and overhead information are packed by entropy coding. For isometric EMG signals, the algorithm outperformed standard wavelet compression algorithms from the literature (a decrease of at least 5% in percent residual difference (PRD) at the same compression ratio) and performed comparably to algorithms based on the embedded zerotree wavelet. For isotonic EMG signals, its performance surpassed that of the embedded-zerotree-based algorithms (a decrease in PRD of about 3.6% at the same compression ratios, within the useful compression range).
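For illustration, the sketch below shows the general shape of such a wavelet-domain compression loop and the PRD metric used in the evaluation. It assumes the PyWavelets package; the Kohonen-layer bit allocation and the entropy coding stage of the paper are replaced by a plain uniform quantizer, so this is a simplified stand-in, not the published algorithm.

```python
# Simplified wavelet compression sketch; NOT the paper's algorithm:
# the Kohonen-layer bit allocation is replaced by uniform quantization.
import numpy as np
import pywt

def compress_reconstruct(emg, wavelet="db4", level=5, n_bits=6):
    """Decompose, coarsely quantize, and reconstruct one EMG window."""
    coeffs = pywt.wavedec(emg, wavelet, level=level)
    quantized = []
    for c in coeffs:
        step = (np.max(np.abs(c)) + 1e-12) / (2 ** (n_bits - 1))
        quantized.append(np.round(c / step) * step)  # uniform quantizer
    return pywt.waverec(quantized, wavelet)[: len(emg)]

def prd(original, reconstructed):
    """Percent residual difference used to assess distortion."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

emg = np.random.randn(2048)   # placeholder for a real S-EMG window
rec = compress_reconstruct(emg)
print(f"PRD = {prd(emg, rec):.2f}%")
```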
Biomedical Engineering Online | 2010
Salvador A. Melo; Bruno Macchiavello; Marcelino Monteiro de Andrade; João L. A. Carvalho; Hervaldo Sampaio Carvalho; Daniel França Vasconcelos; Pedro de Azevedo Berger; Adson Ferreira da Rocha; Francisco A. O. Nascimento
Background: Two-dimensional echocardiography (2D-echo) allows the evaluation of cardiac structures and their movements. A wide range of clinical diagnoses are based on the performance of the left ventricle. The evaluation of myocardial function is typically performed by manual segmentation of the ventricular cavity in a series of dynamic images. This process is laborious and operator dependent. The automatic segmentation of the left ventricle in 4-chamber long-axis images during diastole is troublesome, because of the opening of the mitral valve.
Methods: This work presents a method for segmentation of the left ventricle in dynamic 2D-echo 4-chamber long-axis images over the complete cardiac cycle. The proposed algorithm is based on classic image processing techniques, including time averaging and wavelet-based denoising, edge enhancement filtering, morphological operations, homotopy modification, and watershed segmentation. The method is semi-automatic, requiring a single user intervention to identify the position of the mitral valve in the first temporal frame of the video sequence. Segmentation is performed on a set of dynamic 2D-echo images collected from an examination covering two consecutive cardiac cycles.
Results: The proposed method is demonstrated and evaluated on twelve healthy volunteers. The results are evaluated quantitatively using four different metrics, in comparison with contours manually segmented by a specialist and with four alternative methods from the literature. The method's intra- and inter-operator variabilities are also evaluated.
Conclusions: The proposed method allows the automatic construction of the left ventricle's area variation curve over a complete cardiac cycle. This may potentially be used to identify several clinical parameters, including the area variation fraction, which could in turn be used to evaluate the global systolic function of the left ventricle.
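A condensed sketch of this kind of marker-based watershed pipeline, using scikit-image, is given below. The homotopy modification and the mitral-valve handling of the paper are omitted, and the seed placement is an illustrative assumption.

```python
# Condensed marker-based watershed sketch; the paper's homotopy
# modification and mitral-valve step are omitted for brevity.
import numpy as np
from skimage import filters, morphology, segmentation, restoration

def segment_frame(frame, inside_seed, outside_seed):
    """frame: 2-D grayscale echo image; seeds: (row, col) tuples."""
    denoised = restoration.denoise_wavelet(frame)       # wavelet denoising
    edges = filters.sobel(denoised)                     # edge enhancement
    edges = morphology.closing(edges, morphology.disk(3))
    markers = np.zeros(frame.shape, dtype=np.int32)
    markers[inside_seed] = 1                            # ventricular cavity
    markers[outside_seed] = 2                           # surrounding tissue
    labels = segmentation.watershed(edges, markers)     # watershed on gradient
    return labels == 1                                  # binary cavity mask
```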
International Conference of the IEEE Engineering in Medicine and Biology Society | 2008
Marcus Vinícius Chaffim Costa; Pedro de Azevedo Berger; Adson Ferreira da Rocha; João Luiz Azevedo de Carvalho; Francisco Assis de Oliveira Nascimento
Despite the growing interest in the transmission and storage of electromyographic signals over long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals from both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75% to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, it provided compression factors ranging from 75% to 90%, with an average PRD ranging from 3.4% to 7%. The compression results obtained with the JPEG2000 algorithm were compared to those of other wavelet-transform-based algorithms.
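The core trick, arranging a 1-D signal as a 2-D image so that a still-image codec can exploit inter-window redundancy, can be sketched as follows. The example assumes Pillow built with OpenJPEG support; the window length and target rate are illustrative choices, not the paper's settings.

```python
# 1-D-to-2-D arrangement for still-image compression of EMG;
# window length and compression rate are illustrative assumptions.
import numpy as np
from PIL import Image

def emg_to_image(emg, window=512):
    n = (len(emg) // window) * window
    mat = emg[:n].reshape(-1, window)                       # one window per row
    lo, hi = mat.min(), mat.max()
    return ((mat - lo) / (hi - lo) * 255).astype(np.uint8)  # scale to 8 bits

emg = np.random.randn(512 * 64)          # placeholder for a real recording
img = Image.fromarray(emg_to_image(emg))
img.save("emg.jp2", quality_mode="rates", quality_layers=[10])  # ~10:1 rate
```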
International Conference of the IEEE Engineering in Medicine and Biology Society | 2009
Marcus Vinícius Chaffim Costa; João Luiz Azevedo de Carvalho; Pedro de Azevedo Berger; Alexandre Zaghetto; Adson Ferreira da Rocha; Francisco Assis de Oliveira Nascimento
We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (a video compression algorithm operating in intra-frame mode) can both be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of S-EMG signals from both isotonic and isometric contractions is evaluated, and the proposed methods are compared with other S-EMG compression algorithms from the literature.
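The correlation-sorting idea can be sketched as a greedy reordering of the windows so that adjacent rows of the 2-D arrangement are maximally correlated, which favors the codecs' intra-frame prediction. The greedy nearest-neighbor ordering below is an illustrative choice, not necessarily the paper's exact procedure.

```python
# Greedy correlation sorting of EMG windows (rows of the 2-D matrix);
# an illustrative stand-in for the paper's preprocessing step.
import numpy as np

def correlation_sort(mat):
    corr = np.corrcoef(mat)                  # row-by-row correlation matrix
    remaining = set(range(1, mat.shape[0]))
    order = [0]                              # start from the first window
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda r: corr[last, r])
        order.append(nxt)
        remaining.remove(nxt)
    return mat[order], order                 # keep order to undo the sort
```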
Southern Conference on Programmable Logic | 2011
Jones Y. Mori; Camilo Sánchez-Ferreira; Daniel M. Muñoz; Carlos H. Llanos; Pedro de Azevedo Berger
The market and the academic community increasingly demand image and video processing applications subject to real-time constraints. To support the rapid development of real-time image processing systems, this paper proposes a unified hardware architecture for spatial-domain image filtering algorithms, such as windowing-based operations, implemented on FPGAs (Field Programmable Gate Arrays). To this end, six different filters were implemented in parallel, decomposed into simple hardware structures, allowing the algorithms to exploit their inherent parallelism through a simple systolic architecture. In this system, all implemented filters run in parallel, and the user can select which output to show on a display. Both the image processing results and the synthesis results demonstrate the feasibility of implementing the proposed filtering algorithms on FPGAs in a fully parallel fashion.
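A software reference model for such a parallel filter bank is sketched below: every filter processes the same pixel neighborhood, mirroring the systolic structure in which all filters run concurrently. The kernels shown are standard examples, not necessarily the six filters implemented in the paper.

```python
# Software reference model of a parallel windowing-based filter bank;
# the kernels are standard examples, not the paper's exact filters.
import numpy as np
from scipy.ndimage import convolve

KERNELS = {
    "smooth":  np.full((3, 3), 1 / 9),
    "sobel_x": np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    "laplace": np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]]),
}

def filter_bank(image):
    """Apply every kernel to the same image, mimicking parallel outputs."""
    return {name: convolve(image.astype(float), k)
            for name, k in KERNELS.items()}
```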
Symposium on Integrated Circuits and Systems Design | 2012
Jones Yudi Mori; Carlos H. Llanos; Pedro de Azevedo Berger
Aiming to improve design trade-offs in image processing architectures, this work presents a kernel analysis for convolution-based image filtering. Several well-known filter kernels were analyzed to identify symmetries, enabling alternative architectures that reduce power consumption and/or FPGA resource usage while maintaining or improving overall system throughput. The separable kernel technique is also analyzed, and two architectures were developed and tested. Additionally, a technique based on overlapping kernels has been developed, analyzed, and tested. All architectures were implemented and synthesized using the Altera Quartus II EDA software and prototyped on four real-time image processing platforms. These platforms comprise a CMOS camera, four Terasic FPGA development kits (with Altera devices), and an LCD. Synthesis, simulation, and real-time results show the suitability of such architectures for this kind of design trade-off.
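The separable-kernel technique can be illustrated as follows: a rank-1 kernel factors into a column and a row vector, so an N x N 2-D convolution becomes two 1-D passes, costing 2N multiplies per pixel instead of N squared. The 3x3 Gaussian below is a standard separable example, not necessarily one of the kernels analyzed in the paper.

```python
# Separable-kernel demonstration: the 3x3 Gaussian is rank 1, so the
# 2-D convolution equals two 1-D passes with fewer multiplies per pixel.
import numpy as np
from scipy.ndimage import convolve, convolve1d

g2d = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0  # rank-1 kernel
col = np.array([1, 2, 1]) / 4.0                           # g2d = outer(col, row)
row = np.array([1, 2, 1]) / 4.0

image = np.random.rand(64, 64)
direct = convolve(image, g2d)
separable = convolve1d(convolve1d(image, col, axis=0), row, axis=1)
print(np.allclose(direct, separable))   # True: same result, fewer multiplies
```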
Symposium on Integrated Circuits and Systems Design | 2012
Renato Coral Sampaio; Pedro de Azevedo Berger; Ricardo P. Jacobi
This paper presents a hardware/software (HW/SW) co-design of an AAC-LC audio decoder implemented on an FPGA. The complexity of each decoding step is analyzed, and the decoding modules are classified by their computational requirements. The result is a balanced design in which software modules running on a processor handle the various AAC input formats (standard MP4 files and LATM/LOAS streams) as well as the bitstream parser, while hardware modules handle the computation-intensive parts of the algorithm (Huffman decoding, spectral tools, filterbank). The integrated design is implemented on an Altera Cyclone II FPGA with a NIOS II/s processor and was able to decode 5.1 (6-channel) audio wave files running at 50 MHz, whereas other FPGA designs reported in the literature decode only 2 channels at half that frequency.
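The partitioning logic can be caricatured as ranking the decoding stages by measured computational load and moving the heaviest ones to hardware. The load shares and threshold below are made-up placeholders for illustration, not figures from the paper.

```python
# Toy profiling-driven HW/SW partition; all numbers are hypothetical.
PROFILE = {                    # stage -> share of decoding time (made up)
    "bitstream_parser": 0.05,
    "huffman_decoding": 0.20,
    "spectral_tools":   0.25,
    "filterbank":       0.45,
    "output_pcm":       0.05,
}

THRESHOLD = 0.15               # stages above this share go to hardware
partition = {stage: ("HW" if share > THRESHOLD else "SW")
             for stage, share in PROFILE.items()}
print(partition)
```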
The International Journal of Forensic Computer Science | 2008
Frederico Quadros D’Almeida; Francisco Assis de Oliveira Nascimento; Pedro de Azevedo Berger; Lúcio Martins da Silva
Multiconditional Modeling is widely used to create noise-robust speaker recognition systems. However, the approach is computationally intensive. An alternative is to optimize the training condition set in order to achieve maximum noise robustness while using the smallest possible number of noise conditions during training. This paper establishes the optimal conditions for a noise-robust training model by considering audio material at different sampling rates and with different coding methods. Our results demonstrate that using approximately four training noise conditions is sufficient to guarantee robust models in the 60 dB to 10 dB Signal-to-Noise Ratio (SNR) range.
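Building such a multicondition training set amounts to mixing each clean training utterance with noise at a handful of target SNRs. A minimal sketch follows; the four SNR levels are an illustrative choice consistent with the finding that about four conditions suffice.

```python
# Minimal multicondition training-set construction; the SNR levels
# are illustrative, spanning the 60 dB to 10 dB range studied.
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR in dB."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + gain * noise

snr_conditions = [60, 40, 25, 10]          # dB
clean = np.random.randn(16000)             # placeholder utterance
noise = np.random.randn(16000)             # placeholder noise recording
training_set = [mix_at_snr(clean, noise, s) for s in snr_conditions]
```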
Biomedical Engineering Online | 2013
Guillermo A Camacho; Carlos H. Llanos; Pedro de Azevedo Berger; Cristiano Jacques Miosso; Adson Ferreira da Rocha
Background: The information in electromyographic signals can be used by myoelectric control systems (MCSs) to actuate prostheses. These devices allow the performance of movements that cannot be carried out by persons with amputated limbs. The state of the art in the development of MCSs is based on the use of individual principal component analysis (iPCA) as a pre-processing stage for the classifiers. The iPCA pre-processing implies an optimization stage which has not yet been deeply explored.
Methods: The present study considers two factors in the iPCA stage: A (the fitness function) and B (the search algorithm). Factor A comprises two levels, A1 (the classification error) and A2 (the correlation factor). Factor B has four levels: B1 (Sequential Forward Selection, SFS), B2 (Sequential Floating Forward Selection, SFFS), B3 (Artificial Bee Colony, ABC), and B4 (Particle Swarm Optimization, PSO). This work evaluates the effect of each of the eight possible combinations of the A and B factors on the classification error of the MCS.
Results: A two-factor ANOVA performed on the computed classification errors determined that: (1) the interaction effects on the classification error are not significant (F(0.01; 3, 72) = 4.0659 > f_AB = 0.09); (2) the levels of factor A have significant effects on the classification error (F(0.02; 1, 72) = 5.0162 < f_A = 6.56); and (3) the effects of the levels of factor B on the classification error are not significant (F(0.01; 3, 72) = 4.0659 > f_B = 0.08).
Conclusions: In terms of classification performance, we found that level A2 is superior in combination with any of the levels of factor B. With respect to time performance, the analysis suggests that the PSO algorithm is at least 14 percent faster than its best competitor. The latter behavior was observed for a particular parameter configuration of the search algorithms. Future work will investigate the effect of these parameters on classification performance, including the length of the reduced-size vector, the number of particles and bees used during the optimal search, the cognitive parameters of the PSO algorithm, and the limit of cycles to improve a solution in the ABC algorithm.
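As a rough sketch of one cell of this experimental design, the example below runs Sequential Forward Selection (level B1) driven by the classification error (level A1) over PCA components. It assumes scikit-learn; the classifier, cross-validation setup, and component budget are illustrative choices, not those of the paper.

```python
# Sequential Forward Selection of PCA components with cross-validated
# classification error as the fitness; classifier choice is illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def sfs_pca(X, y, n_select=8):
    scores = PCA().fit_transform(X)          # all principal components
    chosen = []
    candidates = set(range(scores.shape[1]))
    for _ in range(n_select):
        def error(j):                        # fitness: CV classification error
            acc = cross_val_score(LinearDiscriminantAnalysis(),
                                  scores[:, chosen + [j]], y, cv=5).mean()
            return 1.0 - acc
        best = min(candidates, key=error)
        chosen.append(best)
        candidates.remove(best)
    return chosen

X = np.random.randn(120, 24)             # placeholder feature matrix
y = np.random.randint(0, 4, size=120)    # placeholder movement labels
print(sfs_pca(X, y, n_select=4))
```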
The International Journal of Forensic Computer Science | 2009
Frederico Quadros D’Almeida; Francisco Assis de Oliveira Nascimento; Pedro de Azevedo Berger; Lúcio Martins da Silva
Gaussian Mixture Models (GMMs) are the most widely used technique for voice modeling in automatic speaker recognition systems. In this paper, we introduce a variation of the traditional GMM approach that uses models of variable complexity (resolution). Termed multi-resolution GMMs (MR-GMMs), this new approach yields more than a 50% reduction in the computational cost of speaker identification compared to the traditional GMM approach. We also explore the noise robustness of the new method by investigating MR-GMM performance under noisy audio conditions in a series of practical identification tests.
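The variable-resolution idea can be sketched as a coarse-to-fine search: a low-order GMM per speaker cheaply prunes the candidate list, and only the survivors are scored with the full-order model. The pruning scheme and component counts below are assumptions for illustration, not the paper's exact procedure.

```python
# Coarse-to-fine speaker identification with two GMM resolutions per
# speaker; component counts and shortlist size are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_mr_models(features_by_speaker, coarse=4, fine=32):
    models = {}
    for spk, feats in features_by_speaker.items():
        models[spk] = (GaussianMixture(coarse).fit(feats),   # cheap model
                       GaussianMixture(fine).fit(feats))     # full model
    return models

def identify(models, test_feats, shortlist=3):
    coarse_ll = {s: m[0].score(test_feats) for s, m in models.items()}
    top = sorted(coarse_ll, key=coarse_ll.get, reverse=True)[:shortlist]
    return max(top, key=lambda s: models[s][1].score(test_feats))
```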