Maria Cláudia Cavalcanti
Instituto Militar de Engenharia
Publications
Featured research published by Maria Cláudia Cavalcanti.
IEEE Transactions on Industrial Electronics | 2009
Fabricio Bradaschia; Maria Cláudia Cavalcanti; Francisco de Assis das Neves; H.E.P. de Souza
This paper presents a modulation technique based on the generalized pulsewidth-modulation strategy for matrix converters. The proposed technique uses a discontinuous modulation to clamp each output leg of the converter during 120° of the output voltage period, achieving a reduced number of switchings compared with traditional modulation techniques. Beyond that, the main attraction of the proposed technique is an additional algorithm that lags the clamping of each output leg so that it coincides with the peak of the corresponding output current (load current), avoiding switching at high currents and the associated losses. The technique therefore reduces the number of switchings and guarantees that switching occurs only at medium and low currents. Simulation and experimental results show the effectiveness of the proposed technique.
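As a rough illustration of the clamping idea, the sketch below (Python with NumPy) tests whether a leg should be clamped at a given electrical angle. It assumes the 120° of clamping per period is split into two 60° windows centred on the positive and negative peaks of the load current, which lag the voltage peaks by the load displacement angle φ; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def is_clamped(theta_v, phi, half_window=np.pi / 6):
    """Illustrative clamping test for one output leg.

    theta_v     : electrical angle of the leg's reference voltage (rad)
    phi         : load displacement angle (current lags voltage by phi)
    half_window : half-width of each clamping window; pi/6 gives a 60 deg
                  window around each current peak, i.e. 120 deg of
                  clamping per fundamental period in total (assumption)

    The windows are centred on the positive and negative peaks of the
    output current (theta_v = phi and theta_v = phi + pi), so the leg
    is not switched while it carries its highest current.
    """
    # angular distance to the nearest positive / negative current peak
    d_pos = np.angle(np.exp(1j * (theta_v - phi)))
    d_neg = np.angle(np.exp(1j * (theta_v - phi - np.pi)))
    return min(abs(d_pos), abs(d_neg)) <= half_window

# With a 30 deg lagging load, the clamping windows lag the voltage
# peaks by 30 deg so that they coincide with the current peaks.
for deg in (0, 30, 90, 210, 330):
    print(deg, is_clamped(np.radians(deg), np.radians(30)))
```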
IEEE Transactions on Industrial Electronics | 2010
Francisco A. S. Neves; H.E.P. de Souza; Fabricio Bradaschia; Maria Cláudia Cavalcanti; Mario Rizo; Francisco Rodríguez
In this paper, a space-vector discrete-time Fourier transform is proposed for fast and precise detection of the fundamental-frequency and harmonic positive- and negative-sequence vector components of three-phase input signals. The discrete Fourier transform is applied to the three-phase signals represented by Clarke's αβ vector. It is shown that the complex numbers output by the Fourier transform are the instantaneous values of the positive- and negative-sequence harmonic component vectors of the input three-phase signals. The method allows the computation of any desired positive- or negative-sequence fundamental-frequency or harmonic vector component of the input signal. A recursive algorithm for low-effort online implementation is also presented. The detection performance for variable-frequency and interharmonic input signals is discussed. The performance of the proposed method is compared with that of other commonly used methods through simulations and experiments.
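A minimal numerical sketch of the underlying idea, assuming a fixed sampling rate with N samples per fundamental period: the three-phase samples are mapped to the complex Clarke vector and a one-period DFT at harmonic order ±h extracts the corresponding positive- or negative-sequence vector. The recursive low-effort implementation described in the paper is not reproduced here.

```python
import numpy as np

def clarke_vector(va, vb, vc):
    """Clarke alpha-beta vector of three-phase samples, as alpha + j*beta."""
    alpha = (2.0 * va - vb - vc) / 3.0
    beta = (vb - vc) / np.sqrt(3.0)
    return alpha + 1j * beta

def sequence_vector(s, h, N):
    """Instantaneous h-th sequence component of the complex vector s.

    s : array containing at least the last N samples of the Clarke vector
    h : +1 for the fundamental positive sequence, -1 for the negative
        sequence, +/-k for the k-th harmonic sequences
    N : samples per fundamental period

    A one-period DFT isolates the vector rotating at h times the
    fundamental frequency; multiplying by exp(j*2*pi*h*(N-1)/N) rotates
    the coefficient to the instant of the newest sample.
    """
    n = np.arange(N)
    coeff = np.sum(s[-N:] * np.exp(-2j * np.pi * h * n / N)) / N
    return coeff * np.exp(2j * np.pi * h * (N - 1) / N)

# Synthetic test: 1 pu positive sequence plus 0.2 pu negative sequence.
N = 128
t = np.arange(4 * N) / N                      # time in fundamental periods
va = np.cos(2 * np.pi * t) + 0.2 * np.cos(2 * np.pi * t)
vb = np.cos(2 * np.pi * t - 2 * np.pi / 3) + 0.2 * np.cos(2 * np.pi * t + 2 * np.pi / 3)
vc = np.cos(2 * np.pi * t + 2 * np.pi / 3) + 0.2 * np.cos(2 * np.pi * t - 2 * np.pi / 3)
s = clarke_vector(va, vb, vc)
print(abs(sequence_vector(s, +1, N)))  # ~1.0
print(abs(sequence_vector(s, -1, N)))  # ~0.2
```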
data and knowledge engineering | 2005
Maria Cláudia Cavalcanti; Rafael Targino; Fernanda Araujo Baião; Shaila C. Rössle; Paulo Mascarello Bisch; Paulo F. Pires; Maria Luiza Machado Campos; Marta Mattoso
In silico scientific experiments encompass multiple combinations of program and data resources. Each resource combination in an execution flow is called a scientific workflow. In bioinformatics environments, program composition is a frequent operation that requires complex management. A scientist faces many challenges when building an experiment: finding the right program to use, tuning the adequate parameters, managing input/output data, and building and reusing workflows. Typically, these workflows are implemented using script languages because of their simplicity, despite their specificity and the difficulty of reusing them. In contrast, Web service technology was specially conceived to encapsulate and combine programs and data, providing interoperation between applications from different platforms. The Web services approach is superior to scripts with regard to interoperability, scalability and flexibility. We have combined metadata support with Web services within a framework that supports scientific workflows. While most works focus on metadata to manage and integrate heterogeneous scientific data sources, in this work we concentrate on metadata support for program management within workflows. We have used this framework with a real structural genomics workflow, showing its viability and demonstrating its advantages.
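As a hedged sketch of what program metadata in such a framework might look like (the field names and the two-step chain below are hypothetical, not the schema used by the authors), each workflow step could be described by a small record and checked for compatibility before execution:

```python
from dataclasses import dataclass, field

@dataclass
class ProgramMetadata:
    """Illustrative metadata record for a program exposed as a Web service."""
    name: str
    endpoint: str            # URL of the wrapping Web service (hypothetical)
    input_format: str        # e.g. "FASTA"
    output_format: str       # e.g. "PDB"
    parameters: dict = field(default_factory=dict)

def validate_workflow(steps):
    """Check that the output format of each step feeds the next one."""
    for current, nxt in zip(steps, steps[1:]):
        if current.output_format != nxt.input_format:
            raise ValueError(
                f"{current.name} produces {current.output_format}, "
                f"but {nxt.name} expects {nxt.input_format}"
            )

# Hypothetical two-step structural-genomics chain.
steps = [
    ProgramMetadata("homology_search", "http://example.org/ws/search", "FASTA", "ALIGNMENT"),
    ProgramMetadata("model_builder", "http://example.org/ws/model", "ALIGNMENT", "PDB"),
]
validate_workflow(steps)  # raises if the chain is inconsistent
```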
Bioinformatics | 2005
Alberto M. R. Dávila; Daniel Macedo Lorenzini; Pablo N. Mendes; Thiago S. Satake; Gabriel R. Sousa; Linair Maria Campos; Camila J. Mazzoni; Glauber Wagner; Paulo F. Pires; Edmundo C. Grisard; Maria Cláudia Cavalcanti; Maria Luiza Machado Campos
Summary: The growth of genome data and analysis possibilities has brought new levels of difficulty for scientists trying to understand, integrate and deal with this ever-increasing information. In this scenario, GARSA has been conceived to facilitate the tasks of integrating, analyzing and presenting genomic information from several bioinformatics tools and genomic databases in a flexible way. GARSA is a user-friendly web-based system designed to analyze genomic data in the context of a pipeline. EST and GSS data can be analyzed using the system, since it accepts (1) chromatograms, (2) sequences downloaded from GenBank, (3) Fasta files stored locally, or (4) a combination of all three. Quality evaluation of chromatograms, vector removal and clustering are easily performed as part of the pipeline. A number of local and customizable BLAST and CDD analyses can be performed, as well as InterPro, complemented with phylogenetic analyses. GARSA is being used for the analyses of Trypanosoma vivax (GSS and EST), Trypanosoma rangeli (GSS, EST and ORESTES), Bothrops jararaca (EST), Piaractus mesopotamicus (EST) and Lutzomyia longipalpis (EST).
Availability: The GARSA system is freely available under the GPL license (http://www.biowebdb.org/garsa/). For download requests, visit http://www.biowebdb.org/garsa/ or contact Dr Alberto Dávila.
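The outline below sketches the kind of EST/GSS pipeline that GARSA automates; the function names and bodies are placeholders for illustration only, not GARSA's actual code or API.

```python
# Illustrative outline of an EST/GSS analysis pipeline of the kind GARSA
# automates; every function here is a placeholder.

def load_sequences(chromatograms=None, genbank_ids=None, fasta_files=None):
    """Gather input sequences from any combination of the three sources."""
    sequences = []
    # placeholder: base-calling of chromatograms, GenBank download and
    # FASTA parsing would happen here
    return sequences

def quality_trim(sequences):
    """Discard low-quality reads and trim poor-quality ends (placeholder)."""
    return sequences

def remove_vector(sequences):
    """Mask or remove cloning-vector contamination (placeholder)."""
    return sequences

def cluster(sequences):
    """Group overlapping reads into clusters/contigs (placeholder)."""
    return [sequences]

def annotate(clusters):
    """Placeholder for similarity and domain searches (BLAST/CDD/InterPro-style)."""
    return {i: [] for i, _ in enumerate(clusters)}

def run_pipeline(**inputs):
    seqs = load_sequences(**inputs)
    clusters = cluster(remove_vector(quality_trim(seqs)))
    return annotate(clusters)
```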
Nucleic Acids Research | 2007
Alberto M. R. Dávila; Pablo N. Mendes; Glauber Wagner; Diogo A. Tschoeke; Rafael R. C. Cuadrat; Felipe Liberman; Luciana Matos; Thiago S. Satake; Kary A. C. S. Ocaña; Omar Triana; Sérgio Manuel Serra da Cruz; Henrique Jucá; Juliano C. Cury; Fabrício Nogueira da Silva; Guilherme A. Geronimo; Margarita Ruiz; Eduardo Ruback; Floriano P. Silva; Christian M. Probst; Edmundo Carlos Grisard; Marco Aurélio Krieger; Samuel Goldenberg; Maria Cláudia Cavalcanti; Milton Ozório Moraes; Maria Luiza Machado Campos; Marta Mattoso
ProtozoaDB (http://www.biowebdb.org/protozoadb) is being developed to initially host both genomics and post-genomics data from Plasmodium falciparum, Entamoeba histolytica, Trypanosoma brucei, T. cruzi and Leishmania major, but will hopefully host other protozoan species as more genomes are sequenced. It is based on the Genomics Unified Schema and offers a modern Web-based interface for user-friendly data visualization and exploration. This database is not intended to duplicate other similar efforts such as GeneDB, PlasmoDB, TcruziDB or even TDRtargets, but to complement them by providing further analyses with emphasis on distant similarities (HMM-based) and phylogeny-based annotations, including orthology analysis. ProtozoaDB will be progressively linked to the above-mentioned databases, focusing on performing a multi-source dynamic combination of information through advanced interoperable Web tools such as Web services. Providing Web services will also allow third-party software to retrieve and use data from ProtozoaDB in automated pipelines (workflows) or other interoperable Web technologies, promoting better information reuse and integration. We also expect ProtozoaDB to catalyze the development of local and regional bioinformatics capabilities (research and training), and therefore to promote and enhance scientific advancement in developing countries.
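A hedged sketch of how third-party software might consume such a Web service in an automated pipeline; the endpoint path, query parameters and response format below are hypothetical, not ProtozoaDB's real interface.

```python
# Sketch of a generic Web-service client of the kind a third-party
# workflow could use. The URL, parameters and fields are hypothetical.
import json
from urllib import parse, request

def fetch_records(base_url, organism, keyword):
    """Query a hypothetical record-search service and return parsed JSON."""
    query = parse.urlencode({"organism": organism, "keyword": keyword})
    with request.urlopen(f"{base_url}/search?{query}") as response:
        return json.load(response)

# In an automated workflow the call would be chained with local analyses:
# records = fetch_records("http://example.org/protozoadb-ws",
#                         "Trypanosoma cruzi", "kinase")
# for record in records:
#     run_hmm_scan(record["sequence"])   # hypothetical downstream step
```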
international conference on management of data | 2001
Luc Bouganim; Maria Cláudia Cavalcanti; Françoise Fabret; Maria Luiza Machado Campos; François Llirbat; Marta Mattoso; Rubens Nascimento Melo; Ana Maria de Carvalho Moura; Esther Pacitti; Fabio Porto; Margareth Simões; Eric Simon; Asterio Kiyoshi Tanaka; Patrick Valduriez
A very large number of data sources on environment, energy, and natural resources are available worldwide. Unfortunately, users usually face several problems when they want to search for and use relevant information. In the Ecobase project, we address these problems in the context of several environmental applications in Brazil and Europe. We propose a distributed architecture for environmental information systems (EIS) based on the Le Select middleware developed at INRIA. In this paper, we present this architecture and its capabilities, and discuss the lessons learned and open issues.
systems man and cybernetics | 2010
Herminio Camargo de Souza; Ana Maria de Carvalho Moura; Maria Cláudia Cavalcanti
Large organizations usually have difficulty dealing with the exponential growth of information. Therefore, there is a high demand for innovative solutions to deal with such growth and to integrate such information. This paper proposes a new approach, called Emergent Ontologies (EOs), toward the generation of a single organizational ontology through which it becomes possible to browse all of an organization's information. This proposal considers that an organization's information is typically distributed across peers, and that in each peer this information may be represented through a different ontology. As the peers of an organization need to exchange information, peer-to-peer mappings are created to bridge these ontologies. Based on these mappings, this paper proposes a set of heuristics, which are used to generate the EO. These heuristics have been incorporated into the OntoEmerge system, a prototype developed to facilitate the creation of an initial organizational ontology. In order to evaluate the system and the heuristics behind it, some experiments have been performed; a quantitative and qualitative analysis of these experiments is also presented in this paper. The approach yields encouraging results, which can be considered a starting point for the creation of organizational ontologies.
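One way such heuristics can collapse mapped concepts is sketched below: concepts linked by equivalence mappings across peers are merged into a single candidate concept of the emergent ontology. This is an illustrative heuristic only, not the specific set of heuristics implemented in OntoEmerge.

```python
# Illustrative merging heuristic: concepts connected by equivalence
# mappings across peer ontologies collapse into one EO concept.
from collections import defaultdict

def emergent_concepts(peer_concepts, equivalence_mappings):
    """peer_concepts: iterable of (peer, concept) pairs.
    equivalence_mappings: iterable of ((peer, concept), (peer, concept)).
    Returns groups of equivalent concepts, each a candidate EO concept."""
    parent = {c: c for c in peer_concepts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for a, b in equivalence_mappings:
        parent[find(a)] = find(b)           # union of mapped concepts

    groups = defaultdict(list)
    for c in peer_concepts:
        groups[find(c)].append(c)
    return list(groups.values())

concepts = [("peer1", "Client"), ("peer2", "Customer"), ("peer2", "Invoice")]
mappings = [(("peer1", "Client"), ("peer2", "Customer"))]
print(emergent_concepts(concepts, mappings))
# [[('peer1', 'Client'), ('peer2', 'Customer')], [('peer2', 'Invoice')]]
```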
acm symposium on applied computing | 2002
Maria Cláudia Cavalcanti; Marta Mattoso; Maria Luiza Machado Campos; François Llirbat; Eric Simon
Environmental applications have been stimulating cooperation among scientists from different disciplines. There are many examples where this cooperation takes place through the exchange of scientific resources, such as data, programs and mathematical models. Finding the right model to apply to an environmental problem is a difficult task, and this decision is usually based on previous experience. To facilitate the exchange and dissemination of information, we propose a scientific resources architecture in which scientists may share their data, programs and models. We also present a scientific publishing metamodel for describing scientific resources. The main goal of the proposed architecture is to provide scientific metadata support for effective model sharing, representing an innovative contribution to environmental applications. Scientific experiments and workflows are also considered scientific resources that need to be shared. We believe that the proposed scientific publishing metamodel could be naturally extended to include these other scientific resources.
database and expert systems applications | 2003
Maria Cláudia Cavalcanti; Fernanda Araujo Baião; Shaila C. Rössle; Paulo Mascarello Bisch; Rafael Targino; Paulo F. Pires; Maria Luiza Machado Campos; Marta Mattoso
In silico experiments encompass multiple combinations of program and data resources, which are complex to manage. Typically, script languages are used due to their ease of use, despite their specificity and difficulty of reuse. In contrast, Web service technology was specially conceived to encapsulate and combine programs and data, providing interoperability, scalability and flexibility. We have combined metadata support with Web services within a framework that supports scientific workflows. We have experimented with this framework on a real structural genomics workflow, showing its viability and demonstrating its advantages.
statistical and scientific database management | 2002
Maria Cláudia Cavalcanti; Marta Mattoso; Maria Luiza Machado Campos; Eric Simon; François Llirbat
There are many examples where cooperation among scientists takes place by exchanging scientific resources, such as data, programs and mathematical models. This is particularly true for environmental applications. Finding the right resource to apply to an environmental problem is a difficult task, and this decision is usually based on previous experience. Scientists have to cooperate in order to solve such problems. To facilitate the exchange, reuse and dissemination of information, we propose an architecture for managing distributed scientific resources. Our proposal combines a mediation-based heterogeneous distributed database system with an enhanced metadata support system for effective management of distributed scientific models and data.