Rodrigo Fernandes de Mello
Spanish National Research Council
Publications
Featured research published by Rodrigo Fernandes de Mello.
Telecommunication Systems | 2008
Rodrigo Fernandes de Mello; Jose Augusto Andrade Filho; Luciano José Senger; Laurence T. Yang
In 2006 the Route load balancing algorithm was proposed and compared to other techniques aimed at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior, together with computer capacities and loads, to optimize scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the resource occupation of tasks. Afterwards, this information is used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
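A minimal sketch of the genetic-algorithm step described above, under a toy fitness function: each chromosome assigns every task to a machine, and fitness is the capacity-weighted load of the busiest machine. The task costs, machine capacities, and GA parameters are illustrative stand-ins, not the knowledge-base values RouteGA actually uses.

    import random

    TASK_COST = [4, 8, 2, 6, 5, 7, 3, 9]   # per-task demand (hypothetical)
    MACHINE_CAP = [10.0, 6.0, 8.0]         # machine capacities (hypothetical)

    def fitness(assignment):
        # Lower is better: load of the most occupied machine,
        # with each machine's load scaled by its capacity.
        load = [0.0] * len(MACHINE_CAP)
        for task, machine in enumerate(assignment):
            load[machine] += TASK_COST[task] / MACHINE_CAP[machine]
        return max(load)

    def evolve(generations=200, pop_size=30, mutation_rate=0.1):
        pop = [[random.randrange(len(MACHINE_CAP)) for _ in TASK_COST]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(TASK_COST))  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation_rate:        # point mutation
                    child[random.randrange(len(child))] = \
                        random.randrange(len(MACHINE_CAP))
                children.append(child)
            pop = survivors + children
        return min(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))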
ieee international conference on high performance computing data and analytics | 2006
Rodrigo Fernandes de Mello; Luciano José Senger
This paper proposes a new model to predict process execution behavior in heterogeneous multicomputing environments. The model considers process execution costs such as processing, hard disk access, message transmission, and memory allocation. A simulator of this model was developed, which helps to predict the execution behavior of processes in distributed environments under different scheduling techniques. Besides the simulator, a suite of benchmark tools was developed to parameterize the proposed model with data collected from real environments. Experiments were conducted to evaluate the proposed model using a parallel application executing on a heterogeneous system. The results show the model's ability to predict actual system performance, providing a useful model for developing and evaluating techniques for scheduling and resource allocation on heterogeneous and distributed systems.
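The cost model lends itself to a back-of-the-envelope sketch: predicted execution time as the sum of CPU, disk, network, and memory-allocation costs on a given machine. All names and figures below are hypothetical; the paper's benchmark suite would supply real measurements.

    from dataclasses import dataclass

    @dataclass
    class Machine:
        mips: float          # CPU throughput, millions of instructions/s
        disk_mbps: float     # disk bandwidth, MB/s
        net_mbps: float      # network bandwidth, MB/s
        alloc_us: float      # cost per memory allocation, microseconds

    @dataclass
    class Process:
        instructions: float  # millions of instructions
        disk_mb: float
        net_mb: float
        allocations: int

    def predict_time(p: Process, m: Machine) -> float:
        # Each resource contributes its own term to the total cost.
        return (p.instructions / m.mips
                + p.disk_mb / m.disk_mbps
                + p.net_mb / m.net_mbps
                + p.allocations * m.alloc_us * 1e-6)

    fast = Machine(mips=2000, disk_mbps=120, net_mbps=100, alloc_us=0.5)
    slow = Machine(mips=800, disk_mbps=60, net_mbps=12.5, alloc_us=0.8)
    job = Process(instructions=5000, disk_mb=300, net_mb=50, allocations=10_000)
    print(predict_time(job, fast), predict_time(job, slow))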
high performance computing and communications | 2005
Luciano José Senger; Rodrigo Fernandes de Mello; Marcos José Santana; Regina Helena Carlucci Santana; Laurence T. Yang
This paper presents a process scheduling algorithm that uses information about the capacity of the processing elements, the communication network, and the parallel applications in order to allocate resources in heterogeneous and distributed environments. The information about the applications comprises their resource usage behavior (percentage values for CPU utilization, network send, and network receive) and the predicted execution time of the tasks that make up a parallel application. The knowledge about resource usage is obtained by means of the ART-2A self-organizing artificial neural network and a specific labeling algorithm; the knowledge about execution time is obtained through instance-based learning techniques. The knowledge about application execution features, combined with information about the computing capacity of the resources available in the environment, is used as input to improve the decisions of the proposed scheduling algorithm. This algorithm uses genetic algorithm techniques to find the most appropriate subset of computing resources to support the applications. The proposed algorithm is evaluated through simulation, using a model parameterized with features obtained from a real distributed scenario. The results show that scheduling with the genetic search allows a better allocation of computing resources in environments composed of tens of computers on which the parallel applications are composed of tens of tasks.
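A minimal sketch of the subset-selection idea, again under a toy fitness function: binary chromosomes mark which machines are used, and fitness greedily spreads the application's tasks over the selected machines. Capacities and task demands stand in for the knowledge base described above.

    import random

    CAPACITY = [10.0, 4.0, 8.0, 6.0, 12.0]  # machine capacities (hypothetical)
    TASKS = [3.0, 5.0, 2.0, 7.0, 4.0, 6.0]  # task demands (hypothetical)

    def fitness(mask):
        chosen = [c for c, bit in zip(CAPACITY, mask) if bit]
        if not chosen:
            return float("inf")
        # Greedy spread of tasks over the chosen machines, largest first.
        load = [0.0] * len(chosen)
        for t in sorted(TASKS, reverse=True):
            i = min(range(len(chosen)), key=lambda j: load[j] + t / chosen[j])
            load[i] += t / chosen[i]
        return max(load)

    pop = [[random.randint(0, 1) for _ in CAPACITY] for _ in range(20)]
    for _ in range(100):
        pop.sort(key=fitness)
        pop = pop[:10] + [
            [b if random.random() > 0.1 else 1 - b  # flip bits of a survivor
             for b in random.choice(pop[:10])]
            for _ in range(10)
        ]
    best = min(pop, key=fitness)
    print(best, fitness(best))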
Cluster Computing | 2006
Rodrigo Fernandes de Mello; Luis Carlos Trevelin; Maria Stela Veludo de Paiva; Laurence T. Yang
The availability of low-cost microcomputers and the evolution of computer networks have increased the development of distributed systems. In order to achieve better process allocation in distributed environments, several load balancing algorithms have been proposed. Generally, these algorithms use the length of the CPU's process waiting queue as the information policy's load index. This paper modifies the Server-Initiated Lowest algorithm by using a load index based on resource occupation. Using this load index, the Server-Initiated Lowest algorithm is compared to the Stable Symmetrically Initiated algorithm, currently regarded as the best choice. The comparisons are made using simulations, which showed that the modified Server-Initiated Lowest algorithm had better results than the Symmetrically Initiated one.
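A minimal sketch of the contrast between the two load indices, with illustrative weights and utilization readings (the paper's actual index composition may differ):

    def queue_length_index(run_queue: int) -> float:
        # Classic load index: length of the CPU's process waiting queue.
        return float(run_queue)

    def occupation_index(cpu: float, memory: float, disk: float, net: float,
                         weights=(0.4, 0.3, 0.2, 0.1)) -> float:
        # Each argument is a utilization in [0, 1]; the result weighs them
        # into one occupation figure a Server-Initiated policy can compare.
        return sum(w * u for w, u in zip(weights, (cpu, memory, disk, net)))

    # Machine A: short queue but saturated disk; machine B: busier CPU, idle I/O.
    print(occupation_index(cpu=0.5, memory=0.4, disk=0.95, net=0.2))
    print(occupation_index(cpu=0.8, memory=0.3, disk=0.10, net=0.1))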
embedded and ubiquitous computing | 2005
Evgueni Dodonov; Rodrigo Fernandes de Mello; Laurence T. Yang
The performance of network protocols differs significantly across usage scenarios, making the protocol choice a difficult question. This motivated this work, which evaluates the TCP, UDP, and Sendfile (a POSIX-defined zero-copy TCP access technique) protocols on LAN, MAN, and WAN environments in order to find the most adequate configuration for each protocol. The protocols were evaluated with default configurations, without any application-specific optimizations.
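As a hedged illustration of the zero-copy distinction, the sketch below times a plain read()/send() loop against sendfile() over a localhost TCP connection; the payload and buffer sizes are arbitrary, and localhost only approximates LAN conditions, not the MAN and WAN scenarios of the paper.

    import os
    import socket
    import tempfile
    import threading
    import time

    FILE_SIZE = 16 * 1024 * 1024  # 16 MiB test payload (illustrative)

    def make_payload() -> str:
        f = tempfile.NamedTemporaryFile(delete=False)
        f.write(os.urandom(FILE_SIZE))
        f.close()
        return f.name

    def sink(server: socket.socket) -> None:
        conn, _ = server.accept()
        while conn.recv(1 << 16):   # drain until the sender closes
            pass
        conn.close()

    def transfer(path: str, use_sendfile: bool) -> float:
        server = socket.create_server(("127.0.0.1", 0))
        t = threading.Thread(target=sink, args=(server,))
        t.start()
        client = socket.create_connection(server.getsockname())
        start = time.perf_counter()
        with open(path, "rb") as f:
            if use_sendfile:
                client.sendfile(f)          # zero-copy path where available
            else:
                while chunk := f.read(1 << 16):
                    client.sendall(chunk)   # data copied through user space
        client.close()
        t.join()
        server.close()
        return time.perf_counter() - start

    if __name__ == "__main__":
        path = make_payload()
        for label, flag in (("read/send", False), ("sendfile", True)):
            print(f"{label}: {FILE_SIZE / transfer(path, flag) / 1e6:.1f} MB/s")
        os.unlink(path)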
international symposium on parallel and distributed processing and applications | 2006
Evgueni Dodonov; Rodrigo Fernandes de Mello; Laurence T. Yang
Distributed computing performance is usually limited by data transfer rate and access latency. Techniques such as data caching and prefetching were developed to overcome this limitation; however, such techniques require knowledge of application behavior in order to be effective. In this sense, we propose new application communication behavior discovery techniques that, by classifying and analyzing application access patterns, are able to predict future application data accesses. The proposed techniques use stochastic methods for application state change prediction and neural networks for access pattern discovery based on execution history, and are evaluated using the NAS Parallel Benchmark suite.
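A minimal sketch of the stochastic half of the proposal, assuming a first-order Markov model over observed access states (the paper's state-change model may be richer); the trace below is made up.

    from collections import Counter, defaultdict

    transitions = defaultdict(Counter)

    def observe(trace):
        # Count how often each access state is followed by each successor.
        for current, nxt in zip(trace, trace[1:]):
            transitions[current][nxt] += 1

    def predict(current):
        # Most frequent successor seen so far, or None for an unseen state.
        succ = transitions[current]
        return succ.most_common(1)[0][0] if succ else None

    observe(["read:a", "read:b", "read:c", "read:a", "read:b", "read:c"])
    print(predict("read:b"))  # -> "read:c": prefetch c before it is requested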
international conference on communications | 2006
Rodrigo Fernandes de Mello; Rafael G. Cuenca; Laurence T. Yang
Work on coverage and redundancy of wireless sensor networks motivated this paper, which proposes a technique based on genetic algorithms to organize sensors of different radii on fields. The genetic algorithm optimizes the organization of sensors, maximizing the covered and redundant area. Based on the obtained results, this paper also presents a study of coverage and redundancy behavior under random sensor failures. With these results it is possible to define the best number of sensors for a specific area.
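A minimal sketch of the objective such a genetic algorithm would optimize: estimating the covered and redundantly covered fractions of a field for a placement of different-radius sensors by sampling points. Field size, radii, and sample counts are illustrative; the GA itself would use this estimate as its fitness.

    import random

    FIELD = 100.0  # side of a square field

    def coverage(sensors, samples=20_000):
        covered = redundant = 0
        for _ in range(samples):
            x, y = random.uniform(0, FIELD), random.uniform(0, FIELD)
            hits = sum((x - sx) ** 2 + (y - sy) ** 2 <= r * r
                       for sx, sy, r in sensors)
            covered += hits >= 1
            redundant += hits >= 2  # still covered if one sensor fails
        return covered / samples, redundant / samples

    placement = [(random.uniform(0, FIELD), random.uniform(0, FIELD),
                  random.choice([8.0, 12.0, 16.0])) for _ in range(40)]
    print(coverage(placement))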
ieee international conference on high performance computing data and analytics | 2002
Rodrigo Fernandes de Mello; Maria Stela Veludo de Paiva; Luis Carlos Trevelin; Adilson Gonzaga
The search for high performance in traditional computing systems has been tied to the development of specific applications executed on parallel machines. However, the cost of this kind of system, which is high due to the hardware involved, has limited continued development in these areas, as only a small part of the community has access to such systems. Aiming to achieve high performance with low-cost hardware, this paper presents OpenTella, a protocol based on peer-to-peer models to update information on resource occupation, and an analysis of this occupation for a later migration of processes among computers of a cluster.
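A toy, in-memory sketch of the idea as described here: peers exchange resource-occupation tables so an overloaded node can pick a migration target. The gossip rule, threshold, and data structures are assumptions for illustration, not the OpenTella wire protocol.

    import random

    class Peer:
        def __init__(self, name, load):
            self.name, self.load = name, load
            self.table = {name: load}       # last known load of each peer

        def gossip(self, other):
            # Merge tables after refreshing each side's self-reported load.
            self.table[other.name] = other.load
            other.table[self.name] = self.load
            merged = {**other.table, **self.table}
            self.table = dict(merged)
            other.table = dict(merged)

        def migration_target(self, threshold=0.8):
            # Overloaded peers pick the least loaded peer they know of.
            if self.load < threshold:
                return None
            name, _ = min(self.table.items(), key=lambda kv: kv[1])
            return name if name != self.name else None

    peers = [Peer(f"n{i}", random.random()) for i in range(6)]
    for _ in range(10):                     # a few random gossip rounds
        a, b = random.sample(peers, 2)
        a.gossip(b)
    print({p.name: p.migration_target() for p in peers})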
Revista De Informática Teórica E Aplicada | 2016
Yule Vaz; Rodrigo Fernandes de Mello
Brain-Computer Interfaces (BCI) are systems that provide an alternative for people with severe or total loss of motor control to interact with the external environment. To map individual intentions into machine operations, BCI systems employ a set of stages involving the capture and preprocessing of brain signals, the extraction and selection of their most relevant features, and the classification of intentions. In this work, different approaches for extracting features from brain signals were evaluated, namely: i) Common Spectral-Spatial Patterns (CSSP); ii) Common Sparse Spectral-Spatial Patterns (CSSSP); iii) CSSP with a filter bank (FBCSSP); and, finally, iv) CSSSP with a filter bank (FBCSSSP). In common, these techniques use frequency-band filtering and space reconstruction to highlight similarities between signals. The Mutual Information-based Feature Selection (MIFS) technique was adopted to reduce the dimensionality of the extracted features, and Support Vector Machines (SVM) were then used to classify the example space. The experiments considered the BCI Competition IV-2b dataset, which contains signals produced by electrodes at positions C3, Cz, and C4, in order to identify intentions of right- and left-hand movement. From the kappa indices obtained, we conclude that the adopted feature extractors can deliver results comparable to the state of the art.
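A minimal sketch of the common core of these extractors: plain Common Spatial Patterns (CSP) on two classes of trials, followed by log-variance features and an SVM. CSSP would additionally stack a time-delayed copy of each channel before this step, and the filter-bank variants repeat it per frequency band; the data below is synthetic, shaped after the three-electrode (C3, Cz, C4) setting.

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # 40 trials per class, 3 channels, 500 samples (synthetic stand-in)
    X1 = rng.normal(size=(40, 3, 500))
    X2 = rng.normal(size=(40, 3, 500)) * np.array([1.5, 1.0, 0.7])[:, None]

    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigendecomposition: spatial filters maximizing class-1
    # variance relative to total variance (and vice versa at the other end).
    _, W = eigh(C1, C1 + C2)

    def features(trials):
        proj = np.einsum("ck,nks->ncs", W.T, trials)
        return np.log(proj.var(axis=2))   # log-variance per spatial filter

    X = np.vstack([features(X1), features(X2)])
    y = np.array([0] * len(X1) + [1] * len(X2))
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))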
International Journal of Autonomous and Adaptive Communications Systems | 2008
Adenilso da Silva Simão; Rodrigo Fernandes de Mello; Luciano José Senger; Laurence T. Yang
Regression testing applies a previously developed test case suite to new software versions. A traditional approach is the execution of all test cases, although this may be time-consuming and sometimes unnecessary, as a source code modification may affect only a subset of test cases. Some initiatives have addressed this issue; one of the most promising is the modification-based technique, which selects test cases based on whether they execute the modified parts of the program. This technique is conservative, but it often selects test cases that are not relevant. This article presents an approach to select test case subsets by using an Adaptive Resonance Theory-2A (ART-2A) self-organising neural network architecture. In this approach, test cases are summarised in feature vectors with code coverage information, which are classified by the neural network into clusters. Clusters are labelled to represent each software functionality evaluated by the coverage criterion. A new software version is then analysed to determine modified points, and the clusters representing the related functionalities are chosen. The test case subset is obtained from these clusters. Experiments were conducted to evaluate the approach using feature vectors based on all-uses and all-nodes code coverage information, compared against a modification-based technique.
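A minimal sketch of the selection idea, using a simplified vigilance-based clustering routine as a stand-in for the ART-2A network (the real architecture differs in its learning dynamics); the coverage vectors and vigilance value are illustrative.

    import numpy as np

    def art_like_cluster(vectors, vigilance=0.7):
        prototypes, labels = [], []
        for v in vectors:
            v = v / (np.linalg.norm(v) or 1.0)
            sims = [float(p @ v) for p in prototypes]
            if sims and max(sims) >= vigilance:
                k = int(np.argmax(sims))
                p = prototypes[k] + v           # move prototype toward v
                prototypes[k] = p / np.linalg.norm(p)
            else:
                prototypes.append(v)            # no resonance: new cluster
                k = len(prototypes) - 1
            labels.append(k)
        return prototypes, labels

    # Rows: test cases; columns: covered code entities (all-nodes style).
    coverage = np.array([[1, 1, 0, 0, 0],
                         [1, 1, 1, 0, 0],
                         [0, 0, 0, 1, 1],
                         [0, 0, 1, 1, 1]], dtype=float)
    protos, labels = art_like_cluster(coverage)
    modified = {2}                              # entities touched by the change
    selected = [i for i, k in enumerate(labels)
                if any(protos[k][m] > 0 for m in modified)]
    print(labels, selected)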