Publications


Featured research published by Eduardo Cunha de Almeida.


International Conference on Data Engineering | 2015

ControVol: A framework for controlled schema evolution in NoSQL application development

Stefanie Scherzinger; Thomas Cerqueus; Eduardo Cunha de Almeida

Building scalable web applications on top of NoSQL data stores is becoming common practice. Many of these data stores can easily be accessed programmatically and do not enforce a schema. Software engineers can design the data model on the go, a flexibility that is crucial in agile software development. The typical tasks of database schema management are now handled within the application code, usually involving object mapper libraries. However, today's Integrated Development Environments (IDEs) lack proper tool support when it comes to managing the combined evolution of the application code and of the schema. Yet simple refactorings such as renaming an attribute at the source code level can cause irretrievable data loss or runtime errors once the application is serving in production. In this demo, we present ControVol, a framework for controlled schema evolution in application development against NoSQL data stores. ControVol is integrated into the IDE and statically type checks object mapper class declarations against the schema evolution history, as recorded by the code repository. ControVol is capable of warning of common yet risky cases of mismatched data and schema. ControVol is further able to suggest quick fixes by which developers can have these issues automatically resolved.
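The kind of check the abstract describes can be sketched as follows. This is an illustrative simplification, not ControVol's actual implementation: it compares the attribute sets of successive class-declaration revisions and flags removals, since a rename shows up as a removal in one step and leaves data persisted under the old name unreadable. All names here are hypothetical.

```python
# Hypothetical sketch of a schema-evolution check in the spirit of ControVol:
# compare each revision of an object-mapper class declaration against the
# next one and warn about attributes that disappear, because persisted data
# stored under the old attribute name would silently become inaccessible.

def schema_evolution_warnings(history):
    """history: list of attribute-name sets, oldest revision first."""
    warnings = []
    for old, new in zip(history, history[1:]):
        for attr in sorted(old - new):
            warnings.append(f"attribute '{attr}' removed or renamed; "
                            f"persisted data may become unreadable")
    return warnings

# A rename ("mail" -> "email") appears as a removal in one revision step:
revisions = [{"name", "mail"}, {"name", "email"}]
print(schema_evolution_warnings(revisions))
```

A real tool would additionally consult the mapper annotations and offer quick fixes (e.g. a lazy migration reading both names), which this sketch omits.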


Empirical Software Engineering | 2010

Testing peer-to-peer systems

Eduardo Cunha de Almeida; Gerson Sunyé; Yves Le Traon; Patrick Valduriez

Peer-to-peer (P2P) offers good solutions for many applications such as large data sharing and collaboration in social networks. Thus, it appears as a powerful paradigm to develop scalable distributed applications, as reflected by the increasing number of emerging projects based on this technology. However, building trustworthy P2P applications is difficult because they must be deployed on a large number of autonomous nodes, which may refuse to answer some requests and even leave the system unexpectedly. This volatility of nodes is a common behavior in P2P systems and may be interpreted as a fault during tests (i.e., a failed node). In this work, we present a framework and a methodology for testing P2P applications. The framework is based on the individual control of nodes, allowing test cases to precisely control the volatility of nodes during their execution. We validated this framework through implementation and experimentation on an open-source P2P system. The experimentation tests the behavior of the system under different conditions of volatility and shows how the tests were able to detect complex implementation problems.
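The central idea, individual control of node volatility from within the test case, can be sketched as below. This is not the authors' framework, just a minimal simulation where departures are scripted per test step rather than left to chance; all class and function names are made up for illustration.

```python
# Illustrative sketch: a test case scripts exactly when each node leaves,
# so churn becomes a controlled input of the test instead of an
# unpredictable fault.

class TestNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.alive = True

    def leave(self):  # scripted departure, simulating volatility
        self.alive = False

    def answer(self, request):
        if not self.alive:
            raise ConnectionError(f"node {self.node_id} has left")
        return f"{self.node_id}:{request}"

def run_test_case(nodes, volatility_schedule, steps=3):
    """volatility_schedule maps a step index to the node ids leaving there.
    Returns, per step, how many nodes still answered."""
    results = []
    for step in range(steps):
        for nid in volatility_schedule.get(step, []):
            nodes[nid].leave()
        alive = [n.answer("ping") for n in nodes.values() if n.alive]
        results.append(len(alive))
    return results

nodes = {i: TestNode(i) for i in range(4)}
print(run_test_case(nodes, {1: [0, 2]}))  # → [4, 2, 2]
```

An assertion on the per-step results then distinguishes expected churn from an actual failure of the system under test.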


Information & Software Technology | 2014

Model-based testing of global properties on large-scale distributed systems

Gerson Sunyé; Eduardo Cunha de Almeida; Yves Le Traon; Benoit Baudry; Jean-Marc Jézéquel

Context: Large-scale distributed systems are becoming commonplace with the large popularity of peer-to-peer and cloud computing. The increasing importance of these systems contrasts with the lack of integrated solutions to build trustworthy software. A key concern of any large-scale distributed system is the validation of global properties, which cannot be evaluated on a single node. Thus, it is necessary to gather data from distributed nodes and to aggregate these data into a global view. This turns out to be very challenging because of the system's dynamism, which imposes very frequent changes in local values that affect global properties. This implies that the global view has to be frequently updated to ensure an accurate validation of global properties.

Objective: In this paper, we present a model-based approach to define a dynamic oracle for checking global properties. Our objective is to abstract relevant aspects of such systems into models. These models are updated at runtime by monitoring the corresponding distributed system.

Method: We conduct a real-scale experimental validation to evaluate the ability of our approach to check global properties. In this validation, we apply our approach to test two open-source implementations of distributed hash tables. The experiments are deployed on two clusters of 32 nodes.

Results: The experiments reveal an important defect in one implementation and show clear performance differences between the two implementations. The defect would not have been detected without a global view of the system.

Conclusion: Testing global properties on distributed software consists of gathering data from different nodes and building a global view of the system, where properties are validated. This process requires a distributed test architecture and tools for representing and validating global properties. Model-based techniques are an expressive means for building oracles that validate global properties on distributed systems.
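The oracle idea, aggregating local observations into a global view and checking a property no single node can check alone, can be sketched as follows. This is a toy version under assumed names, not the paper's model-based tooling; the property checked here (no key owned by two DHT nodes at once) is only an example of a global property.

```python
# Minimal sketch of a global-view oracle: each node reports the keys it
# believes it holds, the reports are merged into one model, and a global
# property is evaluated on that model rather than on any single node.

def build_global_view(local_views):
    """local_views: dict node_id -> set of keys that node claims to hold."""
    view = {}
    for node, keys in local_views.items():
        for k in keys:
            view.setdefault(k, []).append(node)
    return view

def check_unique_ownership(view):
    """Global property: no key is owned by two nodes simultaneously.
    Returns the violating keys."""
    return sorted(k for k, owners in view.items() if len(owners) > 1)

local_views = {"n1": {"a", "b"}, "n2": {"b", "c"}}  # "b" is duplicated
print(check_unique_ownership(build_global_view(local_views)))  # → ['b']
```

In the paper's setting the local views change constantly, so the model would be refreshed at runtime by monitoring; this sketch shows only a single snapshot.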


International Conference on Software Testing, Verification and Validation | 2012

Peer-to-Peer Load Testing

Jorge Augusto Meira; Eduardo Cunha de Almeida; Yves Le Traon; Gerson Sunyé

Nowadays, large-scale systems are commonplace in all kinds of applications. The popularity of the web has created a new environment in which applications need to be highly scalable due to the data tsunami generated by a huge load of requests (i.e., connections and business operations). In this context, the main question is how far web applications can cope with the load generated by clients. Load testing is a technique to analyze the behavior of the system under test under both normal and heavy load conditions. In this work, we present a peer-to-peer load testing approach to isolate bottleneck problems related to centralized testing drivers and to scale up the load. Our approach was tested on a DBMS as a case study and shows satisfactory results.
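The decentralized driver idea can be sketched as below: instead of one load generator that itself becomes the bottleneck, each testing peer issues a share of the total workload in parallel. This is a hedged illustration with hypothetical names, not the paper's tool; threads stand in for distributed peers.

```python
# Illustrative sketch of peer-to-peer load generation: the total request
# budget is split across peers, each peer drives its share concurrently,
# and the per-peer results are merged afterwards.

from concurrent.futures import ThreadPoolExecutor

def peer_load(peer_id, n_requests, send):
    """One testing peer issues its share; counts successful requests."""
    return sum(1 for i in range(n_requests) if send(peer_id, i))

def distributed_load_test(n_peers, total_requests, send):
    share = total_requests // n_peers  # remainder dropped for simplicity
    with ThreadPoolExecutor(max_workers=n_peers) as pool:
        done = pool.map(lambda p: peer_load(p, share, send), range(n_peers))
    return sum(done)

# A stub target that accepts every request:
print(distributed_load_test(4, 1000, lambda peer, i: True))  # → 1000
```

In a real deployment `send` would issue connections and business operations against the system under test (the paper's case study uses a DBMS), and the peers would run on separate machines.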


International Workshop on Big Data Software Engineering | 2015

Safely managing data variety in big data software development

Thomas Cerqueus; Eduardo Cunha de Almeida; Stefanie Scherzinger

We consider the task of building Big Data software systems, offered as software-as-a-service. These applications are commonly backed by NoSQL data stores that address the proverbial Vs of Big Data processing: NoSQL data stores can handle large volumes of data, and many systems do not enforce a global schema, to account for structural variety in data. Thus, software engineers can design the data model on the go, a flexibility that is particularly crucial in agile software development. However, NoSQL data stores commonly do not yet account for the veracity of data when the structure of persisted data changes, an inevitable consequence of agile software development. In most NoSQL-based application stacks, schema evolution is handled entirely within the application code, usually involving object mapper libraries. Yet simple code refactorings, such as renaming a class attribute at the source code level, can cause data loss or runtime errors once the application has been deployed to production. We address this pain point by contributing type checking rules that we have implemented within an IDE plug-in. Our plug-in, ControVol, statically type checks the object mapper class declarations against the code release history. ControVol is thus capable of detecting common yet risky cases of mismatched data and schema, and can even suggest automatic fixes.


Proceedings of the 2013 International Workshop on Testing the Cloud | 2013

On the necessity of model checking NoSQL database schemas when building SaaS applications

Stefanie Scherzinger; Eduardo Cunha de Almeida; Felipe Ickert; Marcos Didonet Del Fabro

The design of the NoSQL schema has a direct impact on the scalability of web applications. Especially for developers with little experience in NoSQL stores, the risks inherent in poor schema design can be incalculable. Worse yet, the issues will only manifest once the application has been deployed, and the growing user base causes highly concurrent writes. In this paper, we present a model checking approach to reveal scalability bottlenecks in NoSQL schemas. Our approach draws on formal methods from tree automata theory to perform a conservative static analysis on both the schema and the expected write-behavior of users. We demonstrate the impact of schema-inherent bottlenecks for a popular NoSQL store, and show how concurrent writes can ultimately lead to a considerable share of failed transactions.
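A static check in the spirit of this paper can be sketched as follows. This is only a loose illustration of the idea, not the paper's tree-automata analysis: given a schema where entities are grouped under a common ancestor (so concurrent writes to one group are serialized), it flags groups that the expected write behavior would overload. The threshold and all names are hypothetical.

```python
# Illustrative static analysis of schema plus expected write behavior:
# sum the expected write rate per entity group and flag groups whose
# serialized writes would become a scalability bottleneck.

def bottleneck_groups(schema, writes_per_second, max_group_wps=1.0):
    """schema: dict entity -> ancestor group.
    writes_per_second: dict entity -> expected write rate.
    Returns the overloaded groups, sorted."""
    load = {}
    for entity, group in schema.items():
        load[group] = load.get(group, 0.0) + writes_per_second.get(entity, 0.0)
    return sorted(g for g, wps in load.items() if wps > max_group_wps)

schema = {"comment": "post", "like": "post", "profile": "user"}
rates = {"comment": 0.4, "like": 5.0, "profile": 0.2}
print(bottleneck_groups(schema, rates))  # → ['post']
```

The value of doing this statically, as the paper argues, is that such bottlenecks otherwise surface only in production, as failed concurrent transactions under a grown user base.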


Automated Software Engineering | 2010

PeerUnit: a framework for testing peer-to-peer systems

Eduardo Cunha de Almeida; João Eugenio Marynowski; Gerson Sunyé; Patrick Valduriez

Testing distributed systems is challenging. Peer-to-peer (P2P) systems are composed of a high number of concurrent nodes distributed across the network. The nodes are also highly volatile (i.e., free to join and leave the system at any time). In this kind of system, a great deal of control should be carried out by the test harness, including the volatility of nodes, test case deployment, and coordination. In this demonstration, we present the PeerUnit framework for testing P2P systems. The distinguishing characteristic of this framework is the individual control of nodes, allowing test cases to precisely control their volatility during execution. We validated this framework through implementation and experimentation on two popular open-source P2P systems.


International Conference on Testing Software and Systems | 2010

Efficient distributed test architectures for large-scale systems

Eduardo Cunha de Almeida; João Eugenio Marynowski; Gerson Sunyé; Yves Le Traon; Patrick Valduriez

Typical testing architectures for distributed software rely on a centralized test controller, which decomposes test cases into steps and deploys them across distributed testers. The controller also guarantees the correct execution of test steps through synchronization messages. These architectures do not scale when testing large-scale distributed systems due to the cost of synchronization management, which may increase the cost of a test and even prevent its execution. This paper presents a distributed architecture to synchronize the test execution sequence. This approach organizes the testers in a tree, where messages are exchanged between parents and children. The experimental evaluation shows that the synchronization management overhead can be reduced by several orders of magnitude. We conclude that testing architectures should scale up along with the distributed system under test.
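Why the tree organization helps can be sketched with a small calculation. In the centralized design, the controller exchanges a synchronization message with every tester per step (n-1 edges); in a tree, each tester only touches its parent and children. This sketch, with hypothetical names, counts the per-node message load for a complete tree:

```python
# Sketch of the scaling argument: organize testers in a complete tree and
# count, per synchronization step, how many tree edges the busiest node
# touches (one message sent/received per edge).

def build_tree(n_testers, fanout):
    """Return child lists for a complete tree over testers 0..n-1."""
    return {i: [c for c in range(i * fanout + 1, i * fanout + 1 + fanout)
                if c < n_testers]
            for i in range(n_testers)}

def max_messages_per_node(tree):
    """Children edges, plus one parent edge for every node but the root."""
    return max(len(children) + (1 if node != 0 else 0)
               for node, children in tree.items())

tree = build_tree(n_testers=32, fanout=2)
# A centralized controller for 32 testers would touch 31 edges; here:
print(max_messages_per_node(tree))  # → 3
```

The per-node load stays bounded by fanout + 1 as the system grows, at the cost of messages traversing O(log n) tree levels per step.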


Extending Database Technology | 2008

Action synchronization in P2P system testing

Eduardo Cunha de Almeida; Gerson Sunyé; Patrick Valduriez

Testing peer-to-peer (P2P) systems is difficult because of the high number of nodes, which can be heterogeneous and volatile. A test case may be composed of several ordered actions that may be executed on different nodes. To ensure action ordering and the correct behavior of the test case, a synchronization mechanism is required. In this paper, we propose a synchronization algorithm for executing test case actions in P2P systems. The main goal of the algorithm is to progressively dispatch the actions of a test case to a set of nodes and to ensure that all nodes have completed the execution of an action before the next one is dispatched. We validated our synchronization algorithm through implementation and experimentation on an open-source P2P system. The experimentation shows how the algorithm was able to detect implementation problems in the P2P system.
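The dispatch-and-barrier discipline the abstract describes can be sketched as below. This is a single-process illustration with invented names, not the paper's distributed algorithm: the next action is released only after every node has finished the current one, which is what guarantees the action ordering.

```python
# Minimal sketch of action synchronization: dispatch one test action at a
# time to all nodes and pass a barrier (all nodes done) before releasing
# the next action, so actions never interleave across nodes.

def execute_test_case(actions, nodes, run_action):
    """Run each action on all nodes in order; `run_action(node, action)`
    returns only when that node has completed the action."""
    log = []
    for action in actions:           # strict action ordering
        for node in nodes:           # dispatch to every participating node
            run_action(node, action)
        log.append(action)           # barrier passed: all nodes are done
    return log

trace = []
execute_test_case(["join", "put", "get"], ["n1", "n2"],
                  lambda node, action: trace.append((action, node)))
print(trace)
```

In the distributed setting the inner loop becomes asynchronous dispatch plus waiting for completion acknowledgements, but the ordering invariant is the same.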


Design, Automation, and Test in Europe | 2017

Operand size reconfiguration for big data processing in memory

Paulo C. Santos; Geraldo F. Oliveira; Diego G. Tome; Marco Antonio Zanata Alves; Eduardo Cunha de Almeida; Luigi Carro

Nowadays, applications that predominantly perform lookups over large databases are becoming more popular, with column-stores as the database system architecture of choice. For these applications, Hybrid Memory Cubes (HMCs) can provide bandwidth of up to 320 GB/s and represent the best choice to sustain the throughput for these ever-growing databases. However, even with the high available memory bandwidth and processing power, data movements through the memory hierarchy consume an unnecessary amount of time and energy, preventing peak performance. In order to accelerate database operations and reduce the energy consumption of the system, this paper presents the Reconfigurable Vector Unit (RVU), which enables massive and adaptive in-memory processing, extending the native HMC instructions and increasing their effectiveness. RVU enables the programmer to reconfigure it to perform as one large vector unit or as multiple small vector units to better adjust to the application's needs during different computation phases. Due to its adaptability, RVU is capable of achieving a performance increase of 27× on average and reducing DRAM energy consumption by 29% when compared to an x86 processor with 16 cores. Compared with the state-of-the-art mechanism capable of performing large fixed-size vector operations inside the HMC, RVU performs up to 12% better and improves energy consumption by 53%.

Collaboration


Dive into Eduardo Cunha de Almeida's collaborations.

Top Co-Authors

Yves Le Traon

University of Luxembourg

Stefanie Scherzinger

Regensburg University of Applied Sciences

Marco Antonio Zanata Alves

Universidade Federal do Rio Grande do Sul

Marcos Sfair Sunyé

Federal University of Paraná

Thomas Cerqueus

University College Dublin

Daniel Weingaertner

Federal University of Paraná