
Publication


Featured research published by Marcos Lordello Chaim.


Annual Software Engineering Workshop | 2009

JaBUTiService: A Web Service for Structural Testing of Java Programs

Marcelo Medeiros Eler; Andre Takeshi Endo; Paulo Cesar Masiero; Márcio Eduardo Delamaro; José Carlos Maldonado; Auri Marcelo Rizzo Vincenzi; Marcos Lordello Chaim; Delano Medeiros Beder

Web services are an emerging Service-Oriented Architecture technology for integrating applications using open standards based on XML. Integration of software engineering tools is a promising area, since companies adopt different software processes and need different tools for each activity. Software engineers could take advantage of software engineering tools available as web services and create their own workflows to integrate the required tools. In this paper, we propose the development of testing tools designed as web services and discuss the pros and cons of this idea. We developed a web service for structural testing of Java programs, called JaBUTiService, which is based on the stand-alone tool JaBUTi. We also present a usage example of this service with the support of a desktop front-end and pre-prepared scripts. A set of 62 classes of the Apache Commons BeanUtils library was used in this evaluation, and the results are discussed.
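The paper's actual service interface is not reproduced in this abstract. As a rough illustration of how a pre-prepared script might drive a structural-testing web service remotely, the sketch below issues hypothetical HTTP requests from a small Java client; the endpoint, resource names, and payloads are assumptions and not JaBUTiService's real API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

// Hypothetical client for a structural-testing web service.
// The base URL and operations below are illustrative assumptions only.
public class TestingServiceClient {
    private static final String BASE_URL = "http://example.org/testing-service"; // placeholder

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. Upload the program under test (e.g., a jar with the classes to be analyzed).
        HttpRequest upload = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/projects"))
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("beanutils.jar")))
                .build();
        String projectId = client.send(upload, HttpResponse.BodyHandlers.ofString()).body();

        // 2. Ask the service for structural coverage after the test suite has run.
        HttpRequest coverage = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/projects/" + projectId + "/coverage?criterion=all-uses"))
                .GET()
                .build();
        System.out.println(client.send(coverage, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```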


International Journal of Distributed Sensor Networks | 2013

FlexFT: A Generic Framework for Developing Fault-Tolerant Applications in the Sensor Web

Delano Medeiros Beder; Jo Ueyama; João Porto de Albuquerque; Marcos Lordello Chaim

Fault-tolerant systems are expected to operate in a variety of devices ranging from standard PCs to embedded devices. In addition, the emergence of new software technologies has required these applications to meet the needs of heterogeneous software platforms. However, the existing approaches to build fault-tolerant systems are often targeted at a particular platform and software technology. The objective of this paper is to discuss the use of FlexFT, a generic component-based framework for the construction of adaptive fault-tolerant systems that can integrate and reuse technologies and deploy them across heterogeneous devices. Furthermore, FlexFT provides a standardized and interoperable interface for sensor observations by relying upon the "Sensor Web" paradigm established by the Open Geospatial Consortium (OGC). We have implemented a Java prototype of our framework and evaluated the potential benefits by carrying out case studies and performance measurements. By implementing and deploying these case studies in standard PCs as well as in sensor nodes, we show that FlexFT can cope with the problem of a wide degree of heterogeneity with minimal resource overheads.
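FlexFT's actual interfaces are not shown in the abstract. As a generic illustration of the component-based style it describes, the sketch below defines a pluggable fault-tolerance strategy in Java; all names are invented for the example and are not FlexFT's API.

```java
// Illustrative sketch of a pluggable fault-tolerance component interface.
// These types are invented for the example; they are not FlexFT's actual API.
public interface FaultToleranceStrategy {
    <T> T execute(Task<T> task) throws Exception;
}

@FunctionalInterface
interface Task<T> {
    T run() throws Exception;
}

// One strategy component: retry the task a fixed number of times before giving up.
class RetryStrategy implements FaultToleranceStrategy {
    private final int maxAttempts;

    RetryStrategy(int maxAttempts) { this.maxAttempts = maxAttempts; }

    @Override
    public <T> T execute(Task<T> task) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return task.run();
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e; // give up after the last attempt
            }
        }
    }
}
```

Application code programs against the interface only, so a retry, replication, or checkpointing component can be selected per device; this is the style of reuse across heterogeneous platforms the paper targets, although the actual FlexFT component model may differ.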


Information Processing Letters | 2013

An efficient bitwise algorithm for intra-procedural data-flow testing coverage

Marcos Lordello Chaim; Roberto Paulo Andrioli de Araujo

Data-flow (DF) testing was introduced to achieve a more comprehensive structural evaluation of programs. It requires tests that traverse a path in which the definition of a variable and its subsequent use, i.e., a definition-use association (dua), is exercised. However, DF testing has rarely been adopted in industry because it is considered too costly by practitioners. A factor precluding broad adoption of DF testing is the cost of tracking the duas exercised by tests. Previous approaches rely on complex computations and expensive data structures to collect dua coverage. We present an algorithm that utilizes efficient bitwise operations and inexpensive data structures to track intra-procedural duas. Memory requirements are restricted to three bit vectors with as many bits as there are duas. Conservative simulations indicate that the new algorithm imposes a lower execution slowdown than previous approaches.
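The central idea is that run-time dua bookkeeping can be reduced to bitwise operations on vectors indexed by dua. Below is a minimal sketch of that idea in Java using java.util.BitSet; the per-block masks, the update rule, and the use of only two run-time vectors (instead of the three mentioned in the abstract) are simplifying assumptions, not the paper's exact algorithm.

```java
import java.util.BitSet;

// Simplified sketch of dua tracking with bit vectors, indexed by dua id.
public class DuaTracker {
    private final BitSet alive;    // duas whose definition is live (not yet redefined)
    private final BitSet covered;  // duas already exercised by the execution

    // Precomputed per basic block:
    private final BitSet[] defs;   // duas whose definition occurs in the block
    private final BitSet[] kills;  // duas whose variable is redefined in the block
    private final BitSet[] uses;   // duas whose use occurs in the block

    public DuaTracker(int numDuas, BitSet[] defs, BitSet[] kills, BitSet[] uses) {
        this.alive = new BitSet(numDuas);
        this.covered = new BitSet(numDuas);
        this.defs = defs;
        this.kills = kills;
        this.uses = uses;
    }

    // Called by the instrumentation every time basic block `block` executes.
    public void visit(int block) {
        BitSet reached = (BitSet) alive.clone();
        reached.and(uses[block]);   // duas whose def is live and whose use is here...
        covered.or(reached);        // ...are now covered

        alive.andNot(kills[block]); // redefinitions kill pending duas
        alive.or(defs[block]);      // new definitions start new pending duas
    }

    public BitSet coveredDuas() { return (BitSet) covered.clone(); }
}
```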


brazilian symposium on software engineering | 2011

Twenty-Five Years of Research in Structural and Mutation Testing

Márcio Eduardo Delamaro; Marcos Lordello Chaim; Auri Marcelo Rizzo Vincenzi; Mario Jino; José Carlos Maldonado

Research in software testing has been carried out for approximately forty years, but its importance has escalated very quickly in the last ten or fifteen years. In particular, structural and mutation testing are techniques which have received a large amount of investment in both academia and the software development industry. In this paper, we draw a historical perspective on how they appeared and how they evolved. In particular, the main contributions on structural and mutation testing from two Brazilian research groups, ICMC-USP and FEEC-UNICAMP, are described. We highlight the work produced and published in these twenty-five years in the Brazilian Symposium on Software Engineering and elsewhere, as well as its impact on the software testing community.


Secure Software Integration and Reliability Improvement | 2010

Sensitivity of Two Coverage-Based Software Reliability Models to Variations in the Operational Profile

Odair Jacinto da Silva; Adalberto Nobiato Crespo; Marcos Lordello Chaim; Mario Jino

Software in field use may be utilized by users with diverse profiles. The way software is used affects the reliability perceived by its users; that is, software reliability may not be the same for different operational profiles. Two software reliability growth models based on structural testing coverage were evaluated with respect to their sensitivity to variations in the operational profile. An experiment was performed on a real program (SPACE) with real defects, submitted to three distinct operational profiles. The distinction among the operational profiles was assessed by applying the Kolmogorov-Smirnov test. Testing coverage was measured according to the following criteria: all-nodes, all-arcs, all-uses, and all-potential-uses. The reliability measured for each operational profile was compared to the reliabilities estimated by the two models; the estimated reliabilities were obtained using the coverage measured for the four criteria. Results from the experiment show that the predictive ability of the two models is not affected by variations in the operational profile of the program.
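The abstract states that the distinction among operational profiles was assessed with the Kolmogorov-Smirnov test. As a small illustration of such a comparison, the sketch below applies the two-sample K-S test from Apache Commons Math to two made-up samples of a usage measure; the library choice and the data are assumptions, since the paper does not say how the test was computed.

```java
import org.apache.commons.math3.stat.inference.KolmogorovSmirnovTest;

// Compare two operational profiles, represented here as samples of some usage
// measure observed under each profile. The values are invented for illustration;
// only the use of the two-sample K-S test mirrors the paper.
public class ProfileComparison {
    public static void main(String[] args) {
        double[] profileA = {0.8, 1.2, 1.9, 2.3, 3.1, 3.4, 4.0, 4.7, 5.2, 6.5};
        double[] profileB = {2.1, 2.8, 3.3, 3.9, 4.4, 5.0, 5.6, 6.2, 6.9, 7.4};

        KolmogorovSmirnovTest ks = new KolmogorovSmirnovTest();
        double d = ks.kolmogorovSmirnovStatistic(profileA, profileB);
        double p = ks.kolmogorovSmirnovTest(profileA, profileB);

        System.out.printf("D = %.3f, p-value = %.3f%n", d, p);
        System.out.println(p < 0.05 ? "Profiles differ" : "No evidence of difference");
    }
}
```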


International Conference on Agile Software Development | 2010

An Automated Approach for Acceptance Web Test Case Modeling and Executing

Felipe M. Besson; Delano M. Beder; Marcos Lordello Chaim

This paper proposes an approach for modeling and executing acceptance web test cases and describes a suite of tools to support it. The main objective is to assist the use of Acceptance Test-Driven Development (ATDD) in web applications by providing mechanisms to support customer-developer communication and by helping test case creation. Initially, the set of web pages and relations (links) associated with a user story is modeled. Functional test possibilities involving these relations are automatically summarized in a graph, with each path of the graph corresponding to a testing scenario for the user story. Once a testing scenario is accepted by the customer, a testing script is automatically created. A web testing framework then executes the script, triggering the ATDD process.
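The core mechanism described above is turning the page-and-link graph of a user story into testing scenarios, one per path. A minimal sketch of that enumeration follows; the example pages, links, and the simple-path restriction are invented for illustration and are not taken from the paper.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

// Enumerate every simple path from a start page to a goal page; each path is a
// candidate acceptance-test scenario. The pages and links are made up for the example.
public class ScenarioEnumerator {
    public static void main(String[] args) {
        Map<String, List<String>> links = Map.of(
                "login", List.of("home"),
                "home", List.of("search", "cart"),
                "search", List.of("product"),
                "product", List.of("cart"),
                "cart", List.of("checkout"),
                "checkout", List.of());

        List<List<String>> scenarios = new ArrayList<>();
        collect(links, "login", "checkout", new ArrayDeque<>(), scenarios);
        scenarios.forEach(s -> System.out.println("Scenario: " + String.join(" -> ", s)));
    }

    static void collect(Map<String, List<String>> links, String page, String goal,
                        Deque<String> path, List<List<String>> out) {
        if (path.contains(page)) return;      // keep paths simple (no revisits)
        path.addLast(page);
        if (page.equals(goal)) {
            out.add(new ArrayList<>(path));   // one complete testing scenario
        } else {
            for (String next : links.getOrDefault(page, List.of())) {
                collect(links, next, goal, path, out);
            }
        }
        path.removeLast();
    }
}
```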


Journal of Software Maintenance and Evolution: Research and Practice | 2004

A debugging strategy based on the requirements of testing

Marcos Lordello Chaim; José Carlos Maldonado; Mario Jino

Testing and debugging activities consume a significant amount of the software development and maintenance budget. To reduce this cost, the use of testing information for debugging purposes has been advocated. In general, heuristics are used to select structural testing requirements (nodes, branches and definition-use associations) more closely related to the manifestation of a failure, which are then mapped into a piece of code. The intuition is that the selected piece of code is likely to contain the fault. However, this approach has its drawbacks. Heuristics that select a manageable piece of code are less likely to hit the fault, and the piece of code itself does not provide enough guidance for program understanding, a major factor in program debugging. These problems occur because this approach relies only on static information, a fragment of code. We introduce a strategy for fault localization that addresses these problems. The strategy, called the debugging strategy based on the requirements of testing (DRT), is based on the investigation of indications (or hints) provided at run-time by data-flow testing requirements (definition-use associations). Our claim is that the selected definition-use associations may fail to hit the fault site, but still provide information useful for fault localization. The strategy's novelty and attractiveness are threefold: (i) the focus on dynamic information related to testing data; (ii) implementation in state-of-the-practice symbolic debuggers with a low overhead; and (iii) the use of algorithms which consume constant memory and are linear in the number of branches in the program. A case study shows that our claim is valid (for the subject program), and a prototype tool implements the strategy.
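As a rough illustration of using run-time dua information as debugging hints, the sketch below keeps the definition-use associations exercised by a failing run but by none of the passing runs; this simplified selection rule is an assumption for the example and is not the DRT heuristic defined in the paper.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy illustration: treat the duas covered only by the failing run as hints
// about where to start inspecting the program in the debugger.
public class DuaHints {
    record Dua(String variable, int defLine, int useLine) {}

    static Set<Dua> hints(Set<Dua> failingRun, List<Set<Dua>> passingRuns) {
        Set<Dua> suspects = new LinkedHashSet<>(failingRun);
        passingRuns.forEach(suspects::removeAll);
        return suspects; // inspect these def-use pairs first
    }

    public static void main(String[] args) {
        Set<Dua> failing = Set.of(new Dua("total", 10, 25), new Dua("i", 12, 14));
        List<Set<Dua>> passing = List.of(Set.of(new Dua("i", 12, 14)));
        hints(failing, passing).forEach(System.out::println);
    }
}
```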


International Conference on Software Testing, Verification and Validation | 2014

Data-Flow Testing in the Large

Roberto Paulo Andrioli de Araujo; Marcos Lordello Chaim

Data-flow (DF) testing was introduced more than thirty years ago, aiming at extensively evaluating a program's structure. It requires tests that traverse a path in which the definition of a variable and its subsequent use, i.e., a definition-use association (dua), is exercised. While control-flow testing tools have been able to tackle big systems (large and long-running programs), DF testing tools have failed to do so. This situation is in part due to the costs associated with tracking duas at run-time. Recently, an algorithm called Bitwise Algorithm (BA), which uses bit vectors and bitwise operations for tracking intra-procedural duas at run-time, was proposed. This paper presents the implementation of BA for programs compiled into bytecodes. Previous approaches were able to deal with small to medium size programs with high execution and memory penalties. Our experimental results show that by using BA we are able to tackle large systems with more than 200 KLOC and 300K required duas. Furthermore, for several programs the execution penalty was comparable to that imposed by a popular control-flow testing tool.
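Applying BA to programs compiled into bytecodes requires instrumenting the bytecode with probes that feed execution events to the coverage bit vectors. The sketch below shows one way such instrumentation could be written with the ASM library; the probe class and the choice to probe local-variable stores are assumptions for illustration, not the paper's actual instrumentation.

```java
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Sketch: before every local-variable store (a definition site), insert a call
// to a static probe that updates the coverage bookkeeping. "CoverageProbe" is a
// hypothetical class used only for this example.
public class DefProbeInstrumenter {
    public static byte[] instrument(byte[] classBytes) {
        ClassReader reader = new ClassReader(classBytes);
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_MAXS);
        reader.accept(new ClassVisitor(Opcodes.ASM9, writer) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String desc,
                                             String signature, String[] exceptions) {
                MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
                return new MethodVisitor(Opcodes.ASM9, mv) {
                    @Override
                    public void visitVarInsn(int opcode, int var) {
                        if (opcode >= Opcodes.ISTORE && opcode <= Opcodes.ASTORE) {
                            super.visitLdcInsn(var); // push the local-variable index
                            super.visitMethodInsn(Opcodes.INVOKESTATIC, "CoverageProbe",
                                    "definition", "(I)V", false);
                        }
                        super.visitVarInsn(opcode, var);
                    }
                };
            }
        }, 0);
        return writer.toByteArray();
    }
}
```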


Automated Software Engineering | 2013

Adding context to fault localization with integration coverage

Higor Amario de Souza; Marcos Lordello Chaim

Fault localization is a costly task in the debugging process. Several techniques to automate fault localization have been proposed, aiming at reducing the effort and time spent. Some techniques use heuristics based on code coverage data. The goal is to indicate the program code excerpts more likely to contain faults. The coverage data mostly used in automated debugging is based on white-box unit testing (e.g., statements, basic blocks, predicates). This paper presents a technique which uses integration coverage data to guide the fault localization process. By ranking the most suspicious pairs of method invocations, roadmaps (sorted lists of methods to be investigated) are created. At each method, unit coverage (e.g., basic blocks) is used to locate the fault site. Fifty-five bugs of four programs containing 2K to 80K lines of code (LOC) were analyzed. The results indicate that, by using the roadmaps, the effectiveness of the fault localization process is improved: 78% of all the faults are reached within a fixed number of basic blocks, 40% more than an approach based on the Tarantula technique. Furthermore, fewer blocks have to be investigated before reaching the fault.
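For context on the ranking step, the sketch below orders caller-callee pairs with the standard Tarantula suspiciousness score computed from how many failing and passing tests exercise each pair. The pair representation and the sample data are assumptions; the paper's own ranking heuristic is not reproduced here.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Rank caller-callee pairs by the standard Tarantula suspiciousness score.
public class IntegrationRanker {
    record Pair(String caller, String callee) {}
    record Coverage(int failed, int passed) {}

    static double suspiciousness(Coverage c, int totalFailed, int totalPassed) {
        double failRatio = (double) c.failed() / totalFailed;
        double passRatio = (double) c.passed() / totalPassed;
        return (failRatio + passRatio) == 0 ? 0 : failRatio / (failRatio + passRatio);
    }

    static List<Pair> roadmap(Map<Pair, Coverage> data, int totalFailed, int totalPassed) {
        return data.entrySet().stream()
                .sorted(Comparator.comparingDouble((Map.Entry<Pair, Coverage> e) ->
                        -suspiciousness(e.getValue(), totalFailed, totalPassed)))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<Pair, Coverage> data = Map.of(
                new Pair("Cart.add", "Stock.reserve"), new Coverage(3, 1),
                new Pair("Cart.add", "Price.total"), new Coverage(1, 9));
        // With 3 failing and 10 passing tests overall, the first pair ranks higher.
        roadmap(data, 3, 10).forEach(System.out::println);
    }
}
```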


Networked Embedded Systems for Enterprise Applications | 2011

A generic policy-free framework for fault-tolerant systems: Experiments on WSNs

Delano Medeiros Beder; Jo Ueyama; Marcos Lordello Chaim

Fault-tolerant systems are expected to run on a variety of devices ranging from standard PCs to embedded devices. In addition, the emergence of new software technologies has required these applications to meet the needs of heterogeneous software platforms. However, the existing approaches to build fault-tolerant systems are often targeted at a particular platform and software technology. The objective of this paper is to discuss the use of a generic component-based framework for the construction of adaptive fault-tolerant systems that can integrate and reuse technologies and deploy them across heterogeneous devices. We have implemented a Java prototype of our framework and evaluated the potential benefits by means of development case studies and performance measurements. We show that we can overcome the problem of a wide degree of heterogeneity with minimal resource overheads by implementing and deploying these case studies in standard PCs as well as in sensor nodes.

Collaboration


Dive into Marcos Lordello Chaim's collaborations.

Top Co-Authors

Mario Jino (State University of Campinas)

Delano Medeiros Beder (Federal University of São Carlos)

Fabio Kon (University of São Paulo)

Márcio Eduardo Delamaro (French Institute for Research in Computer Science and Automation)

Adriana Carniello (National Institute for Space Research)

Jo Ueyama (University of São Paulo)