Aniello Cimitile
University of Sannio
Publications
Featured research published by Aniello Cimitile.
Information & Software Technology | 1998
Gerardo Canfora; Aniello Cimitile; Andrea De Lucia
Slicing is a technique to decompose programs based on the analysis of their control and data flow. In Weiser's original definition, a slice consists of any subset of program statements preserving the behaviour of the original program with respect to a program point and a subset of the program variables (the slicing criterion), for any execution path. We present conditioned slicing, a general slicing model based on statement deletion. A conditioned slice consists of a subset of program statements which preserves the behaviour of the original program with respect to a slicing criterion for a given set of execution paths. The set of initial states of the program that characterise these paths is specified in the form of a first-order logic formula on the input variables. We also show how slices deriving from other statement-deletion-based slicing models can be defined as conditioned slices. This is used to formally define a partial ordering relation between slicing models and to build a classification framework.
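The idea can be illustrated with a toy sketch (not the paper's algorithm): statements are modelled by the variables they define and use, a classic backward slice keeps only statements that can affect the criterion variable, and a path condition on the input lets a conditioned slice discard an infeasible branch before slicing. All statement names here are hypothetical.

```python
# Toy conditioned-slicing sketch: each statement is (defined vars, used vars, text).

def backward_slice(stmts, criterion_vars):
    """Data-flow-only static slice by statement deletion (Weiser-style)."""
    needed = set(criterion_vars)
    kept = []
    for defs, uses, text in reversed(stmts):
        if defs & needed:
            needed = (needed - defs) | uses
            kept.append(text)
    return list(reversed(kept))

# Program:  a = input();  if a > 0: y = a * 2  else: y = -a;  print(y)
program = [
    ({"a"}, set(), "a = input()"),
    ({"y"}, {"a"}, "y = a * 2"),   # then-branch
    ({"y"}, {"a"}, "y = -a"),      # else-branch
]

# Conditioned slicing on criterion (print(y), {y}) under the condition a > 0:
# the else-branch is infeasible on every admitted path, so delete it first.
condition_a_positive = lambda text: text != "y = -a"
conditioned = [s for s in program if condition_a_positive(s[2])]
slice_ = backward_slice(conditioned, {"y"})
```

The slice keeps only the input statement and the then-branch assignment, exactly the statements that can influence `y` on paths satisfying the condition.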
Journal of Systems and Software | 1995
Aniello Cimitile; Giuseppe Visaggio
The main goal of reuse re-engineering processes for existing software is to obtain reusable software modules by clustering old software components that implement functional or data abstractions. The first problem to solve in functional abstraction is how to search old software for components that may constitute a module. This article proposes candidature criteria founded on the dominance relation between the nodes of a call-directed graph. The proposed criteria have been adopted in a real-life reuse re-engineering process on Pascal software.
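As a minimal sketch of the underlying relation (not the paper's candidature criteria), the dominance relation on a call-directed graph can be computed with the classic iterative data-flow algorithm; a node dominated by a routine is reachable only through it, which hints that the dominator's call subtree could be packaged as a module. The call graph below is hypothetical.

```python
def dominators(graph, entry):
    """Iterative dominator computation: dom(n) = {n} ∪ ∩ dom(p) over predecessors p."""
    nodes = set(graph)
    preds = {n: set() for n in nodes}
    for n, succs in graph.items():
        for s in succs:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            incoming = (set.intersection(*(dom[p] for p in preds[n]))
                        if preds[n] else set())
            new = {n} | incoming
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# Hypothetical call-directed graph: main calls f and g; both call util.
calls = {"main": {"f", "g"}, "f": {"util"}, "g": {"util"}, "util": set()}
dom = dominators(calls, "main")
```

Here `util` is dominated only by `main` (it is reachable via either `f` or `g`), so neither subtree alone is a self-contained module candidate.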
Software - Practice and Experience | 1996
Gerardo Canfora; Aniello Cimitile; Malcolm Munro
The identification of abstractions within existing software systems is an important problem to be solved to facilitate program comprehension and the construction of a set of reusable artifacts. In particular, of interest is the identification of object-like features in procedural programs. Existing techniques and algorithms achieve some level of success but do not, in general, always precisely identify a coherent set of objects. The identified objects tend to contain spurious methods that are only tenuously related to the object and require a great deal of human effort and understanding to unravel. This paper presents an improved algorithm that overcomes these drawbacks and enables the precise identification of objects with less human intervention and understanding by exploiting simple statistical techniques. The algorithm is applied to several sample programs and the results are compared with existing algorithms. Finally, the application of the algorithm to a real medium-size system is described and discussed. The algorithm was developed as part of the RE² project in which the identification of object-like features in existing systems is the basis for a re-engineering process aimed at populating a repository of reusable assets.
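The flavour of the approach can be sketched as follows (assumed data, and a crude frequency threshold standing in for the paper's statistical techniques): routines are linked to the global variables they reference, routines sharing a variable form a candidate object, and variables referenced by too many routines are dropped as likely spurious links.

```python
from collections import defaultdict

# Hypothetical routine -> referenced global variables.
refs = {
    "stack_push": {"stack", "top"},
    "stack_pop":  {"stack", "top"},
    "log_error":  {"log_buf", "top"},   # 'top' here is only a tenuous link
}

def candidate_objects(refs, max_fanout=2):
    """Group routines around shared variables; drop widely-shared (spurious) links."""
    users = defaultdict(set)
    for routine, variables in refs.items():
        for v in variables:
            users[v].add(routine)
    # A variable touched by more than max_fanout routines is treated as a
    # coincidental connection rather than object state.
    return {v: sorted(rs) for v, rs in users.items() if len(rs) <= max_fanout}

objects = candidate_objects(refs)
```

With the threshold applied, `top` (referenced by all three routines) is filtered out, leaving a clean stack object and a separate logging object, instead of one blob tied together by the tenuous `top` link.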
Journal of Systems and Software | 2000
Gerardo Canfora; Aniello Cimitile; Andrea De Lucia; Giuseppe A. Di Lucca
A solution to the problem of salvaging the past investments in centralised, mainframe-oriented software development, while keeping competitive in the dynamic business world, consists of migrating legacy systems towards more modern environments, in particular client–server platforms. However, a migration process entails costs and risks that depend on the characteristics of both the architecture of the source system and the target client–server platform. We propose an approach to program decomposition as a preliminary step for the migration of legacy systems. A program slicing algorithm is defined to identify the statements implementing the user interface component. An interactive re-engineering tool is also presented that supports the software engineer in the comprehension of the source code during the decomposition of a program. The focus of this paper is on the partition of a legacy system, while issues related to the re-engineering, encapsulation, and wrapping of the legacy components and to the definition of the middleware layer through which they communicate are not tackled.
international symposium on empirical software engineering | 2006
Gerardo Canfora; Aniello Cimitile; Félix García; Mario Piattini; Corrado Aaron Visaggio
Test-driven development (TDD) is gaining interest among practitioners and researchers: it promises to increase the quality of the code. Even though TDD is considered a development practice, it relies on the use of unit testing. For this reason, it could be an alternative to testing after coding (TAC), the usual approach of writing and running unit tests after the code has been written. We wondered what the differences between the two practices are from the standpoint of quality and productivity. To answer our research question, we carried out an experiment in a Spanish software house. The results suggest that TDD improves unit testing but slows down the overall process.
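The two practices differ only in ordering, which a miniature (purely illustrative) example makes concrete: under TDD the unit test below is written first and drives the implementation, whereas under TAC `shipping_cost` would be written first and the test added afterwards. The function and its rules are hypothetical.

```python
def test_shipping_cost():
    # Written before the implementation: orders over 100 ship free,
    # everything else pays a flat rate.
    assert shipping_cost(order_total=50) == 5.0
    assert shipping_cost(order_total=150) == 0.0

def shipping_cost(order_total, flat_rate=5.0, free_over=100):
    """Minimal implementation written only to make the failing test pass."""
    return 0.0 if order_total > free_over else flat_rate

test_shipping_cost()   # red -> green: the test now passes
```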
IEEE Transactions on Software Engineering | 2004
Giuliano Antoniol; Aniello Cimitile; G.A. Di Lucca; M. Di Penta
We present an approach based on queuing theory and stochastic simulation to help planning, managing, and controlling the project staffing and the resulting service level in distributed multiphase maintenance processes. Data from a Y2K massive maintenance intervention on a large COBOL/JCL financial software system were used to simulate and study different service center configurations for a geographically distributed software maintenance project. In particular, a monolithic configuration corresponding to the customer's point of view and more fine-grained configurations, accounting for different process phases as well as for rework, were studied. The queuing theory and stochastic simulation provided a means to assess staffing, evaluate service level, and assess the likelihood of meeting the project deadline while executing the project. It turned out to be an effective staffing tool for managers, provided that it is complemented with other project-management tools, in order to prioritize activities, avoid conflicts, and check the availability of resources.
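The staffing question can be explored with a toy M/M/c simulation (an assumption-laden stand-in for the paper's models, with made-up rates): change requests arrive at rate `lam`, each takes an exponential service time with rate `mu`, and `servers` maintainers work in parallel; we estimate the mean time a request spends in the system.

```python
import random

def simulate(lam, mu, servers, n_jobs, seed=42):
    """FCFS multi-server queue simulation; returns mean time in system per job."""
    rng = random.Random(seed)
    t_arrive = 0.0
    free_at = [0.0] * servers        # when each maintainer next becomes free
    total = 0.0
    for _ in range(n_jobs):
        t_arrive += rng.expovariate(lam)          # next request arrival
        earliest = min(free_at)
        start = max(t_arrive, earliest)           # wait if everyone is busy
        finish = start + rng.expovariate(mu)      # exponential service time
        free_at[free_at.index(earliest)] = finish
        total += finish - t_arrive                # time in system for this job
    return total / n_jobs

# With the same request stream, adding maintainers cuts the mean response time.
few = simulate(lam=1.0, mu=0.4, servers=3, n_jobs=2000)
many = simulate(lam=1.0, mu=0.4, servers=6, n_jobs=2000)
```

Varying `servers` in such a model is exactly the kind of what-if question (staffing level versus service level and deadline risk) that the paper's configurations address at full scale.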
Journal of Systems and Software | 2007
Gerardo Canfora; Aniello Cimitile; Félix García; Mario Piattini; Corrado Aaron Visaggio
Pair programming has attracted increasing interest from practitioners and researchers: there is initial empirical evidence that it has positive effects on quality and overall delivery time, as demonstrated by several controlled experiments. The practice is not limited to coding, since it can be applied to any other phase of the software process: analysis, design, and testing. Because of the asymmetry between design and coding, applying pair programming to the design phase might not produce the same benefits as it does in the development phase. In this paper, we report the findings of a controlled experiment on pair programming applied to the design phase and performed in a software company. The results of the experiment suggest that pair programming slows down the task, yet improves quality. Furthermore, we compare our results with those of a previous exploratory experiment involving students, and we show that the outcomes exhibit very similar trends.
Journal of Systems and Software | 1999
Aniello Cimitile; Andrea De Lucia; Giuseppe Antonio Di Lucca; Anna Rita Fasolino
Many organisations are migrating towards object-oriented technology. However, owing to the business value of legacy software, new object-oriented development has to be weighed against salvaging strategies. The incremental migration of procedurally oriented systems to object-oriented platforms seems to be a feasible approach, although it must be considered as risky as redevelopment. This approach uses reverse engineering activities to abstract an object-oriented model from legacy code. The paper presents a method for decomposing legacy systems into objects. The identification of objects is centred around persistent data stores, such as files or tables in the database, while programs and routines are candidates for implementing the object methods. Associating the methods to the objects is achieved by optimising selected object-oriented design metrics. The rationale behind this choice is that the object-oriented decomposition of a legacy system should not result in a poor design, as this would make the re-engineered system more difficult to maintain.
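A hedged sketch of the method-assignment idea: each legacy program becomes a method of the persistent data store (candidate object) to which it is most coupled. Here "coupling" is just an access count, a crude proxy for the object-oriented design metrics optimised in the paper; the program and data-store names are invented.

```python
# Hypothetical legacy program -> {persistent data store: access count}.
accesses = {
    "UPD_CUST":  {"CUSTOMER": 5, "ORDERS": 1},
    "NEW_ORDER": {"ORDERS": 4, "CUSTOMER": 2},
    "PRINT_INV": {"INVOICE": 3},
}

def assign_methods(accesses):
    """Assign each program to the data store it accesses most, forming objects."""
    objects = {}
    for program, counts in accesses.items():
        owner = max(counts, key=counts.get)      # store with strongest coupling
        objects.setdefault(owner, []).append(program)
    return objects

objects = assign_methods(accesses)
```

A real optimisation would also penalise the coupling a candidate assignment leaves behind (`UPD_CUST`'s residual access to `ORDERS`, for instance), which is where design metrics rather than raw counts come in.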
international conference on software maintenance | 1993
Gerardo Canfora; Aniello Cimitile; Malcolm Munro; C. J. Taylor
The results of a case study in identifying and extracting reusable abstract data types from C programs are presented. Reuse re-engineering processes already established in the RE² project are applied. The method for identifying abstract data types uses an interconnection graph called a variable-reference graph, and coincidental and spurious connections within the graph are resolved using a statistical technique. A prototype tool is described which demonstrates the feasibility of the method. The tool is used to analyze a C program, and a number of abstract data types are identified and used in the maintenance of the original program. The validity of the method is assessed by a simple manual analysis of the source code. The resulting reusable components are then specified using the formal notation Z.
IEEE Transactions on Software Engineering | 1998
G. Canfora; Aniello Cimitile; U. De Carlini; A. De Lucia
Constructing code analyzers may be costly and error prone if inadequate technologies and tools are used. If they are written in a conventional programming language, for instance, several thousand lines of code may be required even for relatively simple analyses. One way of facilitating the development of code analyzers is to define a very high-level domain-oriented language and implement an application generator that creates the analyzers from the specification of the analyses they are intended to perform. This paper presents a system for developing code analyzers that uses a database to store both a no-loss fine-grained intermediate representation and the results of the analyses. The system uses an algebraic representation, called F(p), as the user-visible intermediate representation. Analyzers are specified in a declarative language, called F(p)-l, which enables an analysis to be specified in the form of a traversal of an algebraic expression, with access to, and storage of, the database information that the algebraic expression indexes. A foreign language interface allows the analyzers to be embedded in C programs. This is useful for implementing the user interface of an analyzer, for example, or to facilitate interoperation of the generated analyzers with pre-existing tools. The paper evaluates the strengths and limitations of the proposed system, and compares it to other related approaches.
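The generator idea can be shown in miniature: instead of hand-coding each analyzer, an analysis is specified declaratively as a mapping from node types to actions, and a generic traversal applies it. Python's `ast` module stands in here for the F(p) algebraic representation, an illustrative substitution rather than the paper's system.

```python
import ast
from collections import Counter

def make_analyzer(spec):
    """Build an analyzer (source -> Counter) from a declarative node-type spec."""
    def analyze(source):
        results = Counter()
        for node in ast.walk(ast.parse(source)):
            action = spec.get(type(node))
            if action:
                results[action(node)] += 1
        return results
    return analyze

# A "call-site counter" analysis specified in one declarative rule:
# for every Call node, record the called name.
call_counter = make_analyzer({
    ast.Call: lambda n: getattr(n.func, "id", getattr(n.func, "attr", "?")),
})

counts = call_counter("print(len(xs)); print(xs)")
```

The payoff mirrors the paper's: the traversal machinery is written once, and each new analysis is a short specification rather than thousands of lines of bespoke walker code.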