Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Giuseppe Scanniello is active.

Publication


Featured research published by Giuseppe Scanniello.


International Conference on Program Comprehension | 2011

Clustering Support for Static Concept Location in Source Code

Giuseppe Scanniello; Andrian Marcus

One of the most common comprehension activities undertaken by developers is concept location in source code. In the context of software change, concept location means finding locations in source code where changes are to be made in response to a modification request. Static techniques for concept location usually rely on searching the source code using textual information or on navigating the dependencies among software elements. In this paper we propose a novel static concept location technique, which leverages both the textual information present in the code and the structural dependencies between source code elements. The technique employs a textual search over the source code, which is clustered using the Border Flow algorithm based on a combination of structural and textual data. We evaluated the technique against a text-search-based baseline approach using data on almost 200 changes from five software systems. The results indicate that the new approach outperforms the baseline and that improvements are still possible.
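
A rough idea of this kind of cluster-assisted concept location can be sketched in a few lines of Python. The Border Flow algorithm itself is not reproduced; a generic agglomerative clustering over a blend of textual and structural similarity stands in for it, and every name below (locate_concept, method_texts, structural_sim, and so on) is hypothetical rather than taken from the authors' tool.

```python
# Sketch only: cluster methods on combined textual/structural similarity,
# then answer a change request by searching the best-matching cluster.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def locate_concept(method_texts, structural_sim, query, n_clusters=5, alpha=0.6):
    """Rank candidate methods for a modification request (hypothetical API)."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(method_texts)              # lexical view of each method
    textual_sim = cosine_similarity(tfidf)               # textual similarity matrix
    combined = alpha * textual_sim + (1 - alpha) * structural_sim
    dist = 1.0 - combined
    np.fill_diagonal(dist, 0.0)
    # agglomerative clustering as a stand-in for Border Flow
    labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=n_clusters, criterion="maxclust")
    scores = cosine_similarity(vec.transform([query]), tfidf).ravel()
    # pick the cluster whose members best match the query, then rank inside it
    best = max(set(labels), key=lambda c: scores[labels == c].mean())
    candidates = np.where(labels == best)[0]
    return sorted(candidates, key=lambda i: scores[i], reverse=True)
```

The point of the sketch is only the search-space narrowing: the query is first matched against whole clusters, and ranking then happens within the selected cluster.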


Conference on Software Maintenance and Reengineering | 2011

Investigating the use of lexical information for software system clustering

Anna Corazza; Sergio Di Martino; Valerio Maggio; Giuseppe Scanniello

Developers have a lot of freedom in writing comments as well as in choosing identifiers and method names. These are intentional in nature and provide information of differing relevance for understanding what a software system implements, and in particular the role of each source file. In this paper we investigate the effectiveness of exploiting lexical information for software system clustering. In particular, we explore the contribution of the combined use of six different dictionaries, corresponding to the six parts of the source code where programmers introduce lexical information, namely class, attribute, method, and parameter names, comments, and source code statements. Their relevance has been weighted by means of a probabilistic model, whose parameters have been estimated with the Expectation-Maximization algorithm. To group source files accordingly we used a hierarchical clustering algorithm. The investigation has been conducted on a dataset of 13 open source Java software systems.
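
The pipeline described in the abstract can be illustrated with a short sketch. The EM-based estimation of the dictionary weights is not reproduced; fixed, illustrative weights are passed in instead, and all identifiers (cluster_files, zone_corpora, and so on) are hypothetical.

```python
# Sketch: combine lexical information from the six zones named above and
# cluster source files hierarchically. Zone weights are supplied by the
# caller here; in the paper they are estimated with Expectation-Maximization.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

ZONES = ["class_names", "attribute_names", "method_names",
         "parameter_names", "comments", "statements"]

def cluster_files(zone_corpora, weights, n_clusters=10):
    """zone_corpora: dict zone -> list of documents, one document per source file."""
    n_files = len(next(iter(zone_corpora.values())))
    sim = np.zeros((n_files, n_files))
    for zone, w in zip(ZONES, weights):
        tfidf = TfidfVectorizer().fit_transform(zone_corpora[zone])
        sim += w * cosine_similarity(tfidf)               # weighted per-zone similarity
    sim /= sum(weights)
    dist = 1.0 - sim
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=n_clusters, criterion="maxclust")
```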


IEEE Transactions on Software Engineering | 2013

Assessing the Effectiveness of Sequence Diagrams in the Comprehension of Functional Requirements: Results from a Family of Five Experiments

Silvia Abrahão; Carmine Gravino; Emilio Insfran; Giuseppe Scanniello; Genoveffa Tortora

Modeling is a fundamental activity within the requirements engineering process and concerns the construction of abstract descriptions of requirements that are amenable to interpretation and validation. The choice of a modeling technique is critical whenever it is necessary to discuss the interpretation and validation of requirements. This is particularly true in the case of functional requirements and stakeholders with divergent goals and different backgrounds and experience. This paper presents the results of a family of experiments conducted with students and professionals to investigate whether the comprehension of functional requirements is influenced by the use of dynamic models that are represented by means of the UML sequence diagrams. The family contains five experiments performed in different locations and with 112 participants of different abilities and levels of experience with UML. The results show that sequence diagrams improve the comprehension of the modeled functional requirements in the case of high ability and more experienced participants.


Information & Software Technology | 2009

Evaluating legacy system migration technologies through empirical studies

Massimo Colosimo; Andrea De Lucia; Giuseppe Scanniello; Genoveffa Tortora

We present two controlled experiments conducted with master students and practitioners and a case study conducted with practitioners to evaluate the use of MELIS (Migration Environment for Legacy Information Systems) for the migration of legacy COBOL programs to the web. MELIS has been developed as an Eclipse plug-in within a technology transfer project conducted with a small software company [16]. Over the last 30 years the partner company has developed and marketed several COBOL systems that now need to be migrated to the web, due to increasing customer demand. The goal of the technology transfer project was to define a systematic migration strategy and the supporting tools to migrate these COBOL systems to the web and to make the partner company the owner of the developed technology. The goal of the controlled experiments and case study was to evaluate the effectiveness of introducing MELIS in the partner company and to compare it with traditional software development environments. The results of the overall experimentation show that the use of MELIS increases productivity and reduces the gap between novice and expert software engineers.


Conference on Software Maintenance and Reengineering | 2010

A Probabilistic Based Approach towards Software System Clustering

Anna Corazza; Sergio Di Martino; Giuseppe Scanniello

In this paper we present a clustering based approach to partition software systems into meaningful subsystems. In particular, the approach uses lexical information extracted from four zones in Java classes, which may provide a different contribution towards software systems partitioning. To automatically weigh these zones, we introduced a probabilistic model, and applied the Expectation-Maximization (EM) algorithm. To group classes according to the considered lexical information, we customized the well-known K-Medoids algorithm. To assess the approach and the implemented supporting system, we have conducted a case study on six open source software systems.
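
For illustration, a compact K-Medoids routine over a precomputed lexical distance matrix is sketched below. It does not reproduce the customized algorithm of the paper or its EM-weighted probabilistic model; the function and every name in it are illustrative only.

```python
# Plain K-Medoids sketch: alternate between assigning classes to the nearest
# medoid and re-picking the most central member of each cluster as its medoid.
import numpy as np

def k_medoids(dist, k, max_iter=100, seed=0):
    """dist: (n, n) symmetric lexical distance matrix between Java classes."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(dist.shape[0], size=k, replace=False)
    for _ in range(max_iter):
        labels = np.argmin(dist[:, medoids], axis=1)       # nearest-medoid assignment
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(within)]    # most central member
        if np.array_equal(new_medoids, medoids):
            break                                          # converged
        medoids = new_medoids
    return np.argmin(dist[:, medoids], axis=1), medoids
```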


International Conference on Program Comprehension | 2010

Using the Kleinberg Algorithm and Vector Space Model for Software System Clustering

Giuseppe Scanniello; Anna D'Amico; Carmela D'Amico; Teodora D'Amico

Clustering based approaches are generally difficult to use in practice since they need a significant human interaction for recovering software architectures, are conceived for a specific programming language, and very often do not use design knowledge (e.g., the implemented architectural model). In this paper we present a clustering based approach to recover the implemented architecture of software systems with a hierarchical structure and implemented with any object oriented programming language. The approach is based on the combination of structural and lexical dimensions. The structural dimension is used to decompose a software system into layers (i.e., horizontal decomposition), while the lexical dimension is then employed to partition each layer (i.e., vertical decomposition) into software modules. Layers are identified using a well known and widely employed link analysis algorithm, i.e., the Kleinberg algorithm, while Vector Space Model is used to vertically decompose the layers. To assess the approach and the underlying techniques, we also present a prototype of a supporting tool and the results from a case study conducted on subsequent versions of three open source Java software systems.
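
A rough sketch of the two-step decomposition is shown below, using the HITS implementation in networkx for the Kleinberg step and TF-IDF vectors for the vector space step. Binning classes into layers by authority score and fixing the number of modules per layer are simplifications, and all names (recover_architecture, dep_edges, class_texts) are hypothetical.

```python
# Sketch: layers from link analysis (horizontal decomposition), modules from
# lexical clustering within each layer (vertical decomposition).
import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def recover_architecture(dep_edges, class_texts, n_layers=3, modules_per_layer=2):
    """dep_edges: (from_class, to_class) pairs; class_texts: class -> lexical content."""
    graph = nx.DiGraph(dep_edges)                          # class dependency graph
    hubs, authorities = nx.hits(graph, max_iter=1000)
    ordered = sorted(authorities, key=authorities.get)     # classes by authority score
    layers = [list(part) for part in np.array_split(ordered, n_layers)]
    architecture = []
    for layer in layers:
        if len(layer) <= modules_per_layer:
            architecture.append([layer])
            continue
        # TF-IDF rows are L2-normalized, so Euclidean k-means approximates
        # cosine-based (vector space model) grouping
        tfidf = TfidfVectorizer().fit_transform(class_texts[c] for c in layer)
        labels = KMeans(n_clusters=modules_per_layer, n_init=10,
                        random_state=0).fit_predict(tfidf)
        architecture.append([[c for c, m in zip(layer, labels) if m == module]
                             for module in range(modules_per_layer)])
    return architecture
```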


Automated Software Engineering | 2013

Class level fault prediction using software clustering

Giuseppe Scanniello; Carmine Gravino; Andrian Marcus; Tim Menzies

Defect prediction approaches use software metrics and fault data to learn which software properties associate with faults in classes. Existing techniques predict fault-prone classes in the same release (intra) or in subsequent releases (inter) of a subject software system. We propose an intra-release fault prediction technique, which learns from clusters of related classes, rather than from the entire system. Classes are clustered using structural information and fault prediction models are built using the properties of the classes in each cluster. We present an empirical investigation on data from 29 releases of eight open source software systems from the PROMISE repository, with predictors built using multivariate linear regression. The results indicate that the prediction models built on clusters outperform those built on all the classes of the system.
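
The core idea, training one regression model per structurally derived cluster instead of a single model for the whole system, can be sketched as follows. The clustering step here (k-means over structural metrics) and every identifier are illustrative stand-ins, not the authors' setup.

```python
# Sketch of intra-release, cluster-based fault prediction: group classes by
# structural features, then fit a multivariate linear regression per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_cluster_models(structural_features, class_metrics, faults, n_clusters=5):
    """All inputs are NumPy arrays with one row (or entry) per class."""
    clustering = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = clustering.fit_predict(structural_features)
    models = {}
    for c in range(n_clusters):
        mask = labels == c
        if mask.sum() < 2:
            continue                                       # too few classes to fit
        models[c] = LinearRegression().fit(class_metrics[mask], faults[mask])
    return clustering, models

def predict_faults(clustering, models, structural_features, class_metrics):
    labels = clustering.predict(structural_features)
    predictions = np.zeros(len(labels))
    for i, c in enumerate(labels):
        if c in models:                                    # fall back to 0 otherwise
            predictions[i] = models[c].predict(class_metrics[i:i + 1])[0]
    return predictions
```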


ACM Transactions on Software Engineering and Methodology | 2014

On the impact of UML analysis models on source-code comprehensibility and modifiability

Giuseppe Scanniello; Carmine Gravino; Marcela Genero; José A. Cruz-Lemus; Genoveffa Tortora

We carried out a family of experiments to investigate whether the use of UML models produced in the requirements analysis process helps in the comprehensibility and modifiability of source code. The family consists of a controlled experiment and three external replications carried out with students and professionals from Italy and Spain. A total of 86 participants with different abilities and levels of experience with UML took part. The results of the experiments were integrated through the use of meta-analysis. The results of both the individual experiments and the meta-analysis indicate that UML models produced in the requirements analysis process influence neither the comprehensibility of source code nor its modifiability.


Empirical Software Engineering and Measurement | 2010

On the effectiveness of screen mockups in requirements engineering: results from an internal replication

Filippo Ricca; Giuseppe Scanniello; Marco Torchiano; Gianna Reggio; Egidio Astesiano

In this paper, we present and discuss the results of an internal replication of a controlled experiment for assessing the effectiveness of including screen mockups when adopting Use Cases. The results of the original experiment indicate a clear improvement in the understandability of functional requirements when screen mockups are present, with no significant impact on effort. The data analysis of the replication, which was also conducted with undergraduate students, confirms the results of the original experiment with slight differences, thus confirming that screen mockups facilitate the understanding of requirements without influencing the effort. We also sketch some issues related to documentation and communication between experimenters.


International Journal of Distance Education Technologies | 2007

A SCORM Thin Client Architecture for E-Learning Systems Based on Web Services

Giovanni Casella; Gennaro Costagliola; Filomena Ferrucci; Giuseppe Polese; Giuseppe Scanniello

In this paper we propose an architecture of e-learning systems characterized by the use of Web Services and a suitable Middleware component. These technical infrastructures allow us to extend the system with new services as well as to integrate and reuse heterogeneous software e-learning components. Moreover, they let us better support the “anytime and anywhere” learning paradigm. As a matter of fact, the proposal provides an implementation of the Run-Time Environment suggested in the Sharable Content Object Reference Model (SCORM) to trace learning processes, which is also suitable for mobile learning.
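
A minimal sketch of a Web Service tracking endpoint in the spirit of this architecture is given below. Flask is used only for brevity; the routes, payload shape, and in-memory store are hypothetical and do not correspond to the paper's implementation.

```python
# Sketch: a thin-client front end posts SCORM run-time (CMI) key/value pairs
# to this service, which stores them per learner and per SCO.
from flask import Flask, request, jsonify

app = Flask(__name__)
tracking_store = {}          # (learner_id, sco_id) -> dict of CMI elements

@app.route("/rte/commit", methods=["POST"])
def commit():
    """Persist the CMI data sent by the client-side run-time environment."""
    data = request.get_json()
    key = (data["learner_id"], data["sco_id"])
    tracking_store.setdefault(key, {}).update(data.get("cmi", {}))
    return jsonify(status="ok")

@app.route("/rte/state/<learner_id>/<sco_id>", methods=["GET"])
def state(learner_id, sco_id):
    """Return the stored learning-process state for a learner and SCO."""
    return jsonify(tracking_store.get((learner_id, sco_id), {}))

if __name__ == "__main__":
    app.run()
```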

Collaboration


An overview of Giuseppe Scanniello's collaborations.

Top Co-Authors

Simone Romano
University of Basilicata

Ugo Erra
University of Basilicata

Natalia Juristo
Technical University of Madrid