Alejandro Rago
National Scientific and Technical Research Council
Publications
Featured research published by Alejandro Rago.
Requirements Engineering | 2013
Alejandro Rago; Claudia Marcos; J. Andres Diaz-Pace
Quality-attribute requirements describe constraints on the development and behavior of a software system, and their satisfaction is key for the success of a software project. Detecting and analyzing quality attributes in early development stages provides insights for system design, reduces risks, and ultimately improves the developers’ understanding of the system. A common problem, however, is that quality-attribute information tends to be understated in requirements specifications and scattered across several documents. Thus, making quality attributes first-class citizens usually becomes a time-consuming task for analysts. Recent developments have made it possible to mine concerns semi-automatically from textual documents. Building on these ideas, we present a semi-automated approach to identify latent quality attributes that works in two stages. First, a mining tool extracts early aspects from use cases, and then these aspects are processed to derive candidate quality attributes. This derivation is based on an ontology of quality-attribute scenarios. We have built a prototype tool called QAMiner to implement our approach. The evaluation of this tool in two case studies from the literature has shown interesting results. As our main contribution, we argue that our approach can help analysts to skim requirements documents and quickly produce a list of potential quality attributes for the system.
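The derivation step described above maps mined early aspects onto quality attributes through an ontology. A minimal sketch of that mapping, with an invented keyword-to-attribute table (QAMiner's actual ontology of quality-attribute scenarios is richer than this), might look like:

```python
# Hypothetical mapping from mined early aspects to candidate quality
# attributes; the entries below are illustrative, not QAMiner's ontology.
QA_ONTOLOGY = {
    "authentication": "security",
    "encryption": "security",
    "caching": "performance",
    "response time": "performance",
    "logging": "auditability",
}

def derive_quality_attributes(mined_aspects):
    """Map each mined aspect to a candidate quality attribute, if known."""
    candidates = set()
    for aspect in mined_aspects:
        qa = QA_ONTOLOGY.get(aspect.lower())
        if qa is not None:
            candidates.add(qa)
    return sorted(candidates)

print(derive_quality_attributes(["Authentication", "Caching", "undo"]))
# -> ['performance', 'security']
```

Aspects with no ontology entry (like "undo" above) are simply dropped from the candidate list, which matches the idea of producing a short list for the analyst to review.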
Software and Systems Modeling | 2016
Alejandro Rago; Claudia Marcos; J. Andres Diaz-Pace
Developing high-quality requirements specifications often demands a thoughtful analysis and an adequate level of expertise from analysts. Although requirements modeling techniques provide mechanisms for abstraction and clarity, fostering the reuse of shared functionality (e.g., via UML relationships for use cases), they are seldom employed in practice. A particular quality problem of textual requirements, such as use cases, is that of having duplicate pieces of functionality scattered across the specifications. Duplicate functionality can sometimes improve readability for end users, but hinders development-related tasks such as effort estimation, feature prioritization, and maintenance, among others. Unfortunately, inspecting textual requirements by hand in order to deal with redundant functionality can be an arduous, time-consuming, and error-prone activity for analysts. In this context, we introduce a novel approach called ReqAligner that helps analysts spot signs of duplication in use cases in an automated fashion. To do so, ReqAligner combines several text processing techniques, such as a use-case-aware classifier and a customized algorithm for sequence alignment. Essentially, the classifier converts the use cases into an abstract representation that consists of sequences of semantic actions, and then these sequences are compared pairwise in order to identify action matches, which become possible duplications. We have applied our technique to five real-world specifications, achieving promising results and identifying many sources of duplication in the use cases.
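The core idea of pairwise comparison of semantic-action sequences can be sketched with Python's standard-library sequence matcher (ReqAligner uses its own customized alignment algorithm; the action labels and threshold below are invented for illustration):

```python
from difflib import SequenceMatcher
from itertools import combinations

# Each use case is abstracted into a sequence of semantic action labels.
use_cases = {
    "UC1": ["validate-input", "query-db", "display-result"],
    "UC2": ["validate-input", "query-db", "display-result", "log-event"],
    "UC3": ["send-email"],
}

def find_duplication_candidates(cases, threshold=0.7):
    """Flag use-case pairs whose action sequences are highly similar."""
    pairs = []
    for (a, seq_a), (b, seq_b) in combinations(cases.items(), 2):
        ratio = SequenceMatcher(None, seq_a, seq_b).ratio()
        if ratio >= threshold:
            pairs.append((a, b, round(ratio, 2)))
    return pairs

print(find_duplication_candidates(use_cases))
# -> [('UC1', 'UC2', 0.86)]
```

UC1 and UC2 share three of their actions in order, so they are flagged as a possible duplication, while UC3 matches nothing.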
Scientific Programming | 2009
Alejandro Rago; Esteban S. Abait; Claudia Marcos; Andres Diaz-Pace
In this article, we present a semi-automated approach for identifying candidate early aspects in requirements specifications. This approach aims at improving the precision of the aspect identification process in use cases, and also solving some problems of existing aspect mining techniques caused by the vagueness and ambiguity of text in natural language. To do so, we apply a combination of text analysis techniques such as natural language processing (NLP) and word sense disambiguation (WSD). As a result, our approach is able to generate a graph of candidate concerns that crosscut the use cases, as well as a ranking of these concerns according to their importance. The developer then selects which concerns are relevant for his/her domain. Although there are still some challenges, we argue that this approach can be easily integrated into a UML development methodology, leading to improved requirements elicitation.
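A simple way to picture the ranking step is to order concerns by how widely they scatter across use cases: the more use cases a concern touches, the more crosscutting it is. A toy sketch, with invented concern/use-case occurrences:

```python
from collections import defaultdict

# Invented occurrence data: (concern, use case it appears in).
occurrences = [
    ("logging", "UC1"), ("logging", "UC2"), ("logging", "UC3"),
    ("validation", "UC1"), ("validation", "UC2"),
    ("printing", "UC3"),
]

def rank_concerns(pairs):
    """Rank concerns by the number of distinct use cases they crosscut."""
    spread = defaultdict(set)
    for concern, use_case in pairs:
        spread[concern].add(use_case)
    return sorted(spread, key=lambda c: len(spread[c]), reverse=True)

print(rank_concerns(occurrences))  # most crosscutting concern first
# -> ['logging', 'validation', 'printing']
```

The published approach weighs importance with more than raw scattering, but this captures the intuition behind presenting a ranked list to the developer.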
Automated Software Engineering | 2016
Alejandro Rago; Claudia Marcos; J. Andres Diaz-Pace
Textual requirements are very common in software projects. However, this format of requirements often keeps relevant concerns (e.g., performance, synchronization, data access, etc.) from the analyst’s view because their semantics are implicit in the text. Thus, analysts must carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. Then, REAssistant allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The REAssistant tool has been evaluated with several case studies, showing good results when compared to a manual identification of concerns and a third-party tool. In particular, the tool achieved a remarkable recall regarding the detection of crosscutting concern effects.
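The rule-querying idea can be illustrated with a minimal sketch: annotations produced by the NLP pipeline are matched against a concern-specific rule listing the action lemmas that signal the concern. The annotation schema and rule format below are invented, not REAssistant's actual ones:

```python
# Invented annotations, standing in for the output of an NLP pipeline.
annotations = [
    {"sentence": 1, "action": "store", "object": "record"},
    {"sentence": 2, "action": "notify", "object": "user"},
    {"sentence": 3, "action": "save", "object": "preferences"},
]

# A concern rule: a name plus the action lemmas that signal the concern.
persistence_rule = {
    "concern": "data-access",
    "actions": {"store", "save", "retrieve"},
}

def match_rule(rule, anns):
    """Return the sentences affected by the concern the rule describes."""
    return [a["sentence"] for a in anns if a["action"] in rule["actions"]]

print(match_rule(persistence_rule, annotations))
# -> [1, 3]
```

Running every concern rule over the full annotation set yields, for each concern, the scattered places in the specification it affects.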
Language Resources and Evaluation | 2018
Alejandro Rago; Claudia Marcos; J. Andres Diaz-Pace
Engineering activities often produce considerable documentation as a by-product of the development process. Due to their complexity, technical analysts can benefit from text processing techniques able to identify concepts of interest and analyze deficiencies of the documents in an automated fashion. In practice, text sentences from the documentation are usually transformed to a vector space model, which is suitable for traditional machine learning classifiers. However, such transformations suffer from problems of synonyms and ambiguity that cause classification mistakes. For alleviating these problems, there has been a growing interest in the semantic enrichment of text. Unfortunately, using general-purpose thesauri and encyclopedias to enrich technical documents belonging to a given domain (e.g., requirements engineering) often introduces noise and does not improve classification. In this work, we aim at boosting text classification by exploiting information about semantic roles. We have explored this approach when building a multi-label classifier for identifying special concepts, called domain actions, in textual software requirements. After evaluating various combinations of semantic roles and text classification algorithms, we found that this kind of semantically-enriched data leads to improvements of up to 18% in both precision and recall, when compared to non-enriched data. Our enrichment strategy based on semantic roles also allowed classifiers to reach acceptable accuracy levels with small training sets. Moreover, semantic roles outperformed Wikipedia- and WordNet-based enrichments, which failed to boost requirements classification with several techniques. These results drove the development of two requirements tools, which we successfully applied in the processing of textual use cases.
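One way to enrich a bag-of-words representation with semantic roles is to emit, alongside each plain token, a feature that pairs the token with its role label, so the same word filling different roles yields distinct features. A sketch of that feature-extraction step, with invented role labels in PropBank style (the paper's actual feature combinations differ):

```python
def enrich_features(tokens_with_roles):
    """tokens_with_roles: list of (token, role) pairs from an SRL tool."""
    features = []
    for token, role in tokens_with_roles:
        features.append(token.lower())              # plain lexical feature
        features.append(f"{role}:{token.lower()}")  # role-enriched feature
    return features

# "user" as agent (A0) vs. "preferences" as patient (A1) of "saves" (V).
sent = [("user", "A0"), ("saves", "V"), ("preferences", "A1")]
print(enrich_features(sent))
# -> ['user', 'A0:user', 'saves', 'V:saves', 'preferences', 'A1:preferences']
```

The enriched feature vector can then be fed to any standard multi-label classifier in place of the plain bag of words.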
Proceedings of the 11th Brazilian Symposium on Software Components, Architectures, and Reuse | 2017
Alejandro Rago; Santiago Vidal; J. Andres Diaz-Pace; Sebastian Frank; André van Hoorn
A key challenge of software architecture design is how to satisfy quality-attribute requirements, which often conflict with each other. This is usually a complex task, because there are several candidates for architectural solutions meeting the same requirements, and quality-attribute tradeoffs of those solutions need to be considered by the architects. In this context, we present the SQuAT framework to assist architects in the exploration of design solutions and their tradeoffs. This framework provides a modular approach for integrating quality-attribute analyzers and solvers, and also features a distributed search-based optimization. In this paper, we report on an experience using SQuAT with Palladio architectural models, integrating third-party tools for performance and modifiability and showing the tradeoffs among candidate solutions to the architect. Furthermore, we enhance the standard search schema of SQuAT with a distributed negotiation technique based on monotonic concession, in order to provide better tradeoffs for the architect’s decision making.
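The monotonic-concession idea can be pictured as two quality-attribute solvers that each start from their preferred candidate architecture and concede, one step at a time, toward options lower on their preference list until they meet. The candidate names and the alternating concession order below are invented simplifications of the protocol SQuAT actually uses:

```python
def negotiate(prefs_a, prefs_b):
    """prefs_*: candidate lists ordered from most to least preferred.

    Each solver monotonically concedes down its own list until both
    point at the same candidate. Alternating turns is a simplification;
    real monotonic concession picks the agent whose concession costs less.
    """
    i = j = 0
    while prefs_a[i] != prefs_b[j]:
        if (i + j) % 2 == 0:
            i += 1  # solver A concedes to its next-best candidate
        else:
            j += 1  # solver B concedes to its next-best candidate
    return prefs_a[i]

# A performance solver and a modifiability solver with opposite rankings
# settle on the middle-ground candidate.
print(negotiate(["cache", "pool", "replicate"],
                ["replicate", "pool", "cache"]))
# -> pool
```

Because neither agent ever retracts a concession, the agreed candidate is acceptable to both, which is what makes the resulting tradeoff useful to present to the architect.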
Model Driven Engineering Languages and Systems | 2015
Alejandro Rago; Claudia Marcos; J. Andres Diaz-Pace
Developing high-quality requirements specifications often demands a thoughtful analysis and an adequate level of expertise from analysts. Although requirements modeling techniques provide mechanisms for abstraction and clarity, fostering the reuse of shared functionality (e.g., via UML relationships for use cases), they are seldom employed in practice. A particular quality problem of textual requirements, such as use cases, is that of having duplicate pieces of functionality scattered across the specifications. Duplicate functionality can sometimes improve readability for end users, but hinders development-related tasks such as effort estimation, feature prioritization and maintenance, among others. Unfortunately, inspecting textual requirements by hand in order to deal with redundant functionality can be an arduous, time-consuming and error-prone activity for analysts. In this context, we introduce a novel approach called ReqAligner that helps analysts spot signs of duplication in use cases in an automated fashion. To do so, ReqAligner combines several text processing techniques, such as a use-case-aware classifier and a customized algorithm for sequence alignment. Essentially, the classifier converts the use cases into an abstract representation that consists of sequences of semantic actions, and then these sequences are compared pairwise in order to identify action matches, which become possible duplications. We have applied our technique to five real-world specifications, achieving promising results and identifying many sources of duplication in the use cases.
IEEE Latin America Transactions | 2015
Claudia Marcos; Alejandro Rago; Jorge Andres Diaz Pace
This work presents a semi-automatic tool for use case refactoring called RE-USE. This tool discovers existing quality problems in use cases and suggests a prioritized set of candidate refactorings to functional analysts. The analyst then reviews the recommendation list and selects the most important refactoring. The tool applies the chosen refactoring and returns an improved specification. The tool’s effectiveness in detecting existing quality problems and recommending proper refactorings was assessed using a set of case studies based on real-world systems, obtaining encouraging results.
IEEE Biennial Congress of Argentina | 2014
Alejandro Rago; Claudia Marcos; J. Andres Diaz-Pace
The inspection of documents written in natural language with computers has become feasible thanks to the advances in Natural Language Processing (NLP) techniques. However, certain applications require a deeper semantic analysis of the text to produce good results. In this article, we present an exploratory study of semantic-aware NLP techniques for discovering latent concerns in use case specifications. For this purpose, we propose two NLP techniques, namely: semantic clustering and semantically-enriched rules. After evaluating these two techniques and comparing them with a technique developed by other researchers, the results showed that semantic NLP techniques hold great potential for detecting candidate concerns. Particularly, if these techniques are properly configured, they can help to reduce the efforts of requirements analysts and promote better quality in software development.
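The semantic-clustering idea can be sketched by grouping use-case steps whose verbs belong to the same synonym set, hinting at a shared latent concern. The hand-made synonym table and the steps below are illustrative only; the study relies on proper semantic resources rather than a fixed table:

```python
# Tiny, hand-made synonym table: verb -> canonical cluster key.
SYNSETS = {
    "store": "persist", "save": "persist", "record": "persist",
    "check": "validate", "verify": "validate",
}

def cluster_steps(steps):
    """Group use-case steps by the synonym set of their leading verb."""
    clusters = {}
    for step in steps:
        verb = step.split()[0].lower()
        key = SYNSETS.get(verb, verb)  # unknown verbs form their own cluster
        clusters.setdefault(key, []).append(step)
    return clusters

steps = ["Save the order", "Verify the card",
         "Store the receipt", "Print invoice"]
print(cluster_steps(steps))
# -> {'persist': ['Save the order', 'Store the receipt'],
#     'validate': ['Verify the card'], 'print': ['Print invoice']}
```

Clusters that gather steps from many different use cases are the ones worth surfacing to the analyst as candidate crosscutting concerns.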
Simposio Argentino de Ingeniería de Software (ASSE 2016) - JAIIO 45 (Tres de Febrero, 2016). | 2016
Alejandro Rago; Facundo Matías Ramos; Juan Ignacio Velez; J. Andrés Díaz Pace; Claudia Marcos