Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nelly Condori-Fernandez is active.

Publication


Featured research published by Nelly Condori-Fernandez.


Empirical Software Engineering and Measurement | 2009

A systematic mapping study on empirical evaluation of software requirements specifications techniques

Nelly Condori-Fernandez; Maya Daneva; Klaas Sikkel; Roel Wieringa; Oscar Dieste; Oscar Pastor

This paper describes an empirical mapping study, which was designed to identify what aspects of Software Requirement Specifications (SRS) are empirically evaluated, in which context, and by using which research method. On the basis of 46 identified and categorized primary studies, we found that understandability is the most commonly evaluated aspect of SRS, experiments are the most commonly used research method, and the academic environment is where most empirical evaluation takes place.


IEEE International Software Metrics Symposium | 2003

Defining and validating metrics for navigational models

Silvia Abrahão; Nelly Condori-Fernandez; Luis Olsina; Oscar Pastor

Nowadays, several approaches for developing Web applications have been proposed in the literature. Most of them extend existing object-oriented conceptual modeling methods, incorporating new constructors to model the navigational structure and content of Web applications. Such new constructors are commonly represented in a navigational model. Since navigational models constitute the backbone of Web application design, their quality has a great impact on the quality of the final product that is actually implemented and delivered. We discuss a set of metrics for navigational models that has been proposed for analyzing the quality of Web applications in terms of size and structural complexity. These metrics were defined and validated using a formal framework (DISTANCE) for software measure construction that satisfies the measurement needs of empirical software engineering research. Experimental studies have shown that complexity affects the ability to understand and maintain conceptual models. To investigate this, we also conducted a controlled experiment to observe how the proposed metrics can be used as early maintainability indicators.


Empirical Software Engineering and Measurement | 2010

Usability evaluation of multi-device/platform user interfaces generated by model-driven engineering

Nathalie Aquino; Jean Vanderdonckt; Nelly Condori-Fernandez; Oscar Dieste; Oscar Pastor

Nowadays, several Computer-Aided Software Engineering environments exploit Model-Driven Engineering (MDE) techniques to generate a single user interface for a given computing platform, or multi-platform user interfaces for several computing platforms simultaneously. There is therefore a need to assess the usability of those generated user interfaces, either in isolation or compared to each other. This paper describes an MDE approach that generates multi-platform graphical user interfaces (e.g., desktop, web) that are the subject of an exploratory controlled experiment. The usability of user interfaces generated for the two platforms and used on multiple display devices (i.e., standard-size, large, and small screens) was examined in terms of satisfaction, effectiveness, and efficiency. An experiment with a factorial design for repeated measures was conducted with 31 participants, i.e., postgraduate students and professors selected by convenience sampling. The data were collected with questionnaires and forms and were analyzed using parametric and non-parametric tests, namely ANOVA with repeated measures and Friedman's test. Efficiency was significantly better on large screens than on small ones, and on the desktop platform rather than the web platform, at a 95% confidence level. The experiment also suggests that satisfaction tends to be better on standard-size screens than on small ones. The results suggest that the tested MDE approach should incorporate enhancements in its multi-device/platform user interface generation process in order to improve the usability of the interfaces it generates.


Requirements Engineering | 2009

Evaluating the Completeness and Granularity of Functional Requirements Specifications: A Controlled Experiment

Sergio España; Nelly Condori-Fernandez; Arturo González; Oscar Pastor

Requirements Engineering (RE) is a relatively young discipline, yet many advances have been achieved during the last decades. In particular, numerous RE methods have been proposed. However, there is a growing concern for empirical validations that assess RE proposals and statements. This paper addresses the evaluation of the quality of functional requirements specifications, focusing on completeness and granularity. To do this, several concepts related to conceptual model quality are presented; these concepts lead to the definition of metrics for measuring certain aspects of requirements model quality (e.g. the degree of completeness of functional encapsulations with respect to a reference model, and the number of functional fragmentation errors). A laboratory experiment with master students was carried out to compare, using the proposed metrics, two RE approaches: Use Cases and Communication Analysis. Results indicate greater quality (in terms of completeness and granularity) when Communication Analysis guidelines are followed. Moreover, interesting issues arise from the experimental results, which invite further research.


Journal of Computer Science and Technology | 2007

On the estimation of the functional size of software from requirements specifications

Nelly Condori-Fernandez; Silvia Abrahão; Oscar Pastor

This paper introduces a measurement procedure, called RmFFP, which describes a set of operations for modelling and estimating the size of object-oriented software systems from high-level specifications using the OO-Method Requirement Model. OO-Method is an automatic software production method. The contribution of this work is to systematically define a set of rules that allows estimating the functional size at an early stage of the software production process, in accordance with COSMIC-FFP. To do this, we describe the design, the application, and the analysis of the proposed measurement procedure following the steps of a process model for software measurement. We also report initial results on the evaluation of RmFFP in terms of its reproducibility.
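COSMIC-FFP, which RmFFP conforms to, sizes a functional process by counting its data movements, one COSMIC Function Point (CFP) per Entry (E), Exit (X), Read (R), or Write (W). The counting step can be sketched as follows; the process names and movement lists are hypothetical, and this is not the RmFFP procedure itself:

```python
# COSMIC-FFP assigns 1 CFP to each data movement of a functional process:
# Entry (E), Exit (X), Read (R), Write (W).
MOVEMENT_TYPES = {"E", "X", "R", "W"}

def functional_size(processes):
    """Total functional size in CFP: one point per data movement."""
    total = 0
    for name, movements in processes.items():
        invalid = set(movements) - MOVEMENT_TYPES
        if invalid:
            raise ValueError(f"{name}: unknown movement(s) {invalid}")
        total += len(movements)
    return total

# Hypothetical functional processes from a requirements specification.
spec = {
    "create_customer": ["E", "W", "X"],  # receive input, store, confirm
    "list_orders": ["E", "R", "X"],      # receive query, read, output
}
print(functional_size(spec))  # prints: 6
```

RmFFP's contribution is the set of rules that identifies such processes and movements systematically in the OO-Method Requirement Model; the summation itself is this simple.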


Journal of the Brazilian Computer Society | 2010

An empirical comparative evaluation of requirements engineering methods

Sergio España; Nelly Condori-Fernandez; Arturo González; Oscar Pastor

Requirements Engineering (RE) is a relatively young discipline, yet many advances have been achieved during the last decades. In particular, numerous RE approaches are proposed in the literature with the aim of understanding a certain problem (e.g. information systems development) and establishing a knowledge base shared between domain experts and developers (i.e. a requirements specification). However, there is a growing concern for empirical validations that assess RE proposals and statements. This paper addresses the assessment of the quality of functional requirements specifications, using the Method Evaluation Model (MEM) as a theoretical framework. The MEM distinguishes the actual efficacy from the perceived efficacy of a method. To assess the actual efficacy of RE methods, the conceptual model quality framework by Lindland et al. can be applied; in this paper, we focus on the completeness and granularity of requirements models and extend this framework by defining four new metrics (e.g. the degree of completeness of functional encapsulations with respect to a reference model, and the number of functional fragmentation errors). To assess the perceived efficacy, conventional questionnaires can be used. A laboratory experiment with master students was carried out to compare, using the proposed metrics, two RE methods: Use Cases and Communication Analysis. With respect to actual efficacy, results indicate greater model quality (in terms of completeness and granularity) when Communication Analysis guidelines are followed. With respect to perceived efficacy, we found that Use Cases was perceived to be slightly easier to use than Communication Analysis; however, Communication Analysis was perceived to be more useful for determining the proper granularity of business processes. The paper discusses these results and highlights some key issues for future research in this area.


International Journal of Information System Modeling and Design | 2015

TESTAR: Tool Support for Test Automation at the User Interface Level

Tanja E. J. Vos; Peter M. Kruse; Nelly Condori-Fernandez; Sebastian Bauersfeld; Joachim Wegener

Testing applications with a graphical user interface (GUI) is an important, though challenging and time-consuming, task. The state of the art in industry is still capture-and-replay tools, which may simplify the recording and execution of input sequences but do not support the tester in finding fault-sensitive test cases and lead to a huge maintenance overhead when the GUI changes. In earlier work the authors presented the TESTAR tool, an automated approach to testing applications at the GUI level whose objective is to solve part of the maintenance problem by automatically generating test cases based on a structure that is automatically derived from the GUI. In this paper they report on their experiences when transferring TESTAR into three different industrial contexts, with decreasing involvement of the TESTAR developers and increasing participation of the companies in deploying and using TESTAR during testing. The studies were successful in that they achieved practical and research impact, gave insight into ways of doing innovation transfer, and defined a possible strategy for bringing automated testing tools to market.


International Conference on Quality Software | 2004

Towards a functional size measure for object-oriented systems from requirements specifications

Nelly Condori-Fernandez; Silvia Abrahão; Oscar Pastor

This work describes a measurement protocol to map the concepts used in the OO-method requirements model onto the concepts used by the COSMIC full function points (COSMIC-FFP) functional size measurement method. This protocol describes a set of measurement operations for modeling and sizing object-oriented software systems from requirements specifications obtained in the context of the OO-method. This development method starts from a requirements model that allows the specification of software functional requirements and generates a conceptual model through a requirements analysis process. The main contribution of this work is an extended set of rules that allows estimating the functional size of OO systems at an early stage of the development lifecycle. A case study is introduced to report the obtained results from a practical point of view.


Empirical Software Engineering and Measurement | 2008

Understandability measurement in an early usability evaluation for model-driven development: an empirical study

Jose Ignacio Panach; Nelly Condori-Fernandez; Francisco Valverde; Nathalie Aquino; Oscar Pastor

Traditionally, usability has been evaluated by taking into account the user's satisfaction when interacting with the software system. However, in a Model-Driven Development (MDD) process, where conceptual models are the main resource for software system generation, usability can potentially be evaluated at earlier stages. This work goes one step further, proposing that certain usability attributes, specifically internal understandability attributes, can be measured from conceptual models. This work presents an empirical study carried out to evaluate the proposal. The goal of this study is to evaluate whether the value measured using our proposal is related to the understandability value perceived by the end user. From the analysis of the empirical results obtained, several weaknesses of the proposal are identified.


IEEE 1st International Workshop on Requirements Engineering and Testing (RET) | 2014

Towards the automated generation of abstract test cases from requirements models

Maria Fernanda Granda; Nelly Condori-Fernandez; Tanja E. J. Vos; Oscar Pastor

In a testing process, the design, selection, creation, and execution of test cases is a very time-consuming and error-prone task when done manually, since suitable and effective test cases must be obtained from the requirements. This paper presents a model-driven testing approach for conceptual schemas that automatically generates a set of abstract test cases from requirements models. In this way, tests and requirements are linked together to find defects as soon as possible, which can considerably reduce the risk of defects and project rework. The authors propose a generation strategy consisting of two meta-models, a set of transformation rules used to generate a Test Model and the Abstract Test Cases from an existing communication-oriented Requirements Engineering approach, and an algorithm based on Breadth-First Search. A practical application of the approach is included.
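The Breadth-First Search step can be sketched as a traversal of the requirements model that enumerates root-to-node paths, each path being a candidate abstract test case. This is a minimal generic sketch, not the paper's actual algorithm, and the graph's node names are hypothetical:

```python
from collections import deque

def bfs_paths(graph, start):
    """Enumerate root-to-node paths of `graph` in breadth-first order.

    `graph` maps each requirements-model node to its successors.
    Each returned path is a candidate abstract test case (hypothetical
    illustration of BFS-based test-path derivation).
    """
    paths = []
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()       # shortest unexpanded path first
        paths.append(path)
        for nxt in graph[path[-1]]:
            if nxt not in visited:   # expand each node only once
                visited.add(nxt)
                queue.append(path + [nxt])
    return paths

# Toy requirements-interaction graph (hypothetical node names).
model = {
    "start": ["place_order", "cancel_order"],
    "place_order": ["confirm"],
    "cancel_order": [],
    "confirm": [],
}
print(bfs_paths(model, "start"))
```

Shorter interaction sequences are emitted before longer ones, so defects reachable in few steps surface early, which matches the paper's goal of finding defects as soon as possible.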

Collaboration


Dive into Nelly Condori-Fernandez's collaboration.

Top Co-Authors

Oscar Pastor (Polytechnic University of Valencia)
Tanja E. J. Vos (Polytechnic University of Valencia)
Silvia Abrahão (Polytechnic University of Valencia)
Alain Abran (École de technologie supérieure)