Christian Quesada-López
University of Costa Rica
Publications
Featured research published by Christian Quesada-López.
The Journal of Object Technology | 2016
Christian Quesada-López; Marcelo Jenkins
Background: The complexity of providing accurate functional software size and effort prediction models is well known in the software industry. Function point analysis (FPA) is currently one of the most accepted software functional size metrics in the industry, but it is hard to automate and generally requires a lengthy and costly process. Objectives: This paper reports on a family of replications carried out on a subset of the International Software Benchmarking Standards Group dataset (ISBSG R12) to evaluate the structure and applicability of function points. The goal of this replication is to aggregate evidence about internal issues of FPA as a metric, and to confirm previous results using a different set of data. Methods: A subset of 202 business application projects from 2005 to 2011 was analyzed. FPA counting was analyzed to determine the extent to which the base functional components (BFC) were independent of each other and thus appropriate for an additive model of size. The correlations among effort, BFCs, and unadjusted function points (UFP) were assessed in order to determine whether a simplified sizing metric might be appropriate to simplify effort prediction models. Prediction models were constructed and evaluated in terms of accuracy. Results: The results confirmed that some BFCs of the FPA method are correlated, and that there is a relationship between BFCs and effort. This suggests that prediction models based on transactional functions (TF) or external inputs (EI) appear to be as good as a model based on UFP for this subset of projects. Conclusions: The results suggest a possible improvement in the performance of the measurement process: simplifying the FPA measurement process by counting only a subset of BFCs could save measurement effort while preserving the accuracy of effort estimates.
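A minimal sketch of the kind of comparison described above, assuming a hypothetical CSV export of the ISBSG subset with columns ufp, tf, ei, and effort; the column names, log-log model form, and accuracy measure are illustrative stand-ins, not the paper's exact setup.

```python
# Sketch: compare simple effort models built from UFP, TF, and EI counts.
# 'isbsg_subset.csv' and its columns are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

projects = pd.read_csv("isbsg_subset.csv")

def mmre(actual, predicted):
    """Mean magnitude of relative error, a common (if debated) accuracy measure."""
    return np.mean(np.abs(actual - predicted) / actual)

for predictor in ["ufp", "tf", "ei"]:
    X = np.log(projects[[predictor]])          # log-log linear model of effort vs. size
    y = np.log(projects["effort"])
    pred = cross_val_predict(LinearRegression(), X, y, cv=10)
    print(predictor, "MMRE:", round(mmre(np.exp(y), np.exp(pred)), 2))
```

If the single-predictor models score close to the UFP model under such a comparison, that is the kind of evidence the abstract refers to when it says a subset of BFCs may suffice.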
empirical software engineering and measurement | 2014
Christian Quesada-López; Marcelo Jenkins
Background: The complexity of providing accurate software size estimation and effort prediction models is well known in the software industry, making it one of the most important research issues in empirical software engineering. Function point analysis (FPA) is currently one of the most accepted software functional size metrics in the industry, but it is hard to automate and generally requires a lengthy and costly process. Although accurate size estimation and effort prediction are very important for the success of any project, many practitioners have experienced difficulties in applying them. Objectives: This paper reports on a replicated study carried out on a subset of the ISBSG dataset to evaluate the structure and applicability of function points. The goal of this replication was to aggregate evidence and confirm results reported about internal issues of FPA as a metric using a different set of data. First, we examined FPA counting to determine which base functional components (BFC) were independent of each other and thus appropriate for an additive model of size. Second, we investigated the relationship between size and effort. Methods: A subset of the ISBSG dataset was used, consisting of 14 business application projects developed in C# from 2008 to 2011. We studied BFC independence and the correlation between size, effort, and productivity. The independence of the FPA base functional components was checked with the Pearson and Kendall's Tau correlation coefficients. In addition, we studied the correlation between size and effort. Results: The replication aggregated evidence and confirmed that some BFCs of the FPA method are correlated, and that there is a relationship between BFCs, unadjusted function points, and effort. Limitations: This is an initial experiment of research in progress, performed on a small subset of 14 recent projects taken from the ISBSG dataset. Conclusions: Simplifying and automating an FPA measurement process based on counting BFCs could encourage the adoption of FSM methods. Further research is needed.
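A minimal sketch of the BFC independence check mentioned above, using SciPy's Pearson and Kendall's Tau implementations; the file name and the column names (ei, eo, eq, ilf, eif) are hypothetical stand-ins for the five FPA base functional components.

```python
# Sketch: pairwise Pearson and Kendall's Tau correlations between FPA BFCs.
# 'isbsg_csharp_subset.csv' and its column names are illustrative placeholders.
from itertools import combinations
import pandas as pd
from scipy.stats import pearsonr, kendalltau

bfc = pd.read_csv("isbsg_csharp_subset.csv")[["ei", "eo", "eq", "ilf", "eif"]]

for a, b in combinations(bfc.columns, 2):
    r, p_r = pearsonr(bfc[a], bfc[b])
    tau, p_tau = kendalltau(bfc[a], bfc[b])
    print(f"{a} vs {b}: Pearson r={r:.2f} (p={p_r:.3f}), "
          f"Kendall tau={tau:.2f} (p={p_tau:.3f})")
```

Strong pairwise correlations between components would argue against treating them as independent terms in an additive size model, which is the property the replication set out to test.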
2016 IEEE 36th Central American and Panama Convention (CONCAPAN XXXVI) | 2016
Christian Quesada-López; Juan Murillo-Morera; Carlos Castro-Herrera; Marcelo Jenkins
Software effort estimation models have been an area of considerable research for many years, and effort estimation is still a challenge for software engineering. Although Functional Size Measurement (FSM) methods have become widely used, effort estimation based on functional size still needs further research, and unbiased, comprehensive comparisons between prediction models are needed. Some studies suggest that exploiting the relationship between effort and the base functional components of an FSM method could improve estimation models. This paper evaluates the structure of the COSMIC FFP base functional components and their applicability in functional-size-based effort estimation models. Our study reports a benchmarking experiment evaluating 600 learning schemes on 12 ISBSG R12 sub-datasets of business application projects sized with the COSMIC FSM method. In total, 7,200 runs were conducted (learning schemes × datasets) and the best learning schemes were reported per dataset. Lessons learned from conducting the experiment are discussed.
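To make the notion of a "learning scheme" grid concrete, here is a small sketch that enumerates preprocessor × attribute-selector × regressor combinations and cross-validates each one; the specific components and scoring below are illustrative choices, not the study's actual 600 schemes.

```python
# Sketch: enumerate and evaluate learning schemes (preprocessor x selector x model).
# The concrete components below are illustrative, not the study's exact search space.
from itertools import product
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

preprocessors = {"standardize": StandardScaler(), "minmax": MinMaxScaler()}
selectors = {"top3": SelectKBest(f_regression, k=3),
             "top5": SelectKBest(f_regression, k=5)}
models = {"lr": LinearRegression(), "knn": KNeighborsRegressor()}

def evaluate_schemes(X, y):
    """Cross-validate every scheme; X needs at least 5 features for the k=5 selector."""
    results = {}
    for (pn, p), (sn, s), (mn, m) in product(preprocessors.items(),
                                             selectors.items(), models.items()):
        pipe = Pipeline([("prep", p), ("select", s), ("model", m)])
        results[(pn, sn, mn)] = cross_val_score(pipe, X, y, cv=10, scoring="r2").mean()
    best = max(results, key=results.get)
    return best, results
```

Running such a grid once per sub-dataset is what produces the "learning schemes × datasets" run count reported in the abstract.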
product focused software process improvement | 2015
Christian Quesada-López; Marcelo Jenkins
This paper presents a verification protocol for analyzing the sources of inaccuracy in the measurement activities of Function Point Analysis (FPA) and Automated Function Points (AFP). An empirical study was conducted with the protocol to determine the accuracy of FPA and AFP and the common differences that arise during their application, and differences between the measurement processes regarding accuracy, reproducibility, and protocol adoption properties were reported. The study provided evidence of the effectiveness of the verification protocol for evaluating functional size measurement procedures: applying the protocol enabled participants to identify differences between counting results, and their causes, in a systematic way. Many participants had a favorable opinion of the usefulness of the protocol, and most of them agreed that applying it improved their understanding of the measurement methods.
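A minimal sketch of the kind of accuracy and reproducibility figures a verification exercise like this could report, assuming hypothetical per-participant counts compared against a reference count; the numbers and metrics are illustrative, not the protocol's actual definitions.

```python
# Sketch: relative error against a reference count and spread between counters.
# All values below are made-up illustrative numbers, not data from the study.
import statistics

reference_ufp = 120                      # reference (expert) count for one case
participant_ufp = [112, 126, 118, 131]   # hypothetical counts by four participants

errors = [abs(c - reference_ufp) / reference_ufp for c in participant_ufp]
print("Mean relative error:", round(statistics.mean(errors), 3))
print("Reproducibility (coefficient of variation):",
      round(statistics.stdev(participant_ufp) / statistics.mean(participant_ufp), 3))
```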
Archive | 2019
Leonardo Villalobos-Arias; Christian Quesada-López; Alexandra Martinez; Marcelo Jenkins
Model-based testing is a process that can reduce the cost of software testing by automating the design and generation of test cases but it usually involves some time-consuming manual steps. Current model-based testing tools automate the generation of test cases, but offer limited support for the model creation and test execution stages. In this paper we present MBT4J, a platform that automates most of the model-based testing process for Java applications, by integrating several existing tools and techniques. It automates the model building, test case generation, and test execution stages of the process. First, a model is extracted from the source code, then an adapter—between this model and the software under test—is generated and finally, test cases are generated and executed. We performed an evaluation of our platform with 12 configurations using an existing Java application from a public repository. Empirical results show that MBT4J is able to generate up to 2,438 test cases, detect up to 289 defects, and achieve a code coverage ranging between 72% and 84%. In the future, we plan to expand our evaluation to include more software applications and perform error seeding in order to be able to analyze the false positive and negative rates of our platform. Improving the automation of oracles is another vein for future research.
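MBT4J itself targets Java applications, but the core idea of the pipeline (behavioral model, adapter to the system under test, generated test steps checked against the model) can be illustrated with a small generic sketch. Everything below, including the stack model and adapter, is a simplified illustration and not MBT4J's API.

```python
# Sketch: random-walk test generation over a tiny behavioral model of a stack,
# executed against the system under test through an adapter. Illustrative only.
import random

class StackModel:
    """Abstract model: tracks only the expected size of the stack."""
    def __init__(self):
        self.size = 0
    def actions(self):
        return ["push"] if self.size == 0 else ["push", "pop"]
    def apply(self, action):
        self.size += 1 if action == "push" else -1

class StackAdapter:
    """Adapter between the model's abstract actions and the system under test."""
    def __init__(self):
        self.sut = []                       # the 'real' implementation
    def execute(self, action):
        if action == "push":
            self.sut.append(object())
        else:
            self.sut.pop()
        return len(self.sut)                # observation used by the oracle

def run_tests(steps=100, seed=0):
    random.seed(seed)
    model, adapter = StackModel(), StackAdapter()
    for _ in range(steps):
        action = random.choice(model.actions())   # generate the next test step
        model.apply(action)
        observed = adapter.execute(action)
        assert observed == model.size, f"divergence after {action}"
    print("No divergence in", steps, "steps")

run_tests()
```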
Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement on | 2017
Christian Quesada-López; Marcelo Jenkins; Luis Carlos Salas; Juan Carlos Gómez
Automatically calculating functional size for specific development frameworks is a challenge for the software industry. Automating function point counting offers benefits such as savings in time and cost and better reliability and accuracy of the measures, but the counting process is difficult to automate. This paper presents an automated functional size measurement procedure that has been systematically designed to obtain the functional size of software systems modeled in the FastWorks development framework. In this study, we describe the framework architecture, the procedure design based on IFPUG FPA, and the measurement prototype tool. Finally, the approach is preliminarily validated, the results are presented, and lessons learned are discussed.
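For reference, the standard IFPUG unadjusted function point calculation that any automated procedure of this kind ultimately has to compute from the counted components looks like the sketch below; the example counts are hypothetical, not figures from the FastWorks validation.

```python
# Standard IFPUG complexity weights per base functional component.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_fp(counts):
    """counts: {component: {complexity: number_of_occurrences}} -> UFP total."""
    return sum(WEIGHTS[comp][cx] * n
               for comp, by_cx in counts.items()
               for cx, n in by_cx.items())

# Hypothetical counts extracted from a modeled application (illustrative only).
example = {"EI": {"low": 5, "average": 3}, "EO": {"average": 2},
           "EQ": {"low": 4}, "ILF": {"average": 2}, "EIF": {"low": 1}}
print(unadjusted_fp(example))  # 15 + 12 + 10 + 12 + 20 + 5 = 74 UFP
```

The automation problem is therefore less about this arithmetic than about reliably identifying and classifying the EI/EO/EQ/ILF/EIF instances from the framework's artifacts.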
Journal of Software Engineering Research and Development | 2017
Juan Murillo-Morera; Christian Quesada-López; Carlos Castro-Herrera; Marcelo Jenkins
Background: Several prediction models have been proposed in the literature using different techniques, obtaining different results in different contexts. The need for accurate effort predictions for projects is one of the most critical and complex issues in the software industry. The automated selection and combination of techniques in alternative ways could improve the overall accuracy of the prediction models. Objectives: In this study, we validate an automated genetic framework and then conduct a sensitivity analysis across different genetic configurations. We then compare the framework with a baseline of random guessing and with an exhaustive framework. Lastly, we investigate the performance results of the best learning schemes. Methods: In total, six hundred learning schemes that combine eight data preprocessors, five attribute selectors, and fifteen modeling techniques represent our search space. The genetic framework, through the elitism technique, selects the best learning schemes automatically. The best learning scheme in this context is the combination of data preprocessing + attribute selection + learning algorithm with the highest correlation coefficient possible. The selected learning schemes are applied to eight datasets extracted from the ISBSG R12 dataset. Results: The genetic framework performs as well as an exhaustive framework. The analysis of the standardized accuracy (SA) measure revealed that all the best learning schemes selected by the genetic framework outperform the baseline random guessing by 45–80%. The sensitivity analysis confirms the stability between different genetic configurations. Conclusions: The genetic framework is stable, performs better than a random guessing approach, and is as good as an exhaustive framework. Our results confirm previous findings in the field: simple regression techniques with transformations can perform as well as nonlinear techniques, and learning machine techniques such as SMO, M5P, or M5R can optimize effort predictions.
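A minimal sketch of the standardized accuracy (SA) measure used for the random-guessing comparison, in the spirit of Shepperd and MacDonell's formulation (SA = (1 - MAR / MAR_p0) * 100); the effort values are placeholders, and the permutation-based estimate of random guessing is an approximation.

```python
# Sketch: standardized accuracy SA = (1 - MAR / MAR_p0) * 100, where MAR_p0 is
# the mean absolute residual of unbiased random guessing over the same data.
import numpy as np

def standardized_accuracy(actual, predicted, runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mar = np.mean(np.abs(actual - predicted))
    # Random guessing: predict each project with the actual effort of another,
    # randomly chosen, project; average over many runs (approximated by permutation).
    guesses = [np.mean(np.abs(actual - rng.permutation(actual))) for _ in range(runs)]
    return (1 - mar / np.mean(guesses)) * 100

# Placeholder efforts (person-hours), not values from the study.
print(round(standardized_accuracy([100, 250, 400, 800], [120, 230, 380, 760]), 1))
```

An SA of, say, 60 means the model's residuals are 60% smaller than those of random guessing, which is how the 45–80% improvements in the abstract should be read.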
2016 IEEE 36th Central American and Panama Convention (CONCAPAN XXXVI) | 2016
Christian Quesada-López; Melissa Jensen; Giselle Zuniga; Anne Chinnock; Marcelo Jenkins
Although mobile technologies are being used increasingly, their potential has not been fully exploited in health research. Application (app) stores offer thousands of health-related apps, but in general what is publicly available has not been fully evaluated by experts. This paper presents a case study in which Human-Computer Interaction techniques and agile methodologies were applied in the design, development, and validation of a health system in an interdisciplinary project. The system consists of a mobile software platform that includes a nutrition mobile application for dietary self-monitoring based on behavioral change techniques. The essential background on behavioral change is provided. The application was designed using contextual design and other user-centered design methodologies. The system architecture and the main features of the end-user mobile application are shown. The prototype was evaluated by nutrition domain experts, and the preliminary results suggest that the application can improve the nutrition care process by facilitating more effective communication between nutritionists and patients.
2016 IEEE 36th Central American and Panama Convention (CONCAPAN XXXVI) | 2016
Juan Murillo-Morera; Christian Quesada-López; Carlos Castro-Herrera; Marcelo Jenkins
In software engineering, software quality is an important research area. Automated generation of learning schemes plays an important role and represents an efficient way to detect defects in software projects, thus avoiding high costs and long delivery times. This study carries out an empirical evaluation to validate two versions of the NASA-MDP data sets with different levels of noise. The main objective of this paper is to determine the stability of our framework. In all, 864 learning schemes were studied (8 data preprocessors × 6 attribute selectors × 18 learning algorithms). According to the statistical tests, our framework produced stable results across the analyzed versions: the evaluation and prediction phases performed similarly, and their performance was stable across data set versions. This means that the differences between versions did not affect the performance of our framework.
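A minimal sketch of the kind of paired statistical comparison that can back a stability claim between data set versions; the performance values are placeholders, not results from the study, and the Wilcoxon signed-rank test is one reasonable choice among several.

```python
# Sketch: paired comparison of scheme performance between two data set versions.
# The AUC values below are placeholders, not results from the study.
from scipy.stats import wilcoxon

auc_version_a = [0.71, 0.68, 0.74, 0.66, 0.70, 0.73]    # same schemes, version A
auc_version_b = [0.72, 0.66, 0.745, 0.65, 0.715, 0.70]  # same schemes, version B

stat, p = wilcoxon(auc_version_a, auc_version_b)
print(f"Wilcoxon statistic={stat}, p={p:.3f}")
if p > 0.05:
    print("No significant difference: performance looks stable across versions.")
```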
ACM Sigsoft Software Engineering Notes | 2015
Davide Falessi; Zadia Codabux; Guoping Rong; Ioannis Stamelos; Waldemar Ferreira; Igor Scaliante Wiese; Emanoel Barreiros; Christian Quesada-López; Periklis Tsirakidis
The 12th Doctoral Symposium on Empirical Software Engineering (IDOESE) was organized as a full-day event prior to the ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) program. Seven PhD candidates came from different research institutes across the globe to present their research proposals at the symposium. The symposium was run in a lively and interactive manner, and the candidates received constructive feedback on their proposals from all the symposium participants. In this report, we describe the presented proposals, focusing on their content and the feedback received. Through them, we can take a peek at the trends and emerging areas of empirical software engineering research.