
Publication


Featured research published by José Javier Dolado.


Information & Software Technology | 2001

On the problem of the software cost function

José Javier Dolado

The question of finding a function for software cost estimation is a long-standing issue in software engineering. Previous work has shown different patterns for the unknown function relating software size to project cost (effort). In this work, the problem is investigated using the technique of Genetic Programming (GP) to explore the space of possible cost functions. Both standard regression analysis and GP are applied and compared on several data sets. Regardless of the method, however, the basic size–effort relationship does not show satisfactory results, from the predictive point of view, across all data sets. One result of this work is that no significant deviations from the linear model were found in the software cost functions; this follows from the marginal cost analysis of the equations with the best predictive values.
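The size–effort question can be illustrated with a much simpler sketch than the paper's GP setup: fitting both a linear and a power-law model to a handful of size/effort pairs and checking how far the fitted exponent strays from 1. The data points and the pure-Python OLS helper below are invented for illustration, not taken from the study:

```python
import math

# Hypothetical size (KLOC) / effort (person-months) pairs, for illustration only.
data = [(10, 24), (25, 62), (40, 95), (60, 150), (90, 210)]

def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

sizes  = [d[0] for d in data]
effort = [d[1] for d in data]

# Linear model: effort = a + b*size
a_lin, b_lin = fit_ols(sizes, effort)

# Power model effort = c*size^k, fitted as a log-log linear regression
a_log, k = fit_ols([math.log(s) for s in sizes],
                   [math.log(e) for e in effort])
c = math.exp(a_log)

print(f"linear: effort ~ {a_lin:.1f} + {b_lin:.2f}*size")
print(f"power : effort ~ {c:.2f}*size^{k:.2f}")
# An exponent k close to 1 means the power model barely deviates from linearity.
```

With these toy numbers the fitted exponent comes out close to 1, mirroring the paper's observation that the best cost functions show no significant deviation from the linear model.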


Computers & Operations Research | 2008

A tabu search algorithm for structural software testing

Eugenia Díaz; Javier Tuya; Raquel Blanco; José Javier Dolado

This paper presents a tabu search metaheuristic algorithm for the automatic generation of structural software tests. The work is novel in that tabu search is applied to the automation of the test generation task, whereas previous work has used other techniques such as genetic algorithms. The developed test generator has one cost function for intensifying the search and another for diversifying it, used when intensification is not successful. It also combines the use of memory with a backtracking process to avoid getting stuck in local minima. The generator was evaluated using complex programs under test and large ranges for the input variables. Results show that it is both effective and efficient.
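As a rough illustration of the intensify/diversify idea, and not the authors' generator, the sketch below searches for an input that covers a hypothetical branch `x == 4042`. It uses a branch-distance cost, a tabu memory of visited inputs, small-step intensification, and random-restart diversification when the search stalls; every name, predicate and parameter here is invented:

```python
import random
from collections import deque

def branch_distance(x):
    """Cost: how far input x is from taking the (hypothetical) target branch x == 4042."""
    return abs(x - 4042)

def tabu_search_input(max_iters=10000, tabu_size=50, stall_limit=100):
    random.seed(0)  # deterministic for the sake of the example
    current = random.randint(-100000, 100000)
    best, best_cost = current, branch_distance(current)
    tabu = deque(maxlen=tabu_size)  # short-term memory of visited inputs
    stall = 0
    for _ in range(max_iters):
        if best_cost == 0:          # branch covered
            break
        # Intensification: neighbours at several step sizes, skipping tabu inputs
        steps = (-1000, -100, -10, -1, 1, 10, 100, 1000)
        neighbours = [current + d for d in steps if current + d not in tabu]
        if not neighbours or stall > stall_limit:
            # Diversification: jump to a random region of the input space
            current = random.randint(-100000, 100000)
            stall = 0
            continue
        current = min(neighbours, key=branch_distance)  # greedy best neighbour
        tabu.append(current)
        cost = branch_distance(current)
        if cost < best_cost:
            best, best_cost = current, cost
            stall = 0
        else:
            stall += 1
    return best, best_cost

best, cost = tabu_search_input()
print(best, cost)
```

The tabu memory keeps the greedy neighbourhood move from revisiting recent inputs, while the restart plays the role of the diversifying cost function described in the abstract.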


international conference on software maintenance | 2002

A post-placement side-effect removal algorithm

Mark Harman; Lin Hu; Robert M. Hierons; Malcolm Munro; Xingyuan Zhang; José Javier Dolado; Mari Carmen Otero; Joachim Wegener

Side-effects are widely believed to impede program comprehension and have a detrimental effect upon software maintenance. This paper introduces an algorithm for side-effect removal which splits the side-effects into their pure expression meaning and their state-changing meaning. Symbolic execution is used to determine the expression meaning, while transformation is used to place the state-changing part in a suitable location in a transformed version of the program. This creates a program which is semantically equivalent to the original but guaranteed to be free from side-effects. The paper also reports the results of an empirical study which demonstrates that the application of the algorithm causes a significant improvement in program comprehension.


Information & Software Technology | 2004

Evaluation of the comprehension of the dynamic modeling in UML

Mari Carmen Otero; José Javier Dolado

There is a certain degree of difficulty in developing and understanding the diagrams used for representing the dynamic behavior of a software application specified in the Unified Modeling Language (UML). In this paper we evaluate the comprehension of dynamic modeling in UML designs by means of two split-plot factorial experiments. The metrics used for assessing the results are the time spent and the scores obtained in answering a questionnaire. In the first study, three factors were controlled: the diagram type (sequence, collaboration and state), the application domain of the UML designs, and the order of presentation of the documents. We observe that state diagrams provide higher semantic comprehension of dynamic modeling in UML when the domain is real-time, and that sequence diagrams are better in the case of a management information application. In the second study, two factors were controlled: the paired combination of dynamic diagrams and the application domain. The main conclusion of the second study is that, regardless of the domain, a higher semantic comprehension of the UML designs is achieved when the dynamic behavior is modeled using the Sequence–State pair. Combining the results of both experiments, we obtain the conditions that must be met for an effective comprehension of UML dynamic models: (a) for a management information application, the diagrams are sequence, or the combination Sequence–State or Collaboration–State; (b) for a real-time non-reactive system, the diagrams are collaboration, or the pair Collaboration–State or Sequence–State; and (c) for the design of a real-time reactive system, the best diagram is the state diagram.


IEEE Transactions on Software Engineering | 2003

An empirical investigation of the influence of a type of side effects on program comprehension

José Javier Dolado; Mark Harman; Mari Carmen Otero; Lin Hu

This paper reports the results of a study on the impact of a type of side effect (SE) upon program comprehension. We applied a crossover design on different tests involving fragments of C code that include increment and decrement operators. Each test had an SE version and a side-effect-free counterpart. The variables measured in the treatments were the number of correct answers and the time spent in answering. The results show that the side-effect operators considered significantly reduce performance in comprehension-related tasks, providing empirical justification for the belief that side effects are harmful.


evaluation and assessment in software engineering | 2014

Preliminary comparison of techniques for dealing with imbalance in software defect prediction

Daniel Rodríguez; Israel Herraiz; Rachel Harrison; José Javier Dolado; José C. Riquelme

Imbalanced data is a common problem in data mining when dealing with classification problems where samples of one class vastly outnumber the others. In this situation, many data mining algorithms generate poor models, as they try to optimize overall accuracy and perform badly on classes with very few samples. Software engineering data in general, and defect prediction datasets in particular, are no exception. In this paper we compare different approaches to the defect prediction problem, namely sampling, cost-sensitive, ensemble and hybrid approaches, on datasets preprocessed in different ways. We used the well-known NASA datasets curated by Shepperd et al. The results differ depending on the characteristics of the dataset and the evaluation metrics, especially when duplicates and inconsistencies are removed as a preprocessing step. Further results and a replication package: http://www.cc.uah.es/drg/ease14
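One of the simplest techniques in the sampling family mentioned above is random oversampling, sketched here with the standard library only; the tiny "clean"/"defect" dataset is hypothetical and stands in for a defect prediction dataset:

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=42):
    """Duplicate minority-class samples at random until all classes are balanced."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())        # size of the majority class
    out_s, out_l = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        for _ in range(target - n):      # top up each minority class
            out_s.append(rng.choice(pool))
            out_l.append(cls)
    return out_s, out_l

# Hypothetical defect dataset: 10 non-defective modules, 2 defective ones.
X = [[i] for i in range(12)]
y = ["clean"] * 10 + ["defect"] * 2
Xb, yb = random_oversample(X, y)
print(Counter(yb))   # both classes now have 10 samples
```

Cost-sensitive, ensemble and hybrid approaches attack the same problem differently, by reweighting misclassification costs or combining many resampled models rather than duplicating instances.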


Empirical Software Engineering | 2002

An Initial Experimental Assessment of the Dynamic Modelling in UML

Mari Carmen Otero; José Javier Dolado

The goal of this empirical study is to compare the semantic comprehension of three different notations for representing dynamic behaviour in the Unified Modelling Language (UML): (a) sequence diagrams, (b) collaboration diagrams, and (c) state diagrams. Eighteen Informatics students analysed the three types of diagrams within three different application domains. We performed a 3 × 3 factorial experimental design with repeated measures. The metrics collected were total time and total score. The main conclusion of this study is that the comprehension of dynamic modelling in object-oriented designs depends on the diagram type and on the complexity of the document. A software design written in UML notation is more comprehensible when the dynamic behaviour is modelled in a sequence diagram, whereas a design using a collaboration diagram becomes less comprehensible as the application domain, and consequently the document, grows more complex.


Journal of Systems and Software | 1997

A study of the relationships among Albrecht and Mark II function points, lines of code 4GL and effort

José Javier Dolado

There is strong interest in finding metrics to replace the common LOC measure of software size, with most of the interest focusing on Function Point measures. Mark II Function Points were proposed as a better technique than the original Albrecht Function Points. In this work, the results of a study comparing those measures are presented, and both are also compared against effort and LOC. Since other published results are based on randomly generated samples, it is interesting to see both methods applied to the same projects. The data collected comes from measurements of academic projects. The fact that all projects were developed in the same environment (mostly 4GL) and domain (accounting information systems) allows the "technical complexity adjustment" variable to be held constant, and also allows us to examine the relationships among the variables. Several conclusions are reported.


Journal of Systems and Software | 2005

An empirical comparison of the dynamic modeling in OML and UML

Mari Carmen Otero; José Javier Dolado

This paper presents empirical research evaluating the semantic comprehension of two standard languages, UML (Unified Modeling Language) and OML (OPEN Modeling Language), from the perspective of dynamic modeling. We carried out two controlled experiments using a 2×2 crossover design, where the metrics studied were comprehension time and total score. We examined the OML and UML interaction diagrams and the statecharts of each language corresponding to the design of a real-time embedded system. The results reveal that the specification of dynamic behavior using OML is faster to comprehend and easier to interpret than using UML, regardless of the dynamic diagram type.


soft computing | 2016

Evaluation of estimation models using the Minimum Interval of Equivalence

José Javier Dolado; Daniel Rodríguez; Mark Harman; William B. Langdon; Federica Sarro

Highlights: definition of a new measure for evaluating estimation models; the measure is based on the concept of Equivalence Hypothesis Testing; application of the measure to estimations by different soft computing methods; construction of probability intervals for each estimation method; genetic programming and linear regression provide the best intervals.

This article proposes a new measure to compare soft computing methods for software estimation. The new measure is based on the concepts of Equivalence Hypothesis Testing (EHT). Using the ideas of EHT, a dimensionless measure is defined using the Minimum Interval of Equivalence and a random estimation. The dimensionless nature of the metric allows methods to be compared independently of the data samples used. The motivation for the proposal comes from the biases that other criteria show when applied to the comparison of software estimation methods. In this work, the level of error for comparing the equivalence of methods is set using EHT. Several soft computing methods are compared, including genetic programming, neural networks, regression and model trees, linear regression (ordinary and least mean squares) and instance-based methods. The experimental work was performed on several publicly available datasets. Given a dataset and an estimation method, the upper point of the Minimum Interval of Equivalence, MIEu, is computed on the confidence intervals of the errors. The new measure, MIEratio, is then calculated as the relative distance of the MIEu to the random estimation. Finally, the data distributions of the MIEratios are analysed by means of probability intervals, showing the viability of this approach. In this experimental work, comparing the values of the intervals shows an advantage for the genetic programming and linear regression methods.
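The article defines MIEu and MIEratio precisely; as a loose sketch only, the snippet below assumes MIEu is the upper bound of a normal-approximation confidence interval on the mean absolute error, and MIEratio the distance from MIEu to a random-estimation error, relative to that error. The error values and the baseline are invented, and the interval construction here is an assumption, not the paper's procedure:

```python
import math
import statistics

def mie_ratio(errors, random_error, confidence_z=1.96):
    """Sketch: MIEu as the upper bound of a 95% normal-approximation
    confidence interval on the mean absolute error; MIEratio as the
    relative distance from MIEu to the random-estimation error."""
    n = len(errors)
    mean = statistics.mean(errors)
    se = statistics.stdev(errors) / math.sqrt(n)   # standard error of the mean
    mie_u = mean + confidence_z * se               # upper point of the interval
    return (random_error - mie_u) / random_error   # dimensionless

abs_errors = [12.0, 8.5, 15.2, 9.8, 11.1, 13.4]   # hypothetical |actual - predicted|
baseline = 40.0                                    # hypothetical random-estimation error
print(f"MIEratio ~ {mie_ratio(abs_errors, baseline):.3f}")
```

A ratio near 1 would mean the method's error interval sits far below the random baseline; a ratio near 0 would mean the method is barely distinguishable from random guessing, which is what makes the measure comparable across datasets.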

Collaboration


Dive into José Javier Dolado's collaborations.

Top Co-Authors

Mari Carmen Otero (University of the Basque Country)
Mark Harman (University College London)
Bryan F. Jones (University of South Wales)
Federica Sarro (University College London)
Lin Hu (King's College London)