Network


Latest external collaborations at country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Daniel Neagu is active.

Publication


Featured research published by Daniel Neagu.


Empirical Software Engineering | 2010

Fuzzy grey relational analysis for software effort estimation

Mohammad Azzeh; Daniel Neagu; Peter I. Cowling

Accurate and credible software effort estimation is a challenge for both academic research and the software industry. Among the many software effort estimation models in existence, Estimation by Analogy (EA) is still one of the techniques preferred by software engineering practitioners because it mimics the human approach to problem solving. The accuracy of such a model depends on the characteristics of the dataset, which is subject to considerable uncertainty. The inherent uncertainty in software attribute measurement has a significant impact on estimation accuracy because these attributes are measured based on human judgment and are often vague and imprecise. To overcome this challenge we propose a new formal EA model based on the integration of Fuzzy set theory with Grey Relational Analysis (GRA). Fuzzy set theory is employed to reduce uncertainty in the distance measure between two tuples at the kth continuous feature, |x_o(k) - x_i(k)|. GRA is a problem-solving method used to assess the similarity between two tuples with M features. Since not all of these features are necessarily continuous, and some may have nominal or ordinal scale types, aggregating different forms of similarity measures would increase uncertainty in the similarity degree. Thus GRA is mainly used to reduce uncertainty in the distance measure between two software projects for both continuous and categorical features. Both techniques are suitable when the relationship between effort and other effort drivers is complex. Experimental results showed that the integration of GRA with Fuzzy logic produced credible estimates when compared with the results obtained using Case-Based Reasoning, Multiple Linear Regression and Artificial Neural Networks.


Journal of Systems and Software | 2011

Analogy-based software effort estimation using Fuzzy numbers

Mohammad Azzeh; Daniel Neagu; Peter I. Cowling

Background: Early stage software effort estimation is a crucial task for project bidding and feasibility studies. Since the data collected during the early stages of a software development lifecycle is always imprecise and uncertain, it is very hard to deliver accurate estimates. Analogy-based estimation, one of the popular estimation methods, is rarely used during the early stage of a project because of the uncertainty associated with attribute measurement and data availability. Aims: We have integrated analogy-based estimation with Fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of a software development lifecycle, using all available early data. In particular, this paper proposes a new software project similarity measure and a new adaptation technique based on Fuzzy numbers. Method: Empirical evaluations with a jack-knifing procedure have been carried out using five benchmark datasets of software projects, namely ISBSG, Desharnais, Kemerer, Albrecht and COCOMO, and the results are reported. The results are compared to those obtained by methods employed in the literature using case-based reasoning and stepwise regression. Results: In all datasets the empirical evaluations have shown that the proposed similarity measure and adaptation technique were able to significantly improve the performance of analogy-based estimation during the early stages of software development. The results have also shown that the proposed method outperforms some well-known estimation techniques such as case-based reasoning and stepwise regression. Conclusions: The proposed estimation model could form a useful approach for early stage estimation, especially when the available data is uncertain.


Model Driven Engineering Languages and Systems | 2008

Improving analogy software effort estimation using fuzzy feature subset selection algorithm

Mohammad Azzeh; Daniel Neagu; Peter I. Cowling

One of the major problems with software project management is the difficulty of accurately predicting the effort required to develop software applications. Analogy-based software effort estimation appears well suited to model problems of this nature. The analogy approach may be viewed as a systematic development of expert opinion through experience, learning and exposure to analogous case studies. The accuracy of such a model depends on the characteristics of the datasets. This paper examines the impact of feature subset selection algorithms on improving the accuracy of the analogy software effort estimation model. We propose a feature subset selection algorithm based on fuzzy logic for analogy software effort estimation models. Validation using two established datasets (ISBSG, Desharnais) shows that using the fuzzy feature subset selection algorithm in analogy software effort estimation contributes results as significant as those of other algorithms such as hill climbing, forward subset selection and backward subset selection.


SAR and QSAR in Environmental Research | 2006

Validation of counter propagation neural network models for predictive toxicology according to the OECD principles: a case study

Marjan Vračko; Bandelj; Pierluigi Barbieri; Emilio Benfenati; Qasim Chaudhry; Mark T. D. Cronin; J. Devillers; A. Gallegos; Giuseppina Gini; Paola Gramatica; C. Helma; Paolo Mazzatorta; Daniel Neagu; Tatiana I. Netzeva; Manuela Pavan; Grace Patlewicz; M. Randić; Ivanka Tsakovska; Andrew Worth

The OECD has proposed five principles for the validation of QSAR models used for regulatory purposes. Here we present a case study investigating how these principles can be applied to models based on Kohonen and counter propagation neural networks. The study is based on a counter propagation network model built using toxicity data in the fish fathead minnow for 541 compounds. The study demonstrates that most, if not all, of the OECD criteria may be met when modelling using this neural network approach.


Computers in Human Behavior | 2011

Assessing information quality of e-learning systems: a web mining approach

Mona Alkhattabi; Daniel Neagu; Andrea J. Cullen

E-learning systems provide a promising solution as an information exchange channel. Improved technologies could mean faster and easier access to information, but do not necessarily ensure the quality of this information; for this reason it is essential to develop valid and reliable methods of quality measurement and to carry out careful information quality evaluations. This paper proposes an assessment model for information quality in e-learning systems based on the quality framework we proposed previously: the framework consists of 14 quality dimensions grouped into three quality factors: intrinsic, contextual representation and accessibility. We use relative importance as a parameter in a linear equation for the measurement scheme. Previously, we implemented a goal-question-metric approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. In this paper, the proposed metrics were computed to produce a numerical rating indicating the overall quality of the information published in a particular e-learning system. The data collection and evaluation processes were automated using a web data extraction technique, and results on a case study are discussed. This assessment model could be useful to e-learning system designers, providers and users as it provides a comprehensive indication of the quality of information in such systems.


Journal of Chemical Information and Computer Sciences | 2002

The Importance of Scaling in Data Mining for Toxicity Prediction

Paolo Mazzatorta; Emilio Benfenati; Daniel Neagu; Giuseppina Gini

While mining a dataset of 554 chemicals in order to extract information on their toxicity values, we faced the problem of scaling all the data. There are numerous different approaches to this procedure, and in most cases the choice greatly influences the results. The aim of this paper is twofold. First, we propose a universal scaling procedure for acute toxicity in fish according to Directive 92/32/EEC. Second, we look at how expert preprocessing of the data affects the performance of the quantitative structure-activity relationship (QSAR) approach to toxicity prediction.


Journal of Cheminformatics | 2011

Data governance in predictive toxicology: A review

Xin Fu; Anna Wojak; Daniel Neagu; Mick J. Ridley; Kim Z. Travis


ICSP'08: International Conference on Software Process (Making Globally Distributed Software Development a Success Story) | 2008

Software project similarity measurement based on fuzzy C-means

Mohammad Azzeh; Daniel Neagu; Peter I. Cowling


Model Driven Engineering Languages and Systems | 2009

Software effort estimation based on weighted fuzzy grey relational analysis

Mohammad Azzeh; Daniel Neagu; Peter I. Cowling


UK Workshop on Computational Intelligence | 2010

Predictive model representation and comparison: Towards data and predictive models governance

Mokhairi Makhtar; Daniel Neagu; Mick J. Ridley
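The grey relational grade at the heart of the GRA-based effort estimation papers above can be sketched in a few lines. This is a generic textbook formulation, not the authors' code: the feature values, the distinguishing coefficient zeta = 0.5 and the equal weighting of features are illustrative assumptions.

```python
import numpy as np

def grey_relational_grades(reference, candidates, zeta=0.5):
    """Grey relational grade of each candidate project against a reference.

    reference:  1-D array of the target project's (normalised) features.
    candidates: 2-D array, one row per historical project.
    zeta:       distinguishing coefficient, conventionally 0.5 (assumption).
    """
    reference = np.asarray(reference, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    # Absolute differences |x_o(k) - x_i(k)| for every candidate and feature.
    delta = np.abs(candidates - reference)
    d_min, d_max = delta.min(), delta.max()
    # Grey relational coefficient per feature, averaged into one grade.
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return coeff.mean(axis=1)

# Hypothetical example: the first historical project is the closer analogue,
# so it receives the higher grade.
grades = grey_relational_grades([0.4, 0.6], [[0.5, 0.6], [0.9, 0.1]])
```

Projects with the highest grades would then serve as the analogues whose known efforts are adapted into the estimate.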
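The scaling question raised in the toxicity-prediction paper above can be illustrated with plain min-max normalisation. The paper compares several scaling procedures; this sketch shows only one common choice, on made-up descriptor values.

```python
import numpy as np

def min_max_scale(X, lo=0.0, hi=1.0):
    """Linearly rescale each descriptor column of X into [lo, hi]."""
    X = np.asarray(X, dtype=float)
    col_min = X.min(axis=0)
    col_rng = X.max(axis=0) - col_min
    col_rng[col_rng == 0] = 1.0  # constant columns map to lo, avoiding 0/0
    return lo + (X - col_min) * (hi - lo) / col_rng

# Two descriptors on very different raw scales become directly comparable.
scaled = min_max_scale([[1.0, 100.0], [2.0, 300.0], [3.0, 500.0]])
```

As the abstract notes, the choice of scaling can dominate downstream model quality, which is why such preprocessing deserves explicit evaluation rather than a default.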

Collaboration


Dive into Daniel Neagu's collaborations.

Top Co-Authors

Gongde Guo

Fujian Normal University

Mark T. D. Cronin

Liverpool John Moores University

Mohammad Azzeh

Applied Science Private University

Qasim Chaudhry

Food and Environment Research Agency