

Publication


Featured research published by Hassan Najadat.


Advances in Engineering Software | 2011

Evaluating the change of software fault behavior with dataset attributes based on categorical correlation

Izzat Alsmadi; Hassan Najadat

The use of data mining in software engineering has been the subject of several research papers, most of which apply historical data to decision-making activities such as cost estimation and the prediction of product or project attributes. This paper studies the ability to predict faulty software modules and to correlate faulty modules with product attributes using statistics. Correlations between the attributes and the categorical class variable are studied by generating a pool of records from each dataset, repeatedly selecting two samples from the dataset, and comparing them. For each selected pair, we examine whether the module defect attribute changes from faulty to non-faulty (or the reverse), together with how the value of each evaluated attribute changes between the two records (equal, larger, or smaller). The goal was to determine whether certain attributes consistently accompany a change in module state from faulty to non-faulty or the opposite. Results indicated that this technique can be very useful for studying the correlation between each attribute and the defect-status attribute. A prediction algorithm is also developed based on statistics of the module and the overall dataset; it produces, for each attribute, separate true-class and faulty-class predictions. We found that dividing the prediction capability of each attribute into these two parts (correct and faulty module prediction) facilitates understanding the impact of attribute values on the class, and hence improves overall prediction relative to previous studies and data mining algorithms. Results were evaluated and compared with other algorithms and previous studies, using ROC metrics to assess the performance of the developed predictors. These metrics showed that accuracy computed in the traditional way, as the number of correctly predicted records divided by the total number of records, does not necessarily give the best indication of a metric's or algorithm's predictive power, and may mislead if other metrics are not considered alongside it. The ROC metrics revealed other important aspects of performance and accuracy.
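The pairwise record comparison at the heart of the study can be sketched as follows; the function name, the toy records, and the `loc`/`comments` attributes are illustrative placeholders, not the datasets used in the paper.

```python
from itertools import combinations

def attribute_flip_counts(records, attributes, label="defect"):
    """For every pair of records whose defect label differs, count how often
    each attribute got larger, smaller, or stayed equal going from the
    non-faulty record to the faulty one. Attributes whose value changes
    consistently with a label flip are candidate correlates of faultiness."""
    counts = {a: {"larger": 0, "smaller": 0, "equal": 0} for a in attributes}
    for r1, r2 in combinations(records, 2):
        if r1[label] == r2[label]:
            continue  # only pairs where the module flips faulty/non-faulty
        if r1[label]:
            r1, r2 = r2, r1  # orient the pair: r1 non-faulty, r2 faulty
        for a in attributes:
            if r2[a] > r1[a]:
                counts[a]["larger"] += 1
            elif r2[a] < r1[a]:
                counts[a]["smaller"] += 1
            else:
                counts[a]["equal"] += 1
    return counts

# toy modules: 'loc' grows with faultiness here, 'comments' does not
records = [
    {"loc": 10, "comments": 5, "defect": False},
    {"loc": 50, "comments": 4, "defect": True},
    {"loc": 12, "comments": 6, "defect": False},
    {"loc": 60, "comments": 5, "defect": True},
]
counts = attribute_flip_counts(records, ["loc", "comments"])
```

On this toy data every faulty/non-faulty pair has a larger `loc` on the faulty side, while `comments` shows no consistent direction, which is exactly the kind of signal the paper looks for.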


Advances in Social Networks Analysis and Mining | 2012

A Classifier to Detect Tumor Disease in MRI Brain Images

Amer Al-Badarneh; Hassan Najadat; Ali M. Alraziqi

Traditionally, tumor diseases in human MRI brain images are detected manually by physicians. Automatic classification of tumors in MRI images requires high accuracy, since an inaccurate or delayed diagnosis can increase the prevalence of more serious disease. To avoid this, an automatic classification system is proposed for tumor classification of MRI images. This work shows the effect of neural network (NN) and K-Nearest Neighbor (K-NN) algorithms on tumor classification, using a benchmark dataset of MRI brain images. The experimental results show that our approach achieves 100% classification accuracy using K-NN and 98.92% using NN.
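A minimal K-NN majority vote over pre-extracted feature vectors illustrates the classifier half of such a pipeline; the feature values below are made up, and the real system first extracts features from the MRI images.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples under Euclidean distance. `train` is a list of
    (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# toy 2-D feature vectors standing in for extracted MRI image features
train = [
    ((0.10, 0.20), "normal"), ((0.20, 0.10), "normal"), ((0.15, 0.25), "normal"),
    ((0.90, 0.80), "tumor"),  ((0.80, 0.90), "tumor"),  ((0.85, 0.95), "tumor"),
]
```

A query near the "normal" cluster is voted normal; one near the "tumor" cluster is voted tumor.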


International Scholarly Research Notices | 2012

Predicting Software Projects Cost Estimation Based on Mining Historical Data

Hassan Najadat; Izzat Alsmadi; Yazan Shboul

In this research, a hybrid cost estimation model is proposed to produce a realistic prediction model that takes into consideration software project, product, process, and environmental elements. A cost estimation dataset is built from a large number of open source projects, divided into three domains: communication, finance, and game projects. Several data mining techniques are used to classify software projects in terms of their development complexity, and to study associations between different software attributes and their relation to cost estimation. Results showed that finance projects are usually the most complex in terms of code size and several other complexity metrics, and that game applications have higher values of the SLOCmath, coupling, cyclomatic complexity, and MCDC metrics. Information gain is used to evaluate the ability of object-oriented metrics to predict software complexity; the MCDC metric proves to be the most decisive metric for software project complexity. A software project effort equation is created based on clustering over all software project attributes. According to the metric weights developed in this project, MCDC, LOC, and cyclomatic complexity remain the dominant traditional metrics affecting the classification process, while number of children and depth of inheritance are the dominant object-oriented metrics at a second level.
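Information gain, used above to rank metrics, can be sketched with a toy discretized dataset; the `mcdc`/`complexity` rows below are hypothetical, not the paper's projects.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label):
    """Reduction in entropy of `label` achieved by splitting `rows` on `attr`."""
    total = entropy([r[label] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return total - remainder

# hypothetical discretized project metrics: 'mcdc' perfectly separates
# the classes, 'complexity' carries no information about them
rows = [
    {"mcdc": "high", "complexity": "low",  "class": "complex"},
    {"mcdc": "high", "complexity": "high", "class": "complex"},
    {"mcdc": "low",  "complexity": "low",  "class": "simple"},
    {"mcdc": "low",  "complexity": "high", "class": "simple"},
]
```

Ranking attributes by this score is how an attribute such as MCDC would surface as the most decisive one.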


International Journal of Advanced Computer Science and Applications | 2016

Automatic Keyphrase Extractor from Arabic Documents

Hassan Najadat; Ismail Hmeidi; Mohammed N. Al-Kabi; Maysa Mahmoud Bany Issa

A keyphrase is a sentence, or part of a sentence, containing a sequence of words that expresses the meaning and purpose of a given paragraph. Keyphrase extraction is the task of identifying the possible keyphrases in a given document. Many applications, including text summarization, indexing, and characterization, use keyphrase extraction, and it is an essential task for improving the performance of any information retrieval system. The internet contains a massive number of documents, some with manually assigned keyphrases and some without. Arabic is an important world language, and the number of online Arabic documents is growing rapidly; most of them have no manually assigned keyphrases, forcing the user to scan whole retrieved web documents. To avoid this, keyphrases must be assigned to each web document either manually or automatically. This paper addresses the problem of automatically identifying keyphrases in Arabic documents. We provide a novel algorithm, Automatic Keyphrases Extraction from Arabic (AKEA), which extracts keyphrases from Arabic documents automatically. To test the algorithm, we collected a dataset of 100 documents from the Arabic Wikipedia and downloaded another 56 agricultural documents from the Food and Agriculture Organization of the United Nations (FAO). The evaluation results show that the system achieves an 83% precision value in identifying 2-word and 3-word keyphrases from the agricultural domain.
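As a rough illustration of the candidate-generation step in this kind of extractor, here is a frequency-scored n-gram sketch. It runs on English text with a made-up stopword list, whereas AKEA itself targets Arabic with language-specific processing; none of the names below come from the paper.

```python
import re
from collections import Counter

def candidate_keyphrases(text, stopwords, n_values=(2, 3)):
    """Score 2-word and 3-word n-grams by frequency, discarding any n-gram
    that starts or ends with a stopword. A crude stand-in for the
    candidate-generation stage of a keyphrase extractor."""
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter()
    for n in n_values:
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if gram[0] in stopwords or gram[-1] in stopwords:
                continue
            counts[" ".join(gram)] += 1
    return counts

text = ("keyphrase extraction finds keyphrases in documents; "
        "keyphrase extraction helps information retrieval systems")
counts = candidate_keyphrases(text, stopwords={"in", "the", "of", "helps"})
```

The highest-frequency surviving n-grams become the keyphrase candidates.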


Journal of Information & Knowledge Management | 2011

Clustering Generalised Instances Set Approaches for Text Classification

Hassan Najadat; Rasha Obeidat; Ismail Hmeidi

This paper introduces three new text classification methods: Clustering-Based Generalised Instances Set (CB-GIS), Multilevel Clustering-Based Generalised Instances Set (MLC-GIS) and Multilevel Clustering-Based k Nearest Neighbours (MLC-kNN). These methods aim to unify the strengths and overcome the drawbacks of three similarity-based text classification methods, namely kNN, centroid-based and GIS. The new methods utilise a clustering technique called spherical k-means to represent each class by a representative set of generalised instances to be used later in classification. The CB-GIS method applies flat clustering, while MLC-GIS and MLC-kNN apply multilevel clustering. Extensive experiments have been conducted to evaluate the new methods and compare them with the kNN, centroid-based and GIS classifiers on the Reuters-21578(10) benchmark dataset, in terms of both classification performance and classification efficiency. The experimental results show that the top-performing method is the MLC-kNN classifier, followed by the MLC-GIS and CB-GIS classifiers. In terms of the best micro-averaged F1 scores, the new methods (CB-GIS, MLC-GIS, MLC-kNN) show improvements of 4.48%, 4.65% and 4.76% over kNN, 1.84%, 1.92% and 2.12% over the centroid-based classifier, and 5.26%, 5.34% and 5.45% over GIS, respectively. In terms of the best macro-averaged F1 scores, they show improvements of 10.29%, 10.19% and 10.45% over kNN, 0.1%, 0.03% and 0.29% over the centroid-based classifier, and 3.75%, 3.68% and 3.94% over GIS, respectively.
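Spherical k-means, the clustering technique the methods rely on, differs from ordinary k-means in that vectors and centroids are kept on the unit sphere and similarity is cosine. A minimal sketch with toy 2-D vectors (the paper clusters high-dimensional document vectors, and its seeding is more careful than the naive one used here):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def spherical_kmeans(vectors, k, iterations=10):
    """Cluster unit-normalized vectors by cosine similarity; centroids are
    re-normalized each round so they stay on the unit sphere."""
    vectors = [normalize(v) for v in vectors]
    centroids = [list(vectors[i]) for i in range(k)]  # naive seeding
    assign = [0] * len(vectors)
    for _ in range(iterations):
        for i, v in enumerate(vectors):
            sims = [sum(a * b for a, b in zip(v, c)) for c in centroids]
            assign[i] = sims.index(max(sims))
        for j in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == j]
            if members:
                summed = [sum(dim) for dim in zip(*members)]
                centroids[j] = normalize(summed)  # mean direction, re-normalized
    return assign, centroids

# two toy directions: near the x-axis and near the y-axis
vectors = [(1, 0.1), (0.9, 0.2), (0.1, 1), (0.2, 0.9)]
assign, _ = spherical_kmeans(vectors, k=2)
```

The resulting centroids play the role of the "generalised instances" that represent each class.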


LIFE: International Journal of Health and Life-Sciences | 2017

An Adaptive Role-Based Access Control Approach for Cloud E-Health Systems

Amer Al-Badarneh; Hassan Najadat; Enas Hassan Abu Yabes

Securing and protecting electronic medical records (EMR) stored in a cloud is one of the most critical issues in e-health systems, and many approaches with different security objectives have been developed to address it. This paper proposes a new approach for securing and protecting electronic health records against unauthenticated access, while allowing different hospitals, health centres and pharmacies to access the system, by implementing a role-based access control approach that can be applied smoothly in cloud e-health systems.
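The core role-based access control check can be sketched in a few lines; the roles and permissions below are illustrative guesses, not the paper's actual role model for hospitals, health centres and pharmacies.

```python
# hypothetical role-to-permission mapping for an e-health system
ROLE_PERMISSIONS = {
    "physician":  {"read_record", "write_record"},
    "pharmacist": {"read_prescription"},
    "nurse":      {"read_record"},
}

def is_authorized(user_roles, action):
    """Grant `action` if any of the user's roles carries that permission.
    Users never hold permissions directly; access goes through roles."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

The indirection through roles is what lets multiple institutions share one policy: a new hospital's staff are assigned existing roles rather than per-user permissions.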


2017 8th International Conference on Information and Communication Systems (ICICS) | 2017

Performance evaluation of Bloom filter size in map-side and reduce-side Bloom joins

Amer Al-Badarneh; Hassan Najadat; Salah Rababah

MapReduce (MR) is an efficient programming model for processing big data, but it has some limitations in performing the join operation. Recent research has tried to alleviate this problem, for example with the Bloom join, whose idea is to construct a Bloom filter that removes redundant records before performing the join. The size of the constructed filter is critical and should be chosen carefully. In this paper, we evaluate the effect of Bloom filter size on two Bloom join algorithms, the Map-side Bloom join and the Reduce-side Bloom join. In our methodology, we constructed multiple Bloom filters of different sizes for two static input datasets. Our experimental results show that neither a small nor a large filter is always the best choice; the filter should be sized according to the input datasets. The results also show that tuning the Bloom filter size has a major effect on join performance. Furthermore, when memory is a concern, it is recommended for both algorithms to choose Bloom filter sizes that are small, yet still large enough to keep the false-positive rate negligible. Small to medium Bloom filter sizes give the Reduce-side join a smaller elapsed time than the Map-side join, while large sizes produce larger elapsed times.
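The trade-off the paper measures empirically has a standard textbook form (not taken from the paper): for n inserted items and a target false-positive rate p, the optimal filter size is m = -n ln p / (ln 2)^2 bits with k = (m/n) ln 2 hash functions. A sketch:

```python
import math

def bloom_parameters(n_items, false_positive_rate):
    """Standard Bloom filter sizing: the number of bits m and hash
    functions k that achieve a target false-positive rate p for
    n inserted items (textbook formulas)."""
    m = math.ceil(-n_items * math.log(false_positive_rate) / math.log(2) ** 2)
    k = max(1, round(m / n_items * math.log(2)))
    return m, k

# e.g. one million join keys at a 1% false-positive target
m, k = bloom_parameters(1_000_000, 0.01)
```

This makes the paper's point concrete: the right size depends on the input dataset's cardinality, and shrinking m below this bound inflates the false-positive rate, letting redundant records slip through to the join.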


International Conference on Computer Science and Information Technology | 2016

Phoenix: A MapReduce implementation with new enhancements

Amer Al-Badarneh; Hassan Najadat; Majd Al-Soud; Rasha Mosaid

Lately, the large increase in data volume has produced large, complex datasets and given rise to the concept of "Big Data", which has gained the attention of industrial organizations as well as academic communities. Big data applications that need large memory can benefit from Phoenix, a MapReduce implementation for shared-memory machines, instead of large distributed clusters of computers. This paper evaluates the design and prototype of Phoenix, its performance, and its limitations, and suggests new approaches to overcome some of those limitations and enhance Phoenix's performance on large-scale shared memory. The major contribution of this work is a set of approaches that overcome the <key, value> pair limitation in the Phoenix framework using hash tables with B+Trees, and that address the collision problem of hash tables.
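A toy shared-memory MapReduce word count conveys the execution model Phoenix implements; everything below is an illustrative analogy (Phoenix's actual runtime is C, with per-worker intermediate buffers rather than Python Counters).

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_chunk(chunk):
    """Map phase: emit a partial hash table of <key, value> counts
    for one chunk of the input."""
    return Counter(chunk.split())

def mapreduce_wordcount(chunks, workers=4):
    """Toy shared-memory MapReduce: run map tasks in a thread pool over
    shared memory, then reduce the per-worker partial tables into one."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_chunk, chunks))
    total = Counter()
    for p in partials:  # reduce phase: merge intermediate tables
        total += p
    return total

counts = mapreduce_wordcount(["big data big", "data big deal"])
```

The intermediate per-worker tables are where the <key, value> structure matters; the paper's hash-table-plus-B+Tree proposal targets exactly that layer.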


Proceedings of the 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016 | 2016

An Automatic Text Classification System Based on Genetic Algorithm

Mohammed I. Khaleel; Ismail Hmeidi; Hassan Najadat

The increasing number of online text documents makes searching for and accessing documents related to a specific category a difficult task. By classifying documents, the search can be limited to only those documents related to a particular category. Text classification is the process of classifying documents, based on their content, into a predefined set of categories. Many classification systems based on a rule-generation approach have been adopted for text classification. The classification rules generated by such classifiers are derived directly from the characteristics of the training documents, so they are limited to certain categories and yield an unequal number of rules per category. In this paper, an automatic text classification system based on a genetic algorithm classifier has been developed. The genetic algorithm classifier generates a predefined number of optimized classification rules that are highly flexible and cover a wide range of the characteristics of the training documents. The performance of the genetic algorithm classifier is compared with decision tree and k nearest neighbour classifiers. Results showed that the genetic algorithm classifier outperformed both, with a macro-averaged F1 measure of 0.748.
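A genetic algorithm classifier of this general shape can be sketched as follows; the vocabulary, toy documents, rule encoding, and GA parameters are all invented for illustration and are not the paper's design.

```python
import random

random.seed(0)

VOCAB = ["goal", "match", "league", "stock", "market", "profit"]
# toy training docs labeled sports (1) / not sports (0)
DOCS = [({"goal", "match"}, 1), ({"league", "goal"}, 1),
        ({"stock", "profit"}, 0), ({"market", "stock"}, 0)]

def fitness(rule):
    """A rule is a bitmask over VOCAB: predict class 1 if the document
    contains any selected word. Fitness is training accuracy."""
    hits = 0
    for words, label in DOCS:
        pred = int(any(bit and w in words for bit, w in zip(rule, VOCAB)))
        hits += pred == label
    return hits / len(DOCS)

def evolve(pop_size=20, generations=30, mutation=0.1):
    """Evolve rules with truncation selection, one-point crossover,
    and per-bit mutation; the best half survives each generation."""
    pop = [[random.randint(0, 1) for _ in VOCAB] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(VOCAB))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the number of evolved rules is fixed in advance, this style of classifier avoids the unequal rules-per-category problem of rules derived directly from the training documents.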


Proceedings of the International Conference on Engineering & MIS 2015 | 2015

Performance Impact of Texture Features on MRI Image Classification

Amer Al-Badarneh; Ali Alrazqi; Hassan Najadat

MR image texture is a rich source of information, comprising entities that characterize brightness, color, slope, size and other properties. Feature extraction identifies the relevant features, making images faster, easier, and better to analyze, and the extraction process significantly affects the quality of classification; accordingly, selecting representative features affects classification accuracy, so principal component analysis (PCA) is used to reduce the number of features. MRI classification is a computational method used to find patterns and develop classification schemes for data in very large datasets. In this paper, we use two well-known algorithms, neural network (NN) and support vector machine (SVM), to classify MRI images of the human brain: the extracted texture features are passed to the NN and SVM, and the classifiers label each MRI image as abnormal or normal. We use a large benchmark dataset of 710 MRI brain images obtained from Harvard Medical School. The experimental results show that our approach achieved 99.29% classification accuracy with NN and 97.32% with SVM under 10-fold cross-validation, and 99.58% with NN and 97.09% with SVM under a 66% percentage split.
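The PCA step can be illustrated with power iteration on the covariance matrix, which recovers the leading principal component; this stdlib-only sketch with toy 2-D data stands in for the full eigendecomposition over many texture features that a real pipeline would use.

```python
import math

def first_principal_component(data, iterations=200):
    """Power iteration on the sample covariance matrix: returns the unit
    direction of maximum variance (the first principal component)."""
    n, d = len(data), len(data[0])
    means = [sum(row[i] for row in data) / n for i in range(d)]
    centered = [[row[i] - means[i] for i in range(d)] for row in data]
    # sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iterations):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # re-normalize each iteration
    return v

# toy 2-D features with variance concentrated along the y = x direction
data = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
pc = first_principal_component(data)
```

Projecting each feature vector onto the top few such components is what shrinks the feature set before it reaches the NN or SVM.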

Collaboration


Top co-authors of Hassan Najadat:

Amer Al-Badarneh, Jordan University of Science and Technology
Ismail Hmeidi, Jordan University of Science and Technology
Abdalrhman Almodawar, Jordan University of Science and Technology
Ali M. Alraziqi, Jordan University of Science and Technology
Amnah Al-Abdi, Jordan University of Science and Technology