Liyana Shuib
Information Technology University
Publication
Featured research published by Liyana Shuib.
Computers in Human Behavior | 2015
Liyana Shuib; Shahaboddin Shamshirband; Mohammad Ismail
Highlights: the paper investigates and reviews how urbanized youth perceive and use mobile phones; reviews mobile phone usage patterns among youth; notes that mobile learning technologies can exploit pervasive computing; and examines the integration of mobile learning with pervasive computing. Mobile phones constitute a technology that has become part of our everyday lives. In the absence of an in-depth evaluation of mobile phone appropriation and utilization, this paper investigates and reviews the usage of mobile phones in the context of pervasive learning. It reviews mobile phone usage and associated applications, as well as their negative impacts, and also covers pervasive computing and mobile pervasive learning technologies, applications, and issues. Fifty-five papers were selected in the review process. The assimilation of pervasive learning with mobile phones marks a significant step forward: integrating mobile technology with pervasive learning can enhance the effectiveness and accessibility of future learning activities. This innovation has changed the conventional idea of learning, in that we are now continually surrounded by and immersed in learning experiences.
Applied Soft Computing | 2016
Haruna Chiroma; Abdullah Khan; Adamu Abubakar; Younes Saadi; Mukhtar Fatihu Hamza; Liyana Shuib; Abdulsalam Ya’u Gital; Tutut Herawan
Highlights: a new forecasting method based on a meta-heuristic algorithm is proposed; the method is applied to forecast OPEC petroleum consumption; it outperforms previous methods on this task; and it offers an alternative means of forecasting OPEC petroleum consumption. Petroleum is the lifeblood of modern technology and its operations, and economic development is positively linked to petroleum consumption. Many meta-heuristic algorithms have been proposed in the literature for optimizing a Neural Network (NN) to build a forecasting model. In this paper, as an alternative to previous methods, we propose a new flower pollination algorithm with a remarkable balance between consistency and exploration for NN training, used to build a model for forecasting petroleum consumption by the Organization of the Petroleum Exporting Countries (OPEC). The proposed approach is compared with established meta-heuristic algorithms. The results show that the new method outperforms existing algorithms in both OPEC petroleum consumption forecast accuracy and convergence speed. The proposed method has the potential to serve as an important forecasting tool for OPEC authorities and other global oil-related organizations, facilitating proper monitoring and control of OPEC petroleum consumption.
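As a rough illustration of the approach, the sketch below (not the authors' implementation) trains a tiny one-hidden-layer network with a flower pollination search over its weights on a synthetic lagged series; the network size, FPA settings, and data are illustrative assumptions only.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID = 4, 6
DIM = N_IN * N_HID + N_HID + N_HID + 1   # weights and biases of the small network

def forecast(w, X):
    """One-hidden-layer network: unpack the flat weight vector and predict."""
    i = N_IN * N_HID
    W1 = w[:i].reshape(N_IN, N_HID)
    b1 = w[i:i + N_HID]
    W2 = w[i + N_HID:i + 2 * N_HID]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w, X, y):
    return np.mean((forecast(w, X) - y) ** 2)

def levy_step(size, beta=1.5):
    """Levy-flight step via Mantegna's algorithm (drives global pollination)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

def fpa_train(X, y, pop=25, iters=300, p_switch=0.8):
    flowers = rng.normal(0, 1, (pop, DIM))
    fit = np.array([loss(f, X, y) for f in flowers])
    best, best_fit = flowers[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        for i in range(pop):
            if rng.random() < p_switch:    # global pollination: Levy flight toward the best flower
                cand = flowers[i] + 0.01 * levy_step(DIM) * (best - flowers[i])
            else:                          # local pollination: mix two random flowers
                j, k = rng.choice(pop, size=2, replace=False)
                cand = flowers[i] + rng.random() * (flowers[j] - flowers[k])
            f = loss(cand, X, y)
            if f < fit[i]:
                flowers[i], fit[i] = cand, f
                if f < best_fit:
                    best, best_fit = cand.copy(), f
    return best, best_fit

# Toy upward-trending series standing in for annual consumption figures.
series = np.cumsum(rng.normal(1.0, 0.3, 60))
series = (series - series.mean()) / series.std()
X = np.array([series[t:t + N_IN] for t in range(len(series) - N_IN)])
y = series[N_IN:]
w, err = fpa_train(X, y)
print(f"training MSE after FPA search: {err:.4f}")
```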
PLOS ONE | 2017
Ghulam Mujtaba; Liyana Shuib; Ram Gopal Raj; Retnagowri Rajandram; Khairunisa Shaikh; Mohammed Ali Al-Garadi
Objectives: Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Methods: Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases, version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using macro-averaged precision, macro-averaged recall, macro-averaged F-measure, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Results: Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest scores (approaching 85% to 90% on most metrics) with a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and the four baselines. Conclusion: The proposed system is feasible and practical to use for automatic classification of ICD-10-coded causes of death from autopsy reports. It assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports.
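As a rough sketch of the pipeline shape described above (not the published system), the snippet below builds unigram features restricted to a hand-picked term list standing in for the paper's expert-driven feature subset and fits a random forest decision model; the report snippets, labels, and term list are invented placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

reports = [
    "multiple rib fractures haemothorax consistent with road traffic collision",
    "skull fracture subdural haemorrhage after fall from height",
    "asphyxia due to drowning water in lungs",
    "extensive burns soot in airway smoke inhalation",
]
causes = ["transport accident", "fall", "drowning", "fire"]  # ICD-10-style labels

# Expert-driven selection: keep only terms a pathologist flags as informative.
expert_terms = ["fractures", "haemothorax", "collision", "subdural",
                "fall", "drowning", "lungs", "burns", "soot", "inhalation"]

vectorizer = CountVectorizer(vocabulary=expert_terms)   # unigram features only
X = vectorizer.fit_transform(reports)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, causes)

print(classification_report(causes, clf.predict(X), zero_division=0))
```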
PLOS ONE | 2015
Haruna Chiroma; Sameem Abdulkareem; Abdullah Khan; Nazri Mohd Nawi; Abdulsalam Ya’u Gital; Liyana Shuib; Adamu Abubakar; Muhammad Zubair Rahman; Tutut Herawan
Background: Global warming is attracting attention from policy makers due to its impacts such as floods, extreme weather, increases in temperature by 0.7°C, heat waves, storms, etc. These disasters result in loss of human life and billions of dollars in property. Global warming is believed to be caused by the emissions of greenhouse gases due to human activities including the emissions of carbon dioxide (CO2) from petroleum consumption. Limitations of the previous methods of predicting CO2 emissions and lack of work on the prediction of the Organization of the Petroleum Exporting Countries (OPEC) CO2 emissions from petroleum consumption have motivated this research. Methods/Findings: The OPEC CO2 emissions data were collected from the Energy Information Administration. Artificial Neural Network (ANN) adaptability and performance motivated its choice for this study. To improve effectiveness of the ANN, the cuckoo search algorithm was hybridised with accelerated particle swarm optimisation for training the ANN to build a model for the prediction of OPEC CO2 emissions. The proposed model predicts OPEC CO2 emissions for 3, 6, 9, 12 and 16 years with an improved accuracy and speed over the state-of-the-art methods. Conclusion: An accurate prediction of OPEC CO2 emissions can serve as a reference point for propagating the reorganisation of economic development in OPEC member countries with the view of reducing CO2 emissions to Kyoto benchmarks—hence, reducing global warming. The policy implications are discussed in the paper.
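The following sketch, under stated assumptions, illustrates one way a cuckoo search/accelerated-PSO hybrid of this kind can be put together: each generation applies a Levy-flight cuckoo step with greedy replacement, abandons a fraction of the worst nests, and then pulls every nest toward the global best with an APSO-style update. The objective here is a simple sphere function; in the paper's setting it would be the ANN's prediction error on the CO2-emissions training data, and none of the settings below are the authors' values.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                       # stand-in for the ANN training error
    return float(np.sum(x ** 2))

def levy(dim, beta=1.5):
    """Levy-flight step (Mantegna's algorithm) for the cuckoo move."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def cs_apso(dim=10, n_nests=20, iters=200, pa=0.25, alpha=0.01, beta_apso=0.3):
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.apply_along_axis(objective, 1, nests)
    best = nests[fit.argmin()].copy()
    for _ in range(iters):
        # Cuckoo step: Levy flight around each nest, greedy replacement.
        for i in range(n_nests):
            cand = nests[i] + alpha * levy(dim) * (nests[i] - best)
            if (f := objective(cand)) < fit[i]:
                nests[i], fit[i] = cand, f
        # Abandon a fraction pa of the worst nests and re-seed them randomly.
        worst = fit.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fit[worst] = np.apply_along_axis(objective, 1, nests[worst])
        best = nests[fit.argmin()].copy()
        # APSO step: pull every nest toward the global best plus a small random kick.
        nests = (1 - beta_apso) * nests + beta_apso * best + alpha * rng.normal(0, 1, nests.shape)
        fit = np.apply_along_axis(objective, 1, nests)
        best = nests[fit.argmin()].copy()
    return best, fit.min()

best, best_val = cs_apso()
print(f"best objective value found: {best_val:.6f}")
```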
Applied Soft Computing | 2017
Haruna Chiroma; Tutut Herawan; Iztok Fister; Sameem Abdulkareem; Liyana Shuib; Mukhtar Fatihu Hamza; Younes Saadi; Adamu Abubakar
Presently, the Cuckoo Search algorithm is attracting unprecedented attention from the research community, and applications of the algorithm are expected to grow rapidly in the future. The purpose of this study is to assist potential developers in selecting the most suitable cuckoo search variant, provide proper guidance for future modifications, and ease the selection of optimal cuckoo search parameters. Several researchers have applied modifications to the original cuckoo search algorithm in order to improve its effectiveness. This paper reviews these recent modifications to the original cuckoo search by analyzing recently published papers on the subject. Additionally, the influence of various parameter settings on cuckoo search is taken into account in order to provide optimal settings for specific problem classes. To estimate the quality of the modifications, the percentage improvements made by the modified cuckoo search over the original cuckoo search are computed for selected reviewed studies. It is found that population reduction and the use of a biased random walk are the most frequently used modifications. This study can be used by both expert and novice researchers to outline directions for future development and to find the best modifications, together with the corresponding optimal parameter settings for specific problems. The review can also serve as a benchmark for further modifications of the original cuckoo search.
IEEE Access | 2017
Ghulam Mujtaba; Liyana Shuib; Ram Gopal Raj; Nahdia Majeed; Mohammed Ali Al-Garadi
Personal and business users prefer e-mail as one of their crucial means of communication. The usage and importance of e-mail continue to grow despite the prevalence of alternative means, such as electronic messaging, mobile applications, and social networks. As the volume of business-critical e-mail continues to grow, the need to automate e-mail management increases for several reasons, such as spam e-mail classification, phishing e-mail classification, and multi-folder categorization, among others. This paper comprehensively reviews articles on e-mail classification published in 2006–2016 by analyzing their methodological decisions in five aspects, namely, e-mail classification application areas, the data sets used in each application area, the feature space utilized in each application area, e-mail classification techniques, and the performance measures used. A total of 98 articles (56 articles from the Web of Science core collection databases and 42 articles from the Scopus database) are selected. To achieve the objective of the study, a comprehensive review and analysis are conducted to explore the various areas where e-mail classification has been applied. Moreover, the public data sets, feature sets, classification techniques, and performance measures used in each identified application area are examined. This review identifies five application areas of e-mail classification and the most widely used data sets, feature sets, classification techniques, and performance measures in each. The extensive use of these popular data sets, feature sets, classification techniques, and performance measures is discussed and justified. Research directions, research challenges, and open issues in the field of e-mail classification are also presented for future researchers.
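As a small, self-contained illustration of the most common setup the review identifies (spam filtering with bag-of-words features and a supervised classifier), the sketch below uses made-up messages and arbitrary choices of features and classifier rather than anything the review prescribes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Meeting moved to 3pm, please confirm attendance",
    "Quarterly report attached for your review",
    "WIN a FREE prize now, click this link to claim",
    "Cheap meds online, limited offer, buy now",
]
labels = ["ham", "ham", "spam", "spam"]

# TF-IDF features feeding a multinomial Naive Bayes classifier.
spam_filter = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                            MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["Claim your free prize now",
                           "Agenda for tomorrow's meeting"]))
```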
Intelligent Automation and Soft Computing | 2016
Haruna Chiroma; Sameem Abdulkareem; Ahmad Shukri Mohd Noor; Adamu Abubakar; Nader Sohrabi Safa; Liyana Shuib; Mukhtar Fatihu Hamza; Abdulsalam Ya’u Gital; Tutut Herawan
When crude oil prices began to escalate in the 1970s, conventional methods were the predominant means of forecasting oil prices. Because of their linearity, these methods can no longer tackle the nonlinear, chaotic, non-stationary, volatile, and complex nature of crude oil prices. To address these methodological limitations, computational intelligence techniques and, more recently, hybrid intelligent systems have been deployed. In this paper, we present an extensive review of the existing research on applications of computational intelligence algorithms to crude oil price forecasting. An analysis and synthesis of published research in this domain, together with the limitations and strengths of existing studies, is provided. This paper finds that conventional methods remain relevant in the domain of crude oil price forecasting and that the integration of wavelet analysis and computational intelligence techniques is attracting unprecedented interest from scholars in the domain.
Proceedings of the Workshop on Noisy User-generated Text | 2015
Mohammad Arshi Saloot; Norisma Idris; Liyana Shuib; Ram Gopal Raj; AiTi Aw
The use of social network services and microblogs, such as Twitter, has created valuable text resources that contain extremely noisy text. Twitter messages contain so much noise that it is difficult to use them in natural language processing tasks. This paper presents a new approach using the maximum entropy model for normalizing Tweets. The proposed approach addresses words that are unseen in the training phase: although the maximum entropy model needs a training dataset to adjust its parameters, the proposed approach can normalize data unseen in the training set. The principle of maximum entropy emphasizes incorporating the available features into a uniform model. First, we generate a set of normalized candidates for each out-of-vocabulary word based on lexical, phonemic, and morphophonemic similarities. Then, three different probability scores are calculated for each candidate using positional indexing, a dependency-based frequency feature, and a language model. After the optimal values of the model parameters are obtained in a training phase, the model calculates the final probability value for each candidate. The approach achieved an 83.12 BLEU score in testing on 2,000 Tweets. Our experimental results show that the maximum entropy approach significantly outperforms previous well-known normalization approaches.
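The candidate-generation-and-scoring idea can be illustrated with a toy sketch (not the paper's system): candidates for each out-of-vocabulary token are drawn from a small lexicon by lexical similarity, and a log-linear score combines two hand-set feature weights; in the paper the weights are learned by maximum entropy training and the features also include phonemic similarity, positional indexing, and a language model.

```python
import difflib
import math

# Tiny in-vocabulary lexicon with made-up unigram probabilities.
LEXICON = {"tomorrow": 0.002, "tonight": 0.003, "morning": 0.004, "great": 0.005}
W_LEX, W_LM = 2.0, 1.0          # hand-set stand-ins for learned maxent weights

def candidates(oov, k=3):
    """Lexically similar in-vocabulary words for an out-of-vocabulary token."""
    return difflib.get_close_matches(oov, LEXICON, n=k, cutoff=0.4)

def score(oov, cand):
    lexical = difflib.SequenceMatcher(None, oov, cand).ratio()  # lexical similarity
    lang_model = math.log(LEXICON[cand])                        # unigram LM score
    return W_LEX * lexical + W_LM * lang_model

def normalize(token):
    cands = candidates(token)
    return max(cands, key=lambda c: score(token, c)) if cands else token

print([normalize(t) for t in ["2moro", "gr8", "tonite", "hello"]])
```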
International Conference on Machine Learning and Applications | 2016
Ghulam Mujtaba; Liyana Shuib; Ram Gopal Raj; Retnagowri Rajandram; Khairunisa Shaikh
Forensic autopsy focuses on revealing the cause of death (CoD) through examination of a dead body. In this study, various feature extraction schemes, feature value representation schemes, and text classification algorithms were applied to forensic autopsy reports in order to identify the most suitable combination. The experimental results showed that unigram features outperformed bigram, trigram, and hybrid unigram-bigram-trigram features. Moreover, the TF and TF-IDF feature value representation schemes proved more suitable than binary and normalized TF-IDF representations. Finally, support vector machine (SVM) decision models outperformed random forest (RF) and Naive Bayes (NB).
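A compact sketch of this kind of comparison, using placeholder reports, is shown below: the same linear SVM is evaluated over unigram versus unigram-plus-bigram features and TF versus TF-IDF weighting; a real experiment would use the autopsy corpus and cross-validation rather than resubstitution accuracy.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.svm import LinearSVC

reports = ["blunt force trauma to head", "stab wound to chest",
           "gunshot wound to abdomen", "blunt trauma multiple fractures"]
labels = ["blunt", "sharp", "firearm", "blunt"]

# Compare feature extraction (n-gram range) and feature value representation (TF vs TF-IDF).
for ngram in [(1, 1), (1, 2)]:
    for name, Vec in [("TF", CountVectorizer), ("TF-IDF", TfidfVectorizer)]:
        X = Vec(ngram_range=ngram).fit_transform(reports)
        acc = LinearSVC().fit(X, labels).score(X, labels)
        print(f"n-grams={ngram} weighting={name} accuracy={acc:.2f}")
```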
Journal of Forensic and Legal Medicine | 2017
Ghulam Mujtaba; Liyana Shuib; Ram Gopal Raj; Retnagowri Rajandram; Khairunisa Shaikh
OBJECTIVES: Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free-text forensic autopsy reports by comparing various schemes for feature extraction, term weighting (feature value representation), text classification, and feature reduction. METHODS: For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed, and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct classification models that predict the cause of death. Finally, classification model performance was evaluated using four performance measures: overall accuracy, macro-precision, macro-recall, and macro-F-measure. RESULTS: The experiments showed that unigram features obtained the highest performance compared with bigram, trigram, and hybrid-gram features. Among the feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results than binary frequency and normalized term frequency with inverse document frequency. Furthermore, the chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among the text classification algorithms, the support vector machine classifier outperformed random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. CONCLUSION: Our results and comparisons hold practical importance and serve as references for future work. Moreover, the comparison outputs provide state-of-the-art baselines against which future proposals can be compared with existing automated text classification techniques.
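A minimal sketch of the winning combination reported above, using placeholder report snippets, is shown below: unigram TF-IDF features, chi-square feature reduction, and a linear support vector machine.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reports = ["fractured skull epidural haemorrhage",
           "ligature mark around neck asphyxia",
           "multiple stab wounds to chest",
           "ligature strangulation petechial haemorrhages"]
causes = ["head injury", "asphyxia", "sharp force", "asphyxia"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)),   # unigram features
                      SelectKBest(chi2, k=8),                # chi-square feature reduction
                      LinearSVC())                           # linear SVM classifier
model.fit(reports, causes)
print(model.predict(["asphyxia with ligature mark on neck"]))
```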