Shahrul Azman Mohd Noah
National University of Malaysia
Publications
Featured research published by Shahrul Azman Mohd Noah.
2011 International Conference on Semantic Technology and Information Retrieval | 2011
Saman Shishehchi; Seyed Yashar Banihashem; Nor Azan Mat Zin; Shahrul Azman Mohd Noah
With the rapid increase in learning materials and learning resources, both offline and online, it is quite difficult for learners to find materials suited to their needs. Recommender systems help learners find the appropriate learning materials they need. This paper discusses personalized recommendation systems in e-learning and compares their recommendation techniques. Two concepts form the main discussion topics in this research: the first concerns learners' requirements, and the second concerns personalized recommendation techniques. Finally, this study proposes the knowledge-based recommendation system as a suitable recommendation technique. This system aims to recommend materials to learners based on their needs. By using the semantic relationship between learning materials and the learner's needs, the system can select suitable materials to recommend to the learner. Developing the proposed knowledge-based recommendation system is planned as future work.
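The core idea of the knowledge-based approach described above, matching concepts that describe materials against concepts that describe a learner's need, can be sketched as follows. The material names, concept sets and Jaccard scoring are illustrative stand-ins, not taken from the paper:

```python
# Minimal sketch of a knowledge-based recommender: materials and a learner's
# need are both described by concept sets, and materials are ranked by their
# semantic overlap (Jaccard similarity). All names and data are invented.

def jaccard(a, b):
    """Similarity between two concept sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(materials, learner_need, top_n=2):
    """Rank learning materials by concept overlap with the learner's need."""
    scored = [(jaccard(concepts, learner_need), name)
              for name, concepts in materials.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

materials = {
    "Intro to SQL":     {"databases", "sql", "queries"},
    "Graph Algorithms": {"graphs", "bfs", "dfs"},
    "Database Design":  {"databases", "normalisation", "er-models"},
}
print(recommend(materials, {"databases", "sql"}))
# -> ['Intro to SQL', 'Database Design']
```

A real system would replace the flat concept sets with a domain ontology, so that related but non-identical concepts also contribute to the score.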
2011 International Conference on Semantic Technology and Information Retrieval | 2011
Mahyuddin K. M. Nasution; Shahrul Azman Mohd Noah
There have been quite a number of research efforts in extracting academic social networks from online open sources such as DBLP, ACM DL and IEEE Xplore. Extraction of such networks is usually based on the concept of co-occurrence. One of the issues in such efforts is extracting a reliable and trusted network, particularly when dealing with the heterogeneity of features on the Web. In this paper we demonstrate the use of association rules to enhance existing superficial methods for extracting social networks from online databases such as DBLP. The proposed approach has shown the capacity to extract social relations as well as the strength of these relations.
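One way the association-rule idea can be sketched: mine support and confidence for author co-occurrences across a paper list, and keep only pairs above thresholds, so that unreliable one-off co-occurrences are filtered out. The data, thresholds and rule form below are illustrative, not the paper's:

```python
from itertools import combinations
from collections import Counter

# Sketch: association-rule measures (support, confidence) over author
# co-occurrence, filtering a raw co-occurrence network down to relations
# that recur often enough to be trusted. Data and thresholds are invented.

def cooccurrence_rules(papers, min_support=0.3, min_confidence=0.5):
    n = len(papers)
    single = Counter(a for p in papers for a in p)
    pair = Counter(frozenset(c) for p in papers
                   for c in combinations(sorted(p), 2))
    rules = {}
    for ab, count in pair.items():
        support = count / n
        if support < min_support:
            continue                       # too rare to be a trusted relation
        a, b = sorted(ab)
        conf = count / single[a]           # confidence of the rule a -> b
        if conf >= min_confidence:
            rules[(a, b)] = (round(support, 2), round(conf, 2))
    return rules

papers = [
    {"Nasution", "Noah"},
    {"Nasution", "Noah", "Zin"},
    {"Noah", "Aziz"},
    {"Nasution", "Noah"},
]
print(cooccurrence_rules(papers))
# -> {('Nasution', 'Noah'): (0.75, 1.0)}
```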
rough sets and knowledge technology | 2010
Mahyuddin K. M. Nasution; Shahrul Azman Mohd Noah
Social network analysis (SNA) has become one of the main themes in the Semantic Web agenda. The use of the Web is steadily gaining ground in the study of social networks. A few researchers have shown the possibility of extracting social networks from the Web via search engines. However, obtaining a rich and trusted social network from such an approach has proved to be difficult. In this paper we propose an Information Retrieval (IR) driven method for dealing with the heterogeneity of features on the Web. We demonstrate the possibility of exploiting features in the Web snippets returned by search engines for disambiguating entities and building relations among entities during the process of extracting social networks. Our approach has shown the capacity to extract underlying relation strengths that are beyond recognition using the standard co-occurrence analysis employed by much research.
arXiv: Information Retrieval | 2012
Mahyuddin K. M. Nasution; Shahrul Azman Mohd Noah
Future information retrieval, especially in connection with the Internet, will incorporate content descriptions generated with social network extraction technologies, preferably incorporating probability theory for assigning semantics. Although there is increasing interest in social network extraction, little of this work has had a significant impact on information retrieval. This paper therefore proposes a model of information retrieval based on social network extraction.
international conference on computational science and its applications | 2007
Masita Abdul Jalil; Shahrul Azman Mohd Noah
Elements of design patterns have been incorporated into computer science syllabi. Most of these efforts have been motivated by the benefits offered by patterns as well as positive feedback from industry and the software community. Despite the various techniques and approaches suggested by researchers and educators to ensure effective learning of patterns, there are no formal reports on the actual difficulties encountered by novices when applying patterns. We have therefore conducted an exploratory study to identify the difficulties they have in using patterns.
Knowledge Based Systems | 2016
Bassam Al-Salemi; Shahrul Azman Mohd Noah; Mohd Juzaiddin Ab Aziz
The AdaBoost.MH boosting algorithm is considered to be one of the most accurate algorithms for multi-label classification. AdaBoost.MH works by iteratively building a committee of weak hypotheses of decision stumps. In each round of AdaBoost.MH learning, all features are examined, but only one feature is used to build a new weak hypothesis. This learning mechanism may entail a high degree of computational time complexity, particularly in the case of a large-scale dataset. This paper describes a way to manage the learning complexity and improve the classification performance of AdaBoost.MH. We propose an improved version of AdaBoost.MH, called RFBoost. The weak learning in RFBoost is based on filtering a small fixed number of ranked features in each boosting round rather than using all features, as AdaBoost.MH does. We propose two methods for ranking the features: One Boosting Round and Labeled Latent Dirichlet Allocation (LLDA), a supervised topic model based on Gibbs sampling. Additionally, we investigate the use of LLDA as a feature selection method for reducing the feature space based on the maximal conditional probabilities of words across labels. Our experimental results on eight well-known benchmarks for multi-label text categorisation show that RFBoost is significantly more efficient and effective than the baseline algorithms. Moreover, the LLDA-based feature ranking yields the best performance for RFBoost.
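The mechanism RFBoost changes can be sketched on a deliberately simplified problem. The toy below is *binary* boosting with decision stumps (the paper's AdaBoost.MH is multi-label), and its data and feature ranking are invented; the point it shows is only that each round searches a small filtered list of ranked features instead of all of them:

```python
import math

# Toy sketch of the RFBoost idea: AdaBoost with decision stumps where each
# round examines only a small pre-ranked feature list. Binary labels in
# {-1, +1}; binary 0/1 features; all data invented for illustration.

def train(X, y, ranked_features, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                       # uniform example weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in ranked_features:           # filtered features only
            for polarity in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (polarity if xi[f] else -polarity) != yi)
                if best is None or err < best[0]:
                    best = (err, f, polarity)
        err, f, polarity = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, polarity))
        # reweight: misclassified examples gain weight
        w = [wi * math.exp(-alpha * yi * (polarity if xi[f] else -polarity))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x[f] else -p) for a, f, p in ensemble)
    return 1 if score >= 0 else -1

X = [(1, 0, 1), (1, 1, 0), (0, 0, 1), (0, 1, 0)]
y = [1, 1, -1, -1]
model = train(X, y, ranked_features=[0, 1])  # feature 2 is never examined
print([predict(model, x) for x in X])
# -> [1, 1, -1, -1]
```

With all features the stump search costs O(features) per round; filtering a fixed number of ranked features makes each round's cost independent of vocabulary size, which is the efficiency gain the paper reports.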
Journal of Information Science | 2015
Bassam Al-Salemi; Mohd Juzaiddin Ab Aziz; Shahrul Azman Mohd Noah
AdaBoost.MH is a boosting algorithm that is considered to be one of the most accurate algorithms for multilabel classification. It works by iteratively building a committee of weak hypotheses of decision stumps. To build the weak hypotheses, in each iteration AdaBoost.MH takes the whole set of extracted features and examines them one by one to check their ability to characterize the appropriate category. Using bag-of-words for text representation dramatically increases the computational time of AdaBoost.MH learning, especially for large-scale datasets. In this paper we demonstrate how to improve the efficiency and effectiveness of AdaBoost.MH using latent topics rather than words. A well-known probabilistic topic modelling method, Latent Dirichlet Allocation (LDA), is used to estimate the latent topics in the corpus as features for AdaBoost.MH. To evaluate LDA-AdaBoost.MH, the following four datasets have been used: Reuters-21578-ModApte, WebKB, 20-Newsgroups and a collection of Arabic news. The experimental results confirmed that representing the texts as a small number of latent topics, rather than a large number of words, significantly decreased the computational time of AdaBoost.MH learning and improved its performance for text categorization.
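The representation shift at the heart of this approach can be illustrated very simply: each document becomes a short vector of topic proportions instead of a long bag-of-words vector. The "topics" below are hand-written word lists standing in for a fitted LDA model, which the paper actually uses, so this only shows the shape of the feature space, not LDA itself:

```python
# Illustration of topic-proportion features. The topic->word sets here are
# invented stand-ins for LDA's learned distributions; a document is mapped
# from a large vocabulary to len(topics) numbers, which is what makes the
# downstream boosting search cheap.

topics = {
    0: {"boosting", "classifier", "adaboost", "stump"},
    1: {"topic", "lda", "dirichlet", "corpus"},
}

def topic_features(doc):
    """Proportion of the document's matched words falling in each topic."""
    words = doc.lower().split()
    counts = [sum(w in topics[t] for w in words) for t in sorted(topics)]
    total = sum(counts)
    return [c / total for c in counts] if total else [0.0] * len(topics)

doc = "adaboost builds a stump classifier per boosting round"
print(topic_features(doc))
# -> [1.0, 0.0]
```

A decision-stump learner over this representation examines only as many candidate features as there are topics, rather than one per vocabulary word.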
international conference on computer graphics, imaging and visualisation | 2008
Lili Nurliyana Abdullah; Shahrul Azman Mohd Noah
This paper presents a method able to integrate audio and visual information for action scene analysis in movies. The approach is top-down, determining and extracting action scenes in video by analyzing both audio and video data. We directly model the hierarchy and shared structure of human behaviours, and present a hidden Markov model based framework for the problem of activity recognition. We propose a framework for recognizing actions by measuring human action-based information from video with the following characteristics: the method deals with both visual and auditory information, and captures both spatial and temporal characteristics; and the extracted features are natural, in the sense that they are closely related to human perceptual processing. Our effort is to implement the idea of action identification by extracting syntactic properties of a video such as edge features, colour distribution, audio and motion vectors. We present a two-layer hierarchical module for action recognition. The first layer performs supervised learning to recognize the individual actions of participants using low-level visual features. The second layer models actions, using the output of the first layer as observations, and fuses them with high-level audio features. Both layers use hidden Markov model based approaches for action recognition and clustering, respectively. Our proposed technique characterizes scenes by integrating cues obtained from both the video and audio tracks. Using joint audio and visual information can significantly improve the accuracy of action detection over using audio or visual information alone, because multimodal features can resolve ambiguities that are present in a single modality. In addition, we model the features in multidimensional form.
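The kind of HMM layer described above can be sketched with a tiny model whose hidden states are actions and whose observations are coarse audio/visual cues, decoded with the Viterbi algorithm. The states, cues and probabilities are invented for illustration, not taken from the paper:

```python
# Sketch: a two-state HMM (actions) over audio/visual cue observations,
# decoded with Viterbi. All states, symbols and probabilities are invented.

states = ["walk", "fight"]
start = {"walk": 0.6, "fight": 0.4}
trans = {"walk":  {"walk": 0.8, "fight": 0.2},
         "fight": {"walk": 0.3, "fight": 0.7}}
emit = {"walk":  {"slow_motion": 0.7, "fast_motion": 0.2, "loud_audio": 0.1},
        "fight": {"slow_motion": 0.1, "fast_motion": 0.5, "loud_audio": 0.4}}

def viterbi(obs):
    """Most likely action sequence for a sequence of observed cues."""
    v = [{s: (start[s] * emit[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        step = {}
        for s in states:
            prob, path = max((v[-1][p][0] * trans[p][s] * emit[s][o],
                              v[-1][p][1]) for p in states)
            step[s] = (prob, path + [s])
        v.append(step)
    return max(v[-1].values())[1]

print(viterbi(["slow_motion", "fast_motion", "loud_audio"]))
# -> ['walk', 'fight', 'fight']
```

In the paper's second layer, the observations would instead be the first layer's recognized low-level actions fused with audio features, but the decoding step has this same shape.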
2nd International Multi-Conference on Artificial Intelligence Technology, M-CAIT 2013 | 2013
Shahrul Azman Mohd Noah; Azizi Abdullah; Haslina Arshad; Azuraliza Abu Bakar; Zulaiha Ali Othman; Shahnorbanun Sahran; Nazlia Omar; Zalinda Othman
The determination of real-world coordinates from image coordinates has many applications in computer vision. This paper proposes an algorithm for determining the real-world coordinates of a point on a plane from its image coordinates using a single calibrated camera, based on simple analytic geometry. Experiments were conducted using images of a chessboard pattern taken from five different views. The results show that the exact real-world coordinates and their approximations lie on the same plane, and that there is no significant difference between the exact real-world coordinates and their approximations.
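The underlying geometry can be sketched in the simplest configuration: a pinhole camera looking straight down with its axes aligned to the world, so no rotation matrix is needed. A pixel defines a ray from the camera centre, and intersecting that ray with the world plane Z = 0 recovers the point's world coordinates. All the numbers below are illustrative, not the paper's calibration:

```python
# Sketch of image-to-world mapping for a point on the ground plane Z = 0.
# Simplification: camera looks straight down the -Z axis with axes aligned
# to the world; the paper uses a fully calibrated camera with rotation.

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Ray direction in camera/world coordinates for pixel (u, v),
    pinhole model with focal lengths (fx, fy) and principal point (cx, cy)."""
    return ((u - cx) / fx, (v - cy) / fy, -1.0)   # -1: looking downward

def intersect_ground(camera_pos, direction):
    """Intersect the ray camera_pos + t*direction with the plane Z = 0."""
    t = -camera_pos[2] / direction[2]
    return tuple(c + t * d for c, d in zip(camera_pos, direction))

# Camera 2 m above the ground.
camera_pos = (0.0, 0.0, 2.0)
ray = pixel_to_ray(u=800, v=600, fx=1000, fy=1000, cx=640, cy=480)
print(intersect_ground(camera_pos, ray))
# -> (0.32, 0.24, 0.0)
```

With a rotated camera, the ray direction would first be multiplied by the rotation matrix from the calibration; the plane intersection step is unchanged.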
ieee international advance computing conference | 2009
Mutasem Alsmadi; Khairuddin Omar; Shahrul Azman Mohd Noah; Ibrahim Almarashdah
A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. It is a modification of the standard linear perceptron in that it uses three or more layers of neurons (nodes) with nonlinear activation functions, and is more powerful than the perceptron in that it can distinguish data that are not linearly separable, i.e. not separable by a hyperplane. MLP networks are general-purpose, flexible, nonlinear models consisting of a number of units organised into multiple layers. The complexity of an MLP network can be changed by varying the number of layers and the number of units in each layer. Given enough hidden units and enough data, it has been shown that MLPs can approximate virtually any function to any desired accuracy. This paper presents a performance comparison between multilayer perceptron training algorithms (back propagation, the delta rule and the perceptron). The perceptron is a steepest-descent-type algorithm that normally has a slow convergence rate, and its search for the global minimum often becomes trapped at poor local minima. The current study investigates the performance of these three algorithms for training MLP networks. It was found that the perceptron algorithm performed much better than the other algorithms.
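The separability point made above can be demonstrated in a few lines: the perceptron learning rule converges on a linearly separable function such as AND, but can never reach zero errors on XOR, which is exactly the case that requires an MLP with a hidden layer. The learning rate and epoch count are illustrative choices:

```python
# Demonstration: the perceptron rule learns AND (linearly separable) but
# cannot learn XOR (not separable by any hyperplane). Returns the number of
# training examples still misclassified after training.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                    # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return sum(1 for (x1, x2), t in samples
               if (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) != t)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(train_perceptron(AND), train_perceptron(XOR))
# AND reaches 0 mistakes; XOR always keeps at least one
```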