Abdullah Muhammed
Universiti Putra Malaysia
Publication
Featured research published by Abdullah Muhammed.
Asia-Pacific Conference on Applied Electromagnetics | 2007
Abdullah Muhammed; KyairulNiza Mohd Saleh; Azizol Abdullah
First-In-First-Out (FIFO) is one of the simplest queuing policies used to provide best-effort services in packet-switched networks. However, the performance of FIFO is critical with respect to stability, i.e., the question of whether there is a bound on the total size of packets in the network at all times. In this study, our main objective is to find the optimum voice packet size when using the FIFO scheduling policy. Our approach is based on adversarial generation of packets, so that positive results are more robust in that they do not depend on particular probabilistic assumptions about the input sequences. In this paper, we propose a FIFO scheduling technique that uses the adversarial queuing model to find the optimum voice packet size in a FIFO network. Although the simulation results show that the average packet loss increases as the packet arrival rate increases, the average packet delay is improved compared with the FIFO M/M/1 technique studied by Phalgun (2003). This algorithm can be utilized for transmitting voice packets over IP.
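As an illustration only (this is not the paper's adversarial model), the sketch below simulates a byte-limited FIFO link fed by a fixed-rate voice source and sweeps candidate packet sizes to expose the delay/loss trade-off the abstract refers to. The link rate, buffer size and voice bit rate are assumed values.

```python
# Illustrative sketch, not the paper's model: a byte-limited FIFO link fed by a
# fixed-rate voice source, swept over candidate packet sizes to compare average
# delay and loss. Link rate, buffer size and voice bit rate are assumed values.
from collections import deque

def simulate_fifo(packet_bytes, voice_rate_bps=64_000, link_rate_bps=128_000,
                  buffer_bytes=4_000, duration_s=10.0):
    interval = packet_bytes * 8 / voice_rate_bps      # inter-arrival time of voice packets
    service = packet_bytes * 8 / link_rate_bps        # transmission time per packet
    queue = deque()                                    # departure times of queued packets
    backlog = 0                                        # bytes currently buffered
    next_free = 0.0                                    # when the link becomes idle
    delays, lost, sent = [], 0, 0
    t = 0.0
    while t < duration_s:
        # remove packets that have already finished transmission
        while queue and queue[0] <= t:
            queue.popleft()
            backlog -= packet_bytes
        sent += 1
        if backlog + packet_bytes > buffer_bytes:      # tail-drop on buffer overflow
            lost += 1
        else:
            start = max(t, next_free)
            next_free = start + service
            queue.append(next_free)
            backlog += packet_bytes
            delays.append(next_free - t)
        t += interval
    return sum(delays) / len(delays) if delays else float("inf"), lost / sent

for size in (80, 160, 320, 640):
    d, p = simulate_fifo(size)
    print(f"{size:4d} B  avg delay {d*1000:6.2f} ms  loss {p:.2%}")
```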
Journal of Computer Science | 2017
Adamu Muhammad Noma; Abdullah Muhammed; Mohamad Afendee Mohamed; Zuriati Ahmad Zulkarnain
Field exponentiation and scalar multiplication are the pillars of, and the most computationally expensive operations in, public key cryptosystems. Optimizing these operations is the key to the efficiency of the systems. Analogous to this optimization is solving the addition chain problem. In this study, we survey the addition chain problem from its onset to the state-of-the-art heuristics for optimizing it, with the view to identifying fundamental issues that, when addressed, render the heuristics an optimal means of minimizing the two operations in various public key cryptosystems. Thus, our emphasis is specifically on the heuristics: their various constraints and implementation efficiencies. We present possible ways forward toward the optimal solution for the addition chain problem that can be efficiently applied for optimal implementation of public key cryptosystems.
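To make the connection between addition chains and exponentiation concrete, here is a minimal sketch using the plain binary (square-and-multiply) chain as a baseline; the heuristics surveyed in the paper aim to find shorter chains than this, and nothing below is taken from the survey itself.

```python
# Illustrative only: the binary (square-and-multiply) rule yields an addition chain
# for the exponent, and the chain then drives a modular exponentiation as a
# sequence of multiplications, one per chain element.
def binary_addition_chain(e):
    """Return an addition chain 1 = c0 < c1 < ... < ck = e (binary method)."""
    chain = [1]
    for bit in bin(e)[3:]:               # walk the bits after the leading 1
        chain.append(chain[-1] * 2)      # doubling step (a squaring)
        if bit == "1":
            chain.append(chain[-1] + 1)  # add-1 step (a multiplication by the base)
    return chain

def pow_with_chain(base, chain, mod):
    """Evaluate base**chain[-1] mod mod, one multiplication per chain step."""
    values = {1: base % mod}
    for i in range(1, len(chain)):
        c = chain[i]
        a = chain[i - 1]                 # for the binary chain, c is either 2*a or a + 1
        b = c - a                        # ... so b is a (a squaring) or 1 (multiply by base)
        values[c] = (values[a] * values[b]) % mod
    return values[chain[-1]]

e = 55                                   # 55 = 110111b -> chain 1,2,3,6,12,13,26,27,54,55
chain = binary_addition_chain(e)
print(chain, len(chain) - 1, "multiplications")
print(pow_with_chain(7, chain, 1009), pow(7, e, 1009))   # both values should match
```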
International Conference on Mobile and Wireless Technology | 2017
Mamman Maharazu; Zurina Mohd Hanapi; Azizol Abdullah; Abdullah Muhammed
Provisioning Quality of Service (QoS) is a crucial challenge in any vehicular multimedia wireless application. In wireless applications, maintaining the QoS requirements of various calls is a challenging issue because of the concurrent need to prioritize handoff calls and new calls trying to access the network. In this paper, a novel call admission control scheme is proposed which prioritizes calls and differentiates call types into real-time and non-real-time traffic. By doing so, the scheme reduces both the handoff call blocking probability and the new call dropping probability, and improves data throughput utilization. Experimental results reveal the outstanding performance of the proposed scheme, which achieves better call blocking and call dropping probabilities compared with a non-prioritized scheme.
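A minimal sketch of the prioritization idea described above, in the style of a guard-channel admission check: handoff calls may use a reserved band that new calls cannot, and new real-time calls are favoured over new non-real-time calls. The capacity, guard band and per-call bandwidths are assumed values, not the paper's parameters.

```python
# Illustrative sketch only: a guard-channel style admission check that prioritises
# handoff calls over new calls and real-time over non-real-time traffic.
# Capacity, guard band and per-call bandwidth are assumed values.
from dataclasses import dataclass

CAPACITY = 100          # total bandwidth units in the cell
GUARD = 10              # bandwidth reserved exclusively for handoff calls

@dataclass
class Call:
    kind: str           # "handoff" or "new"
    traffic: str        # "real_time" or "non_real_time"
    bandwidth: int

def admit(call: Call, used: int) -> bool:
    """Return True if the call is admitted given currently used bandwidth."""
    if call.kind == "handoff":
        limit = CAPACITY                  # handoff calls may use the guard band
    elif call.traffic == "real_time":
        limit = CAPACITY - GUARD          # new real-time calls keep clear of it
    else:
        limit = CAPACITY - GUARD - 5      # new non-real-time calls are squeezed further
    return used + call.bandwidth <= limit

used = 80
for c in (Call("handoff", "real_time", 8),
          Call("new", "real_time", 8),
          Call("new", "non_real_time", 8)):
    print(c.kind, c.traffic, "->", "admit" if admit(c, used) else "block")
```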
IEEE Access | 2017
Maharazu Mamman; Zurina Mohd Hanapi; Azizol Abdullah; Abdullah Muhammed
In recent years, the number of consumers of 4G cellular networks has increased exponentially as they discover that the service is user-friendly. Due to the large number of users and their frequent demands, it is necessary to use the limited network resources in a way that guarantees a high standard of quality of service (QoS). A call admission control (CAC) scheme has a major impact on assuring QoS for different users with various QoS requirements in 4G networks. Recently, reservation-based and bandwidth degradation schemes were proposed with the aim of providing effective use of network resources and assuring the QoS requirements of admitted calls. However, despite these objectives, these schemes are not efficient because their modeling and approximation methods starve best effort (BE) traffic. The dynamic threshold approach, which adjusts handoff and new calls based on time-varying conditions, results in a waste of network resources: bandwidth is reserved for handoff calls even when there are few or no handoff calls in the network. In this paper, we propose a novel CAC scheme to provide effective use of network resources and avoid starving BE traffic. The scheme introduces an adaptive threshold value, which adjusts the network resources under heavy traffic intensity. In addition, we propose a reservation and degradation approach to admit more users when bandwidth is limited, which also achieves effective utilization of network resources. Simulation results show that the proposed scheme significantly outperforms the reservation-based and bandwidth degradation schemes in terms of admitting more calls and guaranteeing QoS to all traffic types in the network. Numerical results match the simulation results with insignificant differences.
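A hypothetical sketch of the two mechanisms named in the abstract: a handoff reservation threshold that adapts to observed handoff intensity, and bandwidth degradation of already-admitted elastic calls to fit a new request. All numbers and the exact adaptation rule are assumptions, not the paper's scheme.

```python
# Hypothetical sketch with assumed numbers:
# (1) a handoff guard band that adapts to the observed share of handoff traffic,
#     so bandwidth is not held back when few handoffs occur, and
# (2) degrading the bandwidth of already-admitted elastic calls to fit a new call.
CAPACITY = 100

def adaptive_guard(handoff_rate, total_rate, max_guard=20):
    """Reserve a guard band proportional to the observed share of handoff traffic."""
    if total_rate == 0:
        return 0
    return min(max_guard, round(max_guard * handoff_rate / total_rate))

def admit_with_degradation(request_bw, used, elastic_calls, min_bw=2):
    """Try to admit; if short of bandwidth, shave elastic calls down to min_bw."""
    free = CAPACITY - used
    if free >= request_bw:
        return True, elastic_calls
    deficit = request_bw - free
    degraded = list(elastic_calls)
    for i, bw in enumerate(degraded):
        take = min(bw - min_bw, deficit)
        if take > 0:
            degraded[i] = bw - take
            deficit -= take
        if deficit == 0:
            return True, degraded        # enough bandwidth reclaimed
    return False, elastic_calls          # even full degradation cannot fit the call

print(adaptive_guard(handoff_rate=2, total_rate=10))    # light handoff load -> small guard
ok, calls = admit_with_degradation(8, used=96, elastic_calls=[6, 5])
print(ok, calls)                                        # admitted after shaving the first call
```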
World Congress on Information and Communication Technologies | 2014
Omid Seifaddini; Azizol Abdullah; Masnida Hussin; Abdullah Muhammed
Offloading is a technique by which resource-poor mobile devices alleviate their resource constraints by migrating part or all of the computation-heavy portion of a mobile application to more resourceful platforms such as cloud computing. This paper presents an analysis and overview of cloud-based mobile computation offloading performance based on the current Internet infrastructure and backbone in Malaysia, specifically in the Serdang area of Peninsular Malaysia. The experimental results show that the current developing infrastructure and backbone need further enhancement in order to support mobile offloading.
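The measurements in the paper feed into the classic offloading trade-off: offloading pays off only when remote execution plus data transfer beats local execution. The sketch below illustrates that rule with assumed workload sizes, device/cloud speeds and link rates; none of the figures are measured values from the paper.

```python
# Illustrative only: the classic offloading decision rule the measurements feed into.
# Offloading pays off when remote execution plus data transfer is faster than
# executing locally. All figures below are assumptions, not measured values.
def should_offload(workload_mi, data_mb, local_mips, cloud_mips,
                   uplink_mbps, downlink_mbps, result_mb):
    local_time = workload_mi / local_mips
    transfer = data_mb * 8 / uplink_mbps + result_mb * 8 / downlink_mbps
    remote_time = workload_mi / cloud_mips + transfer
    return remote_time < local_time, local_time, remote_time

# With a slow uplink, the transfer time dominates and offloading does not pay off.
offload, t_local, t_remote = should_offload(
    workload_mi=4_000, data_mb=2, local_mips=500, cloud_mips=10_000,
    uplink_mbps=1.0, downlink_mbps=4.0, result_mb=0.5)
print(f"offload={offload}  local={t_local:.1f}s  remote={t_remote:.1f}s")
```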
Journal of Computer Science | 2018
James Kok Konjaang; Fahrul Hakim Ayob; Abdullah Muhammed
The rise in demand for cloud resources (network, hardware and software) requires a cost-effective scientific workflow scheduling algorithm that reduces cost and balances the load of all jobs evenly for better system throughput. Scheduling multiple scientific workflows with reduced makespan and cost in a dynamic cloud computing environment is an attractive research area that needs more attention. Scheduling multiple workflows with the standard Max-Min algorithm is a challenge because of the high priority given to tasks with the maximum execution time first. To overcome this challenge, we propose a new mechanism called the Expanded Max-Min (Expa-Max-Min) algorithm, which gives equal opportunity to cloudlets with maximum and minimum execution times to be scheduled, reducing cost and time. Expa-Max-Min first calculates the completion time of all the cloudlets in the cloudletList to find the cloudlets with minimum and maximum execution times, then sorts and queues the cloudlets into two queues based on their execution times. The algorithm then selects a cloudlet from the maximum execution time queue and assigns it to the resource that produces the minimum completion time, while executing cloudlets in the minimum execution time queue concurrently. The experimental results demonstrate that our proposed Expa-Max-Min algorithm is able to produce good quality solutions in terms of minimising average cost and makespan, and balances loads better than the Max-Min and Min-Min algorithms.
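A sketch of the two-queue idea described in the abstract. Details the abstract does not give, such as how the two queues interleave and how completion time is estimated, are assumptions here; the point is only that long and short cloudlets each get a turn at the earliest-completing VM.

```python
# Sketch of the two-queue idea in the abstract (interleaving rule and completion-time
# model are assumptions). Cloudlets are split by execution time; heads of both queues
# are mapped to the VM giving the minimum completion time, so long tasks no longer
# monopolise the schedule.
def expa_max_min_sketch(lengths, vm_speeds):
    avg = sum(lengths) / len(lengths)
    max_q = sorted((l for l in lengths if l >= avg), reverse=True)   # long cloudlets first
    min_q = sorted(l for l in lengths if l < avg)                    # short cloudlets first
    ready = [0.0] * len(vm_speeds)                                   # when each VM becomes free
    schedule = []

    def assign(length):
        # pick the VM with the earliest completion time for this cloudlet
        finish = [ready[i] + length / vm_speeds[i] for i in range(len(vm_speeds))]
        vm = finish.index(min(finish))
        ready[vm] = finish[vm]
        schedule.append((length, vm, ready[vm]))

    while max_q or min_q:
        if max_q:
            assign(max_q.pop(0))        # one long cloudlet ...
        if min_q:
            assign(min_q.pop(0))        # ... interleaved with one short cloudlet
    return schedule, max(ready)

sched, makespan = expa_max_min_sketch([40, 35, 30, 8, 6, 4], vm_speeds=[1.0, 2.0])
print("makespan:", makespan)
for length, vm, finish in sched:
    print(f"cloudlet {length:2d} -> vm{vm}  finishes at {finish:.1f}")
```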
International Journal of Sensor Networks | 2017
Ammar Y. Tuama; Mohamad Afendee Mohamed; Abdullah Muhammed; Zurina Mohd Hanapi
Energy consumption is one of the most critical issues in wireless sensor networks (WSNs). For a sensor device, transmission of data is considered the most energy-consuming task, and its cost depends mostly on the size of the data. Fortunately, data compression can be used to minimise the transmitted data size and thus extend sensor lifetime. In this paper, we propose a new lossless compression algorithm that can handle small data communication in WSNs. Using compression ratio, memory usage, number of instructions and execution speed as comparison parameters, the proposed algorithm is measured against a set of existing algorithms. Two different datasets have been used for this purpose: a self-generated dataset and a real sensor dataset from the Harvard Sensor Library. As a result, the proposed algorithm not only outclasses the other existing algorithms but, most importantly, produces a positive compression ratio throughout the whole test, whereas most existing algorithms experience an expansion in data size when dealing with very small data.
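The abstract does not specify the proposed algorithm, so the sketch below is not it; it is a generic delta-plus-variable-length scheme of the kind often used for small sensor readings, shown only to illustrate why small, slowly varying samples compress well while naive block compressors can expand them.

```python
# Not the paper's algorithm: a generic sketch of delta coding followed by a
# variable-length (LEB128-style) byte encoding, a common approach for small,
# slowly varying sensor samples.
def zigzag(n):
    """Map signed deltas to small unsigned integers (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
    return (n << 1) ^ (n >> 31)

def encode(samples):
    out, prev = bytearray(), 0
    for s in samples:
        v = zigzag(s - prev)
        prev = s
        while True:                  # 7 payload bits per byte, high bit = "more bytes follow"
            byte = v & 0x7F
            v >>= 7
            out.append(byte | (0x80 if v else 0))
            if not v:
                break
    return bytes(out)

readings = [2210, 2212, 2211, 2215, 2216, 2214]    # e.g. temperature in centi-degrees
packed = encode(readings)
raw_size = len(readings) * 4                        # 4-byte integers if sent uncompressed
print(f"raw {raw_size} B -> compressed {len(packed)} B, ratio {raw_size/len(packed):.1f}x")
```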
IEEE Access | 2017
Kailun Eng; Abdullah Muhammed; Mohamad Afendee Mohamed; Sazlinah Hasan
Over the years, many heuristic algorithms have been proposed for solving various Grid scheduling problems. The GridSim simulator has become a very popular simulation tool and has been widely used by Grid researchers to test and evaluate the performance of their proposed scheduling algorithms. Heterogeneity is one of the unique characteristics of Grid computing and induces additional challenges in designing heuristic-based scheduling algorithms. The main concern when performing simulation experiments to evaluate scheduling algorithms is therefore how to model and simulate Grid scheduling scenarios that capture the inherent heterogeneity of the Grid computing environment. However, most simulation studies based on GridSim have not considered this heterogeneity. In this paper, we propose a new simulation model that incorporates the range-based method into GridSim for modeling and simulating heterogeneous tasks and resources, so as to capture the inherent heterogeneity of Grid environments; the model can later be used by other researchers to test their algorithms.
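GridSim itself is a Java toolkit, so the following is only a language-agnostic sketch of what a range-based heterogeneity model might look like: task lengths and resource speeds are drawn from ranges whose widths encode low or high heterogeneity. The specific range values are assumptions, not the paper's parameters.

```python
# Hypothetical sketch (GridSim is Java; this is a language-agnostic illustration):
# a range-based model draws task lengths (MI) and resource speeds (MIPS) from ranges
# whose widths encode the degree of heterogeneity. Range values are assumptions.
import random

RANGES = {
    ("low_task", "low_res"):   ((100, 200),  (400, 500)),
    ("low_task", "high_res"):  ((100, 200),  (100, 1000)),
    ("high_task", "low_res"):  ((100, 2000), (400, 500)),
    ("high_task", "high_res"): ((100, 2000), (100, 1000)),
}

def make_scenario(case, n_tasks=5, n_resources=3, seed=42):
    """Generate one heterogeneity case: a list of task lengths and resource speeds."""
    rng = random.Random(seed)
    task_range, res_range = RANGES[case]
    tasks = [rng.randint(*task_range) for _ in range(n_tasks)]
    resources = [rng.randint(*res_range) for _ in range(n_resources)]
    return tasks, resources

tasks, resources = make_scenario(("high_task", "low_res"))
print("task lengths (MI):", tasks)
print("resource speeds (MIPS):", resources)
```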
Cogent Engineering | 2017
Adamu Muhammad Noma; Abdullah Muhammed; Zuriati Ahmad Zukarnain; Muhammad Afendee Mohamed
Cryptography via public key cryptosystems (PKC) has been widely used for providing services such as confidentiality, authentication, integrity and non-repudiation. Besides security, computational efficiency is another major concern, and for PKC it is largely determined by either modular exponentiation or scalar multiplication operations, such as those found in RSA and elliptic curve cryptosystems (ECC), respectively. One approach to this problem is the concept of an addition chain (AC), in which a single expensive operation involving a large integer is reduced to a sequence of simple multiplications or additions. Existing techniques convert the integer into a binary or m-ary representation before performing the series of operations. This paper proposes an iterative variant of the sliding window method (SWM), a member of the m-ary family, that yields a shorter sequence of multiplications for modular exponentiation; we call it the iterative SWM. Moreover, specifically for ECC, where point negation incurs no extra cost, the paper proposes an iterative recoded SWM, operating on integers recoded using a modified non-adjacent form (NAF), for speeding up scalar multiplication. We also examine how the number of additions in scalar multiplication varies with the integer's Hamming weight. The proposed iterative SWM methods reduce the number of operations by up to 6% compared with the standard SWM heuristic. They result in even shorter chains of operations than those returned by many metaheuristic algorithms for the AC problem.
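For readers unfamiliar with the baseline being improved upon, here is the standard (non-iterative) sliding-window exponentiation; the paper's iterative SWM and recoded-NAF variants are refinements of this and are not reproduced here. The window size and modulus below are illustrative.

```python
# Standard sliding-window modular exponentiation, shown only to make the operation
# being optimised concrete; this is the baseline the paper's variants refine.
def sliding_window_pow(base, exp, mod, w=4):
    # precompute the odd powers base^1, base^3, ..., base^(2^w - 1)
    odd_powers = {1: base % mod}
    base_sq = (base * base) % mod
    for k in range(3, 1 << w, 2):
        odd_powers[k] = (odd_powers[k - 2] * base_sq) % mod

    bits = bin(exp)[2:]
    result, i = 1, 0
    while i < len(bits):
        if bits[i] == "0":
            result = (result * result) % mod          # single squaring for a zero bit
            i += 1
        else:
            # take the longest window (<= w bits) that ends in a 1
            j = min(i + w, len(bits))
            while bits[j - 1] == "0":
                j -= 1
            window = int(bits[i:j], 2)                # odd value, looked up in the table
            for _ in range(j - i):
                result = (result * result) % mod
            result = (result * odd_powers[window]) % mod
            i = j
    return result

print(sliding_window_pow(7, 560, 561), pow(7, 560, 561))   # both should print 1
```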
International Conference on Computational Science and Its Applications | 2016
Omid Seifaddini; Azizol Abdullah; Abdullah Muhammed; Masnida Hussin
Scheduling of jobs is one of the most important research areas in Grid computing and has attracted much attention since its beginning. Job scheduling in Grid computing is an NP-complete problem due to Grid characteristics such as heterogeneity and dynamicity. Many heuristic algorithms have been proposed for Grid scheduling. However, these heuristic methods are limited by the time required for remapping jobs to Grid resources in such an elastic and dynamic environment. Great Deluge (GD) is a practical solution for this problem. Therefore, this paper presents Great Deluge and Extended Great Deluge (EGD) based scheduling algorithms for Grid computing. We also present the detailed implementation of GD and EGD in a reliable simulation platform, GridSim. This has two advantages. First, it eases the reimplementation process for future contributors, since developing such scheduling algorithms involves considerable complexity and ambiguity. Second, most research and experimental results, especially in the area of Grid scheduling, have used their own infrastructure to simulate the performance of the algorithms, so the question remains how well they would perform in a real-world environment. We also investigate the computation time and the number of soft constraint violations of EGD against the conventional GD algorithm. The GD scheduling algorithm is able to provide quality solutions in a shorter time for small Grid sizes, while EGD produces schedules in a shorter time for all cases.
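A minimal sketch of the Great Deluge acceptance rule applied to a toy job-to-resource mapping with makespan as the cost. The decay rate, neighbourhood move and stopping rule are assumptions for illustration, and the Extended GD's modified decay mechanism is not reproduced here.

```python
# Minimal Great Deluge sketch on a toy job-to-resource mapping, makespan as cost.
# Decay rate, neighbourhood move and stopping rule are illustrative assumptions.
import random

def makespan(assign, lengths, speeds):
    load = [0.0] * len(speeds)
    for job, res in enumerate(assign):
        load[res] += lengths[job] / speeds[res]
    return max(load)

def great_deluge(lengths, speeds, iters=5_000, decay=0.999, seed=1):
    random.seed(seed)
    assign = [random.randrange(len(speeds)) for _ in lengths]
    cost = makespan(assign, lengths, speeds)
    level = cost                                   # the "water level" starts at the initial cost
    best, best_cost = list(assign), cost
    for _ in range(iters):
        job = random.randrange(len(lengths))       # neighbourhood move: reassign one job
        old = assign[job]
        assign[job] = random.randrange(len(speeds))
        new_cost = makespan(assign, lengths, speeds)
        if new_cost <= level:                      # accept anything at or below the water level
            cost = new_cost
            if cost < best_cost:
                best, best_cost = list(assign), cost
        else:
            assign[job] = old                      # reject and undo the move
        level *= decay                             # the level recedes each iteration
    return best, best_cost

rng = random.Random(7)
lengths = [rng.randint(10, 100) for _ in range(20)]
best, best_cost = great_deluge(lengths, speeds=[1.0, 1.5, 2.0])
print("best makespan:", round(best_cost, 2))
```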