Masnida Hussin
Universiti Putra Malaysia
Publication
Featured research published by Masnida Hussin.
International Journal of Machine Learning and Computing | 2014
Mohd Hairy Mohamaddiah; Azizol Abdullah; Shamala Subramaniam; Masnida Hussin
The cloud provider plays a major role in providing resources, such as computing power, for cloud subscribers to deploy their applications on multiple platforms anywhere, anytime. However, cloud users still have difficulty receiving guaranteed computing resources on time. This affects the service time and the service-level agreements of various users across multiple applications, so there is a need for a new resolution to this problem. This survey paper studies resource allocation and monitoring in the cloud computing environment. We describe cloud computing and its properties, research issues in resource management, mainly in resource allocation and monitoring, and finally solution approaches for resource allocation and monitoring. It is believed that this paper will benefit both cloud users and researchers seeking further knowledge on resource management in cloud computing. The resources on the cloud are pooled in order to serve multiple subscribers. The provider uses a multi-tenancy model in which resources (physical and virtual) are reassigned dynamically based on tenant requirements (5). Resources are assigned based on the lease and SLA agreement, whereby different clients need larger or smaller amounts of virtual resources. Consequently, the growth in demand for cloud services makes it ever more challenging for the provider to supply resources to subscribers. Therefore, in this paper we provide a review of cloud computing that focuses on resource management: allocation and monitoring. Our methodology for this review is as follows: we provide a cloud computing taxonomy covering cloud definitions, characteristics and deployment models; we then analyze the literature and discuss resource management, its process and its elements; and we then concentrate on the literature on resource allocation and monitoring.
We derive the problems, challenges and solution approaches for resource allocation and monitoring in the cloud. This paper is organized as follows: Section II introduces an overview of cloud computing, Section III discusses resource management and its processes, Section IV discusses related work on resource management in the cloud, Section V describes solution approaches to resource allocation and monitoring and, finally, Section VI concludes the paper.
IEEE International Conference on Dependable, Autonomic and Secure Computing | 2011
Masnida Hussin; Young Choon Lee; Albert Y. Zomaya
Large-scale distributed computing systems (LDSs), such as grids and clouds, are primarily designed to provide massive computing capacity. These systems often dissipate excessive energy, both to power and to cool them. Concerns over greening these systems have prompted a call for scheduling policies with energy awareness (e.g., energy proportionality). The dynamic and heterogeneous nature of resources and tasks in LDSs is a major hurdle to overcome for energy efficiency when designing scheduling policies. In this paper, we address the problem of scheduling tasks with different priorities (deadlines) for energy efficiency by exploiting resource heterogeneity. Specifically, our investigation of energy efficiency focuses on two issues: (1) balancing the workload in a way that maximizes utilization and (2) managing power by controlling the execution of tasks on processors so that energy is consumed optimally. We form a hierarchical scheduler that exploits the multi-core architecture for effective scheduling. Our scheduling approach exploits the diversity of task priorities for proper load balancing across heterogeneous processors while observing energy consumption in the system. Simulation experiments prove the efficacy of our approach, and the comparison results indicate that our scheduling policy helps improve the energy efficiency of the system.
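The load-balancing idea described above can be sketched as a simple greedy assignment over heterogeneous processors; this is an illustrative sketch of the general technique, not the paper's actual hierarchical policy, and the function and parameter names are assumptions.

```python
# Illustrative sketch (not the paper's exact policy): order tasks by deadline,
# then place each task on the heterogeneous core where it finishes earliest,
# which tends to balance load relative to each core's speed.

def schedule(tasks, core_speeds):
    """tasks: list of (deadline, work); core_speeds: speed per core."""
    loads = [0.0] * len(core_speeds)          # accumulated busy time per core
    plan = []
    for deadline, work in sorted(tasks):      # earliest deadline first
        # finish time on each core if the task were appended there
        finish = [loads[i] + work / core_speeds[i] for i in range(len(core_speeds))]
        best = min(range(len(core_speeds)), key=lambda i: finish[i])
        loads[best] = finish[best]
        plan.append((deadline, best))
    return plan, loads
```

A faster core attracts proportionally more work because finish times, not raw task counts, drive the placement.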
Journal of Parallel and Distributed Computing | 2015
Masnida Hussin; Nor Asilah Wati Abdul Hamid; Khairul Azhar Kasmiran
Demands on the capacity of distributed systems (e.g., Grid and Cloud) play a crucial role in today's information era due to the growing scale of the systems. While distributed systems provide a vast amount of computing power, their reliability is often hard to guarantee. This paper presents effective resource management using adaptive reinforcement learning (RL) that focuses on improving successful execution with low computational complexity. The approach uses an emerging methodology of RL in conjunction with a neural network to help a scheduler effectively observe and adapt to dynamic changes in execution environments. Observing the environment at various learning stages, normalized by resource-aware availability and feedback-based scheduling, brings the environment significantly closer to optimal solutions. Our approach also addresses the high computational complexity of the RL system through on-demand information sharing. Results from our extensive simulations demonstrate the effectiveness of adaptive RL for improving system reliability.
- Able to drive the evolution of automation networks towards higher reliability.
- Able to handle tasks at different states in processing.
- Able to adapt to system changes while leading to better learning experiences.
- Able to sustain reliable performance.
- Able to deal with dynamic global network infrastructure.
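The feedback-based scheduling loop described above follows the standard RL pattern of updating a value estimate from observed rewards; below is a minimal tabular Q-learning sketch of that general idea, not the paper's neural-network model, with all state and action names chosen for illustration.

```python
# Minimal tabular Q-learning update: the scheduler's estimate of an action's
# value in a given system state is nudged toward observed reward plus the
# discounted value of the best follow-up action.

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update on a tabular Q function (dict of dicts)."""
    nxt = Q.get(next_state)
    best_next = max(nxt.values()) if nxt else 0.0
    old = Q.setdefault(state, {}).get(action, 0.0)
    Q[state][action] = old + alpha * (reward + gamma * best_next - old)
    return Q[state][action]
```

In a scheduling context, the reward could be 1 for a task completed before its deadline and 0 otherwise, so the table gradually favors schedules that succeed.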
International Conference on Parallel Processing | 2011
Masnida Hussin; Young Choon Lee; Albert Y. Zomaya
Energy consumption in large-scale distributed systems, such as computational grids and clouds, has recently gained a lot of attention due to its significant performance, environmental and economic implications. These systems consume a massive amount of energy not only to power them but also to cool them. More importantly, the explosive increase in energy consumption is not linear in resource utilization, as only a marginal percentage of energy is consumed for actual computational work. This energy problem becomes more challenging with the uncertainty and variability of workloads and heterogeneous resources in those systems. This paper presents a dynamic scheduling algorithm incorporating reinforcement learning for good performance and energy efficiency. This incorporation helps the scheduler observe and adapt to various processing requirements (tasks) and different processing capacities (resources). The learning process of our scheduling algorithm develops an association between the best action (schedule) and the current state of the environment (parallel system). We have also devised a task-grouping technique to help the decision-making process of our algorithm. The grouping technique is adaptive in nature since it incorporates current workload and energy consumption into the choice of the best action. Results from our extensive simulations with varying processing capacities and a diverse set of tasks demonstrate the effectiveness of this learning approach.
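The task-grouping step mentioned above can be illustrated by bucketing tasks before the scheduler picks an action per group instead of per task; the grouping criterion (deadline slack) and the thresholds here are assumptions for illustration, not taken from the paper.

```python
# Illustrative task-grouping step: bucket tasks by deadline slack so a learned
# policy can act on a small number of groups rather than individual tasks.

def group_tasks(tasks, now, bounds=(10.0, 60.0)):
    """tasks: list of (task_id, deadline). Returns urgent/soon/relaxed buckets."""
    groups = {"urgent": [], "soon": [], "relaxed": []}
    for tid, deadline in tasks:
        slack = deadline - now
        if slack <= bounds[0]:
            groups["urgent"].append(tid)
        elif slack <= bounds[1]:
            groups["soon"].append(tid)
        else:
            groups["relaxed"].append(tid)
    return groups
```

An adaptive variant, as the abstract suggests, would adjust the bounds from the current workload and energy readings rather than keeping them fixed.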
Parallel and Distributed Computing: Applications and Technologies | 2010
Masnida Hussin; Young Choon Lee; Albert Y. Zomaya
Large-scale distributed computing systems (LDCSs) can be best characterized by their dynamic nature, particularly in terms of availability and performance. Typically, these systems deal with jobs that vary in many aspects, such as resource requirements, quality of service (QoS) and other temporal constraints. These diverse characteristics of both resources and jobs impose a great burden on scheduling and resource allocation: inefficient resource allocation brings about poor resource utilization and often unreliable job execution. We present the Adaptive Reliable Allocation (ADREA) scheme, which attempts to ensure reliable job execution by effectively exploiting heterogeneity in both resources and jobs using a novel clustering technique and a dynamic job migration policy. Specifically, ADREA intends to pave the way to better performance (e.g., response time, resource utilization) with reliable computation. Extensive simulations with varying processing capacities and different job arrival rates have been carried out to evaluate our scheme. The results demonstrate that the proposed scheme provides better performance than other algorithms, as it significantly improves both job completion time and resource utilization.
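A dynamic migration policy of the kind described above can be sketched as a simple trigger-and-target check; the threshold, field names and selection rule here are assumptions for illustration, not ADREA's actual policy.

```python
# Hedged sketch of a dynamic migration check: move a job off a node that is
# down or over-utilized, onto the least-loaded node that is still up.

def pick_migration(nodes, job, util_limit=0.9):
    """nodes: dict name -> {'util': float, 'up': bool}. Returns target or None."""
    src = job["node"]
    if nodes[src]["up"] and nodes[src]["util"] <= util_limit:
        return None                       # current placement still acceptable
    candidates = [n for n, s in nodes.items() if n != src and s["up"]]
    if not candidates:
        return None
    return min(candidates, key=lambda n: nodes[n]["util"])
```

Returning `None` when the source node is healthy keeps migrations rare, which matters because each migration itself costs time and bandwidth.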
International Conference on Algorithms and Architectures for Parallel Processing | 2011
Masnida Hussin; Young Choon Lee; Albert Y. Zomaya
The scale of parallel and distributed systems (PDSs), such as grids and clouds, and the diversity of applications running on them make reliability a high-priority performance metric. This paper presents a reputation-based resource allocation strategy for PDSs with a market model. Resource reputation is determined by availability and reliable execution. The market model helps define a trust interaction between provider and consumer that leverages dependable computing. We have also explicitly taken data staging and its delay into account when making decisions. Results demonstrate that our approach significantly increases successful execution while exploiting diversity in tasks and resources.
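A reputation score determined by availability and reliable execution, as described above, could take a weighted-average form; this is a plausible sketch under stated assumptions (the weight `w_avail` and the function shape are illustrative, not the paper's formula).

```python
# Hedged sketch of a resource reputation score: a weighted blend of observed
# availability and the fraction of past executions that completed successfully.

def reputation(availability, successes, attempts, w_avail=0.5):
    """availability in [0, 1]; successes/attempts count past task executions."""
    success_rate = successes / attempts if attempts else 0.0
    return w_avail * availability + (1 - w_avail) * success_rate
```

An allocator would then prefer resources with higher scores, so nodes that fail tasks or go offline gradually receive less work.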
Grid Computing | 2010
Masnida Hussin; Young Choon Lee; Albert Y. Zomaya
The diversity of job characteristics, such as the unstructured/unorganized arrival of jobs and their priorities, can lead to inefficient resource allocation. Therefore, the characterization of jobs is an important aspect worthy of investigation; it enables judicious resource allocation decisions that achieve two goals (performance and utilization) and improves resource availability.
Journal of Computer Science | 2013
Masnida Hussin; Rohaya Latip
Service-oriented distributed systems such as Grids and Clouds are unified computing platforms that connect and share heterogeneous resources, including computation, storage, information and knowledge resources. While these systems provide a vast amount of computing power, their reliability is often hard to guarantee, due to the increased complexity of processing (e.g., overhead, latency) that can indirectly affect system performance. In this study, we address the problem of dynamic control for resource management in a distributed computing environment. Our dynamic resource control mechanism is designed on the basis of reputation-based scheduling and aims for sustainable resource sharing. In particular, each computational resource in the environment has its own reputation value, calculated online by considering its computing capacity and availability. The degree of resource reputation significantly helps in scheduling decisions in terms of successful execution, while resource availability is adaptively monitored. Results demonstrate that our resource control mechanism significantly increases successful execution, leading to robust resource management.
World Congress on Information and Communication Technologies | 2014
Omid Seifaddini; Azizol Abdullah; Masnida Hussin; Abdullah Muhammed
Offloading is a technique by which resource-poor mobile devices try to alleviate their resource constraints by migrating part or all of the computation-heavy portion of a mobile application to more resourceful platforms such as cloud computing. This paper presents an analysis and overview of cloud-based mobile computation offloading performance on the current Internet infrastructure and backbone in Malaysia, specifically in the Serdang area of Peninsular Malaysia. The experimental results show that the currently developing infrastructure and backbone need further enhancement in order to support mobile offloading.
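The offloading trade-off described above is commonly formulated as comparing local execution time against remote execution plus transfer time; this is the textbook formulation, not the paper's measurement methodology, and all parameter names are illustrative.

```python
# Classic offloading decision: offload only when remote computation plus data
# transfer over the available backbone beats computing locally.

def should_offload(cycles, local_speed, cloud_speed, data_bits, bandwidth_bps):
    """cycles: CPU work; speeds in cycles/s; transfer at bandwidth_bps."""
    local_time = cycles / local_speed
    remote_time = cycles / cloud_speed + data_bits / bandwidth_bps
    return remote_time < local_time
```

This makes the paper's conclusion concrete: with a slow backbone, the transfer term dominates and offloading loses even when the cloud is ten times faster.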
International Conference on Computational Science | 2014
Raja Azlina Raja Mahmood; Masnida Hussin; Noridayu Manshor; Asad I. Khan
Efficient and quick attack detection is critical in any network, especially if the attack is harmful and can bring down the whole network within a short period of time. A black hole, or packet drop, attack is one example of a harmful attack in mobile ad hoc networks. In this study, we implement a series of time-based black hole attack detections at different time intervals and compare the results. We study the performance of the networks, measured as the packet delivery ratio percentage, with detection interval times of 900, 450 and 300 seconds over a total of 900 seconds of simulation time. The results suggest that an appropriate time interval is critical for providing reliable detection results in a timely manner. In general, the 450-second detection interval provided more reliable results, with a lower false-positive percentage than the 300-second detection interval. The most likely explanation for the high false-positive rate at the shorter detection interval is that packets are given insufficient time to arrive at their destinations during the detection process. Meanwhile, implementing the attack detection only after 900 seconds may be considered too late and thus may have a devastating impact on the network.
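The metric underlying the comparison above, packet delivery ratio per detection window, can be sketched as follows; the window lengths (900/450/300 s) follow the study, while the helper and its event format are illustrative assumptions.

```python
# Sketch of per-window packet delivery ratio (PDR): split the simulation into
# fixed detection windows and compute delivered/sent within each window.

def pdr_per_window(events, window, total_time):
    """events: list of (send_time, delivered_bool). Returns PDR per window."""
    n = total_time // window
    sent = [0] * n
    recv = [0] * n
    for t, delivered in events:
        i = min(int(t // window), n - 1)   # clamp events at the end of the run
        sent[i] += 1
        recv[i] += delivered
    return [r / s if s else 0.0 for r, s in zip(recv, sent)]
```

With shorter windows, fewer packets per window have time to complete, which is exactly the false-positive mechanism the study identifies for the 300-second interval.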