Azlan Ismail
Universiti Teknologi MARA
Publications
Featured research published by Azlan Ismail.
Journal of Systems and Software | 2013
Azlan Ismail; Jun Yan; Jun Shen
This research addresses a critical issue in service level agreement (SLA) violation handling, namely time constraint violations in service-based systems (SBS). Whenever an SLA violation occurs for a service, it can potentially impact dependent services, leading to an unreliable SBS. Therefore, SLA violation handling support is essential to produce a robust and adaptive SBS. There are several approaches to realizing exception and fault handling support for SBS, focusing on the detection stage, the analysis stage, and the resolution stage. However, existing work has not considered a handling strategy that takes impact information into account to reduce the amount of change, which is essential to handle the violation effectively while consuming a reasonable recovery execution time. Therefore, in this research we propose incremental SLA violation handling with time impact analysis. The main role of the time impact analysis is to automatically generate an impact region based on negative time impact conditions; it also generates the appropriate time requirements. Both the region and the requirements support the recovery process. A simplified evaluation study suggests that the proposed approach can reduce the amount of service change within a reasonable recovery execution time.
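The abstract does not spell out the impact-region computation, so the following is a minimal sketch under simplifying assumptions: a purely sequential composition, fixed planned durations, and a per-service latest-finish constraint. The `time_impact_region` helper and its negative-impact rule are illustrative, not the paper's formulation.

```python
# Illustrative sketch (not the paper's exact algorithm): derive an impact
# region for a sequential composition after one service overruns its SLA.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    planned_start: float    # minutes from composition start
    planned_duration: float
    latest_finish: float    # time constraint agreed in the SLA

def time_impact_region(services, violated_index, actual_duration):
    """Return the overall delay and the downstream services whose time
    constraints are negatively impacted, with the tightened duration each
    must now meet (the new time requirement)."""
    region = {}
    delay = actual_duration - services[violated_index].planned_duration
    current_start = services[violated_index].planned_start + actual_duration
    for svc in services[violated_index + 1:]:
        projected_finish = current_start + svc.planned_duration
        if projected_finish > svc.latest_finish:          # negative time impact
            region[svc.name] = max(svc.latest_finish - current_start, 0.0)
        current_start = projected_finish
    return delay, region

if __name__ == "__main__":
    plan = [Service("s1", 0, 10, 12), Service("s2", 10, 5, 18), Service("s3", 15, 8, 25)]
    print(time_impact_region(plan, violated_index=0, actual_duration=14))
    # -> (4, {'s2': 4.0, 's3': 6.0})
```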
Service Oriented Computing and Applications | 2010
Azlan Ismail; Jun Yan; Jun Shen
Service level agreements (SLAs) play an important role in realizing service-oriented applications. With an SLA negotiation mechanism, both parties, namely the requester and the provider, can exchange SLA parameter information to establish an agreement. In this paper, we study the roles of both parties and focus on how service providers generate offers upon receiving requests from service requesters. From the provider's perspective, the provider has to decide the right values to offer based on its current resource availability while aiming to satisfy the requester's requirements where possible. Therefore, we propose an approach to offer generation, including the architecture, the information model, and the generation algorithm. We then provide a case study to illustrate the usefulness of the approach, followed by an analysis to justify its effectiveness.
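A rough sketch of provider-side offer generation under assumptions of our own (a simple load-scaling model and made-up parameter names such as `base_response_ms`); it only illustrates the idea of deriving an offer, or a counter-offer, from current resource availability.

```python
# Hypothetical sketch of provider-side offer generation: given the requester's
# target response time and the provider's current load, propose the closest
# feasible value.  The load model and names are illustrative assumptions.
def generate_offer(requested_response_ms, base_response_ms, utilization, capacity):
    """Return an offered response time, or None if no sensible offer exists."""
    if capacity <= 0 or utilization >= capacity:
        return None                           # no spare capacity to make an offer
    # Simple load-scaling model: response time degrades as utilization grows.
    feasible_response_ms = base_response_ms / (1.0 - utilization / capacity)
    if feasible_response_ms <= requested_response_ms:
        return requested_response_ms          # request can be met as asked
    return round(feasible_response_ms, 1)     # counter-offer with the best feasible value

print(generate_offer(requested_response_ms=200, base_response_ms=150,
                     utilization=60, capacity=100))   # counter-offer: 375.0
```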
Web Information Systems Engineering | 2009
Azlan Ismail; Jun Yan; Jun Shen
This paper addresses the consistency and satisfaction of composite services in the presence of temporal constraints. These constraints may cause conflicts between services and affect the estimation of composition requirements. Existing verification approaches have not adequately addressed this issue. Therefore, this paper contributes a verification method with temporal consistency checking and temporal satisfaction estimation. A set of checking rules and estimation formulae is presented according to workflow patterns and temporal dependencies. The method leads to three possible outcomes: consistent with a satisfactory combination, consistent with an unsatisfactory combination, and inconsistent with an unsatisfactory combination.
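As a toy illustration of the three outcomes, the sketch below checks a sequence pattern only; the constraint model (start windows plus fixed durations) and the consistency rule are assumptions rather than the paper's formal rules.

```python
# Illustrative sketch of the three verification outcomes for a sequence pattern.
def verify_sequence(services, global_deadline):
    """services: list of dicts with earliest_start, latest_start, duration."""
    consistent = True
    for prev, nxt in zip(services, services[1:]):
        earliest_finish = prev["earliest_start"] + prev["duration"]
        if earliest_finish > nxt["latest_start"]:
            consistent = False            # temporal conflict between two services
            break
    # Best-case completion time of the whole sequence, if it is consistent.
    total = services[-1]["earliest_start"] + services[-1]["duration"] if consistent else None
    satisfactory = consistent and total <= global_deadline
    if consistent and satisfactory:
        return "consistent, satisfactory"
    if consistent:
        return "consistent, unsatisfactory"
    return "inconsistent, unsatisfactory"

print(verify_sequence(
    [{"earliest_start": 0, "latest_start": 0, "duration": 4},
     {"earliest_start": 4, "latest_start": 6, "duration": 3}],
    global_deadline=10))                  # -> consistent, satisfactory
```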
Australian Software Engineering Conference | 2009
Azlan Ismail; Jun Yan; Jun Shen
Service selection is a planning approach that evaluates and selects among multiple candidate services to form a composite plan. In service selection, we identify an issue that has not yet been investigated, namely the consistency of time constraints among composed services. This issue is significant because time constraints can cause inconsistency between the selected services even though their aggregated QoS satisfies the global requirements. Furthermore, unintended waiting time may need to be considered in the QoS aggregation. This paper therefore contributes an analysis of the problems caused by time constraints and proposes a general selection approach to tackle these issues. The approach comprises two major functionalities: (i) exploration of the process model based on patterns, and (ii) evaluation of candidates based on time constraints and objective functions.
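The following sketch illustrates the selection idea for a two-step sequence under assumed time-constraint semantics (an availability window per candidate of the second step): inconsistent pairs are discarded, and unintended waiting time is added to the aggregated duration.

```python
# Hedged sketch: evaluate candidate pairs for a two-step sequence, reject
# pairs whose time constraints conflict, and pick the smallest total duration
# including any forced waiting time.
from itertools import product

def select(step_a, step_b):
    """step_a: (name, duration); step_b: (name, duration, earliest_start, latest_start)."""
    best = None
    for a, b in product(step_a, step_b):
        finish_a = a[1]                              # step A starts at time 0
        if finish_a > b[3]:
            continue                                 # inconsistent: B can no longer start in time
        waiting = max(0.0, b[2] - finish_a)          # unintended waiting before B starts
        total = finish_a + waiting + b[1]
        if best is None or total < best[0]:
            best = (total, a[0], b[0])
    return best

print(select([("a1", 5), ("a2", 3)],
             [("b1", 4, 6, 8), ("b2", 6, 0, 4)]))    # -> (9.0, 'a2', 'b2')
```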
IEEE International Conference on Services Computing | 2011
Azlan Ismail; Jun Yan; Jun Shen
A fault that occurs in a service needs to be carefully analyzed and handled in order to ensure the reliability of the composite service. The analysis can be driven by understanding the impact of the faulty service on the other services as well as on the entire composition. Existing work has given little attention to this issue, in particular the temporal impact caused by the fault. Thus, we propose an approach to analyzing the temporal impact and generating the impact region. The region can be used by the handling mechanism to prioritize the services to be repaired. The approach begins by estimating the updated temporal behavior of the composite service after the fault occurs, followed by identifying the potential candidates of the impact region. The concept of temporal negative impact is introduced to support this identification. Intuitively, the approach can help reduce the number of service changes required to handle the fault.
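A sketch of the two steps described above, under assumptions of our own (a small dependency graph, fixed durations, per-service deadlines): re-estimate the temporal behaviour after the delay, then collect the services whose estimated finish now exceeds their deadline.

```python
# Sketch only (the paper's exact rules may differ): re-estimate the temporal
# behaviour of a small composition graph after a fault delays one service,
# then mark services whose deadline is now missed as the impact region.
def impact_region(graph, durations, deadlines, delayed, delay):
    """graph: service -> list of successors; durations/deadlines: per service."""
    finish = {}
    def earliest_finish(svc):
        if svc in finish:
            return finish[svc]
        preds = [p for p, succs in graph.items() if svc in succs]
        start = max((earliest_finish(p) for p in preds), default=0.0)
        extra = delay if svc == delayed else 0.0
        finish[svc] = start + durations[svc] + extra
        return finish[svc]
    for svc in graph:
        earliest_finish(svc)
    # Temporal negative impact: estimated finish now exceeds the service deadline.
    return [svc for svc in graph if finish[svc] > deadlines[svc]]

graph = {"s1": ["s2", "s3"], "s2": ["s4"], "s3": ["s4"], "s4": []}
durations = {"s1": 5, "s2": 4, "s3": 2, "s4": 3}
deadlines = {"s1": 6, "s2": 10, "s3": 9, "s4": 13}
print(impact_region(graph, durations, deadlines, delayed="s2", delay=3))  # -> ['s2', 's4']
```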
International Conference on Service-Oriented Computing | 2013
Azlan Ismail; Valeria Cardellini
A complex service-based system (CSBS), which comprises a multi-layer structure possibly spanning multiple organizations, operates in a highly dynamic and heterogeneous environment. At run time the quality of service provided by a CSBS may suddenly change, so that violations of the Service Level Agreements (SLAs) established within and across organizational boundaries can occur. Hence, a key management choice is to design the CSBS as a self-adaptive system, so that it can properly plan adaptation decisions to maintain the overall quality defined in the SLAs. However, the challenge in planning the CSBS adaptation is the uncertain effect of adaptation actions, which can affect the multiple layers of the CSBS in different ways. In a dynamic and constantly evolving environment, there is no guarantee that an adaptation action taken at a given layer will have an overall positive effect. Furthermore, the complexity of the cross-layer interactions makes the decision-making process a non-trivial task. In this paper, we address this problem by proposing a multi-layer adaptation planning approach with local and global adaptation managers. The local manager is associated with a single planning model, while the global manager is associated with multiple planning models. Both planning models are based on Markov Decision Processes (MDPs), which provide a suitable technique for modeling decisions under uncertainty. We present an example scenario to show the practicality of the proposed approach.
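To make the MDP-based planning concrete, here is a tiny value-iteration example; the two quality states, the actions, and all rewards and transition probabilities are invented for illustration and are not taken from the paper.

```python
# Minimal value-iteration sketch for a toy adaptation MDP (two quality states,
# two actions), illustrating planning under uncertain adaptation effects.
STATES = ["degraded", "healthy"]
ACTIONS = ["do_nothing", "adapt"]
# P[(state, action)] = list of (next_state, probability).
P = {
    ("degraded", "do_nothing"): [("degraded", 0.9), ("healthy", 0.1)],
    ("degraded", "adapt"):      [("degraded", 0.3), ("healthy", 0.7)],
    ("healthy",  "do_nothing"): [("degraded", 0.2), ("healthy", 0.8)],
    ("healthy",  "adapt"):      [("degraded", 0.1), ("healthy", 0.9)],
}
R = {"do_nothing": 0.0, "adapt": -1.0}              # adaptation has a cost
STATE_REWARD = {"healthy": 5.0, "degraded": 0.0}    # reward for reaching a state
GAMMA = 0.9

V = {s: 0.0 for s in STATES}
for _ in range(100):                                # value iteration
    V = {s: max(R[a] + sum(p * (STATE_REWARD[s2] + GAMMA * V[s2]) for s2, p in P[(s, a)])
                for a in ACTIONS)
         for s in STATES}
policy = {s: max(ACTIONS, key=lambda a: R[a] + sum(p * (STATE_REWARD[s2] + GAMMA * V[s2])
                                                   for s2, p in P[(s, a)]))
          for s in STATES}
print(policy)   # -> {'degraded': 'adapt', 'healthy': 'do_nothing'}
```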
European Conference on Service-Oriented and Cloud Computing | 2014
Azlan Ismail; Valeria Cardellini
The runtime management of Internet of Things (IoT) oriented applications deployed in multi-clouds is a complex issue due to the highly heterogeneous and dynamic execution environment. To cope with such an environment effectively, cross-layer and multi-cloud effects should be taken into account, and decentralized self-adaptation is a promising solution to maintain and evolve the applications for quality assurance. An important issue to be tackled towards realizing this solution is the uncertain effect of adaptation, which may have a negative impact on other layers or even other clouds. In this paper, we tackle this issue from the planning perspective, since an inappropriate planning strategy can undermine the adaptation outcome. We present an architectural model for decentralized self-adaptation that supports the cross-layer and multi-cloud environment, and we propose a planning model and method to enable decentralized decision making. The planning is formulated as a reinforcement learning problem and solved using the Q-learning algorithm. Through simulation experiments, we assess the effectiveness and sensitivity of the proposed planning approach. The results show that our approach can potentially reduce the negative impact on the cross-layer and multi-cloud environment.
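A minimal tabular Q-learning sketch of the planning step; the state and action space and the simulated environment are invented for illustration, whereas the paper applies Q-learning to cross-layer, multi-cloud adaptation decisions.

```python
# Hedged sketch of planning via tabular Q-learning with a toy environment.
import random

STATES, ACTIONS = ["degraded", "healthy"], ["do_nothing", "adapt"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: adaptation usually repairs, but carries a cost."""
    if action == "adapt":
        next_state = "healthy" if random.random() < 0.7 else "degraded"
        reward = (5.0 if next_state == "healthy" else 0.0) - 1.0
    else:
        next_state = "healthy" if state == "healthy" and random.random() < 0.8 else "degraded"
        reward = 5.0 if next_state == "healthy" else 0.0
    return next_state, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
state = "degraded"
for _ in range(5000):
    action = (random.choice(ACTIONS) if random.random() < EPSILON         # explore
              else max(ACTIONS, key=lambda a: Q[(state, a)]))             # exploit
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})   # learned policy
```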
Service-Oriented Computing and Applications | 2009
Azlan Ismail; Jun Yan; Jun Shen
SLAs play an important role in QoS-driven service composition, and temporal constraints are one of the main elements in SLA management, especially in specifying the validity period of QoS offers. In practice, temporal constraints should be generated dynamically, taking the provider's resource capability into account. The generation should consider the various parameters that influence this capability, such as the expected duration of the required Web service, the current utilization, the amount of available resources, and the number of required time slots. Therefore, this paper elaborates on this issue and presents a temporal constraint formation framework for SLA negotiation. The framework is proposed in the context of service selection and SLA negotiation and provides the foundation for dynamic formation. The paper also demonstrates an initial approach to temporal constraint formation.
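One possible way to derive a validity period from a provider's slot calendar is sketched below; the slot length, capacities, and required number of slots are assumed parameters, not values from the paper.

```python
# Illustrative only: derive a validity period for a QoS offer from a slot
# calendar by finding the earliest run of consecutive slots with spare capacity.
def validity_period(slot_load, capacity, required_slots, slot_minutes=30):
    """slot_load[i] = jobs already booked in slot i.  Return (start, end) minutes
    of the earliest feasible window, or None if no window exists."""
    free_run = 0
    for i, load in enumerate(slot_load):
        free_run = free_run + 1 if load < capacity else 0
        if free_run == required_slots:
            start = (i - required_slots + 1) * slot_minutes
            return start, start + required_slots * slot_minutes
    return None                                   # no feasible validity period

# Slots 0-5 with capacity 2: the first 2-slot window with spare capacity starts at slot 2.
print(validity_period([2, 2, 1, 0, 2, 1], capacity=2, required_slots=2))   # -> (60, 120)
```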
Soft Computing | 2017
Muhammad Firdaus Mustapha; Noor Elaiza Abd Khalid; Azlan Ismail; Mazani Manaf
The Self-Organizing Map (SOM) is a popular algorithm used for clustering and data exploration. SOM training involves computationally intensive calculations whose cost depends on the map and data size. Many researchers have improved the processing speed of online SOM using discrete Graphics Processing Units (GPUs). Despite the excellent GPU performance, the hardware can be underutilized when executing the online SOM variant on a GPU architecture; in particular, this happens when the number of cores is larger than the number of neurons on the map. Moreover, the complexity of the SOM steps also demands high memory capacity, which leads to a high memory transfer rate. Recently, the Heterogeneous System Architecture (HSA), which integrates the Central Processing Unit (CPU) and the GPU on a single chip, has rapidly become an attractive design paradigm because of its remarkable parallel processing abilities. Therefore, the main goal of this study is to reduce the computation time of SOM training by adopting the HSA platform and combining two SOM training processes. The study enhances the processing of the SOM algorithm using a multiple-stimuli approach. The data used in this study are benchmark datasets from the UCI Machine Learning Repository. As a result, the enhanced parallel SOM algorithm executed on the HSA platform achieves a promising speedup for different parameter sizes compared to the standard parallel SOM on the HSA platform.
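The multiple-stimuli idea can be illustrated with a NumPy stand-in that updates the map from a small batch of input vectors per step, which is what makes wide parallel hardware worthwhile; this is not the paper's OpenCL/HSA implementation.

```python
# NumPy stand-in for the multiple-stimuli idea: process several input vectors
# per training step instead of one.
import numpy as np

def som_batch_step(weights, batch, grid, sigma=1.0, lr=0.5):
    """weights: (n_neurons, dim); batch: (n_stimuli, dim); grid: (n_neurons, 2)."""
    # Distances from every stimulus to every neuron, and the BMU per stimulus.
    dists = np.linalg.norm(batch[:, None, :] - weights[None, :, :], axis=2)
    bmus = dists.argmin(axis=1)
    # Neighbourhood factor of each neuron w.r.t. each stimulus's BMU on the map grid.
    grid_d = np.linalg.norm(grid[:, None, :] - grid[bmus][None, :, :], axis=2)
    h = np.exp(-(grid_d ** 2) / (2 * sigma ** 2))          # (n_neurons, n_stimuli)
    # Accumulate the update contributed by all stimuli in the batch at once.
    num = h @ batch                                        # (n_neurons, dim)
    den = h.sum(axis=1, keepdims=True)
    return weights + lr * (num / den - weights)

rng = np.random.default_rng(0)
side = 4
grid = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)
weights = rng.random((side * side, 3))
print(som_batch_step(weights, rng.random((8, 3)), grid).shape)   # (16, 3)
```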
Asian Conference on Intelligent Information and Database Systems | 2017
Noor Elaiza Abd Khalid; Muhammad Firdaus Mustapha; Azlan Ismail; Mazani Manaf
Parallel implementation of the Self-Organizing Map (SOM) has been studied for the last decade, and the Graphics Processing Unit (GPU) is one of the most promising architectures for executing SOM in parallel. However, performance issues arise when larger map and dataset sizes are imposed on a parallel SOM executed on the GPU. Alternatively, heterogeneous systems that integrate the GPU with the Central Processing Unit (CPU) on the same chip have been introduced to improve communication between the CPU and the GPU. Shared Virtual Memory (SVM) is a feature of OpenCL 2.0 that allows the host and the device to share a common virtual address range. This research therefore proposes a parallel SOM architecture suitable for both discrete GPUs and heterogeneous systems, with the aim of comparing performance in terms of computation time. The architecture comprises three kernels executed on two different platforms: (1) a discrete GPU platform and (2) a heterogeneous system platform tested using SVM buffers. The experimental results show that the parallel SOM running on the heterogeneous platform achieves a significant improvement in computation time.
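The paper's implementation uses three OpenCL kernels sharing SVM buffers; the NumPy mirror below only shows the same three-stage split (distance computation, BMU reduction, weight update) for a single input vector.

```python
# NumPy mirror of the three-kernel decomposition (illustrative only).
import numpy as np

def distance_kernel(weights, x):
    return ((weights - x) ** 2).sum(axis=1)          # squared distance per neuron

def bmu_reduction_kernel(dists):
    return int(dists.argmin())                        # index of the best matching unit

def update_kernel(weights, grid, x, bmu, sigma=1.0, lr=0.5):
    grid_d = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-grid_d / (2 * sigma ** 2))            # neighbourhood factor per neuron
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(1)
side, dim = 4, 3
grid = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float)
weights = rng.random((side * side, dim))
x = rng.random(dim)
weights = update_kernel(weights, grid, x, bmu_reduction_kernel(distance_kernel(weights, x)))
print(weights.shape)                                  # (16, 3)
```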