Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sai Peck Lee is active.

Publication


Featured research published by Sai Peck Lee.


Computers in Industry | 2014

Interoperability evaluation models: A systematic review

Reza Rezaei; Thiam Kian Chiew; Sai Peck Lee; Zeinab Shams Aliee

Interoperability is defined as the ability of two (or more) systems or components to exchange information and to use the information that has been exchanged. There is increasing demand for interoperability between individual software systems. Developing an interoperability evaluation model for software and information systems is difficult and has become an important challenge. An interoperability evaluation model makes it possible to determine the degree of interoperability and thereby guides its improvement. This paper describes the existing interoperability evaluation models and performs a comparative analysis of their findings to determine the similarities and differences in their philosophy and implementation. This analysis yields a set of recommendations for any party that is open to the idea of creating or improving an interoperability evaluation model.
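
To give a feel for what "knowing the degree of interoperability" can mean in practice, here is a minimal sketch in the spirit of LISI-style maturity ladders; it is an illustration only, not one of the models surveyed in the paper, and the level names and capability sets are invented for the example.

```python
# Illustrative LISI-style interoperability scoring: the degree of
# interoperability between two systems is the highest maturity level
# whose required capabilities both systems satisfy. Level names and
# capability sets are hypothetical, chosen only for this sketch.
LEVELS = [
    ("isolated", set()),
    ("connected", {"network_link"}),
    ("functional", {"network_link", "data_exchange"}),
    ("domain", {"network_link", "data_exchange", "shared_data_model"}),
    ("enterprise", {"network_link", "data_exchange", "shared_data_model",
                    "shared_applications"}),
]

def interoperability_level(caps_a: set, caps_b: set) -> str:
    """Return the highest level whose required capabilities both systems meet."""
    common = caps_a & caps_b
    achieved = "isolated"
    for name, required in LEVELS:
        if required <= common:
            achieved = name
    return achieved

print(interoperability_level(
    {"network_link", "data_exchange", "shared_data_model"},
    {"network_link", "data_exchange"}))  # -> "functional"
```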


Future Generation Computer Systems | 2015

Cost-aware challenges for workflow scheduling approaches in cloud computing environments

Ehab Nabiel Alkhanak; Sai Peck Lee; Saif Ur Rehman Khan

Workflow Scheduling (WFS) mainly focuses on task allocation to achieve the desired workload balancing by pursuing optimal utilization of available resources. At the same time, relevant performance criteria and the system distribution structure must be considered to solve specific WFS problems in cloud computing, which provides different services to cloud users on a pay-as-you-go, on-demand basis. The literature discusses various WFS challenges that affect WFS execution cost, but prior work has not considered these challenges collectively. The main objective of this paper is to help researchers select appropriate cost-aware WFS approaches from the available pool of alternatives. To achieve this objective, we conducted an extensive review to investigate and analyze the underlying concepts of the relevant approaches. The cost-aware challenges of WFS in cloud computing are classified based on Quality of Service (QoS) performance, system functionality, and system architecture, which together yield a taxonomy. Some research opportunities are also discussed to help identify future research directions in the area of cloud computing. The findings of this review provide a roadmap for developing cost-aware models, which should motivate researchers to propose better cost-aware approaches for service consumers and/or utility providers in cloud computing.

Highlights:
- Extensively reviews cost-aware workflow scheduling approaches in cloud computing.
- Presents a taxonomy of cost-aware workflow scheduling challenges.
- Critically analyzes reported cost-aware workflow scheduling challenges.
- Provides useful recommendations for service consumers and utility providers.
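
To make cost-aware task allocation concrete, here is a minimal sketch of a generic greedy heuristic (not one of the approaches surveyed in the paper): each task is assigned to the cheapest VM type that still meets its deadline. The VM types, prices, and runtimes are assumptions for the example.

```python
# Minimal greedy cost-aware scheduler sketch: assign each task to the
# cheapest VM type that still meets the task's deadline. VM types,
# prices, and speeds are hypothetical values for illustration only.
VM_TYPES = {            # name: (price per hour, relative speed)
    "small":  (0.05, 1.0),
    "medium": (0.10, 2.0),
    "large":  (0.20, 4.0),
}

def schedule(tasks):
    """tasks: list of (name, base_hours, deadline_hours)."""
    plan, total_cost = [], 0.0
    for name, base_hours, deadline in tasks:
        feasible = []
        for vm, (price, speed) in VM_TYPES.items():
            runtime = base_hours / speed
            if runtime <= deadline:
                feasible.append((runtime * price, vm, runtime))
        if not feasible:
            raise ValueError(f"no VM meets deadline for {name}")
        cost, vm, runtime = min(feasible)
        plan.append((name, vm, runtime, cost))
        total_cost += cost
    return plan, total_cost

plan, cost = schedule([("t1", 4.0, 2.5), ("t2", 1.0, 2.0)])
print(plan, round(cost, 2))
```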


Expert Systems With Applications | 2014

A semantic interoperability framework for software as a service systems in cloud computing environments

Reza Rezaei; Thiam Kian Chiew; Sai Peck Lee; Zeinab Shams Aliee

At the software as a service (SaaS) level of cloud computing environments, interoperability refers to the ability of SaaS systems on one cloud provider to communicate with SaaS systems on another cloud provider. Interoperability is one of the most important barriers to the adoption of SaaS systems in cloud computing environments. A common tactic for enabling interoperability is the use of an interoperability framework or model, and during the past few years various interoperability frameworks and models have been developed at the cloud SaaS level. The syntactic interoperability of SaaS systems has already been intensively researched; however, not enough consideration has been given to semantic interoperability issues. Achieving semantic interoperability remains a challenge within the world of SaaS in cloud computing environments, so a semantic interoperability framework for such systems is needed. We develop a semantic interoperability framework for cloud SaaS systems, and study and demonstrate the capabilities and value of service-oriented architecture for semantic interoperability within them. The work proceeds in several steps (the research methodology): it begins with a study of related work in the literature; the problem statement and research objectives are then explained; next, the semantic interoperability requirements that SaaS systems in cloud computing environments need to support are analyzed; the design and details of the proposed semantic interoperability framework are then presented; and finally, the evaluation methods for the framework are elaborated. To evaluate the effectiveness of the proposed framework, extensive experimentation and statistical analysis have been performed. The experiments and statistical analysis indicate that the proposed framework can establish semantic interoperability between cloud SaaS systems more efficiently, and that using it yields a significant improvement in the effectiveness of semantic interoperability of SaaS systems in cloud computing environments.
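
As a toy illustration of what semantic (as opposed to syntactic) interoperability involves, the sketch below translates a record from one SaaS provider's vocabulary into another's by routing both through a shared ontology of concepts. This is not the framework proposed in the paper; all field names and mappings are invented.

```python
# Toy semantic mediation sketch: two SaaS systems use different field
# names for the same concepts. Each provider maps its local vocabulary
# to a shared ontology, and records are translated via that ontology.
# All vocabularies below are hypothetical.
ONTOLOGY_A = {"cust_name": "Customer.name", "cust_mail": "Customer.email"}
ONTOLOGY_B = {"clientFullName": "Customer.name", "clientEmail": "Customer.email"}

def translate(record, source_map, target_map):
    """Translate a record from the source vocabulary to the target one."""
    inverse_target = {concept: field for field, concept in target_map.items()}
    out = {}
    for field, value in record.items():
        concept = source_map.get(field)
        if concept and concept in inverse_target:
            out[inverse_target[concept]] = value
    return out

record_a = {"cust_name": "Ada", "cust_mail": "ada@example.com"}
print(translate(record_a, ONTOLOGY_A, ONTOLOGY_B))
# -> {'clientFullName': 'Ada', 'clientEmail': 'ada@example.com'}
```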


Journal of Systems and Software | 2016

Cost optimization approaches for scientific workflow scheduling in cloud and grid computing: A review, classifications, and open issues

Ehab Nabiel Alkhanak; Sai Peck Lee; Reza Rezaei; Reza Meimandi Parizi

Workflow scheduling in scientific computing systems is one of the most challenging problems; it focuses on satisfying user-defined quality of service requirements while minimizing the workflow execution cost. Several cost optimization approaches have been proposed to improve the economic aspect of Scientific Workflow Scheduling (SWFS) in cloud and grid computing. To date, the literature has not seen a comprehensive review that focuses on approaches for supporting cost optimization in the context of SWFS in cloud and grid computing, and valuable guidelines and analysis for understanding the cost optimization of SWFS approaches are not well explored in the current literature. This paper aims to analyze the problem of cost optimization in SWFS by extensively surveying existing SWFS approaches in cloud and grid computing, and to provide a classification of the cost optimization aspects and parameters of SWFS. Moreover, it provides a classification of cost-based metrics, categorized into monetary and temporal cost parameters based on the various scheduling stages. We believe that our findings will help researchers and practitioners select the most appropriate cost optimization approach considering the identified aspects and parameters. In addition, we highlight potential future research directions in this ongoing area of research.
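
To illustrate the distinction between monetary and temporal cost parameters, here is a simplified model (not one drawn from the surveyed approaches) that aggregates both into a single weighted objective so candidate schedules can be compared; the weights and figures are assumptions.

```python
# Simplified SWFS cost model sketch: a candidate schedule has a monetary
# cost (what the cloud/grid resources charge) and a temporal cost (the
# makespan). A weighted sum turns them into one objective so candidate
# schedules can be compared. Weights and numbers are illustrative only.
def schedule_cost(vm_hours, price_per_hour, makespan_hours,
                  monetary_weight=0.7, temporal_weight=0.3):
    monetary = vm_hours * price_per_hour          # billed resource usage
    temporal = makespan_hours                     # end-to-end runtime
    return monetary_weight * monetary + temporal_weight * temporal

candidates = {
    "cheap_but_slow": schedule_cost(vm_hours=10, price_per_hour=0.05,
                                    makespan_hours=10),
    "fast_but_pricey": schedule_cost(vm_hours=10, price_per_hour=0.20,
                                     makespan_hours=3),
}
best = min(candidates, key=candidates.get)
print(candidates, "->", best)
```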


Journal of Systems and Software | 2014

A review on E-business Interoperability Frameworks

Reza Rezaei; Thiam Kian Chiew; Sai Peck Lee

Interoperability frameworks present a set of assumptions, concepts, values, and practices that constitute a method of dealing with interoperability issues in the electronic business (e-business) context. Achieving interoperability in e-business generates numerous benefits, and interoperability frameworks are thus a main component of e-business activities. This paper describes the existing interoperability frameworks for e-business and performs a comparative analysis of their findings to determine the similarities and differences in their philosophy and implementation. This analysis yields a set of recommendations for any party that is open to the idea of creating or improving an E-business Interoperability Framework.


Advances in Engineering Software | 2014

An interoperability model for ultra large scale systems

Reza Rezaei; Thiam Kian Chiew; Sai Peck Lee

Ultra large scale systems are a new generation of distributed software systems composed of various changing, inconsistent, or even conflicting components distributed over a wide domain. Important characteristics of these systems include their very large size, global geographical distribution, and the operational and managerial independence of their member systems. The main functionality of these systems arises from the interoperability between their components, and today one of the most important challenges facing ultra large scale systems is the interoperability of their component systems. Interoperability is the ability of system elements to exchange the required information with each other and to understand it. This paper aims to address this challenge in two parts. In the first part, it presents a maturity model for the interoperability of ultra large scale systems: from the interoperability levels of the component systems of an ultra large scale system, its maturity level can be determined. In the second part, it proposes a framework to increase the interoperability of the component systems based on the interoperability maturity levels determined in the first part, thereby improving their interoperability.
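
As a rough illustration of how a system-level maturity score might be derived from component levels, the sketch below takes the maturity of an ultra large scale system to be the lowest interoperability level among its components, on the assumption that interoperation is limited by the weakest link. The aggregation rule and level names are assumptions for the example, not the paper's model.

```python
# Illustrative maturity aggregation sketch: each component system has an
# interoperability level 0..4; the overall ULS maturity is capped by the
# least interoperable component (weakest-link assumption, for this
# example only).
LEVEL_NAMES = ["isolated", "connected", "functional", "domain", "enterprise"]

def uls_maturity(component_levels: dict) -> str:
    """component_levels maps component name -> interoperability level (0..4)."""
    if not component_levels:
        raise ValueError("no components given")
    weakest = min(component_levels.values())
    return LEVEL_NAMES[weakest]

components = {"billing": 3, "inventory": 2, "analytics": 4}
print(uls_maturity(components))  # -> "functional"
```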


Information & Software Technology | 2013

Efficient software clustering technique using an adaptive and preventive dendrogram cutting approach

Chun Yong Chong; Sai Peck Lee; Teck Chaw Ling

Context: Software clustering is a key technique used in reverse engineering to recover a high-level abstraction of the software when resources are limited. Very little research has explicitly discussed the problem of finding the optimum set of clusters in the design and how to penalize the formation of singleton clusters during clustering.

Objective: This paper attempts to enhance existing agglomerative clustering algorithms by introducing a complementary mechanism. To solve the architecture recovery problem, the proposed approach focuses on minimizing redundant effort and penalizing the formation of singleton clusters during clustering while maintaining the integrity of the results.

Method: An automated solution for cutting a dendrogram, based on least-squares regression, is presented to find the best cut level. A dendrogram is a tree diagram that shows the taxonomic relationships of clusters of software entities. Moreover, a factor to penalize clusters that would form singletons is introduced. Simulations were performed on two open-source projects. The proposed approach was compared against the exhaustive and highest-gap dendrogram cutting methods, as well as two well-known cluster validity indices, namely Dunn's index and the Davies-Bouldin index.

Results: When comparing our clustering results against the original package diagram, our approach achieved an average accuracy rate of 90.07% across two simulations after the utility classes were removed. Utility classes in the source code affect the accuracy of software clustering owing to their omnipresent behavior. The proposed approach also successfully penalized the formation of singleton clusters during clustering.

Conclusion: The evaluation indicates that the proposed approach can enhance the quality of clustering results by guiding software maintainers through the cutting-point selection process. It can be used as a complementary mechanism to improve the effectiveness of existing clustering algorithms.
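
To make dendrogram cutting concrete, here is a generic sketch: it builds a dendrogram over toy feature vectors with SciPy, tries each merge height as a candidate cut, and scores each cut with a simple objective that penalizes singleton clusters. The scoring function and penalty weight are assumptions for illustration, not the paper's least-squares criterion, whose details the abstract does not give.

```python
# Generic dendrogram-cutting sketch (illustrative scoring, not the
# paper's least-squares criterion). Build an agglomerative dendrogram,
# try each merge height as a cut, and pick the cut whose clustering
# has the fewest singletons relative to cluster count.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy "software entity" feature vectors: two loose groups plus a stray point.
X = np.vstack([rng.normal(0, 0.3, (5, 2)),
               rng.normal(3, 0.3, (5, 2)),
               [[10.0, 10.0]]])

Z = linkage(X, method="average")        # agglomerative clustering

def score(labels):
    """Hypothetical objective: reward clusters, penalize singletons."""
    _, counts = np.unique(labels, return_counts=True)
    singletons = np.sum(counts == 1)
    return len(counts) - 2.0 * singletons   # weight 2.0 is an assumption

best = None
for height in Z[:, 2]:                  # candidate cut levels
    labels = fcluster(Z, t=height, criterion="distance")
    s = score(labels)
    if best is None or s > best[0]:
        best = (s, height, labels)

print("cut height:", round(best[1], 3), "clusters:", best[2])
```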


Information & Software Technology | 2014

A noun-based approach to feature location using time-aware term-weighting

Sima Zamani; Sai Peck Lee; Ramin Shokripour; John Anvik

Context: Feature location aims to identify the source code location corresponding to the implementation of a software feature. Many existing feature location methods apply text retrieval to determine the relevancy of features to the text data extracted from software repositories. One of the preprocessing activities in text retrieval is term-weighting, which adjusts the importance of a term within a document or corpus. Common term-weighting techniques originate from a natural language context and may therefore not be optimal for text data from software repositories.

Objective: This paper describes how considering when terms were used in the repositories, while weighting only the noun terms, can improve a feature location approach.

Method: We propose a feature location approach using a new term-weighting technique that takes into account how recently a term has been used in the repositories. In this approach, only the noun terms are weighted, to reduce the dataset volume and avoid dealing with dimensionality reduction.

Results: An empirical evaluation of the approach on four open-source projects reveals improvements in accuracy, effectiveness, and performance of up to 50%, 17%, and 13%, respectively, compared to the commonly used Vector Space Model approach. Comparing the proposed term-weighting technique with the Term Frequency-Inverse Document Frequency technique shows accuracy, effectiveness, and performance improvements of as much as 15%, 10%, and 40%, respectively. Using only noun terms, instead of all terms, in the proposed approach also yields improvements of up to 28%, 21%, and 58% in accuracy, effectiveness, and performance, respectively.

Conclusion: In general, using time in the weighting of terms, along with using only the noun terms, significantly improves a feature location approach that relies on textual information.
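
As a small illustration of time-aware term-weighting, the sketch below weights each noun term's frequency by an exponential decay over the age of the changes that mention it, so recently used terms count for more. This is a generic recency-decay scheme, not necessarily the paper's exact formula; the half-life and the toy data are assumptions.

```python
# Generic time-aware term-weighting sketch: a term's weight is its
# frequency with each occurrence discounted by the age of the change
# that introduced it (exponential decay). The decay rate and the toy
# "noun terms per commit" data are assumptions for this example.
import math
from collections import defaultdict

HALF_LIFE_DAYS = 90.0  # hypothetical: a mention's weight halves every 90 days

def time_aware_weights(commits, now_day):
    """commits: list of (day, [noun terms]); returns term -> weight."""
    decay = math.log(2) / HALF_LIFE_DAYS
    weights = defaultdict(float)
    for day, nouns in commits:
        age = now_day - day
        for term in nouns:
            weights[term] += math.exp(-decay * age)
    return dict(weights)

history = [
    (0,   ["parser", "token"]),        # old change
    (350, ["parser", "renderer"]),     # recent change
    (360, ["renderer", "cache"]),      # very recent change
]
for term, w in sorted(time_aware_weights(history, now_day=365).items(),
                      key=lambda kv: -kv[1]):
    print(f"{term:10s} {w:.3f}")      # renderer ranks highest
```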


IEEE Transactions on Reliability | 2014

Achievements and Challenges in State-of-the-Art Software Traceability Between Test and Code Artifacts

Reza Meimandi Parizi; Sai Peck Lee; Mohammad Dabbagh

Testing is a key activity of software development and maintenance that determines the achieved level of reliability. Traceability is the ability to describe and follow the life of software artifacts, and has been promoted as a means of supporting various activities, most importantly testing. Traceability information facilitates the testing and debugging of complex software by modeling the dependencies between code and tests, and actively supplementing testing with traceability enables rectifying defects more reliably and efficiently. Despite its importance, the development of test-to-code traceability has not been sufficiently addressed in the literature, and worse, there is currently no organized review of traceability studies in this field. In this work, we investigated the main conferences, workshops, and journals of requirements engineering, testing, and reliability, and identified the contributions that refer to traceability topics. From that starting point, we characterized and analyzed the chosen contributions against three research questions, using a comparative framework comprising nine criteria. As a result, our study arrives at some interesting points and outlines a number of potential research directions. This, in turn, can pave the way for facilitating and empowering traceability research in this domain to assist software engineers and testers in test management.
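
As one concrete example of recovering test-to-code links, the sketch below applies naming-convention matching, a well-known heuristic in this literature (shown here as background, not as a method from the survey itself): a test class named FooTest or TestFoo is linked to production class Foo. All class names are invented.

```python
# Naming-convention heuristic for test-to-code traceability: a test
# class named FooTest (or TestFoo) is linked to production class Foo.
# This is one well-known recovery heuristic, shown on invented names.
def link_tests_to_code(test_classes, production_classes):
    prod = set(production_classes)
    links = {}
    for test in test_classes:
        candidate = None
        if test.endswith("Test"):
            candidate = test[:-len("Test")]
        elif test.startswith("Test"):
            candidate = test[len("Test"):]
        links[test] = candidate if candidate in prod else None
    return links

tests = ["ParserTest", "TestRenderer", "SmokeSuite"]
code = ["Parser", "Renderer", "Cache"]
print(link_tests_to_code(tests, code))
# -> {'ParserTest': 'Parser', 'TestRenderer': 'Renderer', 'SmokeSuite': None}
```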


Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2013

A Consistent Approach for Prioritizing System Quality Attributes

Mohammad Dabbagh; Sai Peck Lee

Requirements prioritization is recognized as a critical but often neglected activity in the software development process. To achieve a high-quality software system, quality attribute requirements must be taken into consideration during the prioritization process. However, prioritizing quality attributes is not an easy task due to the inherent interrelationships between these attributes. Although some methods for prioritizing requirements have been introduced in recent years, no particular method or technique has been proposed for prioritizing system quality attributes. In this paper, an approach for prioritizing system quality attributes is presented. It provides the requirements engineering team with a set of quality attributes ordered by their relative importance, while taking consistency into consideration.
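
One standard way to prioritize attributes by relative importance while checking consistency is the Analytic Hierarchy Process. The sketch below illustrates that general technique (not necessarily the paper's method): it derives a priority vector from a pairwise comparison matrix and computes the consistency ratio; the attributes and judgment values are assumptions.

```python
# AHP-style prioritization sketch (a standard technique, not necessarily
# the paper's method): derive priorities for quality attributes from a
# pairwise comparison matrix and check the consistency ratio (CR).
import numpy as np

attributes = ["performance", "security", "usability"]
# Hypothetical judgments: performance is 3x as important as security, etc.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
priorities = np.abs(eigvecs[:, k].real)
priorities /= priorities.sum()               # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)         # consistency index
ri = 0.58                                    # random index for n = 3
cr = ci / ri                                 # CR < 0.1 => acceptably consistent

for name, p in sorted(zip(attributes, priorities), key=lambda t: -t[1]):
    print(f"{name:12s} {p:.3f}")
print("consistency ratio:", round(cr, 3))
```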

Collaboration


Dive into Sai Peck Lee's collaborations.

Top Co-Authors

Reza Rezaei (Information Technology University)
Tong Ming Lim (Monash University Malaysia Campus)
Ehab Nabiel Alkhanak (Information Technology University)