Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Francis C. M. Lau is active.

Publication


Featured research published by Francis C. M. Lau.


Archive | 1997

Load Balancing in Parallel Computers: Theory and Practice

Cheng Zhong Xu; Francis C. M. Lau

Foreword. Preface. 1. Introduction. 2. A Survey of Nearest-Neighbor Load Balancing Algorithms. 3. The GDE Method. 4. GDE on Tori and Meshes. 5. The Diffusion Method. 6. GDE Versus Diffusion. 7. Termination Detection of Load Balancing. 8. Remapping with the GDE Method. 9. Load Distribution in Combinatorial Optimizations. 10. Conclusions. References. Index.


IEEE Pervasive Computing | 2002

A context-aware decision engine for content adaptation

Wai Yip Lum; Francis C. M. Lau

Building a good content adaptation service for mobile devices poses many challenges. To meet these challenges, this quality-of-service-aware decision engine automatically negotiates for the appropriate adaptation decision for synthesizing an optimal content version.


International Conference on Cluster Computing | 2002

JESSICA2: a distributed Java Virtual Machine with transparent thread migration support

Wenzhang Zhu; Cho-Li Wang; Francis C. M. Lau

A distributed Java Virtual Machine (DJVM) spanning multiple cluster nodes can provide a true parallel execution environment for multi-threaded Java applications. Most existing DJVMs suffer from slow Java execution in interpretive mode and thus may not be efficient enough for solving computation-intensive problems. We present JESSICA2, a new DJVM running in JIT compilation mode that can execute multi-threaded Java applications transparently on clusters. JESSICA2 provides a single system image (SSI) illusion to Java applications via an embedded global object space (GOS) layer. It implements a cluster-aware Java execution engine that supports transparent Java thread migration for achieving dynamic load balancing. We discuss the issues of supporting transparent Java thread migration in a JIT compilation environment and propose several lightweight solutions. An adaptive migrating-home protocol used in the implementation of the GOS is introduced. The system has been implemented on x86-based Linux clusters and significant performance improvements over the previous JESSICA system have been observed.
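
A toy illustration of the idea behind transparent thread migration, not JESSICA2's JIT-level mechanism: a task periodically exposes its execution state as plain data so a runtime can ship that state to another node and resume the computation there. The node names and the summation task below are hypothetical.

```python
# A toy sketch of migratable execution: the task's state is plain data that
# can be handed to another node and resumed (JESSICA2 does this at the level
# of JIT-compiled frames and heap references; this is only the concept).

def run_partial(state, steps):
    """Run a simple summation task for a bounded number of steps, then
    return the (possibly unfinished) state so it can be migrated."""
    i, acc, n = state["i"], state["acc"], state["n"]
    for _ in range(steps):
        if i >= n:
            break
        acc += i
        i += 1
    return {"i": i, "acc": acc, "n": n, "done": i >= n}

if __name__ == "__main__":
    state = {"i": 0, "acc": 0, "n": 1_000, "done": False}
    node = "node-A"                                   # hypothetical node name
    while not state["done"]:
        state = run_partial(state, steps=300)         # execute locally
        if not state["done"]:
            node = "node-B" if node == "node-A" else "node-A"
            # in a real DJVM the captured frames and object references would
            # be shipped over the network; here we just hand the dict over
            print(f"migrating task state to {node} at i={state['i']}")
    print(f"result on {node}: {state['acc']}")        # 0+1+...+999 = 499500
```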


International Conference on Distributed Computing Systems | 2011

CloudMedia: When Cloud on Demand Meets Video on Demand

Yu Wu; Chuan Wu; Bo Li; Xuanjia Qiu; Francis C. M. Lau

Internet-based cloud computing is a new computing paradigm aiming to provide agile and scalable resource access in a utility-like fashion. Other than being an ideal platform for computation-intensive tasks, clouds are believed to be also suitable to support large-scale applications with periods of flash crowds by providing elastic amounts of bandwidth and other resources on the fly. The fundamental question is how to configure the cloud utility to meet the highly dynamic demands of such applications at a modest cost. In this paper, we address this practical issue with solid theoretical analysis and efficient algorithm design using Video on Demand (VoD) as the example application. Having intensive bandwidth and storage demands in real time, VoD applications are purportedly ideal candidates to be supported on a cloud platform, where the on-demand resource supply of the cloud meets the dynamic demands of the VoD applications. We introduce a queueing network based model to characterize the viewing behaviors of users in a multichannel VoD application, and derive the server capacities needed to support smooth playback in the channels for two popular streaming models: client-server and P2P. We then propose a dynamic cloud resource provisioning algorithm which, using the derived capacities and instantaneous network statistics as inputs, can effectively support VoD streaming with low cloud utilization cost. Our analysis and algorithm design are verified and extensively evaluated using large-scale experiments under dynamic realistic settings on a home-built cloud platform.
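
A back-of-the-envelope sketch of the capacity question the paper studies, not its queueing-network derivation: Little's law approximates the number of concurrent viewers in a channel, and the required server bandwidth follows for the two streaming models considered (client-server and P2P). All numbers below are hypothetical.

```python
# Estimate per-channel server bandwidth for smooth playback (illustrative
# only; the paper derives capacities from a queueing-network model).

def concurrent_viewers(arrival_rate, mean_view_time):
    """Little's law: average number of viewers present in the channel."""
    return arrival_rate * mean_view_time

def server_bw_client_server(viewers, bitrate):
    # the server streams the full bitrate to every viewer
    return viewers * bitrate

def server_bw_p2p(viewers, bitrate, mean_peer_upload):
    # peers contribute upload capacity; the server covers the deficit
    return max(0.0, viewers * (bitrate - mean_peer_upload))

if __name__ == "__main__":
    lam = 2.0            # viewer arrivals per second (hypothetical)
    view_time = 1800.0   # mean viewing time in seconds
    r = 1.2              # playback bitrate, Mbps
    u = 0.8              # mean peer upload, Mbps

    n = concurrent_viewers(lam, view_time)
    print(f"concurrent viewers ~ {n:.0f}")
    print(f"client-server capacity ~ {server_bw_client_server(n, r):.0f} Mbps")
    print(f"P2P-assisted capacity  ~ {server_bw_p2p(n, r, u):.0f} Mbps")
```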


Stroke | 1999

Replicability of SF-36 Summary Scores by the SF-12 in Stroke Patients

A. Simon Pickard; Jeffrey A. Johnson; Andrew Penn; Francis C. M. Lau; Tom Noseworthy

BACKGROUND AND PURPOSE: The replicability of the physical and mental component summary scores of the Short Form (SF)-36 has been established using the SF-12 in selected patient populations but has yet to be assessed in stroke patients. If the summary scores of the SF-12 are highly correlated with those of the SF-36, the benefits of using a shorter health-status measure may be realized without substantial loss of information or precision. Both self-reported and proxy assessments were evaluated for replicability.

METHODS: Intraclass correlation coefficients (ICCs) and linear regression were used to assess the ability of the SF-12 physical component summary (PCS-12) scores to predict PCS-36 scores and the SF-12 mental component summary (MCS-12) scores to predict MCS-36 scores. Multivariate regression was used to explore the relationship between SF-12 and SF-36 scores.

RESULTS: The MCS-12 and PCS-12 scores were strongly correlated with the corresponding SF-36 summary scores for surveys completed by proxy or self-report (ICCs ranged from 0.954 to 0.973). Regression analysis of the proxy assessments indicated that patient age was an important effect modifier in the relationship between MCS-12 and MCS-36 scores.

CONCLUSIONS: The SF-12 reproduced SF-36 summary scores without substantial loss of information in stroke patients. Accordingly, the SF-12 can be used at the summary score level as a substitute for the SF-36 in stroke survivors capable of self-report. However, the mental health summary scores of proxy assessments are influenced by patient age, thereby limiting the replicability of the SF-36 by the SF-12 under these conditions.
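
A minimal sketch of the kind of agreement check the study describes: computing an intraclass correlation between SF-12 and SF-36 summary scores for the same patients. It uses a generic one-way random-effects ICC formula and hypothetical scores; the study's exact ICC variant and data are not reproduced here.

```python
import numpy as np

def icc_oneway(x, y):
    """ICC(1,1) for two measurements (x_i, y_i) per subject."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, k = len(x), 2
    ratings = np.column_stack([x, y])
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # between-subjects and within-subjects mean squares (one-way ANOVA)
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

if __name__ == "__main__":
    # hypothetical PCS-12 and PCS-36 scores for eight patients
    pcs12 = [32.1, 45.0, 27.8, 50.3, 38.6, 41.2, 29.5, 47.7]
    pcs36 = [33.0, 44.1, 28.9, 49.8, 39.5, 40.0, 30.2, 48.5]
    print(f"ICC(PCS-12, PCS-36) = {icc_oneway(pcs12, pcs36):.3f}")
```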


Journal of Parallel and Distributed Computing | 2000

JESSICA: Java-enabled single-system-image computing architecture

Matchy J. M. Ma; Cho-Li Wang; Francis C. M. Lau

JESSICA stands for Java-enabled single-system-image computing architecture, a middleware that runs on top of the standard UNIX operating system to support parallel execution of multithreaded Java applications in a cluster of computers. JESSICA hides the physical boundaries between machines and makes the cluster appear as a single computer to applications—a single system image. JESSICA supports preemptive thread migration, which allows a thread to freely move between machines during its execution, and global object sharing through the help of a distributed shared-memory subsystem. JESSICA implements location-transparency through a message-redirection mechanism. The result is a parallel execution environment where threads are automatically redistributed across the cluster for achieving the maximal possible parallelism. A JESSICA prototype that runs on a Linux cluster has been implemented and considerable speedups have been obtained for all the experimental applications tested.


IEEE Journal on Selected Areas in Communications | 2013

Moving Big Data to The Cloud: An Online Cost-Minimizing Approach

Linquan Zhang; Chuan Wu; Zongpeng Li; Chuanxiong Guo; Minghua Chen; Francis C. M. Lau

Cloud computing, rapidly emerging as a new computation paradigm, provides agile and scalable resource access in a utility-like fashion, especially for the processing of big data. An important open issue here is to efficiently move the data, from different geographical locations over time, into a cloud for effective processing. The de facto approach of hard drive shipping is not flexible or secure. This work studies timely, cost-minimizing upload of massive, dynamically-generated, geo-dispersed data into the cloud, for processing using a MapReduce-like framework. Targeting a cloud encompassing disparate data centers, we model a cost-minimizing data migration problem and propose two online algorithms: an online lazy migration (OLM) algorithm and a randomized fixed horizon control (RFHC) algorithm, for optimizing at any given time the choice of the data center for data aggregation and processing, as well as the routes for transmitting data there. Careful comparisons among these online and offline algorithms in realistic settings are conducted through extensive experiments, which demonstrate close-to-offline-optimum performance of the online algorithms.
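
A toy decision rule in the spirit of "lazy" online migration, not the paper's OLM or RFHC algorithms: stay at the current aggregation data center until the accumulated extra cost, relative to the currently cheapest data center, exceeds the one-time migration cost. The data center names and cost figures below are hypothetical.

```python
# Lazy migration heuristic (illustrative): migrate only once staying put has
# already cost more than a migration would.

def lazy_migration(costs_per_step, migration_cost, start):
    """costs_per_step: list of dicts {data_center: cost at that time step}."""
    current = start
    accumulated_gap = 0.0
    schedule = []
    for costs in costs_per_step:
        best = min(costs, key=costs.get)
        accumulated_gap += costs[current] - costs[best]
        if accumulated_gap > migration_cost and best != current:
            current = best          # migrate only when the gap pays for it
            accumulated_gap = 0.0
        schedule.append(current)
    return schedule

if __name__ == "__main__":
    trace = [{"dc-east": 5, "dc-west": 4},
             {"dc-east": 5, "dc-west": 3},
             {"dc-east": 6, "dc-west": 2},
             {"dc-east": 6, "dc-west": 2}]
    print(lazy_migration(trace, migration_cost=4, start="dc-east"))
    # -> ['dc-east', 'dc-east', 'dc-west', 'dc-west']
```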


IEEE Transactions on Software Engineering | 2003

User-centric content negotiation for effective adaptation service in mobile computing

Wai Yip Lum; Francis C. M. Lau

We address the challenges of building a good content adaptation service for mobile devices and propose a decision engine that is user-centric with QoS awareness, which can automatically negotiate for the appropriate adaptation decision to use in synthesizing an optimal adapted version. The QoS-sensitive approach complements the lossy nature of the transcoding operations. The decision engine looks for the best trade-off among various parameters in order to reduce the loss of quality across domains. Quantitative methods are suggested to measure the QoS of the content versions in various quality domains. Based on the particular user perception and other contextual information on the client capability, the network connection, and the requested content, the proposed negotiation algorithm determines a content version with a good aggregate score. We have built a prototype document adaptation system for PDF documents to demonstrate the viability of our approach.
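
A minimal sketch, assumed rather than taken from the paper, of how such negotiation can be framed: each candidate content version is scored in several quality domains, user perception is modelled as per-domain weights, and versions that violate the device or network context are filtered out before the best aggregate score is chosen. The field names and limits are hypothetical.

```python
# Weighted aggregate QoS scoring over candidate content versions
# (illustrative framing, not the paper's exact negotiation algorithm).

def negotiate(versions, weights, context):
    """Return the best admissible version, or None if nothing fits."""
    def admissible(v):
        return (v["width_px"] <= context["max_width_px"]
                and v["size_kb"] <= context["max_size_kb"])

    def score(v):
        # aggregate score = sum of per-domain quality weighted by user perception
        return sum(weights[d] * q for d, q in v["quality"].items())

    candidates = [v for v in versions if admissible(v)]
    return max(candidates, key=score) if candidates else None

if __name__ == "__main__":
    versions = [
        {"name": "full",  "width_px": 1200, "size_kb": 900,
         "quality": {"resolution": 1.0, "color": 1.0, "layout": 1.0}},
        {"name": "small", "width_px": 320,  "size_kb": 120,
         "quality": {"resolution": 0.5, "color": 0.9, "layout": 0.7}},
    ]
    weights = {"resolution": 0.5, "color": 0.2, "layout": 0.3}   # user perception
    context = {"max_width_px": 480, "max_size_kb": 200}          # small device
    best = negotiate(versions, weights, context)
    print(best["name"] if best else "no admissible version")     # -> small
```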


International Conference on Computer Communications | 2015

Scaling social media applications into geo-distributed clouds

Yu Wu; Chuan Wu; Bo Li; Linquan Zhang; Zongpeng Li; Francis C. M. Lau

Federation of geo-distributed cloud services is a trend in cloud computing that, by spanning multiple data centers at different geographical locations, can provide a cloud platform with much larger capacities. Such a geo-distributed cloud is ideal for supporting large-scale social media applications with dynamic contents and demands. Although promising, its realization presents challenges on how to efficiently store and migrate contents among different cloud sites and how to distribute user requests to the appropriate sites for timely responses at modest costs. These challenges escalate when we consider the persistently increasing contents and volatile user behaviors in a social media application. By exploiting social influences among users, this paper proposes efficient proactive algorithms for dynamic, optimal scaling of a social media application in a geo-distributed cloud. Our key contribution is an online content migration and request distribution algorithm with the following features: 1) future demand prediction by characterizing social influences among the users with a simple but effective epidemic model; 2) one-shot optimal content migration and request distribution based on efficient optimization algorithms to address the predicted demand; and 3) a Δ(t)-step look-ahead mechanism to adjust the one-shot optimization results toward the offline optimum. We verify the effectiveness of our online algorithm through solid theoretical analysis, as well as thorough comparisons with existing algorithms, including the ideal offline optimum, using large-scale experiments with dynamic realistic settings on Amazon Elastic Compute Cloud (EC2).
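
A minimal sketch of demand prediction with a simple SI-type epidemic update, an assumption in the spirit of the paper's idea rather than its exact model: the number of users interested in a content item grows with contacts between already-interested users and the rest of the social neighbourhood. All numbers are hypothetical.

```python
# Discrete-time SI-style predictor for next-step content demand
# (illustrative; the paper's epidemic model and parameters are not reproduced).

def predict_interested(i_now, population, beta, steps=1):
    """i_{t+1} = i_t + beta * i_t * (1 - i_t / N), capped at the population."""
    i = float(i_now)
    for _ in range(steps):
        i = i + beta * i * (1.0 - i / population)
        i = min(i, population)
    return i

if __name__ == "__main__":
    # hypothetical: 1,000 users interested now out of a 50,000-user neighbourhood
    nxt = predict_interested(i_now=1000, population=50000, beta=0.3, steps=1)
    print(f"predicted interested users next step: {nxt:.0f}")
    # a rising prediction would trigger proactive content migration and
    # request distribution toward the data centers serving those users
```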


Journal of Parallel and Distributed Computing | 1992

Analysis of the generalized dimension exchange method for dynamic load balancing

Cheng Zhong Xu; Francis C. M. Lau

The dimension exchange method is a distributed load balancing method for point-to-point networks. We add a parameter, called the exchange parameter, to the method to control the splitting of load between a pair of directly connected processors, and call this parameterized version the generalized dimension exchange (GDE) method. The rationale for the introduction of this parameter is that splitting the workload into equal halves does not necessarily lead to an optimal result (in terms of the convergence rate) for certain structures. We carry out an analysis of this new method, emphasizing its termination aspects and potential efficiency. Given a specific structure, one needs to determine a value to use for the exchange parameter that would lead to an optimal result. To this end, we first derive a sufficient and necessary condition for the termination of the method. We then show that equal splitting, proposed originally by others as a heuristic strategy, indeed yields optimal efficiency in hypercube structures. For chains, rings, meshes, and tori, however, optimal choices of the exchange parameter are found to be closely related to the scales of these structures. Finally, to further investigate the potential of the GDE method, we extend it to allow exchange parameters of different values to be used over the set of edges, and based on this extension, we compare the GDE method with the diffusion method.
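
A minimal sketch of one GDE sweep on a hypercube, assuming the pairwise exchange rule described above: across each dimension, a node and its neighbour re-split their combined load according to the exchange parameter λ, with λ = 1/2 corresponding to equal splitting. The node count and workloads are made-up examples.

```python
# One GDE sweep on a d-dimensional hypercube (illustrative simulation).
# At dimension k every node pairs with the neighbour whose k-th bit differs,
# and the pair exchanges load: l_i <- (1-lam)*l_i + lam*l_j (and symmetrically).

def gde_sweep(load, dim, lam=0.5):
    """One sweep over all dimensions of a 2**dim-node hypercube."""
    load = list(load)
    for k in range(dim):                      # one exchange per dimension
        for i in range(len(load)):
            j = i ^ (1 << k)                  # neighbour across dimension k
            if i < j:                         # handle each pair once
                li, lj = load[i], load[j]
                load[i] = (1 - lam) * li + lam * lj
                load[j] = lam * li + (1 - lam) * lj
    return load

if __name__ == "__main__":
    # 8 nodes (a 3-cube) with an unbalanced initial workload
    w = [40, 0, 0, 8, 0, 16, 0, 0]
    for sweep in range(3):
        w = gde_sweep(w, dim=3, lam=0.5)
        print(f"after sweep {sweep + 1}: {[round(x, 2) for x in w]}")
```

With λ = 1/2 the 3-cube example balances to the average load of 8 after a single sweep, consistent with the optimality of equal splitting on hypercubes noted in the abstract.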

Collaboration


Dive into Francis C. M. Lau's collaborations.

Top Co-Authors

Cho-Li Wang (University of Hong Kong)

Yuexuan Wang (University of Hong Kong)

Qiang-Sheng Hua (Huazhong University of Science and Technology)

Chuan Wu (University of Hong Kong)

Songhua Xu (University of Hong Kong)

Dongxiao Yu (Huazhong University of Science and Technology)