Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xuanhua Shi is active.

Publication


Featured research published by Xuanhua Shi.


International Conference on Cluster Computing | 2009

Live virtual machine migration with adaptive memory compression

Hai Jin; Li Deng; Song Wu; Xuanhua Shi; Xiaodong Pan

Live migration of virtual machines has been a powerful tool to facilitate system maintenance, load balancing, fault tolerance, and power saving, especially in clusters and data centers. Although pre-copy is the predominant approach in the state of the art, it is difficult for it to provide quick migration with low network overhead, because a great amount of data is transferred during migration, leading to large performance degradation of virtual machine services. This paper presents the design and implementation of a novel memory-compression-based VM migration approach (MECOM) that uses memory compression to provide fast, stable virtual machine migration while only slightly affecting virtual machine services. Based on memory page characteristics, we design an adaptive zero-aware compression algorithm that balances the performance and the cost of virtual machine migration. Pages are quickly compressed in batches on the source and exactly recovered on the target. Experiments demonstrate that, compared with Xen, our system reduces downtime by 27.1%, total migration time by 32%, and total transferred data by 68.8% on average.
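
For illustration, here is a minimal Python sketch of the zero-aware compression idea described above: pages dominated by zero bytes get a compact sparse encoding, mixed pages go through a general compressor, and incompressible pages are sent as-is. The page size, thresholds, and function names are assumptions made for this sketch, not the MECOM implementation.

```python
import zlib

PAGE_SIZE = 4096  # bytes; a typical x86 page size (assumption for illustration)

def compress_page(page: bytes) -> tuple[str, bytes]:
    """Pick a per-page strategy based on the page's zero-byte ratio.

    Pages that are almost all zeros are encoded as sparse (offset, value) pairs,
    mixed pages go through a fast general compressor, and pages that do not
    shrink are sent unmodified. Thresholds are illustrative only.
    """
    zero_ratio = page.count(0) / len(page)
    if zero_ratio > 0.95:
        sparse = [(i, b) for i, b in enumerate(page) if b != 0]
        encoded = b"".join(i.to_bytes(2, "big") + bytes([b]) for i, b in sparse)
        return "sparse", encoded
    compressed = zlib.compress(page, 1)  # fast compression level to keep migration quick
    if len(compressed) < len(page):
        return "zlib", compressed
    return "raw", page  # incompressible page: send as-is

def decompress_page(kind: str, data: bytes) -> bytes:
    """Exactly recover the original page on the migration target."""
    if kind == "sparse":
        page = bytearray(PAGE_SIZE)
        for off in range(0, len(data), 3):
            idx = int.from_bytes(data[off:off + 2], "big")
            page[idx] = data[off + 2]
        return bytes(page)
    if kind == "zlib":
        return zlib.decompress(data)
    return data

if __name__ == "__main__":
    page = bytes(4000) + b"migration-state" + bytes(81)
    kind, blob = compress_page(page)
    assert decompress_page(kind, blob) == page
    print(kind, len(blob), "bytes on the wire instead of", PAGE_SIZE)
```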


International Conference on Cloud Computing | 2009

Evaluating MapReduce on Virtual Machines: The Hadoop Case

Shadi Ibrahim; Hai Jin; Lu Lu; Li Qi; Song Wu; Xuanhua Shi

MapReduce is emerging as an important programming model for large-scale parallel applications. Meanwhile, Hadoop, an open-source implementation of MapReduce, enjoys wide popularity for developing data-intensive applications in the cloud. Since the computing unit in the cloud is virtual machine (VM) based, it is important to examine the applicability of MapReduce in a virtualized data center. Although the potential for poor performance and heavy load certainly exists, virtual machines can also be used to fully utilize system resources, ease the management of such systems, improve reliability, and save power. In this paper, we conduct a series of experiments to measure and analyze the performance of Hadoop on VMs. Our experiments serve as a basis for outlining several issues that need to be considered when deploying MapReduce in the cloud.


Journal of Network and Computer Applications | 2011

Optimizing the live migration of virtual machine by CPU scheduling

Hai Jin; Wei Gao; Song Wu; Xuanhua Shi; Xiaoxin Wu; Fan Zhou

Live migration reduces the downtime of migrated VMs by pre-copying the run-time memory state from the original host to the destination host. However, if dirty memory is generated at a high rate, live migration may take a long time to complete because a large amount of data must be transferred; in extreme cases where the dirty-memory generation rate exceeds the pre-copy speed, live migration fails. In this work we address the problem with an optimization scheme for live migration in which, according to the pre-copy speed, the VCPU working frequency may be reduced so that, at a certain phase of the pre-copy, the remaining dirty memory shrinks to a desired small amount, limiting the VM downtime during migration. The scheme targets scenarios where the migrated application writes memory quickly or the pre-copy speed is slow, e.g., due to low network bandwidth between the migration parties. The method trades application performance for migration liveness, and suits applications for which interruption causes much more serious problems than quality deterioration. Compared with the original live migration, our experiments show that the optimized scheme can reduce application downtime by up to 88% with acceptable overhead.
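
As a rough illustration of the throttling intuition (not the paper's actual controller), the sketch below derives a VCPU cap from the dirty-page generation rate and the pre-copy speed so that the remaining dirty memory trends toward a downtime budget. The linear scaling assumption, the minimum share, and all names are hypothetical.

```python
def vcpu_cap_for_migration(dirty_rate_mbps: float,
                           precopy_rate_mbps: float,
                           target_dirty_mb: float,
                           current_dirty_mb: float,
                           round_seconds: float = 1.0) -> float:
    """Return a VCPU cap in [0, 1] so the dirty set shrinks toward the target.

    Assumption: dirty-page generation scales roughly with how fast the VCPU runs,
    so capping the VCPU by `cap` scales the dirty rate by about the same factor.
    We want, per pre-copy round:
        cap * dirty_rate - precopy_rate <= (target - current) / round_seconds
    """
    drain_needed = (current_dirty_mb - target_dirty_mb) / round_seconds
    allowed_dirty_rate = max(precopy_rate_mbps - drain_needed, 0.0)
    if dirty_rate_mbps <= allowed_dirty_rate:
        return 1.0  # pre-copy already converges; no throttling needed
    return max(allowed_dirty_rate / dirty_rate_mbps, 0.05)  # keep a minimum CPU share

if __name__ == "__main__":
    # Memory is dirtied at 400 MB/s but the network drains only 120 MB/s:
    # without throttling, the remaining dirty set never shrinks.
    cap = vcpu_cap_for_migration(dirty_rate_mbps=400, precopy_rate_mbps=120,
                                 target_dirty_mb=32, current_dirty_mb=512,
                                 round_seconds=10.0)
    print(f"suggested VCPU cap: {cap:.2%}")
```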


International Journal of Grid and Utility Computing | 2005

An adaptive meta-scheduler for data-intensive applications

Hai Jin; Xuanhua Shi; Weizhong Qiang; Deqing Zou

Data-intensive applications, such as those in high-energy physics and bioinformatics, involve numerous jobs that access and generate large datasets. Effective scheduling of such applications is challenging because both computational resources and data storage resources must be considered. In this paper, we describe an adaptive scheduling model that accounts for the availability of computational, storage, and network resources. Based on this model, we implement a scheduler used in our campus grid. We analyse the results achieved by our scheduler by comparing it with the greedy algorithm widely used in computational grids and some data grids.
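
The sketch below illustrates, under assumed names and units, the kind of cost model such a scheduler might use: completion time at each site is estimated as compute time plus data-staging time, the second term being exactly what a purely compute-greedy scheduler ignores.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_cpus: int          # currently idle processors
    cpu_speed: float        # relative speed factor (1.0 = baseline)
    bandwidth_mbps: float   # bandwidth to the dataset's current location
    has_dataset: bool       # whether the input data is already stored locally

def estimated_completion_time(site: Site, cpu_hours: float, dataset_gb: float) -> float:
    """Estimate job completion time (hours) at a site as compute time plus
    any data-staging time (zero if the dataset is already local)."""
    if site.free_cpus == 0:
        return float("inf")
    compute_h = cpu_hours / (site.free_cpus * site.cpu_speed)
    transfer_h = 0.0 if site.has_dataset else (dataset_gb * 8 * 1024) / site.bandwidth_mbps / 3600
    return compute_h + transfer_h

def pick_site(sites: list[Site], cpu_hours: float, dataset_gb: float) -> Site:
    return min(sites, key=lambda s: estimated_completion_time(s, cpu_hours, dataset_gb))

if __name__ == "__main__":
    sites = [
        Site("fast-but-remote", free_cpus=64, cpu_speed=1.2, bandwidth_mbps=100, has_dataset=False),
        Site("slower-with-data", free_cpus=16, cpu_speed=1.0, bandwidth_mbps=1000, has_dataset=True),
    ]
    best = pick_site(sites, cpu_hours=40, dataset_gb=500)
    print("schedule on:", best.name)  # data locality outweighs raw compute here
```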


Archive | 2010

Tools and Technologies for Building Clouds

Hai Jin; Shadi Ibrahim; Tim Bell; Li Qi; Haijun Cao; Song Wu; Xuanhua Shi

With cloud computing growing in popularity, tools and technologies are emerging to build, access, manage, and maintain the clouds. These tools need to manage the huge number of operations within a cloud transparently and without service interruptions. Cloud computing promises lower costs, faster implementation, and more flexibility using mixtures of technologies, and the associated tools are critical for achieving this.


IEEE International Conference on Cloud Computing Technology and Science | 2015

Towards Optimized Fine-Grained Pricing of IaaS Cloud Platform

Hai Jin; Xinhou Wang; Song Wu; Sheng Di; Xuanhua Shi

Although many pricing schemes for IaaS platforms have been proposed, with pay-as-you-go and subscription/spot-market policies to guarantee service-level agreements, customers still suffer wasteful payment because of coarse-grained pricing. In this paper, we investigate an optimized fine-grained and fair pricing scheme. Two tough issues are addressed: (1) the profits of resource providers and customers often conflict; and (2) VM-maintenance overhead such as startup cost is often too large to neglect. Not only can we derive an optimal price in the acceptable price range that satisfies both customers and providers simultaneously, but we can also find a best-fit billing cycle that maximizes social welfare (i.e., the sum of the cost reductions for all customers and the revenue gained by the provider). We carefully evaluate the proposed fine-grained pricing scheme with two large-scale real-world production traces (one from the Grid Workload Archive and the other from a Google data center). Compared with the classic coarse-grained hourly pricing scheme, our experiments show that both customers and providers benefit from the new approach: the maximum social welfare increases by up to 72.98% and 48.15% on the DAS-2 trace and the Google trace, respectively.
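
The following toy sketch conveys the best-fit billing-cycle idea: for each candidate cycle, sum the customers' savings relative to hourly billing and subtract a penalty standing in for per-cycle VM-maintenance overhead, then pick the cycle with the highest total. The prices, job lengths, and the overhead term are illustrative assumptions, not the paper's model.

```python
import math

def social_welfare(billing_cycle_h: float, runtimes_h: list[float],
                   price_per_h: float, startup_cost: float) -> float:
    """Welfare of a billing cycle relative to hourly billing: customer savings
    minus a toy penalty that grows as the cycle shrinks, standing in for the
    per-cycle impact of VM startup/maintenance cost."""
    welfare = 0.0
    for t in runtimes_h:
        paid_hourly = math.ceil(t) * price_per_h
        paid_fine = math.ceil(t / billing_cycle_h) * billing_cycle_h * price_per_h
        customer_saving = paid_hourly - paid_fine
        provider_overhead = startup_cost * (1.0 / billing_cycle_h - 1.0)
        welfare += customer_saving - provider_overhead
    return welfare

def best_fit_cycle(candidates_h: list[float], runtimes_h: list[float],
                   price_per_h: float, startup_cost: float) -> float:
    return max(candidates_h,
               key=lambda c: social_welfare(c, runtimes_h, price_per_h, startup_cost))

if __name__ == "__main__":
    runtimes = [0.2, 0.4, 1.1, 2.7, 0.05]   # job lengths in hours (made up)
    cycles = [1.0, 0.5, 0.25, 1/12, 1/60]   # candidate billing cycles in hours
    cycle = best_fit_cycle(cycles, runtimes, price_per_h=0.10, startup_cost=0.002)
    print(f"best-fit billing cycle: {cycle*60:.0f} minutes")
```

With these made-up numbers the optimum is neither the coarse hourly cycle nor the finest per-minute cycle, which mirrors the trade-off the paper describes.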


International Journal of Grid and Utility Computing | 2005

RB-GACA: an RBAC based grid access control architecture

Hai Jin; Weizhong Qiang; Xuanhua Shi; Deqing Zou

Grid computing is emerging as a new form of wide-area distributed computing. Because services and resources in wide-area networks are heterogeneous, dynamic, and span multiple administrative domains, security is a critical concern in grid computing. Authorisation and access control, important aspects of security, have received increasing attention. This paper proposes a universal, scalable authorisation and access control architecture for grid computing, RB-GACA, based on the classical access control mechanism for distributed applications, Role Based Access Control (RBAC). The paper provides a flexible policy management approach for various grid environments. We also use a standard policy language to represent access control policies, providing general and standard support for different services and resources.
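
A minimal sketch of the RBAC principle underlying RB-GACA: users are assigned roles, roles carry permissions on grid resources, and a request is checked against that mapping. The role, user, and resource names are made up, and RB-GACA itself expresses policies in a standard policy language rather than hard-coded tables.

```python
# Role -> set of (resource, action) permissions (illustrative only)
ROLE_PERMISSIONS = {
    "grid-admin": {("compute-cluster", "submit"), ("compute-cluster", "cancel"),
                   ("storage", "read"), ("storage", "write")},
    "researcher": {("compute-cluster", "submit"), ("storage", "read")},
    "guest":      {("storage", "read")},
}

# User -> set of assigned roles (illustrative only)
USER_ROLES = {
    "alice": {"researcher"},
    "bob":   {"guest"},
}

def is_authorised(user: str, resource: str, action: str) -> bool:
    """Grant access if any of the user's roles carries the requested permission."""
    return any((resource, action) in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

if __name__ == "__main__":
    print(is_authorised("alice", "compute-cluster", "submit"))  # True
    print(is_authorised("bob", "compute-cluster", "submit"))    # False
```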


Future Generation Computer Systems | 2014

MECOM: Live migration of virtual machines by adaptively compressing memory pages

Hai Jin; Li Deng; Song Wu; Xuanhua Shi; Hanhua Chen; Xiaodong Pan

Live migration of virtual machines has been a powerful tool to facilitate system maintenance, load balancing, fault tolerance, and power saving, especially in clusters and data centers. Although pre-copy is extensively used to migrate the memory data of virtual machines, it cannot provide quick migration with low network overhead; instead, it leads to large performance degradation of virtual machine services due to the great amount of data transferred during migration. To solve this problem, this paper presents the design and implementation of a novel memory-compression-based VM migration approach (MECOM for short) that uses memory compression to provide fast, stable virtual machine migration while only slightly affecting virtual machine services. Based on memory page characteristics, we design an adaptive zero-aware compression algorithm that balances the performance and the cost of virtual machine migration. Using the proposed scheme, pages are rapidly compressed in batches on the source and exactly recovered on the target. Experimental results demonstrate that, compared with Xen, our system reduces downtime, total migration time, and total transferred data by 27.1%, 32%, and 68.8%, respectively.


International Conference on Parallel and Distributed Systems | 2010

VirtCFT: A Transparent VM-Level Fault-Tolerant System for Virtual Clusters

Minjia Zhang; Hai Jin; Xuanhua Shi; Song Wu

A virtual cluster consists of a multitude of virtual machines and software components that will eventually fail. In many environments, such failures can result in unanticipated, potentially devastating behavior and in service unavailability. The ability to fail over is essential to a virtual cluster's availability, reliability, and manageability. Most existing methods share common disadvantages: they require modifications to the target processes or their OSes, which is usually error-prone and sometimes impractical, or they only checkpoint processes rather than entire OS images, which limits where they can be applied. In this paper we present VirtCFT, an innovative and practical fault-tolerance system for virtual clusters. VirtCFT is a system-level, coordinated distributed checkpointing fault-tolerant system: it coordinates the distributed VMs to periodically reach a globally consistent state and checkpoints the whole virtual cluster, including the CPU, memory, and disk state of each VM as well as the network communications. When faults occur, VirtCFT automatically recovers the entire virtual cluster to the correct state within a few seconds and keeps it running. Unlike existing fault-tolerance mechanisms, VirtCFT provides a simpler and fully transparent fault-tolerant platform that allows existing, unmodified software and operating systems (regardless of version) to be protected from failures of the physical machine on which they run. We have implemented the system on the Xen virtualization platform. Our experiments with real-world benchmarks demonstrate the effectiveness and correctness of VirtCFT.
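
The toy sketch below shows the shape of a blocking coordinated checkpoint: pause every VM so the global state stops changing, checkpoint each one, and then resume. The ToyVM class is a stand-in, not Xen's or VirtCFT's interface, and a real system must also capture in-flight network traffic rather than simply pausing.

```python
class ToyVM:
    """Stand-in for a virtual machine handle (not a real hypervisor API)."""
    def __init__(self, name: str):
        self.name = name
        self.epoch = 0

    def pause(self) -> None: ...
    def resume(self) -> None: ...

    def checkpoint(self, epoch: int) -> bool:
        # A real system would save CPU, memory, disk, and buffered network state.
        self.epoch = epoch
        return True

def coordinated_checkpoint(vms: list[ToyVM], epoch: int) -> bool:
    """Blocking coordination: pause everyone, checkpoint, then resume.
    If any VM fails to checkpoint, the epoch is discarded and the previous
    globally consistent checkpoint remains the recovery point."""
    for vm in vms:
        vm.pause()
    try:
        ok = all(vm.checkpoint(epoch) for vm in vms)
    finally:
        for vm in vms:
            vm.resume()
    return ok

if __name__ == "__main__":
    cluster = [ToyVM(f"vm{i}") for i in range(4)]
    print("epoch 1 consistent:", coordinated_checkpoint(cluster, 1))
```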


IEEE Transactions on Dependable and Secure Computing | 2013

SafeStack: Automatically Patching Stack-Based Buffer Overflow Vulnerabilities

Gang Chen; Hai Jin; Deqing Zou; Bing Bing Zhou; Zhenkai Liang; Weide Zheng; Xuanhua Shi

Buffer overflow attacks still pose a significant threat to the security and availability of today's computer systems. Although a number of solutions provide adequate protection against buffer overflow attacks, most existing solutions terminate the vulnerable program when an overflow occurs, effectively rendering the program unavailable. The impact on availability is a serious problem on service-oriented platforms. This paper presents SafeStack, a system that can automatically diagnose and patch stack-based buffer overflow vulnerabilities. The key technique of our solution is to virtualize memory accesses and move the vulnerable buffer into protected memory regions, which provides fundamental and effective protection against recurrence of the same attack without stopping normal system execution. We developed a prototype on a Linux system and conducted extensive experiments to evaluate the effectiveness and performance of the system using a range of applications. Our experimental results show that SafeStack can quickly generate runtime patches to successfully handle recurring attacks. Furthermore, SafeStack incurs only acceptable overhead for the patched applications.
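
The following conceptual toy illustrates only the buffer-relocation idea, not SafeStack's actual mechanism, which operates on real process memory: the "patched" version redirects writes into a separate bounds-checked region so an oversized input can no longer reach the saved return address, and execution can continue.

```python
class ToyFrame:
    """Unpatched frame: a fixed-size buffer sits right below the saved return
    address, so copying an oversized input corrupts control data."""
    def __init__(self, buf_size: int = 16):
        self.memory = bytearray(buf_size) + b"RETADDR_"  # buffer then marker
        self.buf_size = buf_size

    def unsafe_copy(self, data: bytes) -> None:
        self.memory[:len(data)] = data  # no bounds check: can overwrite RETADDR_

class PatchedFrame(ToyFrame):
    """'Patched' frame: the vulnerable buffer is relocated into a separate,
    bounds-checked region, so the copy never touches the original frame."""
    def __init__(self, buf_size: int = 16):
        super().__init__(buf_size)
        self.protected_buf = bytearray(buf_size)

    def unsafe_copy(self, data: bytes) -> None:
        truncated = data[:self.buf_size].ljust(self.buf_size, b"\0")
        self.protected_buf[:self.buf_size] = truncated

if __name__ == "__main__":
    attack = b"A" * 20  # 20 bytes into a 16-byte buffer
    plain, patched = ToyFrame(), PatchedFrame()
    plain.unsafe_copy(attack)
    patched.unsafe_copy(attack)
    print("return address intact (unpatched):", plain.memory.endswith(b"RETADDR_"))
    print("return address intact (patched):  ", patched.memory.endswith(b"RETADDR_"))
```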

Collaboration


Dive into Xuanhua Shi's collaborations.

Top Co-Authors

Hai Jin, Huazhong University of Science and Technology
Song Wu, Huazhong University of Science and Technology
Deqing Zou, Huazhong University of Science and Technology
Weizhong Qiang, Huazhong University of Science and Technology
Li Qi, Huazhong University of Science and Technology
Yongcai Tao, Huazhong University of Science and Technology
Lu Lu, Huazhong University of Science and Technology
Wei Gao, Huazhong University of Science and Technology
Li Deng, Huazhong University of Science and Technology
Ligang He, University of Warwick