Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhengwei Qi is active.

Publication


Featured research published by Zhengwei Qi.


Journal of Computer and System Sciences | 2013

A multi-objective ant colony system algorithm for virtual machine placement in cloud computing

Yongqiang Gao; Haibing Guan; Zhengwei Qi; Yang Hou; Liang Liu

Virtual machine placement is a process of mapping virtual machines to physical machines. The optimal placement is important for improving power efficiency and resource utilization in a cloud computing environment. In this paper, we propose a multi-objective ant colony system algorithm for the virtual machine placement problem. The goal is to efficiently obtain a set of non-dominated solutions (the Pareto set) that simultaneously minimize total resource wastage and power consumption. The proposed algorithm is tested with some instances from the literature. Its solution performance is compared to that of an existing multi-objective genetic algorithm and two single-objective algorithms, a well-known bin-packing algorithm and a max-min ant system (MMAS) algorithm. The results show that the proposed algorithm is more efficient and effective than the methods we compared it to.
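As an illustration of the Pareto-set idea in this abstract, the sketch below filters candidate placements down to the non-dominated (wastage, power) pairs; the function names and numbers are illustrative, not taken from the paper.

```python
def dominates(a, b):
    """True if placement a is at least as good as b on both objectives
    (resource wastage, power consumption) and is not identical to b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(solutions):
    """Keep only non-dominated (wastage, power) pairs, as the ant colony
    would when maintaining its archive of candidate placements."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

# Each tuple is (total resource wastage, power consumption in watts)
# of one candidate VM placement; values are invented for the example.
candidates = [(0.30, 410.0), (0.25, 450.0), (0.40, 400.0), (0.35, 470.0)]
front = pareto_front(candidates)
```

Here `(0.35, 470.0)` is dominated by `(0.30, 410.0)` (worse on both objectives) and drops out, while the other three trade off wastage against power and survive.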


Aspect-Oriented Software Development | 2012

DiSL: a domain-specific language for bytecode instrumentation

Lukáš Marek; Alex Villazón; Yudi Zheng; Danilo Ansaloni; Walter Binder; Zhengwei Qi

Many dynamic analysis tools for programs written in managed languages such as Java rely on bytecode instrumentation. Tool development is often tedious because of the use of low-level bytecode manipulation libraries. While aspect-oriented programming (AOP) offers high-level abstractions to concisely express certain dynamic analyses, the join point model of mainstream AOP languages such as AspectJ is not well suited for many analysis tasks and the code generated by weavers in support of certain language features incurs high overhead. In this paper we introduce DiSL (domain-specific language for instrumentation), a new language especially designed for dynamic program analysis. DiSL offers an open join point model where any region of bytecodes can be a shadow, synthetic local variables for efficient data passing, efficient access to comprehensive static and dynamic context information, and weave-time execution of user-defined static analysis code. We demonstrate the benefits of DiSL with a case study, recasting an existing dynamic analysis tool originally implemented in AspectJ. We show that the DiSL version offers better code coverage, incurs significantly less overhead, and eases the integration of new analysis features that could not be expressed in AspectJ.
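DiSL itself is a Java-hosted DSL, so the following is only a loose Python analogy: the interpreter's built-in tracing hook plays the role of a snippet woven "before" every method-entry shadow, observing each call without modifying the analyzed code.

```python
import sys
from collections import Counter

calls = Counter()

def profiler(frame, event, arg):
    # Analogous to a snippet woven at method entry: record every function
    # call in the traced code without touching the code itself.
    if event == "call":
        calls[frame.f_code.co_name] += 1
    return None  # no per-line tracing needed

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.settrace(profiler)
fib(5)
sys.settrace(None)
```

After the run, `calls["fib"]` holds the number of recursive invocations, the kind of dynamic metric an instrumentation-based analysis collects.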


Communications and Mobile Computing | 2011

Power Consumption of Virtual Machine Live Migration in Clouds

Qiang Huang; Fengqian Gao; Rui Wang; Zhengwei Qi

Virtualization technology has been employed increasingly widely in modern data centers to improve their energy efficiency. In particular, virtual machine (VM) migration brings multiple benefits, such as resource (CPU, memory, etc.) redistribution and energy-aware consolidation. However, migrating virtual machines itself incurs extra power consumption, so a better understanding of its effect on system power consumption is highly desirable. In this paper, we present an evaluation of the power consumption effects of live migration of VMs. The results show that the power overhead of migration is much lower when a consolidation strategy is employed than under regular deployment without consolidation. Our results are based on a typical physical server whose power draw is a linear function of CPU utilization.
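A linear power model of the kind the evaluation assumes can be sketched as follows; the wattage figures and function names are hypothetical, chosen only to illustrate how migration overhead would be costed.

```python
def server_power(cpu_util, p_idle=120.0, p_peak=250.0):
    """Linear power model: power grows linearly with CPU utilization
    between an idle floor and a peak draw (watts; figures illustrative)."""
    assert 0.0 <= cpu_util <= 1.0
    return p_idle + (p_peak - p_idle) * cpu_util

def migration_energy(extra_util, seconds, **model):
    """Energy cost of a live migration, modeled as the extra power drawn
    by the migration's CPU load integrated over the migration time."""
    return (server_power(extra_util, **model)
            - server_power(0.0, **model)) * seconds

half_load = server_power(0.5)          # 185.0 W under this toy model
cost = migration_energy(0.2, 30.0)     # joules for a 30 s migration
```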


IEEE Symposium on Security and Privacy | 2015

Effective Real-Time Android Application Auditing

Mingyuan Xia; Lu Gong; Yuanhao Lyu; Zhengwei Qi; Xue Liu

Mobile applications can access both sensitive personal data and the network, giving rise to threats of data leaks. App auditing is a fundamental program analysis task to reveal such leaks. Currently, static analysis is the de facto technique, exhaustively examining all data flows and pinpointing problematic ones. However, static analysis generates false alarms because it over-approximates program behavior, and it requires minutes or even hours to examine a real app. These shortcomings greatly limit the usability of automatic app auditing. To overcome these limitations, we design AppAudit, which relies on the synergy of static and dynamic analysis to provide effective real-time app auditing. AppAudit embodies a novel dynamic analysis that can simulate the execution of part of the program and perform customized checks at each program state. AppAudit uses this to prune false positives of an efficient but over-approximating static analysis. Overall, AppAudit makes app auditing useful for app market operators, app developers, and mobile end users, revealing data leaks effectively and efficiently. We apply AppAudit to more than 1,000 known malware samples and 400 real apps from various markets. Overall, AppAudit reports a comparable number of true data leaks and eliminates all false positives, while being 8.3x faster and using 90% less memory than existing approaches. AppAudit also uncovers 30 data leaks in real apps. Our further study reveals the common patterns behind these leaks: 1) most leaks are caused by third-party advertising modules; 2) most data are leaked through simple unencrypted HTTP requests. We believe AppAudit serves as an effective tool to identify data-leaking apps and offers insights for designing runtime techniques against data leaks.
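To illustrate the source-to-sink flow tracking that app auditing performs, here is a minimal, hypothetical taint-propagation sketch over a toy instruction format; it is not AppAudit's actual program representation.

```python
# Toy three-address instructions: ("source", dst) marks dst as sensitive
# (e.g. reading the device ID), ("assign", dst, src) copies a value, and
# ("sink", var) sends a value out (e.g. an outgoing HTTP request).
def find_leaks(program):
    """Report the indices of sink instructions reached by tainted data."""
    tainted, leaks = set(), []
    for i, instr in enumerate(program):
        op = instr[0]
        if op == "source":
            tainted.add(instr[1])
        elif op == "assign":
            _, dst, src = instr
            if src in tainted:
                tainted.add(dst)      # taint flows through the copy
            else:
                tainted.discard(dst)  # overwriting kills old taint
        elif op == "sink":
            if instr[1] in tainted:
                leaks.append(i)
    return leaks

prog = [("source", "id"), ("assign", "x", "id"),
        ("assign", "x", "zero"), ("sink", "x"), ("sink", "id")]
leaks = find_leaks(prog)
```

Only the final sink leaks: `x` was tainted by `id` but then overwritten, so the sink at index 3 is clean while the sink at index 4 still receives the sensitive `id`.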


Computers & Electrical Engineering | 2014

Service level agreement based energy-efficient resource management in cloud data centers

Yongqiang Gao; Haibing Guan; Zhengwei Qi; Tao Song; Fei Huan; Liang Liu

As cloud computing has become a popular computing paradigm, many companies have begun to build increasing numbers of energy-hungry data centers for hosting cloud computing applications. Thus, energy consumption is increasingly becoming a critical issue in cloud data centers. In this paper, we propose a dynamic resource management scheme which takes advantage of both dynamic voltage/frequency scaling and server consolidation to achieve energy efficiency and desired service level agreements in cloud data centers. The novelty of the proposed scheme is to integrate timing analysis, queuing theory, integer programming, and control theory techniques. Our experimental results indicate that, compared to a statically provisioned data center that runs at the maximum processor speed without utilizing the sleep state, the proposed resource management scheme can achieve up to 50.3% energy savings while satisfying response-time-based service level agreements with rapidly changing dynamic workloads.
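As a hedged sketch of how queuing theory can drive DVFS decisions like those combined in this paper, the snippet below models a server as an M/M/1 queue and picks the lowest frequency that still meets a response-time SLA; the model and names are illustrative assumptions, not the paper's actual scheme.

```python
def lowest_frequency(arrival_rate, base_service_rate, sla_response_time,
                     freqs=(0.6, 0.8, 1.0)):
    """Pick the lowest DVFS setting (fraction of peak speed) whose M/M/1
    mean response time 1/(mu*f - lambda) still meets the SLA."""
    for f in sorted(freqs):
        mu = base_service_rate * f
        if mu > arrival_rate and 1.0 / (mu - arrival_rate) <= sla_response_time:
            return f
    return max(freqs)  # SLA unreachable: fall back to peak speed

# 50 req/s arriving, 100 req/s served at peak, 50 ms response-time SLA.
chosen = lowest_frequency(50.0, 100.0, 0.05)
```

At 0.6 of peak the mean response time is 100 ms and violates the SLA, while 0.8 of peak yields about 33 ms, so the controller can slow the processor to 0.8 and save power without breaking the agreement.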


Computer Networks | 2015

A survey on data center networking for cloud computing

Bin Wang; Zhengwei Qi; Ruhui Ma; Haibing Guan; Athanasios V. Vasilakos

Data Center Networks (DCNs) are an essential infrastructure that impacts the success of cloud computing. A scalable and efficient data center is crucial in both the construction and operation of stable cloud services. In recent years, the growing importance of data center networking has drawn much attention to related issues, including connective simplification and service stability. However, existing DCNs lack the necessary agility for multi-tenant demands in the cloud, resulting in poor responsiveness and limited scalability. In this paper, we present an overview of data center networks for cloud computing and evaluate construction prototypes based on these issues. Specifically, we provide detailed descriptions of several important aspects: the physical architecture, virtualized infrastructure, and DCN routing. Each section of this work discusses and evaluates resolution approaches and presents use cases for cloud computing services. In our attempt to build insight relevant to future research, we also present some open research issues. Based on experience gained in both research and industrial trials, the future of data center networking must include careful consideration of the interactions between the important aspects mentioned above.


Embedded and Ubiquitous Computing | 2010

Real-time Enhancement for Xen Hypervisor

Peijie Yu; Mingyuan Xia; Qian Lin; Min Zhu; Shang Gao; Zhengwei Qi; Kai Chen; Haibing Guan

System virtualization, which provides good isolation, is now widely used in server consolidation. Meanwhile, one hot topic in this field is extending virtualization to embedded systems. However, current popular virtualization platforms do not support real-time operating systems such as embedded Linux well, because the platforms are not real-time aware, which leads to low-performance I/O and high scheduling latency. The goal of this paper is to make the Xen virtualization platform friendly to real-time operating systems. We improve two aspects of Xen. First, we improve the Xen scheduler to manage the scheduling latency and response time of the real-time operating system. Second, we introduce a load-balancing method for multiple real-time operating systems. Our experiments demonstrate that our enhancements enable Xen to support real-time operating systems well, improving real-time performance by about 20%.
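A minimal sketch of making the scheduler real-time aware, assuming a toy run queue in which real-time guests preempt ordinary ones; this is an illustration of the idea, not Xen's actual scheduler code.

```python
def pick_next(run_queue):
    """Pick the next vCPU to run: real-time guests preempt ordinary
    guests, and among real-time guests the earliest deadline wins.
    Ordinary guests fall back to a credit-style pick (most credits)."""
    rt = [v for v in run_queue if v["rt"]]
    if rt:
        return min(rt, key=lambda v: v["deadline"])
    return max(run_queue, key=lambda v: v["credits"], default=None)

queue = [
    {"name": "dom1",   "rt": False, "credits": 300},
    {"name": "rt_dom", "rt": True,  "deadline": 5, "credits": 100},
    {"name": "dom2",   "rt": False, "credits": 250},
]
chosen = pick_next(queue)
```

The real-time domain is chosen ahead of a general-purpose domain with more credits, which is exactly the latency-over-fairness trade such an enhancement makes.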


IEEE Transactions on Parallel and Distributed Systems | 2014

vGASA: Adaptive Scheduling Algorithm of Virtualized GPU Resource in Cloud Gaming

Chao Zhang; Jianguo Yao; Zhengwei Qi; Miao Yu; Haibing Guan

As the virtualization technology for GPUs matures, cloud gaming has become an emerging application among cloud services. In addition to the poor default mechanisms of GPU resource sharing, the performance of cloud games is inevitably undermined by various runtime uncertainties such as rendering complex game scenarios. The question of how to handle the runtime uncertainties for GPU resource sharing remains unanswered. To address this challenge, we propose vGASA, a virtualized GPU resource adaptive scheduling algorithm in cloud gaming. vGASA interposes scheduling algorithms in the graphics API of the operating system, and hence the host graphic driver or the guest operating system remains unmodified. To fulfill the service level agreement as well as maximize GPU usage, we propose three adaptive scheduling algorithms featuring feedback control that mitigates the impact of the runtime uncertainties on the system performance. The experimental results demonstrate that vGASA is able to maintain frames per second of various workloads at the desired level with the performance overhead limited to 5-12 percent.
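The feedback-control flavor of vGASA can be illustrated with a toy proportional controller that tunes a per-frame sleep until the frame rate settles at a target; the gain and timings here are invented for the example and are not the paper's parameters.

```python
def fps_control_step(render_time, target_fps, sleep, kp=0.5):
    """One feedback step: nudge the per-frame sleep so the frame period
    (render time + sleep) approaches the SLA period 1/target_fps."""
    period = render_time + sleep
    error = 1.0 / target_fps - period   # positive: running too fast
    return max(0.0, sleep + kp * error)

# Simulate a game whose frames render in 10 ms under a 30 FPS target.
sleep = 0.0
for _ in range(100):
    sleep = fps_control_step(0.010, 30.0, sleep)
fps = 1.0 / (0.010 + sleep)
```

The loop converges on a sleep of about 23.3 ms, capping the game at its 30 FPS share and freeing the remaining GPU time for co-located workloads.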


High Performance Distributed Computing | 2013

VGRIS: virtualized GPU resource isolation and scheduling in cloud gaming

Miao Yu; Chao Zhang; Zhengwei Qi; Jianguo Yao; Yin Wang; Haibing Guan

Fueled by the maturity of virtualization technology for the Graphics Processing Unit (GPU), an increasing number of data centers are dedicated to GPU-related computation tasks in cloud gaming. However, GPU resource sharing in these applications is usually poor, because typical cloud gaming service providers often allocate one GPU exclusively to one game. To manage computational resources efficiently, cloud computing needs multi-task scheduling technologies that improve GPU utilization. In this paper, we propose VGRIS, a resource management framework for Virtualized GPU Resource Isolation and Scheduling in cloud gaming. By leveraging the mature GPU paravirtualization architecture, VGRIS resides in the host through library API interception, while the guest OS and the GPU computing applications remain unmodified. Within the proposed framework, we implemented three scheduling algorithms for different objectives: Service Level Agreement (SLA)-aware scheduling, proportional-share scheduling, and a hybrid scheduling that mixes the former two. Such a scheduling framework makes it possible to handle different kinds of GPU computation tasks for different purposes in cloud gaming. Our experimental results show that each scheduling algorithm can achieve its goals under various workloads.
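A proportional-share policy like the one named above can be sketched as a simple weighted split of a scheduling window; the weights and window length are illustrative, not VGRIS's actual parameters.

```python
def gpu_shares(weights, window_ms=100.0):
    """Split a scheduling window among games in proportion to their
    weights, as a proportional-share GPU policy would divide GPU time."""
    total = sum(weights.values())
    return {game: window_ms * w / total for game, w in weights.items()}

# A double-weighted game gets half the window; the others split the rest.
shares = gpu_shares({"gameA": 2, "gameB": 1, "gameC": 1})
```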


Journal of Systems Architecture | 2013

Quality of service aware power management for virtualized data centers

Yongqiang Gao; Haibing Guan; Zhengwei Qi; Bin Wang; Liang Liu

Nowadays, one of the most important goals of data center management is to maximize profit by minimizing power consumption and service-level agreement violations of hosted applications. In this paper, we propose an integrated management solution which takes advantage of both virtual machine resizing and server consolidation to achieve energy efficiency and quality of service in virtualized data centers. A novelty of the solution is the integration of linear programming, ant colony optimization, and control theory techniques. Empirical results show that our approach can achieve power savings of 41.3% compared to an uncontrolled system, while ensuring application performance.
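Server consolidation, one half of the proposed solution, can be illustrated with the classic first-fit-decreasing heuristic; the paper's actual optimizer combines ant colony optimization and linear programming, so this is only a stand-in sketch.

```python
def consolidate(vm_loads, capacity=1.0):
    """First-fit-decreasing sketch of server consolidation: pack VM CPU
    demands (fractions of one host) onto as few hosts as possible, so
    the hosts left empty can be put to sleep."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:               # reuse the first host that fits
            if sum(host) + load <= capacity:
                host.append(load)
                break
        else:                            # no host fits: power one on
            hosts.append([load])
        # (for-else: the else branch runs only when no break occurred)
    return hosts

# Five VMs totalling 2.1 hosts' worth of CPU fit on three hosts.
hosts = consolidate([0.5, 0.7, 0.3, 0.2, 0.4])
```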

Collaboration


Dive into Zhengwei Qi's collaborations.

Top Co-Authors

Haibing Guan (Shanghai Jiao Tong University)

Miao Yu (Shanghai Jiao Tong University)

Qian Lin (Shanghai Jiao Tong University)

Ruhui Ma (Shanghai Jiao Tong University)

Shang Gao (Shanghai Jiao Tong University)

Jianguo Yao (Shanghai Jiao Tong University)