Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ruhui Ma is active.

Publication


Featured research published by Ruhui Ma.


Computer Networks | 2015

A survey on data center networking for cloud computing

Bin Wang; Zhengwei Qi; Ruhui Ma; Haibing Guan; Athanasios V. Vasilakos

Data Center Networks (DCNs) are an essential infrastructure that impacts the success of cloud computing. A scalable and efficient data center is crucial in both the construction and operation of stable cloud services. In recent years, the growing importance of data center networking has drawn much attention to related issues including connective simplification and service stability. However, existing DCNs lack the necessary agility for multi-tenant demands in the cloud, resulting in poor responsiveness and limited scalability. In this paper, we present an overview of data center networks for cloud computing and evaluate construction prototypes based on these issues. We provide, specifically, detailed descriptions of several important aspects: the physical architecture, virtualized infrastructure, and DCN routing. Each section of this work discusses and evaluates resolution approaches, and presents use cases for cloud computing services. In our attempt to build insight relevant to future research, we also present some open research issues. Based on experience gained in both research and industrial trials, the future of data center networking must include careful consideration of the interactions between the important aspects mentioned above.


International Conference on Computer Sciences and Convergence Information Technology | 2010

Performance analysis towards a KVM-Based embedded real-time virtualization architecture

Jun Zhang; Kai Chen; Baojing Zuo; Ruhui Ma; Yaozu Dong; Haibing Guan

In recent years, the embedded world has been undergoing a shift from traditional single-core processors to processors with multiple cores. However, this shift poses the challenge of adapting legacy uniprocessor-oriented real-time operating systems (RTOSes) to exploit the capabilities of multi-core processors. In addition, some embedded systems are inevitably moving towards integrating real-time with off-the-shelf time-sharing systems, as the combination of the two has the potential to provide not only timely and deterministic responses but also a large application base. Virtualization technology, which ensures strong isolation between virtual machines, is therefore a promising solution to the above-mentioned issues. However, there remains a concern regarding the responsiveness of an RTOS running on top of a virtual machine. In this paper we propose an embedded real-time virtualization architecture based on the Kernel-based Virtual Machine (KVM), in which VxWorks and Linux are combined. We then analyze and evaluate how KVM influences the interrupt-response times of VxWorks as a guest operating system. By applying several real-time performance tuning methods on the host Linux, we show that sub-millisecond interrupt response latency can be achieved on the guest VxWorks.


International Conference on Cluster Computing | 2012

Adaptive and Scalable Optimizations for High Performance SR-IOV

Zhiqiang Huang; Ruhui Ma; Jian Li; Zhibo Chang; Haibing Guan

High performance networking interfaces, such as 10-Gigabit Ethernet (10GE), are now widely deployed in commercial Cloud computing environments. Virtualization is a standard technique for these environments, and one of its key challenges is achieving highly efficient and scalable I/O virtualization. Single Root I/O Virtualization (SR-IOV) eliminates the overhead of redundant data copies and the virtual network switch through direct I/O, but needs more work on performance and scalability. In this paper, we first study the shortcomings of SR-IOV with 10GE networking and find two major challenges. Due to multiplexing of traffic from different virtual machines, SR-IOV may generate redundant interrupts unexpectedly and thus result in high CPU overhead. SR-IOV also suffers from single-threaded NAPI, which prevents it from fully utilizing multi-core machines. We then propose two optimizations for enhancing SR-IOV performance. The first uses adaptive interrupt rate control (AIRC) to reduce CPU overhead caused by excessive interrupts. The second is a multi-threaded network driver (MTND) which allows SR-IOV to make full use of multi-core resources. We implement these optimizations and carry out a detailed performance evaluation. The results show that AIRC can reduce CPU overhead by up to 143% and MTND can improve SR-IOV performance by up to 38%.
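The AIRC idea described above, i.e. raising the interrupt moderation interval under heavy traffic so one interrupt covers many packets, and lowering it under light traffic to keep latency low, can be sketched as follows. This is an illustrative simulation only, not the paper's driver code; the class name, thresholds, and scaling factor are all assumptions chosen for the example.

```python
# Illustrative sketch of adaptive interrupt rate control (AIRC); all
# names and thresholds here are hypothetical, not the paper's driver.
class AdaptiveInterruptRateControl:
    def __init__(self, min_interval_us=10, max_interval_us=500):
        self.min_interval_us = min_interval_us  # floor keeps latency low
        self.max_interval_us = max_interval_us  # ceiling bounds batching delay
        self.interval_us = min_interval_us

    def update(self, pkts_per_ms):
        """Pick a moderation interval proportional to the observed packet rate."""
        if pkts_per_ms > 100:        # heavy traffic: coalesce aggressively
            target = self.max_interval_us
        elif pkts_per_ms > 10:       # moderate traffic: scale linearly
            target = int(self.min_interval_us + (pkts_per_ms - 10) * 4)
        else:                        # light traffic: favor low latency
            target = self.min_interval_us
        self.interval_us = max(self.min_interval_us,
                               min(self.max_interval_us, target))
        return self.interval_us
```

The point of the feedback loop is that the cost of an interrupt is amortized over more packets exactly when interrupts are most frequent, which is where the CPU overhead savings come from.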


IEEE International Conference on Cloud Computing Technology and Science | 2014

Workload-Aware Credit Scheduler for Improving Network I/O Performance in Virtualization Environment

Haibing Guan; Ruhui Ma; Jian Li

Single-root I/O virtualization (SR-IOV) has become the de facto standard of network virtualization in cloud infrastructure. Owing to the high interrupt frequency and heavy cost per interrupt in high-speed network virtualization, the performance of network virtualization is closely correlated with the computing resource allocation policy in the Virtual Machine Manager (VMM). Therefore, more sophisticated methods are needed to process the irregularity and high frequency of network interrupts in high-speed network virtualization environments. However, the I/O-intensive and CPU-intensive applications in virtual machines are treated in the same manner since application attributes are transparent to the scheduler in the hypervisor, and this workload unawareness leaves virtual systems unable to take full advantage of high performance networks. In this paper, we discuss the SR-IOV networking solution and show by experiment that the current credit scheduler in Xen does not utilize high performance networks efficiently. Hence we propose a novel workload-aware scheduling model with two optimizations to eliminate the bottleneck caused by the scheduler. In this model, guest domains are divided into I/O-intensive domains and CPU-intensive domains according to their monitored behaviour. I/O-intensive domains can obtain extra credits that CPU-intensive domains are willing to share. In addition, the total number of credits available is adjusted to accelerate I/O responsiveness. Our experimental evaluations show that the new scheduling model improves bandwidth and reduces response time while preserving fairness between I/O-intensive and CPU-intensive domains. This enables virtualization infrastructure to provide cloud computing services more efficiently and predictably.
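The credit-sharing step described above (CPU-intensive domains donate a share of their credits, which is redistributed to I/O-intensive domains) can be sketched as a small function. This is a hedged illustration, not the Xen credit scheduler implementation; the function name, dictionary layout, and `share_ratio` are assumptions made for the example.

```python
# Hypothetical sketch of workload-aware credit sharing; names and the
# 10% share_ratio are illustrative, not from the Xen implementation.
def redistribute_credits(domains, share_ratio=0.1):
    """domains: list of dicts with 'name', 'credits', 'io_intensive'."""
    io_doms = [d for d in domains if d["io_intensive"]]
    cpu_doms = [d for d in domains if not d["io_intensive"]]
    if not io_doms or not cpu_doms:
        return domains               # nothing to share
    pool = 0
    for d in cpu_doms:               # CPU-bound domains donate spare credits
        donation = int(d["credits"] * share_ratio)
        d["credits"] -= donation
        pool += donation
    bonus = pool // len(io_doms)     # split the pool among I/O-bound domains
    for d in io_doms:
        d["credits"] += bonus
    return domains
```

Because credits are only moved, not created, the total scheduling budget (and hence overall fairness) is preserved while I/O-intensive domains get scheduled sooner after interrupts.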


IEEE International Conference on Cloud Computing Technology and Science | 2017

TEES: An Efficient Search Scheme over Encrypted Data on Mobile Cloud

Jian Li; Ruhui Ma; Haibing Guan

Cloud storage provides convenient, massive, and scalable storage at low cost, but data privacy is a major concern that prevents users from storing files on the cloud trustingly. One way of enhancing privacy from the data owner's point of view is to encrypt the files before outsourcing them to the cloud and decrypt them after downloading. However, data encryption is a heavy overhead for mobile devices, and the data retrieval process incurs complicated communication between the data user and the cloud. With mobile devices' typically limited bandwidth and battery life, these issues introduce heavy computation and communication overhead as well as higher power consumption for mobile users, which makes encrypted search over the mobile cloud very challenging. In this paper, we propose Traffic and Energy saving Encrypted Search (TEES), a bandwidth- and energy-efficient encrypted search architecture over the mobile cloud. The proposed architecture offloads the computation from mobile devices to the cloud, and we further optimize the communication between the mobile clients and the cloud. It is demonstrated that data privacy does not degrade when the performance enhancement methods are applied. Our experiments show that TEES reduces computation time by 23 to 46 percent and saves energy consumption by 35 to 55 percent per file retrieval, while network traffic during file retrieval is also significantly reduced.
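The offloading idea above (the client sends only a search token, the cloud does the ranking, and the client downloads just the matching file identifiers) can be illustrated with a toy keyed-hash index. This is emphatically not the TEES protocol: the HMAC trapdoor, the term-frequency score, and all names here are stand-ins chosen to make the bandwidth-saving flow concrete.

```python
# Toy model of server-side ranked encrypted search; the HMAC trapdoor
# and term-frequency scoring are illustrative stand-ins, not TEES itself.
import hashlib
import hmac

KEY = b"shared-secret"  # hypothetical key shared by data owner and data user

def trapdoor(keyword):
    """Client side: derive a search token without revealing the keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(files):
    """Owner side: files is {file_id: [keywords]} -> {trapdoor: {file_id: score}}."""
    index = {}
    for fid, words in files.items():
        for w in words:
            scores = index.setdefault(trapdoor(w), {})
            scores[fid] = scores.get(fid, 0) + 1   # term frequency as a toy score
    return index

def cloud_search(index, td, top_k=2):
    """Cloud side: rank matches and return only the top file ids."""
    matches = index.get(td, {})
    return sorted(matches, key=matches.get, reverse=True)[:top_k]
```

The mobile client's work is reduced to one HMAC per query plus downloading the top-k ids, which is where the traffic and energy savings in such a scheme come from.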


IEEE Transactions on Parallel and Distributed Systems | 2013

Performance Enhancement for Network I/O Virtualization with Efficient Interrupt Coalescing and Virtual Receive-Side Scaling

Haibing Guan; Yaozu Dong; Ruhui Ma; Dongxiao Xu; Yang Zhang; Jian Li

Virtualization is a key technology in cloud computing; it can accommodate numerous guest VMs to provide transparent services, such as live migration, high availability, and rapid checkpointing. Cloud computing using virtualization allows workloads to be deployed and scaled quickly through the rapid provisioning of virtual machines on physical machines. However, I/O virtualization, particularly for networking, suffers from significant performance degradation in the presence of high-speed networking connections. In this paper, we first analyze performance challenges in network I/O virtualization and identify two problems: conventional network I/O virtualization suffers from excessive virtual interrupts to guest VMs, and the back-end driver does not efficiently use the computing resources of underlying multicore processors. To address these challenges, we propose optimization methods for enhancing networking performance: 1) efficient interrupt coalescing for network I/O virtualization and 2) virtual receive-side scaling to effectively leverage multicore processors. These methods are implemented and evaluated with extensive performance tests on a Xen virtualization platform. Our experimental results confirm that the proposed optimizations can significantly improve network I/O virtualization performance and effectively solve the performance challenges.
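The first optimization above, coalescing virtual interrupts, amounts to batching packets and injecting one virtual interrupt per batch, triggered by either a packet-count or a timeout threshold. The following model is a hedged sketch of that logic, not the paper's Xen back-end code; the class name and thresholds are assumptions for illustration.

```python
# Illustrative model of virtual interrupt coalescing in a back-end driver;
# names and thresholds are hypothetical, not the paper's Xen code.
class VirtualInterruptCoalescer:
    def __init__(self, max_batch=32, timeout_us=100):
        self.max_batch = max_batch      # inject after this many packets...
        self.timeout_us = timeout_us    # ...or after this much elapsed time
        self.pending = 0
        self.elapsed_us = 0
        self.injected = 0

    def on_packet(self, delta_us):
        """Called per received packet; delta_us since the previous packet."""
        self.pending += 1
        self.elapsed_us += delta_us
        if self.pending >= self.max_batch or self.elapsed_us >= self.timeout_us:
            self.injected += 1          # one virtual interrupt for the batch
            self.pending = 0
            self.elapsed_us = 0
```

The timeout bound is what keeps worst-case delivery latency acceptable for a guest VM even when traffic is too light to fill a batch.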


International Conference on Cluster Computing | 2012

Adjustable Credit Scheduling for High Performance Network Virtualization

Zhibo Chang; Jian Li; Ruhui Ma; Zhiqiang Huang; Haibing Guan

Virtualization technology is now widely adopted in cloud computing to support heterogeneous and dynamic workloads. The scheduler in a virtual machine monitor (VMM) plays an important role in allocating resources. However, the type of application in a virtual machine (VM) is unknown to the scheduler, and I/O-intensive and CPU-intensive applications are treated the same. This makes virtual systems unable to take full advantage of high performance networks such as 10-Gigabit Ethernet. In this paper, we review the SR-IOV networking solution and show by experiment that the current credit scheduler in Xen does not utilize high performance networks efficiently. For this reason, we propose a novel scheduling model with two optimizations to eliminate the bottleneck caused by the scheduler. In this model, guest domains are divided into I/O-intensive domains and CPU-intensive domains according to their monitored behaviour. I/O-intensive domains can obtain extra credits that CPU-intensive domains are willing to share. In addition, the total number of available credits is adjusted dynamically to accelerate I/O responsiveness. Our experimental evaluation with benchmarks shows that the new scheduling model improves bandwidth even when the system load is very high.


International Conference on Information Engineering and Computer Science | 2010

Performance Tuning Towards a KVM-Based Low Latency Virtualization System

Baojing Zuo; Kai Chen; Alei Liang; Haibing Guan; Jun Zhang; Ruhui Ma; Hongbo Yang

Utilizing virtualization technology to combine a real-time operating system (RTOS) and an off-the-shelf time-sharing general-purpose operating system (GPOS) has attracted much interest recently. Such a combination has the potential to provide a large application base and to guarantee timely, deterministic responses for real-time applications, yet there have been no convincing experimental results about its real-time properties. In this paper, we analyze the interrupt latency of an RTOS running on Linux KVM after some preliminary tuning, and find that the System Management Interrupt (SMI) is the main factor that makes the maximum latency unsatisfactory, so we propose a method to limit the worst-case interrupt latency to an acceptable interval. Furthermore, we also find that boosting priority may waste CPU resources when the RTOS is not executing real-time tasks, so we design a co-scheduling mechanism to improve the CPU throughput of the GPOS.


Journal of Computer Science and Technology | 2016

Optimizations for High Performance Network Virtualization

Fanfu Zhou; Ruhui Ma; Jian Li; Li-Xia Chen; Wei-Dong Qiu; Haibing Guan

The substantial growth of cloud computing and datacenters, together with increasing requirements for intensive interoperability among distributed nodes, necessitates high performance network connections. Network I/O virtualization aggregates network resources and separates them into manageable parts for particular servers or devices, which provides effective consolidation and elastic management with high agility, flexibility and scalability as well as reduced cost and cabling. However, both network I/O virtualization aggregation and increasing network speeds incur higher traffic density, which places heavy system stress on I/O data moving and I/O event processing. Consequently, many researchers have dedicated themselves to enhancing system performance and alleviating system overhead for high performance network virtualization. This paper first elaborates the mainstream I/O virtualization methodologies, including device emulation, the split-driver model and the hardware-assisted model. Then, the paper discusses and compares their specific advantages in addition to performance bottlenecks in practical use. This paper mainly focuses on a comprehensive survey of state-of-the-art approaches for performance optimizations and improvements as well as portability management for network I/O virtualization. The approaches include various novel data delivery schemes, overhead mitigations for interrupt processing and adequate resource allocation for dynamic network states. Finally, we highlight the diversity of I/O virtualization besides the performance improvements in network virtualization infrastructure.


IEEE International Conference on Progress in Informatics and Computing | 2010

Power-aware I/O-Intensive and CPU-Intensive applications hybrid deployment within virtualization environments

Zhiwu Liu; Ruhui Ma; Fanfu Zhou; Yindong Yang; Zhengwei Qi; Haibing Guan

As more and more software applications migrate from local desktops to remote data centers, efficient data center management is becoming increasingly important. The management of data centers faces a key challenge: how to trade off power consumption against the Quality of Service (QoS) of applications. To deal with this challenge, there has been much ongoing effort on power-aware and QoS-aware application deployment. Building on these works, we present a hybrid deployment of I/O-intensive and CPU-intensive applications to optimize resource utilization within virtualization environments. In this work, we investigate resource allocation between virtual machines hosting I/O-intensive and CPU-intensive applications, to achieve power-aware hybrid application deployment. To demonstrate the problem of I/O and CPU resources in a virtualization environment, we use Xen as the Virtual Machine Monitor in our experiments. Under different resource allocation configurations, we measure power efficiency improvements of 2%∼12% compared to the default deployment. We can also conclude that the more CPU resources the CPU-intensive application in the hybrid deployment needs to satisfy QoS, the more power efficiency improvement hybrid deployment can bring.
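The hybrid deployment idea above, i.e. co-locating I/O-intensive and CPU-intensive VMs on the same host rather than packing VMs of the same type together, can be sketched as a simple placement function. This is an illustrative toy, not the paper's deployment method; the function name and list-of-lists host layout are assumptions for the example.

```python
# Toy sketch of hybrid VM placement; purely illustrative, not the
# paper's deployment algorithm.
def hybrid_placement(io_vms, cpu_vms):
    """Pair each I/O-intensive VM with a CPU-intensive VM on one host."""
    hosts = [list(pair) for pair in zip(io_vms, cpu_vms)]
    # Any unpaired VMs each get their own host.
    leftovers = io_vms[len(hosts):] + cpu_vms[len(hosts):]
    for vm in leftovers:
        hosts.append([vm])
    return hosts
```

The rationale is that an I/O-bound VM mostly waits on the network while a CPU-bound VM mostly burns cycles, so co-locating one of each lets a host run closer to full utilization for the same power draw.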

Collaboration


Dive into Ruhui Ma's collaborations.

Top Co-Authors

Haibing Guan, Shanghai Jiao Tong University
Jian Li, Shanghai Jiao Tong University
Zhengwei Qi, Shanghai Jiao Tong University
Alei Liang, Shanghai Jiao Tong University
Bin Wang, Shanghai Jiao Tong University
Fanfu Zhou, Shanghai Jiao Tong University
Hongbo Yang, Shanghai Jiao Tong University
Yindong Yang, Shanghai Jiao Tong University