
Publications

Featured research published by Huiba Li.


Archive | 2013

An Integration Framework of Cloud Computing with Wireless Sensor Networks

Pengfei You; Huiba Li; Yuxing Peng; Ziyang Li

Wireless sensor networks (WSNs) are a key technology applied extensively in fields such as transportation, health care, and environmental monitoring. Despite rapid development, the exponentially growing data emanating from WSNs is not stored and used efficiently. In addition, data from WSNs of different types and locations needs to be analyzed, fused, and supplied to various kinds of clients, such as PCs, workstations, and smartphones. The emerging cloud computing technology provides scalable processing and storage capacity, together with connectable services, which can help exploit sensor data from WSNs. In this paper, we propose a framework that integrates cloud computing with WSNs: sensor data is transmitted from the WSNs to the cloud, processed and stored there, and then mined and analyzed so that it can be supplied to various clients. By applying virtualization and cloud storage technology together with the Infrastructure as a Service (IaaS) and Software as a Service (SaaS) service models, the framework can process and store massive sensor data from multiple types of WSNs. It also mines and analyzes the sensor data efficiently, and on that basis data applications are supplied to various kinds of clients in the form of services.
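The data path described in the abstract (sensor readings ingested into the cloud, stored, analyzed, and exposed to clients as services) can be pictured with a minimal sketch. All names below (SensorReading, CloudStore, SensorService) are illustrative assumptions, not APIs from the paper.

```python
# Minimal sketch of the WSN-to-cloud data path: ingest -> store -> analyze -> serve.
# The class and method names are illustrative assumptions, not the paper's framework.
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List


@dataclass
class SensorReading:
    network_id: str   # which WSN the reading came from
    sensor_id: str
    value: float


class CloudStore:
    """Stands in for scalable cloud storage (e.g. an object store or NoSQL table)."""

    def __init__(self) -> None:
        self._data: Dict[str, List[SensorReading]] = {}

    def put(self, reading: SensorReading) -> None:
        self._data.setdefault(reading.network_id, []).append(reading)

    def readings(self, network_id: str) -> List[SensorReading]:
        return self._data.get(network_id, [])


class SensorService:
    """SaaS-style front end: clients (PC, workstation, phone) query analyzed data."""

    def __init__(self, store: CloudStore) -> None:
        self._store = store

    def average(self, network_id: str) -> float:
        values = [r.value for r in self._store.readings(network_id)]
        return mean(values) if values else float("nan")


# Usage: a gateway pushes readings from one WSN; a client asks for an aggregate.
store = CloudStore()
for reading in [SensorReading("traffic", "s1", 42.0), SensorReading("traffic", "s2", 40.0)]:
    store.put(reading)
print(SensorService(store).average("traffic"))
```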


Science in China Series F: Information Sciences | 2010

Superscalar communication: A runtime optimization for distributed applications

Huiba Li; Shengyun Liu; Yuxing Peng; Dongsheng Li; HangJun Zhou; Xicheng Lu

Building distributed applications is difficult mostly because of concurrency management. Existing approaches fall primarily into two camps, events and threads, and researchers and developers have debated for decades over which is superior. Although no clear winner has emerged, the long debate shows that neither is perfect; in particular, both are complex and error-prone. Both events and threads require the programmer to manage concurrency explicitly, and we believe this is precisely the source of the difficulty. In this paper, we propose a novel approach, superscalar communication, in which concurrency is managed automatically by the runtime system. It dynamically analyzes programs to discover potential concurrency opportunities, and it dynamically schedules communication and computation tasks, resulting in automatic concurrent execution. The approach is inspired by superscalar technology in modern microprocessors, which dynamically exploits instruction-level parallelism. Hardware superscalar algorithms do not fit software in many respects, however, so we design a new scheme from scratch. Superscalar communication is a runtime extension that requires no modification to the language, compiler, or byte code, so it preserves backward compatibility. It may open a new line of research in systems software, characterized by dynamic optimization of networking programs.


networking, architecture and storage | 2014

RAFlow: Read Ahead Accelerated I/O Flow through Multiple Virtual Layers

Zhaoning Zhang; Kui Wu; Huiba Li; Jinghua Feng; Yuxing Peng; Xicheng Lu

Virtualization is the foundation of cloud computing, and it cannot be achieved without software-defined, elastic, flexible, and scalable virtual layers. Unfortunately, when multiple virtual storage devices are chained together, the system may suffer severe performance degradation. The read-ahead (RA) mechanism in storage devices plays an important role in improving I/O performance, but RA may not be as effective as expected across multiple virtualization layers, since it was originally designed for a single layer. When I/O requests pass through a long I/O path, they may trigger a chain reaction that leads to unnecessary data transmission and thus wasted bandwidth. In this paper, we study the dynamic behavior of RA through multiple I/O layers and demonstrate that, when controlled well, RA can greatly accelerate I/O. We present RAFlow, an RA control mechanism that effectively improves I/O performance by strategically expanding the RA window at each layer. Our real-world experiments show that it achieves a 20% to 50% performance improvement on I/O paths with up to 8 virtualized storage devices.
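One way to picture the problem is that each layer applies its own read-ahead on top of the already-enlarged request coming from the layer above, so the data pulled from the bottom layer can grow far beyond what was asked for. The toy calculation below only illustrates that amplification and a coordinated alternative; the window sizes and the capping policy are assumptions, not RAFlow's algorithm.

```python
# Toy model of read-ahead (RA) amplification through stacked virtual storage layers.
# Each layer naively rounds the incoming request up to its own RA window, so the
# request grows as it travels down the stack. The "coordinated" variant bounds the
# total expansion. Numbers and policy are assumptions, not RAFlow's algorithm.
from typing import List


def naive_stack(request_kb: int, ra_windows_kb: List[int]) -> int:
    """Each layer expands the incoming request to a multiple of its own RA window."""
    size = request_kb
    for window in ra_windows_kb:
        size = ((size + window - 1) // window) * window  # round up to RA window
        size += window                                   # plus one speculative window
    return size


def coordinated_stack(request_kb: int, ra_windows_kb: List[int], cap_kb: int) -> int:
    """Expand the RA window once, bounded by a global cap shared by all layers."""
    return min(max(request_kb, max(ra_windows_kb)) + max(ra_windows_kb), cap_kb)


layers = [128, 128, 256, 256]          # RA windows (KB) of 4 chained virtual devices
print(naive_stack(4, layers))          # data actually read from the bottom layer
print(coordinated_stack(4, layers, cap_kb=512))
```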


service oriented software engineering | 2014

CoCache: Multi-layer Multi-path Cooperative Cache Accelerating the Deployment of Large Scale Virtual Machines

Ziyang Li; Zhaoning Zhang; Huiba Li; Yuxing Peng

After analyzing the problems and challenges of virtual machine image storage in cloud computing environments, we present a cooperative persistent cache (CoCache) for virtual disks. CoCache takes advantage of the serving capacity of nodes that hold cached data by letting them provide virtual image data to other nodes; data is transferred between nodes in a peer-to-peer (P2P) fashion, which extends the data-serving capacity of the system. CoCache is implemented in Linux kernel space and can support any kind of VMM. Experiments show that CoCache effectively reduces the cost of virtual machine read operations and improves the serving capacity of the virtual machine storage system. A layer-aware cache policy is also proposed to improve the cache hit rate in multi-layer, multi-path environments.
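The read path implied by the abstract (local persistent cache first, then a peer that already caches the block, then the origin image store) can be sketched as follows. The class names and the simple peer-selection rule are illustrative assumptions; CoCache itself lives in the Linux kernel.

```python
# Sketch of a cooperative-cache read path: local cache -> peer caches (P2P) -> origin
# image store. Classes and the "first peer that has it" rule are illustrative
# assumptions, not CoCache's kernel implementation.
from typing import Dict, List, Optional


class Node:
    def __init__(self, name: str) -> None:
        self.name = name
        self.cache: Dict[int, bytes] = {}   # block number -> cached data

    def lookup(self, block: int) -> Optional[bytes]:
        return self.cache.get(block)


class CooperativeCache:
    def __init__(self, local: Node, peers: List[Node], origin: Dict[int, bytes]) -> None:
        self.local = local
        self.peers = peers
        self.origin = origin                # stands in for the shared image store

    def read(self, block: int) -> bytes:
        data = self.local.lookup(block)     # 1. local persistent cache
        if data is None:
            for peer in self.peers:         # 2. ask peers that may cache the block
                data = peer.lookup(block)
                if data is not None:
                    break
        if data is None:
            data = self.origin[block]       # 3. fall back to the origin image store
        self.local.cache[block] = data      # keep a local copy for later reads
        return data


# Usage: node B already cached block 7, so node A's read is served peer-to-peer.
origin = {7: b"block-7-data"}
node_a, node_b = Node("A"), Node("B")
node_b.cache[7] = origin[7]
print(CooperativeCache(node_a, [node_b], origin).read(7))
```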


service oriented software engineering | 2014

DLSM: Decoupled Live Storage Migration with Distributed Device Mapper Storage

Zhaoning Zhang; Ziyang Li; Kui Wu; Huiba Li; Yuxing Peng; Xicheng Lu

As a key technique in cloud computing, live virtual machine (VM) migration makes cloud computing elastic and enables more efficient resource scheduling. In many cases, live storage migration is also desirable. Compared with live VM migration, however, live storage migration is relatively slow, and the speed gap between the two may lead to poor Quality of Service (QoS) for cloud users. In this paper, we present a new resource migration scheme, called Decoupled Live Storage Migration (DLSM), which decouples live storage migration from live VM migration. DLSM can migrate the VM immediately, while the data blocks the VM actually requires are moved on demand later. It achieves on-demand data migration at the storage level without modifying the hypervisor. The key contribution of DLSM is that it reduces network traffic and speeds up live storage migration by completely eliminating iterative copying of dirty data over the network, and it does so without relying on any dirty-block tracking mechanism. Experimental evaluation demonstrates the efficiency and effectiveness of DLSM.
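The decoupling can be pictured as a thin block layer at the destination that serves reads from the source storage on first access, caches them locally, and keeps all new writes local, so dirty blocks never need to be re-copied. The sketch below is an assumption-level illustration, not DLSM's device-mapper implementation.

```python
# Sketch of decoupled storage migration at the block level: the VM runs at the
# destination immediately; reads of not-yet-migrated blocks are fetched from the
# source on demand, and all new writes stay local, so no iterative dirty-block
# copying is needed. Illustrative model only, not DLSM's device-mapper code.
from typing import Dict


class DecoupledMigratedDisk:
    def __init__(self, source: Dict[int, bytes]) -> None:
        self.source = source                # remote source storage (pre-migration blocks)
        self.local: Dict[int, bytes] = {}   # destination storage, filled on demand

    def read(self, block: int) -> bytes:
        if block not in self.local:         # miss: pull the block from the source once
            self.local[block] = self.source.get(block, b"\x00" * 4096)
        return self.local[block]

    def write(self, block: int, data: bytes) -> None:
        self.local[block] = data            # writes go only to the destination; the
                                            # source copy never needs updating again


# Usage: the migrated VM reads block 3 (fetched on demand), then overwrites it locally.
disk = DecoupledMigratedDisk(source={3: b"old contents"})
print(disk.read(3))
disk.write(3, b"new contents")
print(disk.read(3))
```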


international symposium on computers and communications | 2010

Automatic Concurrency Management for distributed applications

Huiba Li; Shengyun Liu; Yuxing Peng; Dongsheng Li

Building distributed applications is difficult mostly because of concurrency management. Existing approaches fall primarily into two camps, events and threads, and researchers and developers have debated for decades over which is superior. Although no clear winner has emerged, the long debate shows that neither is perfect; in particular, both are complex and error-prone. Both events and threads require the programmer to manage concurrency explicitly, and we believe this is precisely the source of the difficulty. In this paper, we propose a novel approach: automatic concurrency management by the runtime system. It dynamically analyzes programs to discover potential concurrency opportunities, and it dynamically schedules communication and computation tasks, resulting in automatic concurrent execution. The approach is inspired by the instruction-scheduling techniques used in modern microprocessors, which dynamically exploit instruction-level parallelism. Hardware scheduling algorithms do not fit software in many respects, however, so we design a new scheme from scratch. Automatic concurrency management is a runtime technique that requires no modification to the language, compiler, or byte code, so it preserves backward compatibility. It is essentially a dynamic optimization for networking programs.
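One way to picture the "runtime only, no language changes" aspect is a wrapper that transparently turns blocking-looking calls into lazy results, so unchanged application code still reads sequentially while independent operations overlap. The sketch below is a hypothetical illustration of that idea, not the paper's runtime.

```python
# Illustration of runtime-managed concurrency: application code calls what looks like
# a plain blocking function, but the runtime returns a lazy handle and only blocks
# when the result is actually used, letting independent calls overlap in time.
# Hypothetical sketch of the idea, not the paper's runtime system.
import time
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=8)


class LazyResult:
    """Blocks only when the value is first needed."""

    def __init__(self, future) -> None:
        self._future = future

    def value(self) -> object:
        return self._future.result()


def managed(fn):
    """Runtime wrapper: calls return immediately; the real work runs in the background."""
    def wrapper(*args, **kwargs) -> LazyResult:
        return LazyResult(_pool.submit(fn, *args, **kwargs))
    return wrapper


@managed
def fetch(host: str) -> str:
    time.sleep(0.1)                 # stands in for a network round trip
    return f"reply from {host}"


# The call sites look sequential, but the two "network" calls overlap.
start = time.time()
a, b = fetch("host-a"), fetch("host-b")
print(a.value(), b.value(), f"{time.time() - start:.2f}s")   # ~0.1s, not ~0.2s
```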


international conference on parallel and distributed systems | 2010

Nexus: Speculative Execution for Event-Driven Networking Programs

Huiba Li; Xicheng Lu; Yuxing Peng

The efficiency of communication is a key factor in the performance of networking applications, and concurrent communication is an important way to achieve it. However, many concurrency opportunities are very difficult to exploit because they depend on nondeterministic conditions. When these conditions are highly predictable, speculative execution can be a very effective way to cope with the uncertainty. Existing research on speculation seldom targets networking systems, and none of it can handle the event-driven model that is so popular in such systems. In this paper, we propose Nexus, a novel speculation scheme that supports event-driven networking applications. Nexus analyzes the dependence relationships among events and performs speculation according to the duality of events and threads. Evaluation of a prototype implementation of Nexus shows that this approach can significantly reduce the time needed to complete an event-driven program.
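The core move (predict the outcome a handler is waiting on, run the dependent handler speculatively on a private copy of the state, and commit only if the prediction turns out right) can be sketched as follows. In a real system the speculative work would overlap with the wait; this simplified, sequential version and all of its names are hypothetical illustrations, not Nexus's mechanism.

```python
# Sketch of speculation for an event-driven handler chain: run the dependent handler
# on a *copy* of the state under a predicted outcome; commit the copy if the
# prediction was right, discard it otherwise. Hypothetical names; not Nexus's code.
import copy
from typing import Callable, Dict


def speculate(state: Dict, predicted: str, actual_event: Callable[[], str],
              handler: Callable[[Dict, str], None]) -> Dict:
    shadow = copy.deepcopy(state)        # private copy so a misprediction has no side effects
    handler(shadow, predicted)           # run the dependent handler speculatively

    outcome = actual_event()             # the slow, nondeterministic condition resolves
    if outcome == predicted:
        return shadow                    # prediction correct: commit speculative work
    fresh = copy.deepcopy(state)         # misprediction: discard and re-run normally
    handler(fresh, outcome)
    return fresh


def on_reply(state: Dict, reply: str) -> None:
    state["log"] = state.get("log", []) + [f"handled {reply}"]


# Usage: the remote reply is highly predictable ("ok"), so the speculative run is kept.
state = {"log": []}
print(speculate(state, predicted="ok", actual_event=lambda: "ok", handler=on_reply))
```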


international conference on education technology and computer | 2010

The composability problem of events and threads in distributed systems

Huiba Li; Yuxing Peng; Xicheng Lu

Event-driven programming has been a relatively hot topic in distributed systems development. Having worked on such systems for years, we now believe it is not the best choice. Besides the well-known "stack ripping" problem, we argue that it seriously harms the composability of software modules. Preemptive threads also lack composability because of data races and locks, and a lack of composability can result in systems with little vitality. Cooperative threading (coroutines), by contrast, is almost free of these problems, so we advocate it as the primary concurrency model for most distributed systems.
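The contrast is easiest to see in code: with callbacks, a two-step operation "rips" its stack across separate handlers, while a coroutine keeps the same logic in one function that other code can simply await and compose. The snippet below is a generic illustration using Python's asyncio, not code from the paper.

```python
# The composability argument in miniature: the callback version splits one logical
# operation across handlers ("stack ripping"), while the coroutine version keeps it
# as one function that composes like an ordinary call. Generic asyncio illustration.
import asyncio


# Event/callback style: "read then write" is ripped into two callbacks, and local
# state has to be threaded through by hand.
def read_then_write_cb(read, write, on_done):
    def after_read(data):
        write(data.upper(), lambda: on_done(data))
    read(after_read)


# Cooperative (coroutine) style: the same logic reads top to bottom.
async def read_then_write(read, write):
    data = await read()
    await write(data.upper())
    return data


async def main():
    async def read():            # stand-ins for real I/O
        return "payload"

    async def write(data):
        await asyncio.sleep(0)

    # Coroutines compose directly: sequence them, or run several concurrently.
    first = await read_then_write(read, write)
    both = await asyncio.gather(read_then_write(read, write),
                                read_then_write(read, write))
    print(first, both)


asyncio.run(main())
```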


IEEE Transactions on Parallel and Distributed Systems | 2014

VMThunder: Fast Provisioning of Large-Scale Virtual Machine Clusters

Zhaoning Zhang; Ziyang Li; Kui Wu; Dongsheng Li; Huiba Li; Yuxing Peng; Xicheng Lu


Archive | 2007

Two-stage distributed application-layer multicast method for the MSVMT problem

Feng Liu; Xicheng Lu; Yuhang Peng; Dongsheng Li; Huiba Li

Collaboration


Dive into Huiba Li's collaborations.

Top Co-Authors

Yuxing Peng (National University of Defense Technology)
Xicheng Lu (National University of Defense Technology)
Dongsheng Li (National University of Defense Technology)
Shengyun Liu (National University of Defense Technology)
Zhaoning Zhang (National University of Defense Technology)
Ziyang Li (National University of Defense Technology)
Kui Wu (University of Victoria)
Feng Liu (National University of Defense Technology)
HangJun Zhou (National University of Defense Technology)
Pengfei You (National University of Defense Technology)