Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Qingbo Wu is active.

Publication


Featured research published by Qingbo Wu.


The Journal of Supercomputing | 2015

Workflow scheduling in cloud: a survey

Fuhui Wu; Qingbo Wu; Yusong Tan

To program in distributed computing environments such as grids and clouds, workflow is adopted as an attractive paradigm for its power in expressing a wide range of applications, including scientific computing, multi-tier Web, and big data processing applications. With the development of cloud technology and the extensive deployment of cloud platforms, workflow scheduling in the cloud has become an important research topic. The challenges of the problem lie in the NP-hard nature of task-resource mapping; diverse QoS requirements; on-demand resource provisioning; performance fluctuation and failure handling; hybrid resource scheduling; and data storage and transmission optimization. Consequently, a number of studies focusing on different aspects have emerged in the literature. In this paper, we first present a taxonomy and comparative review of workflow scheduling algorithms. Then, we make a comprehensive survey of workflow scheduling in the cloud environment in a problem–solution manner. Based on the analysis, we also highlight some research directions for future investigation.
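
Although the paper itself is a survey, the task-resource mapping problem it revolves around can be made concrete with a toy greedy earliest-finish-time sketch like the one below; the task names, VM speeds, and greedy rule are invented for illustration and are not taken from the survey.

```python
# Toy DAG-to-VM mapping sketch (illustrative only, not an algorithm from the survey).
workflow = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}  # task -> predecessors
work = {"t1": 4.0, "t2": 3.0, "t3": 2.0, "t4": 5.0}                    # abstract work units
speed = {"vm_small": 1.0, "vm_large": 2.0}                             # hypothetical VM speeds
free_at = {v: 0.0 for v in speed}                                      # when each VM becomes idle

finish = {}
for task in ["t1", "t2", "t3", "t4"]:                                  # topological order
    ready = max((finish[p] for p in workflow[task]), default=0.0)
    # Pick the VM giving the earliest finish time for this task.
    vm = min(speed, key=lambda v: max(free_at[v], ready) + work[task] / speed[v])
    finish[task] = free_at[vm] = max(free_at[vm], ready) + work[task] / speed[vm]
    print(f"{task} -> {vm} (finishes at {finish[task]:.1f})")
```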


APPT 2013 Revised Selected Papers of the 10th International Symposium on Advanced Parallel Processing Technologies - Volume 8299 | 2013

A Vectorized K-Means Algorithm for Intel Many Integrated Core Architecture

Fuhui Wu; Qingbo Wu; Yusong Tan; Lifeng Wei; Lisong Shao; Long Gao

The K-Means algorithm is one of the most popular and effective clustering algorithms for many practical applications. However, direct K-Means methods, which take objects as the processing unit, are computationally expensive, especially in the Objects-Assignment phase on Single-Instruction Single-Data (SISD) processors such as CPUs. In this paper, we propose a vectorized K-Means algorithm for the Intel Many Integrated Core (MIC) coprocessor, a newly released product from Intel for highly parallel workloads. The new algorithm achieves fine-grained Single-Instruction Multiple-Data (SIMD) parallelism by treating each dimension of all objects as one long vector. The vectorized algorithm is suitable for objects of any dimensionality, which has received little consideration in preceding work. We also parallelize the vectorized K-Means algorithm on the MIC coprocessor to achieve coarse-grained thread-level parallelism. Finally, we implement and evaluate the vectorized method on the first generation of the Intel MIC product. Measurements show that the algorithm achieves the desired speedup over the sequential CPU algorithm and demonstrate that the MIC coprocessor offers highly parallel computational power as well as scalability.
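
A rough sketch of the dimension-as-long-vector layout described above, using NumPy broadcasting as a stand-in for the MIC's SIMD lanes; the array names and sizes are illustrative and not the paper's code.

```python
import numpy as np

# Structure-of-arrays layout: each dimension of all objects is one long,
# contiguous vector, so the distance update streams through vector lanes.
n, d, k = 10_000, 8, 4
objects = np.random.rand(d, n)       # shape (dims, objects): d long vectors
centers = np.random.rand(d, k)

dist = np.zeros((k, n))
for dim in range(d):                  # one vectorized pass per dimension
    diff = objects[dim][None, :] - centers[dim][:, None]   # shape (k, n)
    dist += diff * diff
assign = dist.argmin(axis=0)          # Objects-Assignment phase, vectorized
```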


international conference on cloud computing | 2013

Residency-Aware Virtual Machine Communication Optimization: Design Choices and Techniques

Yi Ren; Ling Liu; Qi Zhang; Qingbo Wu; Jie Wu; Jinzhu Kong; Jianbo Guan; Huadong Dai

Network I/O workloads dominate in many data centers and cloud computing environments today. One way to improve inter-Virtual-Machine (VM) communication efficiency is to support co-resident VM communication using shared-memory-based approaches and to resort to traditional TCP/IP for inter-VM communication between VMs located on different physical hosts. Although a number of independent efforts are dedicated to improving communication efficiency between co-resident VMs, they differ from one another in terms of how the inter-VM communication optimization is carried out and where in the software stack the shared-memory channel is established. In this paper, we provide an in-depth overview of the design choices and techniques for optimizing the performance of co-resident inter-VM communication, with dual objectives. First, we describe the core design guidelines and key issues for optimizing inter-VM communication using shared-memory-based mechanisms. Typical issues include the choice of implementation layer in the software stack, seamless agility for VM live migration and dynamic VM deployment, and multilevel transparency. Second, we conduct a comprehensive analysis of representative state-of-the-art research efforts and implementation techniques based on the core design guidelines. We also analyze future requirements in advanced features such as reliability, security, and stability. The research reported in this paper not only provides a reference for developing the next generation of inter-VM communication optimization mechanisms, but also offers opportunities for both cloud infrastructure providers and cloud service consumers to improve inter-VM communication efficiency in virtualized platforms.
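
The central design choice the paper surveys, routing co-resident traffic over a shared-memory channel and everything else over TCP/IP, can be summarized by a toy dispatcher like the one below. All names are hypothetical; real systems implement this inside the guest network stack or the hypervisor and must track VM live migration to stay correct.

```python
# Toy dispatcher illustrating the co-residency check; not from any surveyed system.
class VMChannelDispatcher:
    def __init__(self, host_of, shm_channels, tcp_send):
        self.host_of = host_of            # vm_id -> physical host id
        self.shm_channels = shm_channels  # (src, dst) -> shared-memory ring buffer
        self.tcp_send = tcp_send          # fallback cross-host path

    def send(self, src_vm, dst_vm, payload):
        if self.host_of.get(src_vm) == self.host_of.get(dst_vm):
            ring = self.shm_channels.get((src_vm, dst_vm))
            if ring is not None:          # co-resident: bypass the TCP/IP stack
                ring.append(payload)
                return "shared-memory"
        self.tcp_send(src_vm, dst_vm, payload)   # cross-host: normal TCP/IP
        return "tcp"
```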


ACM Computing Surveys | 2016

Shared-Memory Optimizations for Inter-Virtual-Machine Communication

Yi Ren; Ling Liu; Qi Zhang; Qingbo Wu; Jianbo Guan; Jinzhu Kong; Huadong Dai; Lisong Shao

Virtual machines (VMs) and virtualization are among the core computing technologies today. Inter-VM communication is not only prevalent but also one of the leading costs for data-intensive systems and applications in most data centers and cloud computing environments. One way to improve inter-VM communication efficiency is to support coresident VM communication using shared-memory-based methods and resort to the traditional TCP/IP for communications between VMs that are located on different physical machines. In recent years, several independent kernel development efforts have been dedicated to improving communication efficiency between coresident VMs using shared-memory channels, and the development efforts differ from one another in terms of where and how the shared-memory channel is established. In this article, we provide a comprehensive overview of the design choices and techniques for performance optimization of coresident inter-VM communication. We examine the key issues for improving inter-VM communication using shared-memory-based mechanisms, such as implementation choices in the software stack, seamless agility for dynamic addition or removal of coresident VMs, and multilevel transparency, as well as advanced requirements in reliability, security, and stability. An in-depth comparison of state-of-the-art research efforts, implementation techniques, evaluation methods, and performance is conducted. We conjecture that this comprehensive survey will not only provide the foundation for developing the next generation of inter-VM communication optimization mechanisms but also offer opportunities to both cloud infrastructure providers and cloud service providers and consumers for improving communication efficiency between coresident VMs in virtualized computing platforms.


international conference on algorithms and architectures for parallel processing | 2015

Maximize Throughput Scheduling and Cost-Fairness Optimization for Multiple DAGs with Deadline Constraint

Wei Wang; Qingbo Wu; Yusong Tan; Fuhui Wu

More and more application workflows are computed in the cloud, and most of them can be expressed as a Directed Acyclic Graph (DAG). Cloud resource providers should guarantee that as many DAGs as possible are completed within their deadlines when requests exceed the available computing resources. In this paper, we define the urgency of a DAG and introduce the MTMD (Maximize Throughput of Multi-DAG with Deadline) algorithm to improve the ratio of DAGs that can be completed within their deadlines. The urgency of a DAG changes during execution and determines the execution order of tasks. The algorithm detects DAGs that will exceed their deadlines and abandons them in a timely manner. Based on the MTMD algorithm, we put forward the CFS (Cost Fairness Scheduling) algorithm to reduce the unfairness of cost between different DAGs. Simulation results show that the MTMD algorithm outperforms three other algorithms, and that the CFS algorithm reduces the cost of all DAGs by 12.1% on average and reduces the unfairness among DAGs by 54.5% on average.
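
As a loose illustration of the urgency idea described above, one might rank ready DAGs by how little slack they have left and drop those that can no longer finish before their deadline. The urgency formula and field names here are invented for the sketch and are not the paper's definitions.

```python
# Hypothetical urgency: remaining work divided by time left to the deadline.
# Larger values mean more urgent; values above 1.0 cannot finish in time.
def urgency(dag, now):
    time_left = dag["deadline"] - now
    if time_left <= 0:
        return float("inf")
    return dag["remaining_work"] / time_left

def pick_next(dags, now):
    alive = [d for d in dags if urgency(d, now) <= 1.0]        # keep only feasible DAGs
    abandoned = [d for d in dags if d not in alive]            # drop the rest early
    alive.sort(key=lambda d: urgency(d, now), reverse=True)    # most urgent first
    return (alive[0] if alive else None), abandoned
```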


advanced data mining and applications | 2017

Drug-Drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers

Zibo Yi; Shasha Li; Jie Yu; Yusong Tan; Qingbo Wu; Hong Yuan; Ting Wang

Drug-drug interaction (DDI) is vital information when physicians and pharmacists intend to co-administer two or more drugs. Thus, several DDI databases have been constructed to help avoid mistaken combined drug use. In recent years, automatically extracting DDIs from biomedical text has drawn researchers' attention. However, existing work utilizes either complex feature engineering or NLP tools, both of which are insufficient for sentence comprehension. Inspired by deep learning approaches in natural language processing, we propose a recurrent neural network model with multiple attention layers for DDI classification. We evaluate our model on the SemEval 2013 DDIExtraction dataset. The experiments show that our model classifies most of the drug pairs into the correct DDI categories and outperforms existing NLP and deep learning methods.
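
A minimal PyTorch sketch of the kind of recurrent classifier with an attention layer the abstract describes; the layer sizes, the single attention head, and the five-way label set are assumptions for illustration, not the authors' exact architecture (the paper stacks multiple attention layers).

```python
import torch
import torch.nn as nn

class AttnDDIClassifier(nn.Module):
    """Bi-LSTM over word embeddings with one soft-attention layer (sketch)."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128, num_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scores each time step
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))        # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1) # attention over time steps
        context = (weights * h).sum(dim=1)           # attention-weighted sentence vector
        return self.out(context)                     # logits over DDI categories
```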


web age information management | 2016

ERPC: An Edge-Resources Based Framework to Reduce Bandwidth Cost in the Personal Cloud

Shaoduo Gan; Jie Yu; Xiaoling Li; Jun Ma; Lei Luo; Qingbo Wu; Shasha Li

Personal cloud storage and file synchronization services, such as Dropbox, Google Drive, and Baidu Cloud, are increasingly prevalent within the Internet community. Subscriptions to personal cloud storage are projected to reach 1.3 billion in 2017. In order to provide high data-retrieval rates, cloud providers require huge amounts of bandwidth. As an attempt to reduce their bandwidth cost and, at the same time, guarantee quality of service, we propose a novel cloud framework based on distributed edge resources (i.e., voluntary peers in P2P networks and edge servers in Content Delivery Networks).


international conference on algorithms and architectures for parallel processing | 2015

Unified Multi-constraint and Multi-objective Workflow Scheduling for Cloud System

Fuhui Wu; Qingbo Wu; Yusong Tan; Wei Wang

With the development of cloud computing, the problem of scheduling workflows in cloud systems has attracted a large amount of attention. In general, cloud workflow scheduling requires considering a variety of optimization objectives under several constraints. Traditional workflow scheduling methods focus on a single optimization goal, such as makespan, and a single constraint, such as deadline or budget. In this paper, we first give a unified formalization of the multi-constraint and multi-objective cloud workflow scheduling problem using Pareto optimality theory. We also present a two-constraint and two-objective case study, considering deadline and budget constraints and energy consumption and reliability objectives. A general list scheduling algorithm and a tuning mechanism are designed to solve this problem. Extensive experiments confirm the efficiency of the unified multi-constraint and multi-objective cloud workflow scheduling system.
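
A small sketch of the Pareto view used in the case study above, with deadline and budget as hard constraints and energy and reliability as the objectives compared by dominance; the tuple layout is an assumption made for illustration.

```python
# Candidate schedules as (makespan, cost, energy, reliability) tuples.
def feasible(s, deadline, budget):
    makespan, cost, _, _ = s
    return makespan <= deadline and cost <= budget

def dominates(a, b):
    # a dominates b if it is no worse in both objectives and strictly better in one
    # (energy is minimized, reliability is maximized).
    no_worse = a[2] <= b[2] and a[3] >= b[3]
    better = a[2] < b[2] or a[3] > b[3]
    return no_worse and better

def pareto_front(schedules, deadline, budget):
    feas = [s for s in schedules if feasible(s, deadline, budget)]
    return [s for s in feas if not any(dominates(t, s) for t in feas if t != s)]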


international conference on cloud computing | 2016

TMVCE—topology-aware multipath Virtual Cluster embedding algorithm

Rongzhen Li; Jianfeng Zhang; Yusong Tan; Qingbo Wu

A virtual cluster provides tenants with a distributed parallel system by sharing resources in a cloud data center. Allocating physical resources to a virtual cluster is known as the virtual cluster embedding (VCE) problem, a critical issue that affects the performance of the virtual cluster and the resource utilization of the system. In order to effectively reduce the runtime of virtual cluster embedding and improve the revenue/cost ratio, this paper proposes a topology-aware multipath virtual cluster embedding algorithm (TMVCE). Virtual cluster topology information, mainly the degree and closeness of the topology network, is used as the measurement parameters of VCE. Extensive experimental tests and comparisons with related algorithms show that TMVCE achieves higher embedding efficiency and, to some extent, improves the revenue/cost ratio.
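
To make the two topology factors mentioned above concrete, the sketch below computes degree and closeness centrality for a small substrate graph with networkx and ranks hosts by a combined score; the 50/50 weighting and host names are guesses for illustration, not the paper's formula.

```python
import networkx as nx

# Small substrate topology; node scores combine degree and closeness,
# the two topology factors used by TMVCE (the equal weighting is assumed).
G = nx.Graph([("h1", "h2"), ("h2", "h3"), ("h3", "h4"), ("h2", "h4")])
degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)

score = {n: 0.5 * degree[n] + 0.5 * closeness[n] for n in G.nodes}
candidates = sorted(score, key=score.get, reverse=True)
print(candidates)   # hosts to try first when embedding a virtual cluster
```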


The Journal of Supercomputing | 2016

Resource stealing: a resource multiplexing method for mix workloads in cloud system

Yusong Tan; Fuhui Wu; Qingbo Wu; Xiangke Liao

The cloud computing paradigm enables providing resources on demand. However, most approaches focus on a single type of application with its own quality-of-service requirements. When mixed heterogeneous workloads are co-scheduled in the cloud, resource multiplexing is the key to improving resource utilization while guaranteeing performance. In this paper, we propose a resource stealing mechanism to improve the multiplexing of cloud resources. It enables free resource fragments reserved by some workloads to be utilized by others. To meet a certain service level agreement, resource preemption is adopted as a complement to resource stealing; it ensures each workload a minimum amount of resources when required. Moreover, we propose an adaptive joint resource provisioning algorithm that integrates our resource multiplexing method into elastic resource provisioning. Experimental results reveal that the proposed algorithms improve resource utilization and workload performance simultaneously.
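
A toy model of the stealing-with-preemption behaviour described above: a workload's idle reserved capacity can be borrowed by others, and the owner reclaims it when its own demand rises, so it never drops below its reservation. The class, fields, and numbers are invented for illustration and do not reflect the paper's mechanism in detail.

```python
# Toy reservation with stealing and preemption (illustrative only).
class Reservation:
    def __init__(self, cores):
        self.cores = cores          # reserved capacity for the owning workload
        self.own_use = 0            # cores the owner currently needs
        self.lent = 0               # cores currently "stolen" by other workloads

    def idle(self):
        return self.cores - self.own_use - self.lent

    def steal(self, amount):
        grant = min(amount, self.idle())   # only idle fragments can be stolen
        self.lent += grant
        return grant

    def preempt(self, needed):
        # Owner demand grows: reclaim lent cores so the owner keeps its minimum.
        self.own_use = min(self.own_use + needed, self.cores)
        revoked = max(self.own_use + self.lent - self.cores, 0)
        self.lent -= revoked
        return revoked              # cores the borrowers must give back
```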

Collaboration


Dive into Qingbo Wu's collaborations.

Top Co-Authors

Yusong Tan, National University of Defense Technology
Huadong Dai, National University of Defense Technology
Jie Yu, National University of Defense Technology
Jinzhu Kong, National University of Defense Technology
Xiaojian Liu, National University of Defense Technology
Jun Ma, National University of Defense Technology
Lei Luo, National University of Defense Technology
Fuhui Wu, National University of Defense Technology
Shasha Li, National University of Defense Technology
Rongzhen Li, National University of Defense Technology