Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhaoning Zhang is active.

Publication


Featured research published by Zhaoning Zhang.


Frontiers of Computer Science in China | 2016

Large-scale virtual machines provisioning in clouds: challenges and approaches

Zhaoning Zhang; Dongsheng Li; Kui Wu

The global data center market has grown explosively in recent years. As the market grows, the demand for fast provisioning of virtual resources to support elastic, manageable, and economical cloud computing rises accordingly. Fast provisioning of large-scale virtual machines (VMs), in particular, is critical to guarantee quality of service (QoS). In this paper, we systematically review existing VM provisioning schemes and classify them into three main categories. We discuss the features and research status of each category, and introduce two recent solutions, VMThunder and VMThunder+, both of which can provision hundreds of VMs in seconds.


Network and Parallel Computing | 2016

DSS: A Scalable and Efficient Stratified Sampling Algorithm for Large-Scale Datasets

Minne Li; Dongsheng Li; Siqi Shen; Zhaoning Zhang; Xicheng Lu

Statistical analysis of aggregated records is widely used in domains such as market research, sociological investigation, and network analysis. Stratified sampling (SS), which samples each distinct group of the population separately, is preferred in practice for its effectiveness and accuracy. In this paper, we propose DSS, a scalable and efficient stratified sampling algorithm for large datasets. DSS executes all sampling operations in parallel by calculating the exact subsample size for each partition according to the data distribution. We implement DSS on Spark, a big-data processing system, and show through large-scale experiments that it achieves lower data-transmission cost and higher efficiency than state-of-the-art methods while maintaining high sample representativeness.
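
A minimal sketch of the core idea, under assumed interfaces: a driver-side step computes an exact per-partition, per-stratum subsample size from the global distribution, after which every partition can sample independently and in parallel. The function names and the proportional-allocation rule are illustrative, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): stratified sampling where the
# exact subsample size of each (partition, stratum) pair is precomputed from
# the global data distribution, so partitions can sample independently.
import random
from collections import Counter, defaultdict

def exact_allocation(partition_counts, sample_fraction):
    """partition_counts: {partition_id: Counter({stratum: count})}.
    Returns {partition_id: {stratum: subsample_size}} so that each stratum is
    sampled at (approximately) sample_fraction overall."""
    totals = Counter()
    for counts in partition_counts.values():
        totals.update(counts)
    target = {s: round(n * sample_fraction) for s, n in totals.items()}

    alloc = defaultdict(dict)
    remaining = dict(target)
    for pid, counts in partition_counts.items():
        for stratum, n in counts.items():
            # proportional share of the stratum's target, capped by what is left;
            # leftover units caused by rounding are ignored in this sketch
            share = min(remaining[stratum],
                        round(target[stratum] * n / totals[stratum]))
            alloc[pid][stratum] = share
            remaining[stratum] -= share
    return alloc

def sample_partition(records, sizes):
    """records: list of (stratum, value) pairs held by one partition;
    sizes: {stratum: k} from exact_allocation. Runs per partition."""
    by_stratum = defaultdict(list)
    for stratum, value in records:
        by_stratum[stratum].append(value)
    return {s: random.sample(vals, min(sizes.get(s, 0), len(vals)))
            for s, vals in by_stratum.items()}
```

On Spark, exact_allocation would run at the driver over aggregated per-partition counts and sample_partition inside mapPartitions, which roughly matches the single-pass, fully parallel execution the abstract describes.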


Networking, Architecture, and Storage | 2014

RAFlow: Read Ahead Accelerated I/O Flow through Multiple Virtual Layers

Zhaoning Zhang; Kui Wu; Huiba Li; Jinghua Feng; Yuxing Peng; Xicheng Lu

Virtualization is the foundation of cloud computing, and it cannot be achieved without software-defined, elastic, flexible, and scalable virtual layers. Unfortunately, when multiple virtual storage devices are chained together, the system may suffer severe performance degradation. The read-ahead (RA) mechanism in storage devices plays an important role in improving I/O performance, but it may not be as effective as expected across multiple virtualization layers, since it was originally designed for a single layer. When I/O requests pass through a long I/O path, they may trigger a chain reaction that leads to unnecessary data transmission and thus wasted bandwidth. In this paper, we study the dynamic behavior of RA through multiple I/O layers and demonstrate that, if controlled well, RA can greatly accelerate I/O. We present RAFlow, an RA control mechanism that effectively improves I/O performance by strategically expanding the RA window at each layer. Our real-world experiments show that it achieves 20% to 50% performance improvement on I/O paths with up to 8 virtualized storage devices.
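
A toy model of the behavior described above; the expansion policy shown is an assumption for illustration, not the RAFlow algorithm. It contrasts uncoordinated per-layer read-ahead, where each layer re-adds its own window and inflates the amount of data read from the bottom device, with a single window expanded in a controlled way across layers.

```python
# Toy model (assumption-laden, not the paper's algorithm): how much data a
# single read request pulls from the bottom device after passing through a
# stack of virtual storage layers that each apply read-ahead (RA).

def chained_ra(request_kib, layers, ra_window_kib):
    """Uncoordinated RA: every layer blindly appends its own RA window."""
    size = request_kib
    for _ in range(layers):
        size += ra_window_kib
    return size

def coordinated_ra(request_kib, layers, ra_window_kib, expand_factor=1.25):
    """Controlled RA (illustrative): one window is expanded gradually per
    layer instead of being re-added in full at every layer."""
    window = ra_window_kib
    for _ in range(layers - 1):
        window *= expand_factor
    return request_kib + window

if __name__ == "__main__":
    # 128 KiB request, 8 virtual layers, 512 KiB RA window per layer
    print("chained RA:    ", chained_ra(128, 8, 512), "KiB read from the bottom layer")
    print("coordinated RA:", round(coordinated_ra(128, 8, 512)), "KiB read from the bottom layer")
```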


Ninth International Conference on Graphic and Image Processing (ICGIP 2017) | 2018

Border-oriented post-processing refinement on detected vehicle bounding box for ADAS

Xinyuan Chen; Zhaoning Zhang; Minne Li; Dongsheng Li

We investigate a new approach for improving the localization accuracy of detected vehicles in object detection for advanced driver assistance systems (ADAS). Specifically, we implement bounding box refinement as a post-processing step for state-of-the-art object detectors (Faster R-CNN, YOLOv2, etc.). The refinement individually adjusts each border of a detected bounding box toward its target location using a regression method. We use HOG features, which perform well for detecting vehicle edges, to train the regressor, and the regressor is independent of the CNN-based object detectors. Experimental results on the KITTI 2012 benchmark show up to 6% improvement over the YOLOv2 and Faster R-CNN detectors at an IoU threshold of 0.8. The proposed refinement framework is also computationally light, processing one bounding box within a few milliseconds on a CPU. Further, the refinement can be added to any object detector, especially those with high speed but lower accuracy.
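
A sketch of border-wise refinement under stated assumptions: each border of the box gets its own regressor applied to HOG features of a thin image strip around that border. The strip size, HOG parameters, and the regressors mapping are hypothetical placeholders, not the paper's configuration, and image-boundary edge cases are ignored.

```python
# Illustrative sketch of per-border bounding box refinement (not the paper's code).
from skimage.feature import hog
from skimage.transform import resize

STRIP_SHAPE = (32, 96)  # fixed strip size so the HOG feature length is constant (assumption)

def border_strip(image, box, border, pad=8):
    """Crop a thin strip of a 2-D grayscale image around one border of the box."""
    x1, y1, x2, y2 = [int(v) for v in box]
    if border == "left":
        return image[y1:y2, max(x1 - pad, 0):x1 + pad]
    if border == "right":
        return image[y1:y2, x2 - pad:x2 + pad]
    if border == "top":
        return image[max(y1 - pad, 0):y1 + pad, x1:x2]
    return image[y2 - pad:y2 + pad, x1:x2]              # bottom

def refine_box(image, box, regressors):
    """regressors: {border: callable(features) -> pixel offset}.
    Each border is adjusted independently, as described in the abstract."""
    deltas = {}
    for border in ("left", "top", "right", "bottom"):
        strip = resize(border_strip(image, box, border), STRIP_SHAPE)
        feats = hog(strip, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2))
        deltas[border] = regressors[border](feats)
    x1, y1, x2, y2 = box
    return (x1 + deltas["left"], y1 + deltas["top"],
            x2 + deltas["right"], y2 + deltas["bottom"])
```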


Conference on Advanced Computer Architecture | 2018

An Experimental Perspective for Computation-Efficient Neural Networks Training

Lujia Yin; Xiaotao Chen; Zheng Qin; Zhaoning Zhang; Jinghua Feng; Dongsheng Li

Driven by the demand for computation-efficient neural networks that allow deep learning models to be deployed on inexpensive, widely used devices, many lightweight networks have been proposed, such as the MobileNet series and ShuffleNet. These computation-efficient models are designed for very limited computational budgets, e.g., 10–150 MFLOPs, and can run efficiently on ARM-based devices. They have a smaller CMR than large networks such as VGG, ResNet, and Inception.
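
To make the MFLOPs budget concrete, the following back-of-the-envelope comparison counts multiply-adds for a standard 3x3 convolution versus a MobileNet-style depthwise separable one; the layer shape is an arbitrary example, not taken from any of the cited networks.

```python
# Back-of-the-envelope FLOP counts (multiply-adds) for one convolutional layer.

def conv_flops(h, w, c_in, c_out, k):
    """Standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_flops(h, w, c_in, c_out, k):
    """Depthwise k x k conv followed by a 1 x 1 pointwise conv (MobileNet-style)."""
    return h * w * c_in * k * k + h * w * c_in * c_out

if __name__ == "__main__":
    h = w = 28
    c_in, c_out, k = 128, 128, 3
    std = conv_flops(h, w, c_in, c_out, k)
    dws = depthwise_separable_flops(h, w, c_in, c_out, k)
    print(f"standard conv:            {std / 1e6:.1f} MFLOPs")
    print(f"depthwise separable conv: {dws / 1e6:.1f} MFLOPs (~{std / dws:.1f}x fewer)")
```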


arXiv: Computer Vision and Pattern Recognition | 2017

S-OHEM: Stratified Online Hard Example Mining for Object Detection

Minne Li; Zhaoning Zhang; Hao Yu; Xinyuan Chen; Dongsheng Li

One of the major challenges in object detection is to build detectors with highly accurate localization. Online sampling of high-loss region proposals (hard examples) uses a multitask loss with equal weights across all loss types (e.g., classification and localization, rigid and non-rigid categories) and ignores the influence of the different loss distributions throughout the training process, which we find essential to training efficacy. In this paper, we present the Stratified Online Hard Example Mining (S-OHEM) algorithm for training detectors with higher efficiency and accuracy. S-OHEM combines OHEM with stratified sampling, a widely adopted sampling technique, to choose training examples according to this influence during hard example mining, and thus enhances the performance of object detectors. We show through systematic experiments that S-OHEM yields an average precision (AP) improvement of 0.5% on rigid categories of PASCAL VOC 2007 at both IoU thresholds of 0.6 and 0.7. For KITTI 2012, both results for the same metric improve by 1.6%. Regarding mean average precision (mAP), a relative increase of 0.3% and 0.5% (1% and 0.5%) is observed for VOC07 (KITTI12) using the same set of IoU thresholds. S-OHEM is also easy to integrate with existing region-based detectors and can work with post-recognition level regressors.
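
A conceptual sketch of stratified hard-example selection, assuming a simple two-stratum split by dominant loss type and fixed per-stratum ratios; the actual S-OHEM stratification, and how its ratios evolve with the loss distribution during training, are not reproduced here.

```python
# Conceptual sketch (not the exact S-OHEM procedure).
import heapq

def s_ohem_select(proposals, batch_size, stratum_ratios):
    """proposals: list of dicts with 'cls_loss' and 'loc_loss'.
    stratum_ratios: e.g. {'cls_dominant': 0.5, 'loc_dominant': 0.5}; in S-OHEM
    these ratios would track the observed loss distribution over training."""
    strata = {'cls_dominant': [], 'loc_dominant': []}
    for p in proposals:
        key = 'cls_dominant' if p['cls_loss'] >= p['loc_loss'] else 'loc_dominant'
        strata[key].append(p)

    selected = []
    for name, members in strata.items():
        k = int(round(batch_size * stratum_ratios[name]))
        # keep the k highest-loss (hardest) proposals within this stratum
        selected += heapq.nlargest(k, members,
                                   key=lambda p: p['cls_loss'] + p['loc_loss'])
    return selected[:batch_size]
```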


Service Oriented Software Engineering | 2014

CoCache: Multi-layer Multi-path Cooperative Cache Accelerating the Deployment of Large Scale Virtual Machines

Ziyang Li; Zhaoning Zhang; Huiba Li; Yuxing Peng

By analyzing the problems and challenges of virtual machine image storage systems in cloud computing environments, we present a cooperative persistent cache (CoCache) for virtual disks. CoCache takes advantage of the serving capability of cached nodes by letting them provide virtual image data to other nodes. It can transfer data between nodes in a P2P pattern, extending the data-serving capability of the system. CoCache is implemented in Linux kernel space and can support any kind of VMM. Experiments show that CoCache effectively reduces the cost of virtual machine data reads and improves the serving capability of the virtual machine storage system. A layer-aware cache policy is proposed specifically to improve the cache hit rate in multi-layer, multi-path environments.
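
The lookup path can be pictured roughly as below; this is a user-space simplification for illustration (CoCache itself is a Linux kernel-space module), and the admission comment is an assumption about what "layer-aware" means.

```python
# Rough sketch of a cooperative cache lookup: local cache, then peers (P2P),
# then the origin image store. Not the CoCache implementation.

class CoCacheNode:
    def __init__(self, node_id, peers, origin_read):
        self.node_id = node_id
        self.peers = peers              # other CoCacheNode instances (P2P partners)
        self.origin_read = origin_read  # callable(layer, block) -> bytes, the image store
        self.cache = {}                 # (layer, block) -> bytes, persistent cache

    def read(self, layer, block):
        key = (layer, block)
        if key in self.cache:                      # 1. local persistent cache
            return self.cache[key]
        for peer in self.peers:                    # 2. fetch from a peer, P2P style
            data = peer.cache.get(key)
            if data is not None:
                self._admit(layer, block, data)
                return data
        data = self.origin_read(layer, block)      # 3. fall back to the image store
        self._admit(layer, block, data)
        return data

    def _admit(self, layer, block, data):
        # Layer-aware policy (assumption): when space is tight, prefer keeping
        # blocks from lower, widely shared image layers.
        self.cache[(layer, block)] = data
```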


Service Oriented Software Engineering | 2014

DLSM: Decoupled Live Storage Migration with Distributed Device Mapper Storage

Zhaoning Zhang; Ziyang Li; Kui Wu; Huiba Li; Yuxing Peng; Xicheng Lu

As a key technique in cloud computing, live virtual machine (VM) migration makes cloud computing elastic and enables more efficient resource scheduling. In many cases, live storage migration is also desirable. Compared to live VM migration, however, live storage migration is relatively slow, and the speed gap between the two may lead to poor Quality of Service (QoS) for cloud users. In this paper, we present a new resource migration scheme, called Decoupled Live Storage Migration (DLSM), which decouples live storage migration from live VM migration. DLSM migrates the VM immediately, while the data blocks actually required by the VM are moved on demand later. It achieves on-demand data migration at the storage level without modifying the hypervisor. The key contribution of DLSM is that it reduces network traffic and speeds up live storage migration by completely eliminating the iterative copying of dirty data over the network, without relying on any tracking mechanism for dirty data blocks. Experimental evaluation demonstrates the efficiency and effectiveness of DLSM.
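
A simplified model of the decoupling idea, with hypothetical names; DLSM operates at the distributed device-mapper level, not in Python. It shows why no dirty-block tracking or iterative re-copy is needed: reads are fetched from the source on demand (copy-on-read), while writes land only at the destination.

```python
# Toy model of decoupled live storage migration (illustration only).

class DecoupledDisk:
    """Destination-side virtual disk the migrated VM starts using immediately."""

    def __init__(self, source_read):
        self.source_read = source_read   # callable(block_id) -> bytes, over the network
        self.local = {}                  # blocks already present at the destination

    def read(self, block_id):
        if block_id not in self.local:
            # copy-on-read: fetch the block from the source only when it is needed
            self.local[block_id] = self.source_read(block_id)
        return self.local[block_id]

    def write(self, block_id, data):
        # writes land only at the destination, so source blocks never become
        # "dirty" and no iterative re-copy over the network is required
        self.local[block_id] = data
```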


IEEE Transactions on Parallel and Distributed Systems | 2014

VMThunder: Fast Provisioning of Large-Scale Virtual Machine Clusters

Zhaoning Zhang; Ziyang Li; Kui Wu; Dongsheng Li; Huiba Li; Yuxing Peng; Xicheng Lu


International Symposium on Neural Networks | 2018

Diagonalwise Refactorization: An Efficient Training Method for Depthwise Convolutions

Zheng Qin; Zhaoning Zhang; Dongsheng Li; Yiming Zhang; Yuxing Peng

Collaboration


Dive into Zhaoning Zhang's collaborations.

Top Co-Authors

Dongsheng Li, National University of Defense Technology
Yuxing Peng, National University of Defense Technology
Hao Yu, National University of Defense Technology
Huiba Li, National University of Defense Technology
Xicheng Lu, National University of Defense Technology
Kui Wu, University of Victoria
Minne Li, National University of Defense Technology
Ziyang Li, National University of Defense Technology
Shiqing Zhang, National University of Defense Technology
Xiaotao Chen, National University of Defense Technology