Xingjun Zhang
Xi'an Jiaotong University
Publications
Featured research published by Xingjun Zhang.
computer and information technology | 2010
Xingjun Zhang; Xiao-Hong Peng; Scott Fowler; Dajun Wu
In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. This scheme combines erasure coding, H.264/AVC error resilience techniques and importance measures in video coding. The unequal importance of the video packets is identified at the group of pictures (GOP) and H.264/AVC data partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to network conditions such as the available bandwidth, packet loss rate and average packet burst loss length. A near-optimal algorithm is developed to solve the FEC assignment optimization. The simulation results show that our scheme effectively utilizes network resources such as bandwidth while improving the quality of video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases.
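To make the allocation concrete, here is a minimal sketch of unequal FEC parity assignment: a greedy allocator (my simplification, not the paper's near-optimal algorithm) that hands the next parity packet to whichever priority class yields the largest drop in expected weighted loss, under an i.i.d. packet-loss model. The class names, source-packet counts and importance weights are hypothetical.

```python
# Greedy unequal FEC parity allocation -- an illustrative sketch only.
from math import comb

def fail_prob(n, k, p):
    """P(an (n, k) erasure code fails), i.e. more than n - k of n packets
    are lost under i.i.d. loss rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1, n + 1))

def allocate_parity(classes, p, parity_budget):
    """Give each parity packet to the class whose weighted failure probability
    drops the most; 'classes' holds (name, source_packets, importance)."""
    parity = {name: 0 for name, _, _ in classes}
    for _ in range(parity_budget):
        best, best_gain = None, 0.0
        for name, k, weight in classes:
            r = parity[name]
            gain = weight * (fail_prob(k + r, k, p) - fail_prob(k + r + 1, k, p))
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            break
        parity[best] += 1
    return parity

# Hypothetical GOP / data-partitioning classes, 5% loss, 10 parity packets:
classes = [("I+partitionA", 8, 10.0), ("P+partitionB", 12, 4.0), ("B+partitionC", 20, 1.0)]
print(allocate_parity(classes, p=0.05, parity_budget=10))
```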
network-based information systems | 2011
Cuiping Jing; Xingjun Zhang; Feilong Tang; Scott Fowler; Huali Cui; Xiaoshe Dong
In this paper we are interested in improving the performance of constructive network coding schemes for video transmission over packet lossy networks. We present R2NC, a novel unequal packet loss protection scheme based on a lower-triangular global coding matrix with a ladder-shaped partition, which combines redundant and random network coding for robust H.264/SVC video transmission. Firstly, the error-correcting capability of redundant network coding makes our scheme resilient to loss. Secondly, performing random network coding at intermediate nodes with multiple input links reduces the cost of network bandwidth, and thus the end-to-end delay of video transmission. Thirdly, the lower-triangular global coding matrix with ladder-shaped partition is maintained throughout the R2NC process to provide unequal erasure protection for the H.264/SVC priority layers. Redundant network coding avoids the retransmission of lost packets and improves the ability to recover them. Based only on knowledge of the loss rates on the output links, the source node and intermediate nodes can make decisions for redundant and random network coding (i.e., how much redundancy to add at each node). The redundancy introduced by redundant network coding, however, increases the network load; to improve network throughput, we therefore perform random network coding at the intermediate nodes. Our approach is grounded in minimizing the overall distortion of the reconstructed video by optimizing the amount of redundancy assigned to each layer. Experimental results demonstrate the significant improvement in H.264/SVC video reconstruction quality achieved by R2NC over packet lossy networks.
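The unequal-protection effect of the ladder-shaped lower-triangular matrix can be seen in a toy example. The sketch below is my own construction, not the exact R2NC design: coded packets in tier t mix only layers 0..t, so the base layer remains decodable from the earliest tier alone. Real network coding operates over GF(2^8); floats are used here purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)
layers = [4, 4, 4]            # symbols per H.264/SVC layer: base, enh1, enh2
total = sum(layers)

def ladder_matrix(layers, coded_per_tier):
    """Rows in tier t carry random coefficients only over layers 0..t,
    giving the global coding matrix its lower-triangular ladder shape."""
    rows = []
    for t, n_coded in enumerate(coded_per_tier):
        width = sum(layers[:t + 1])              # the ladder step for tier t
        for _ in range(n_coded):
            row = np.zeros(total)
            row[:width] = rng.uniform(1, 9, width)
            rows.append(row)
    return np.array(rows)

G = ladder_matrix(layers, coded_per_tier=[6, 5, 4])   # base layer over-protected
x = rng.integers(0, 256, total).astype(float)          # source symbols
y = G @ x                                              # coded packets

# Even if only the 6 tier-0 packets survive, the base layer (4 unknowns)
# is recoverable, because those rows involve no enhancement-layer symbols.
base = np.linalg.lstsq(G[:6, :4], y[:6], rcond=None)[0]
print(np.allclose(base, x[:4]))   # True
```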
annual acis international conference on computer and information science | 2012
Hao Zheng; Xingjun Zhang; Endong Wang; Nan Wu; Xiaoshe Dong
Driver faults are a leading cause of operating system failures. To address this issue and improve kernel reliability, this paper presents an intelligent kernel-mode driver enhancement mechanism, Style Box, which limits a driver's right to access the kernel through a private page table and a call control list. The method captures a variety of type errors, synchronization errors and behavioral errors in the driver, predicts driver errors intelligently and recovers from them rapidly. Experimental results show that Style Box can effectively detect and handle driver errors and noticeably improves the reliability of the operating system.
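As a rough illustration of the call control list idea (a hypothetical interface, not Style Box's kernel implementation), the sketch below funnels every driver-to-kernel call through a checkpoint that rejects anything outside the driver's granted set:

```python
# Conceptual call-control-list check -- all names here are invented.
class CallControlList:
    def __init__(self, allowed):
        self.allowed = frozenset(allowed)

    def check(self, driver, symbol):
        if symbol not in self.allowed:
            raise PermissionError(f"{driver}: blocked call to {symbol}")

KERNEL_API = {"kmalloc": lambda n: bytearray(n), "printk": print}

def dispatch(driver, ccl, symbol, *args):
    """Stand-in for the trap through which all driver->kernel calls pass."""
    ccl.check(driver, symbol)
    return KERNEL_API[symbol](*args)

ccl = CallControlList({"printk"})             # this driver may only log
dispatch("toy_nic", ccl, "printk", "driver loaded")
try:
    dispatch("toy_nic", ccl, "kmalloc", 64)   # not on the list
except PermissionError as e:
    print("trapped:", e)
```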
grid computing | 2010
Liang Li; Xingjun Zhang; Jinghua Feng; Xiaoshe Dong
Due to the heterogeneity and multigrain parallelism of heterogeneous multi-core computers, communication and memory access show hierarchical characteristics that other models ignore. In this paper, a new model named mPlogP is presented on the basis of the PlogP model, in which communication and memory access are abstracted to reflect these characteristics of heterogeneous multi-core computers. It uses memory access to model the behavior of computation, estimates the execution time of every part of an application, and guides the optimization of parallel programs. Finally, experiments validate that the proposed model can accurately evaluate the execution of parallel applications on heterogeneous multi-core computers.
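A back-of-the-envelope sketch of how such a hierarchical model might estimate costs follows. The level names and parameter values are illustrative placeholders, not measured constants from the paper; the structure simply extends PlogP-style latency/overhead/gap terms to each level of the memory and communication hierarchy.

```python
from dataclasses import dataclass

@dataclass
class LevelParams:
    L: float   # latency between units at this level (microseconds)
    g: float   # gap per byte at this level (microseconds/byte)
    o: float   # combined send and receive overhead (microseconds)

# Hypothetical hierarchy: cores on a chip, chips in a node, nodes on a network.
HIERARCHY = {"core": LevelParams(L=0.1, g=0.001, o=0.05),
             "chip": LevelParams(L=0.5, g=0.004, o=0.2),
             "node": LevelParams(L=5.0, g=0.020, o=1.0)}

def transfer_time(level, nbytes):
    p = HIERARCHY[level]
    return p.o + p.L + p.g * nbytes

def pipeline_time(stages):
    """Cost of a sequential pipeline of (level, bytes) steps, modelling
    computation as memory traffic in the spirit of mPlogP."""
    return sum(transfer_time(level, n) for level, n in stages)

# A kernel that moves 1 MB within a chip, then ships 64 KB between nodes:
print(pipeline_time([("chip", 1 << 20), ("node", 64 << 10)]), "us")
```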
computer and information technology | 2010
Xingjun Zhang; Yanfei Ding; Yiyuan Huang; Xiaoshe Dong
Integrating reconfigurable computing with high-performance computing, using the advantages of reconfigurable hardware to make up for the inadequacies of existing high-performance computers, has gradually become a trend in high-performance computing. Based on a comprehensive investigation of reconfigurable technologies, this paper presents a high-performance computing scheme in which general-purpose processing nodes are connected to dynamically and partially reconfigurable computing nodes through a high-speed network. Using a module-based partial reconfiguration design method, an FPGA-based dynamically and partially reconfigurable computing node is designed. This node supports dynamic partial reconfiguration and can load different computing units according to different requirements. The node integrates a microprocessor, memory, a network interface, a reconfigurable computing module and an interface module. The experimental results show that the system can achieve more functionality with fewer resources, that the reconfigurable computing node completes its tasks well, and that system performance is effectively improved.
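A hypothetical host-side sketch of the module-based loading policy follows: the node tracks the currently resident computing unit and triggers a partial reconfiguration only when a different unit is requested. The bitstream file names and the write_bitstream hook are assumptions for illustration, not the paper's implementation.

```python
# Toy partial-reconfiguration manager -- names and paths are invented.
BITSTREAMS = {"fft": "units/fft_partial.bit",
              "aes": "units/aes_partial.bit",
              "matmul": "units/matmul_partial.bit"}

def write_bitstream(path):
    # Stub: a real node would stream this file into the FPGA's
    # reconfiguration port (e.g. ICAP on Xilinx devices).
    print(f"[reconfig] loading {path}")

class ReconfigurableNode:
    def __init__(self):
        self.loaded = None

    def run(self, task, payload):
        if self.loaded != task:          # reconfigure only on a unit miss
            write_bitstream(BITSTREAMS[task])
            self.loaded = task
        print(f"[node] running {task} on {len(payload)} bytes")

node = ReconfigurableNode()
node.run("fft", b"\x00" * 4096)
node.run("fft", b"\x00" * 4096)   # unit already resident: no reload
node.run("aes", b"\x00" * 1024)   # triggers a partial reconfiguration
```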
network-based information systems | 2012
Guofeng Zhu; Xingjun Zhang; Longxiang Wang; Yueguang Zhu; Xiaoshe Dong
With the explosive growth of data volumes, reducing the storage space that mass data occupies and the bandwidth consumed during network transmission has become a key issue. Experimental investigation shows that a large amount of redundant data exists at every stage of information processing and storage. Therefore, eliminating redundant information during the backup process is of crucial importance for saving disk space and network bandwidth. This paper adopts data de-duplication technology to solve the problem of redundant data during backup by designing and implementing a backup system with intelligent data de-duplication, named Backup Dedup, which includes four de-duplication strategies: SIS, FSP, CDC and SW. Backup Dedup supports online source-side de-duplication and can select different de-duplication algorithms according to the corresponding data types. Meanwhile, it ensures data reliability and security in the backup process. The experimental results show that by employing multiple de-duplication strategies simultaneously, Backup Dedup substantially eliminates redundant data in the backup process, effectively saving storage space and network bandwidth.
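Of the four strategies, content-defined chunking (CDC) is the easiest to sketch: a rolling hash over a sliding window cuts chunk boundaries wherever the hash hits a magic value, so identical content produces identical chunks even after insertions shift byte offsets. The sketch below is a generic CDC outline with illustrative parameters, not Backup Dedup's actual code.

```python
import hashlib

B, MOD = 31, 1 << 32
WINDOW, DIVISOR, MAGIC = 16, 1024, 13      # ~1 KB average chunk
MIN_CHUNK, MAX_CHUNK = 256, 4096
POW_W = pow(B, WINDOW, MOD)

def cdc_chunks(data):
    """Cut a chunk where the rolling hash of the last WINDOW bytes matches."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = (h * B + byte) % MOD
        if i >= WINDOW:                    # drop the byte leaving the window
            h = (h - data[i - WINDOW] * POW_W) % MOD
        size = i - start + 1
        if (size >= MIN_CHUNK and h % DIVISOR == MAGIC) or size >= MAX_CHUNK:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def dedup(data, index):
    """Store each unique chunk once, keyed by its SHA-1 fingerprint."""
    recipe = []
    for chunk in cdc_chunks(data):
        fp = hashlib.sha1(chunk).hexdigest()
        index.setdefault(fp, chunk)        # new content -> store it
        recipe.append(fp)                  # a backup is a list of fingerprints
    return recipe

index = {}
blob = bytes(range(256)) * 200
r1 = dedup(blob, index)
r2 = dedup(b"15-byte-prefix!" + blob, index)   # same data, shifted by an insert
print(len(r1) + len(r2), "chunk refs,", len(index), "unique chunks stored")
```

Because boundaries depend on content rather than offsets, most chunks of the shifted copy re-align with the originals after the first boundary, which is what lets source-side de-duplication save bandwidth across near-identical backups.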
network-based information systems | 2011
Yifei Sun; Xingjun Zhang; Feilong Tang; Scott Fowler; Huali Cui; Xiaoshe Dong
The Scalable Video Coding (SVC) extension of the H.264/AVC standard is an up-to-date video compression standard. The various scalable layers contribute differently to the quality of the reconstructed video sequence due to the use of hierarchical prediction and drift propagation. This paper proposes a novel trapezoidal unequal error protection (UEP) scheme which significantly reduces redundancy with little loss of performance by fully taking into account the characteristics of the video coding and adaptive forward error correction (FEC). To distribute FEC codes optimally, the paper then proposes a layer-aware distortion model that accurately estimates the decrease in video quality caused by the loss of quality enhancement layers, drift propagation and error concealment in scalable H.264/AVC video. Experimental results show that, compared with the traditional UEP scheme, the proposed trapezoidal UEP scheme is more robust while greatly reducing coding redundancy under different channel conditions.
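The trapezoid can be sketched directly: parity per layer falls off from the base layer to the top enhancement layer while fitting a total redundancy budget. The linear profile below is my simplification for illustration; the paper drives the actual allocation with its layer-aware distortion model.

```python
def trapezoidal_parity(num_layers, budget, slope=1.0):
    """Parity counts r_0 >= r_1 >= ... >= r_{L-1}, summing to the budget."""
    raw = [max(num_layers - 1 - i * slope, 0.0) + 1.0 for i in range(num_layers)]
    scale = budget / sum(raw)
    parity = [round(w * scale) for w in raw]
    parity[0] += budget - sum(parity)   # absorb rounding drift in the base layer
    return parity

# 4 layers (base + 3 enhancement) sharing 20 parity packets:
print(trapezoidal_parity(4, 20))   # [8, 6, 4, 2] -- the trapezoid profile
```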
multimedia and ubiquitous engineering | 2011
Cuiping Jing; Xingjun Zhang; Yifei Sun; Huali Cui; Xiaoshe Dong
Video streaming over lossy packet networks is a challenging task due to a number of factors such as delay, fixed bandwidth allocation, and packet loss. Conventional forward error correction (FEC) and automatic repeat request (ARQ) methods become ineffective for video streaming because video frame errors appear in bursts, which makes reconstructing video frames over a lossy packet network very difficult. In this paper, we focus on an effective packet loss protection scheme that jointly uses deterministic network coding (DNC) and random linear network coding (RLNC) for H.264/AVC video transmission. Considering the complexity of the RLNC algorithm, the transmission scheme makes several improvements. Firstly, the source node encodes data packets with deterministic network coding, which improves coding efficiency. Secondly, the RLNC process at the intermediate nodes applies limited operations to the random encoding coefficient vectors to reduce redundant packets in the network. Thirdly, the decoding process at the destination nodes is simplified. We demonstrate the effectiveness of this approach by system implementation and performance evaluation. Experimental results demonstrate a significant improvement in the reconstructed quality of H.264/AVC streaming video over lossy packet networks, and the improvement is more pronounced when packet loss is severe.
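The destination-side step can be illustrated with a toy decoder: it keeps only innovative coded packets (those that raise the rank of the coefficient matrix) and recovers the whole generation by Gaussian elimination once the matrix reaches full rank. Real RLNC works over GF(2^8); floats are used below only to keep the sketch short, and the structure is generic rather than the paper's simplified decoder.

```python
import numpy as np

class Decoder:
    def __init__(self, gen_size):
        self.k = gen_size
        self.coeffs, self.payloads = [], []

    def receive(self, coeff_vec, payload):
        """Keep the packet only if it is innovative; return True when decodable."""
        trial = np.array(self.coeffs + [coeff_vec])
        if np.linalg.matrix_rank(trial) > len(self.coeffs):
            self.coeffs.append(coeff_vec)
            self.payloads.append(payload)
        return len(self.coeffs) == self.k

    def decode(self):
        # Gaussian elimination on the full-rank coefficient matrix.
        return np.linalg.solve(np.array(self.coeffs), np.array(self.payloads))

rng = np.random.default_rng(3)
src = rng.integers(0, 256, (4, 8)).astype(float)   # 4 source packets, 8 symbols each
dec = Decoder(4)
while True:
    c = rng.uniform(1, 9, 4)                       # random coefficient vector
    if dec.receive(c, c @ src):                    # coded payload arrives
        break
print(np.allclose(dec.decode(), src))              # True: generation recovered
```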
Journal of Zhejiang University Science C | 2016
Longxiang Wang; Xiaoshe Dong; Xingjun Zhang; Yinfeng Wang; Tao Ju; Guo-fu Feng
Modern storage systems incorporate data compressors to improve their performance and capacity. As a result, data content can significantly influence the result of a storage system benchmark. Because real-world proprietary datasets are too large to be copied onto a test storage system, and most data cannot be shared due to privacy issues, a benchmark needs to generate data synthetically. To ensure that the result is accurate, it is necessary to generate data content based on a characterization of the real-world data properties that influence storage system performance during the execution of a benchmark. The existing approach, SDGen, cannot guarantee accurate benchmark results on storage systems that have built-in word-based compressors, because SDGen characterizes the properties that influence compression performance only at the byte level, and no properties are characterized at the word level. To address this problem, we present TextGen, a realistic text data content generation method for modern storage system benchmarks. TextGen builds a word corpus by segmenting real-world text datasets and creates a word-frequency distribution by counting each word in the corpus. To improve data generation performance, the word-frequency distribution is fitted to a lognormal distribution by maximum likelihood estimation, and a Monte Carlo approach is used to generate synthetic data. The running time of TextGen depends only on the expected data size, so its time complexity is O(n). To evaluate TextGen, experiments were performed on four real-world datasets. The results show that, compared with SDGen, the compression performance and compression ratio of the datasets generated by TextGen deviate less from the real-world datasets when end-tagged dense code, a representative word-based compressor, is evaluated.
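A condensed sketch of that pipeline is shown below: count words in a corpus, fit a lognormal by maximum likelihood on the log counts, and draw words by Monte Carlo sampling. The whitespace segmentation and the exact fitting and sampling details are my stand-ins; TextGen's real segmentation and generator may differ.

```python
import numpy as np
from collections import Counter

def build_model(text):
    """Word-frequency table plus lognormal MLE parameters of the counts."""
    freq = Counter(text.split())                # crude segmentation stand-in
    logs = np.log(np.array(list(freq.values()), dtype=float))
    mu, sigma = logs.mean(), logs.std() or 1.0  # MLE for a lognormal
    return list(freq), mu, sigma

def generate(vocab, mu, sigma, n_words, seed=0):
    """Monte Carlo generation: sample words by lognormal-distributed weight."""
    rng = np.random.default_rng(seed)
    weights = rng.lognormal(mu, sigma, len(vocab))
    return " ".join(rng.choice(vocab, size=n_words, p=weights / weights.sum()))

vocab, mu, sigma = build_model("to be or not to be that is the question")
print(generate(vocab, mu, sigma, 12))
```

Since each output word is an O(1) draw from a fixed distribution, generation time grows linearly with the requested data size, matching the O(n) complexity claimed above.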
The Computer Journal | 2015
Hao Zheng; Xiaoshe Dong; Zhengdong Zhu; Baoke Chen; Yizhi Zhang; Xingjun Zhang
Full virtualization technology is highly reusable. Using this property, various types and versions of existing operating systems and drivers can be reused in a virtual machine to customize users' application environments. However, these environments are threatened by faulty write operations caused by bugs in the reused drivers. Chariot is a reliability architecture developed to solve this problem. It captures a driver's write operations by keeping the write permissions of shadow pages read-only so that their correctness can be examined. However, this capture method produces many page faults in the virtual machine monitor and adversely affects the performance of isolated drivers. To reduce these performance losses, this paper examines two algorithms that cache recently used shadow pages using different structures to avoid frequent page faults. The experimental results show that the performance of isolated drivers can be greatly improved using these shadow page caches without significantly impacting the isolation efficiency of Chariot.
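One plausible structure for such a cache (the paper compares two; this sketch commits to a simple LRU) keeps recently verified shadow pages writable so repeated driver writes to the same page stop faulting:

```python
from collections import OrderedDict

class ShadowPageCache:
    """LRU cache of shadow pages left writable after their first checked write."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.pages = OrderedDict()          # page frame number -> writable

    def on_write_fault(self, pfn):
        """Return True if this write must be intercepted and verified."""
        if pfn in self.pages:
            self.pages.move_to_end(pfn)     # hit: page is already writable
            return False
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evicted page is re-marked read-only
        self.pages[pfn] = True
        return True                          # miss: verify, then leave writable

cache = ShadowPageCache(capacity=2)
faults = [cache.on_write_fault(p) for p in (0x10, 0x10, 0x20, 0x10, 0x30, 0x10)]
print(faults)   # [True, False, True, False, True, False]
```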