Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Han Qiu is active.

Publication


Featured research published by Han Qiu.


international conference on communications, circuits and systems | 2006

PIFO Output Queued Switch Emulation by a One-cell-Crosspoint Buffered Crossbar Switch

Han Qiu; Yufeng Li; Peng Yi; Jiangxing Wu

It is well known that a buffered crossbar admits simpler scheduling algorithms than an unbuffered crossbar. A buffered crossbar can also be pipelined to run at high speed, making it appealing for high-performance switches and routers. Recent research indicates that a buffered crossbar with modest speedup can exactly emulate an output queued (OQ) switch. For per-flow/priority guarantees, however, additional speedup and storage are required to avoid crosspoint blocking, preventing the use of the buffered crossbar in larger-scale devices. This paper introduces a novel mechanism to solve crosspoint blocking and a simple architecture to provide per-flow/priority guarantees. Based on a simple scheduling scheme, named modified group-by-first-in-first-out-group-lowest time-to-leave (MGBFG-LTTL), we prove that a one-cell-crosspoint buffered crossbar switch with input virtual priority output queues (VPOQ/CB-1) and a speedup of two can exactly emulate a push-in-first-out (PIFO) OQ switch. Our scheme has lower hardware requirements and provides a simple path to scaling crossbar-based routers.
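
The following is a minimal, hypothetical sketch of the general idea behind time-to-leave scheduling in a one-cell-crosspoint buffered crossbar; it is not the paper's MGBFG-LTTL algorithm, and all names and parameters are illustrative. Each cell carries the departure time it would have in an ideal OQ switch, and both the input-side and output-side schedulers prefer the cell with the lowest time-to-leave.

```python
# Toy crossbar with one-cell crosspoint buffers and lowest-time-to-leave
# preference at both scheduling stages (illustrative only).
from collections import deque

N = 3  # ports

class Cell:
    def __init__(self, inp, out, ttl):
        self.inp, self.out, self.ttl = inp, out, ttl  # ttl = ideal OQ departure time

voq = [[deque() for _ in range(N)] for _ in range(N)]   # voq[input][output]
xpoint = [[None] * N for _ in range(N)]                  # one-cell crosspoint buffers

def input_schedule(i):
    """Push the lowest-TTL head-of-line cell into an empty crosspoint."""
    candidates = [(q[0].ttl, o) for o, q in enumerate(voq[i])
                  if q and xpoint[i][o] is None]
    if candidates:
        _, o = min(candidates)
        xpoint[i][o] = voq[i][o].popleft()

def output_schedule(o):
    """Deliver the lowest-TTL cell buffered at this output's crosspoints."""
    candidates = [(xpoint[i][o].ttl, i) for i in range(N) if xpoint[i][o]]
    if candidates:
        _, i = min(candidates)
        cell, xpoint[i][o] = xpoint[i][o], None
        return cell
    return None

# Example: three cells destined to output 0, with ideal OQ departure times 2, 1, 3.
for inp, ttl in [(0, 2), (1, 1), (2, 3)]:
    voq[inp][0].append(Cell(inp, 0, ttl))

for t in range(4):
    for i in range(N):
        input_schedule(i)
    c = output_schedule(0)
    if c:
        print(f"slot {t}: output 0 sends cell from input {c.inp} (TTL {c.ttl})")
```

In this toy run the cells leave output 0 in order of their ideal OQ departure times, which is the behaviour an OQ-emulating scheduler must preserve.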


international conference on communications | 2008

Compensation Buffer Sizing for Providing User-Level QoS Guarantee of Media Flows

Han Qiu; Yufeng Li; Jiangxing Wu; Xiaozhuo Gu

When continuous media are transferred over a packet-switched network, their temporal structure may be damaged by delay and delay jitter. Compensation buffering is a well-known method to absorb delay jitter. However, added buffering increases latency, which may degrade interactivity between users. As delay and delay jitter are both QoS parameters perceived by users, changing the compensation buffer size may have completely opposite effects on user-level QoS. How should the buffer size be set to guarantee both delay and delay jitter with preferable user-level QoS? To answer this question, we investigate the effect of buffer size on maintaining the temporal structure of media flows. By mapping QoS from the network level to the user level, we prove that there is a buffer size that yields the optimal user-level QoS and obtain it by differentiation. Experimental results validate our analysis of the effect of the buffer size.
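
A rough illustration of the trade-off the abstract describes, not the paper's analytical derivation: a larger compensation buffer absorbs more jitter but adds latency, so some intermediate size balances the two. The delay distribution and the toy "user QoS" score below are invented for the example.

```python
# Scan candidate playout-buffer delays and report the one that maximizes a
# made-up user-QoS score (illustrative only; not the paper's model).
import random

random.seed(1)
delays_ms = [40 + random.expovariate(1 / 15) for _ in range(10_000)]  # base delay + jitter

def late_fraction(buffer_delay_ms):
    deadline = 40 + buffer_delay_ms          # playout deadline per packet
    return sum(d > deadline for d in delays_ms) / len(delays_ms)

def toy_user_qos(buffer_delay_ms):
    # Hypothetical score: penalize both added latency and late packets.
    return -0.01 * buffer_delay_ms - 5.0 * late_fraction(buffer_delay_ms)

best = max(range(0, 201, 5), key=toy_user_qos)
print(f"best buffer delay = {best} ms, "
      f"late packets {late_fraction(best):.2%}, score {toy_user_qos(best):.3f}")
```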


advanced information networking and applications | 2007

Design and Buffer Sizing of TCAM-Based Pipelined Forwarding Engines

Yufeng Li; Han Qiu; Xiaozhuo Gu; Julong Lan; Jianwen Yang

Ever-increasing line speeds and the continuously growing demand to support various functions (for example QoS, multicast, and security) together make it harder for forwarding engines to process packets at line speed, and increasingly call for additional buffers to accommodate bursty transmission and reduce the packet loss rate. In this paper, a high-speed pipeline designed for TCAM-based forwarding engines is presented along with its buffer analysis model, and the buffer requirement of the forwarding engine is analyzed under two conditions: the forwarding rate is not less than the input rate, and the forwarding rate is less than the input rate. Our analysis and experiments both show that the proposed forwarding pipeline performs well: a single pipeline can easily handle data rates of 30 Gb/s or higher, and the pipelined forwarding engine only needs to buffer a few packets to bring the loss rate to an acceptable value or even zero; further increasing the buffer size has little effect on reducing the loss rate.
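
As a hedged illustration of the qualitative result (not the paper's analysis model), a toy discrete-time simulation of a finite buffer in front of a forwarding engine shows the loss rate falling towards zero with only a few packets of buffering once the forwarding rate exceeds the average input rate; all parameters are made up.

```python
# Discrete-time simulation of a finite FIFO: Bernoulli arrivals, at most one
# lookup (departure) per slot, loss counted when the buffer is full.
import random

def loss_rate(buffer_size, arrival_prob=0.6, service_prob=0.7, slots=200_000):
    random.seed(0)
    queue, lost, arrived = 0, 0, 0
    for _ in range(slots):
        if random.random() < arrival_prob:            # packet arrives this slot
            arrived += 1
            if queue < buffer_size:
                queue += 1
            else:
                lost += 1                             # buffer full -> drop
        if queue and random.random() < service_prob:  # forwarding engine serves one
            queue -= 1
    return lost / max(arrived, 1)

for b in (1, 2, 4, 8, 16):
    print(f"buffer = {b:2d} packets -> loss rate = {loss_rate(b):.4f}")
```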


international conference on communications | 2008

A Forwarding Approach for Routers Supporting PIM-SM in the IPv6 Networks

Yufeng Li; Han Qiu; Julong Lan; Binqiang Wang

Protocol independent multicast-sparse mode can use either a shared tree or a shortest path tree to deliver IPv6 multicast packets; consequently, the multicast IP lookup engine requires, in some cases, two searches to reach a correct forwarding decision, which in turn doubles the required lookup speed. The ordinary way to meet this requirement in TCAM-based (ternary content addressable memory) lookup engines is to exploit parallelism among multiple TCAMs; however, parallel methods cost more resources and are harder to design. In this paper we propose an approach that solves this problem. By arranging the multicast forwarding table in class order in the TCAM and exploiting the TCAM's intrinsic characteristics, our approach obtains the correct lookup result with just one search and a single TCAM while keeping the lookup engine hardware unchanged. Experimental results show that the approach makes it possible for a single TCAM to forward IPv6 multicast packets at a full link rate of 20 Gb/s with current TCAM chips.
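
A minimal sketch of the first-match property such an approach can rely on, assuming a simplified software model of a TCAM; this is not the paper's exact table layout, and the addresses are shortened hypothetical values rather than real IPv6 prefixes. If (S,G) entries are stored before (*,G) entries, one search with the packet's (source, group) key returns the shortest-path-tree entry when it exists and otherwise falls through to the shared-tree entry.

```python
# Software model of a TCAM: entries are (value, mask, action) and the first
# matching entry in storage order wins (illustrative only).
def tcam_lookup(entries, key):
    for value, mask, action in entries:
        if key & mask == value & mask:
            return action
    return None

S, G = 0xAA, 0x11                 # toy "source" and "group" identifiers
key = (S << 8) | G                # concatenated (source, group) search key

entries = [
    ((S << 8) | G, 0xFFFF, "forward via shortest-path tree"),  # (S,G): match both fields
    ((0 << 8) | G, 0x00FF, "forward via shared tree"),         # (*,G): source bits wildcarded
]

print(tcam_lookup(entries, key))              # hits the (S,G) entry
print(tcam_lookup(entries, (0xBB << 8) | G))  # unknown source -> falls through to (*,G)
```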


parallel and distributed computing: applications and technologies | 2006

Analysis of the Centralized Algorithm and the Distributed Algorithm for Parallel Packet Switch

Yufeng Li; Han Qiu; Julong Lan; Jianwen Yang

The centralized parallel packet switch algorithm and the distributed parallel packet switch algorithm are two typical scheduling algorithms for the parallel packet switch. This paper analyzes the two algorithms in detail, addresses several key problems in their implementation, and finally presents several available methods and suggestions to make the parallel packet switch more practical.


international conference on communications, circuits and systems | 2006

Sizing Buffers for Pipelined Forwarding Engine

Yufeng Li; Peng Yi; Han Qiu; Julong Lan

Packet buffers in routers constitute a central element of packet networks, and selecting an appropriate buffer size is an important and open research problem. This paper aims to size buffers for the forwarding engine, one of the main parts of a router. First, a high-speed pipeline designed for the forwarding engine is presented along with its memory analysis model; then the memory requirement of the forwarding engine is analyzed under two conditions: the forwarding rate is not less than the input rate, and the forwarding rate is less than the input rate. Our analysis and experiment both show that the proposed forwarding pipeline performs well: a single pipeline can handle data rates of 20 Gb/s or higher, and the pipelined forwarding engine only needs to buffer a few packets to bring the loss rate to an acceptable value or even zero; further increasing the buffer size has little effect on reducing the loss rate.


international conference on communications, circuits and systems | 2006

Multiple Priorities in a Virtual-Priority-Output Queueing Buffered Crossbar

Han Qiu; Peng Yi; Yufeng Li; Jiangxing Wu

To support multiple priority levels in a buffered crossbar, separate per-priority queues are required at each crosspoint, demanding much more memory and many more schedulers, which is costly and inefficient. To support multiple priorities effectively, this paper presents a virtual-priority-output queueing (VPOQ) architecture based on a buffered crossbar and proposes a novel scheduling mechanism built on the definition of a dual priority. At each input port of the buffered crossbar, a parallel scheduling scheme over the priority queues and a round-robin scheduling algorithm, both based on the dual priority, are put forward; schedulers at the crosspoints operate in a round-robin manner. Compared with current scheduling algorithms that support multiple priorities in a buffered crossbar, our scheme does not need per-priority queueing at every crosspoint, avoids complicated scheduling in the fabric while providing flow isolation and protection, and is relatively simple to implement. Simulation results evaluating the delay of our scheme indicate that it can match a buffered crossbar with per-priority queueing at every crosspoint.
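
For illustration only, a plain round-robin arbiter of the kind used at the crosspoints is sketched below; it is not the paper's dual-priority VPOQ mechanism, and the class name and queue contents are hypothetical.

```python
# Simple round-robin arbiter: a pointer cycles over the inputs and the first
# non-empty queue at or after the pointer is granted, then the pointer
# advances past the winner so every input gets a fair turn.
from collections import deque

class RoundRobinArbiter:
    def __init__(self, n):
        self.n = n
        self.pointer = 0

    def grant(self, non_empty):
        for k in range(self.n):
            i = (self.pointer + k) % self.n
            if non_empty[i]:
                self.pointer = (i + 1) % self.n   # advance past the winner
                return i
        return None

queues = [deque("a"), deque(), deque("cd")]
arb = RoundRobinArbiter(len(queues))
for _ in range(4):
    i = arb.grant([bool(q) for q in queues])
    print("grant:", i, "->", queues[i].popleft() if i is not None else "-")
```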


international conference on communications | 2006

A New Look at Buffer Sizing for Core Routers

Han Qiu; Yufeng Li; Peng Yi; Jiangxing Wu

Currently, analyses of router buffer sizing are based on a single-router model of TCP connections, and the results are then applied to any core router. As we know, a TCP path may pass through one router or several routers, and the latter is more common in practical networks. Can we apply these conclusions directly to any router in the network? How do routers interact in the network, and how can we design them practically? We build a queueing model for the routers on a TCP path, explore the interaction of routers, and find that link utilization depends on the number of routers and their buffer sizes. By boundary value analysis of the buffer size, we show that our formula predicts the buffer size of routers well under low offered load, and that the impact of interaction between routers is weak under light offered load.
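
For context, the classical buffer-sizing rules that this line of work revisits reduce to simple arithmetic; the sketch below shows the rule-of-thumb bandwidth-delay product and the BDP/sqrt(n) "small buffers" variant with made-up link parameters, and it is not the formula derived in this paper.

```python
# Classical buffer-sizing rules of thumb (background only, example values).
import math

rtt_s = 0.25                 # round-trip time
link_rate_bps = 10e9         # 10 Gb/s core link
flows = 10_000               # long-lived TCP flows sharing the link

bdp_bits = rtt_s * link_rate_bps            # rule of thumb: B = RTT * C
small_bits = bdp_bits / math.sqrt(flows)    # "small buffers": B = RTT * C / sqrt(n)

print(f"rule-of-thumb buffer : {bdp_bits / 8 / 1e6:8.1f} MB")
print(f"BDP / sqrt(n) buffer : {small_bits / 8 / 1e6:8.1f} MB")
```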


international conference on communication technology | 2006

Efficient and Lossless Multicast Support in Large Routers

Yufeng Li; Julong Lan; Han Qiu; Peng Yi; Fengsen Deng

Today, multicast-capable routers generally handle multicast traffic by appending to each cell a local multicast output label (LMOL) containing a bitmap with as many bits as output ports, so as to identify the ports to which a copy of the cell has to be transferred. However, this approach is infeasible for very large IP routers with 128 output ports or more, because the LMOL length would reach 16 bytes or more, and the bandwidth wasted on transmitting the LMOL would be intolerable for small cells. Recently, many approaches have emerged that use lossy compression algorithms to shorten the LMOL and thus save bandwidth and improve efficiency. In this paper, we aim to offer both lossless replication and high efficiency for multicast traffic traversing large routers, and present a scheme characterized by dual lookups and three-stage replication to achieve these goals. Experimental results show that the scheme supports multicast replication with no loss for large IP routers with as many as 128 x 128 ports, provided the internal bus width takes a normal value of 128; meanwhile, the scheme also provides highly efficient transmission and replication.
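
A small sketch of the bitmap-style LMOL that the abstract describes (not the paper's dual-lookup scheme): one bit per output port marks the destinations, so at 128 ports the label alone occupies 16 bytes. Function names and port numbers are illustrative.

```python
# Bitmap multicast label: encode the destination port set, then recover it.
def make_lmol(ports, num_ports=128):
    """Build a bitmap label with one bit set per destination port."""
    bitmap = 0
    for p in ports:
        bitmap |= 1 << p
    return bitmap.to_bytes(num_ports // 8, "big")

def fanout(lmol):
    """Recover the destination port list from the bitmap label."""
    value = int.from_bytes(lmol, "big")
    return [p for p in range(len(lmol) * 8) if value >> p & 1]

label = make_lmol([0, 3, 127])
print(len(label), "bytes per cell just for the label")   # 16
print("copies go to ports:", fanout(label))              # [0, 3, 127]
```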


IET International Conference on Wireless Mobile and Multimedia Networks Proceedings (ICWMMN 2006) | 2006

Multicast replication using dual lookups in large packet-based switches

Yufeng Li; Han Qiu; Jianwen Yang; Xiaozhuo Gu; Julong Lan

Collaboration


Dive into Han Qiu's collaborations.

Top Co-Authors

Yuxiang Hu

PLA Information Engineering University
