Huichen Dai
Tsinghua University
Publications
Featured research published by Huichen Dai.
international conference on distributed computing systems | 2012
Yi Wang; Keqiang He; Huichen Dai; Wei Meng; Junchen Jiang; Bin Liu; Yan Chen
Name-based route lookup is a key function of Named Data Networking (NDN). NDN names are hierarchical and have variable, unbounded lengths, much longer than IPv4/6 addresses, which makes fast name lookup a challenging problem. In this paper, we propose an effective Name Component Encoding (NCE) solution with two techniques: (1) a code allocation mechanism that achieves memory-efficient encoding of name components, and (2) an improved State Transition Array that accelerates longest name prefix matching, together with a fast incremental update mechanism that satisfies the special requirements of the NDN forwarding process, namely frequent insertion, modification, and deletion of name prefixes. Furthermore, we analyze the memory consumption and time complexity of NCE. Experimental results on a name set containing 3,000,000 names demonstrate that NCE reduces overall memory consumption by 30% compared with a character trie. Moreover, NCE performs a few million lookups per second (on an Intel 2.8 GHz CPU), a speedup of over 7 times compared with the character trie. Our evaluation results also show that NCE can scale to accommodate the potential future growth of name sets.
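The component-level encoding and longest-prefix matching described above can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (a flat set of encoded prefixes, integer codes allocated in arrival order); it does not reproduce the paper's code allocation mechanism or State Transition Array.

```python
# Hedged sketch: encode each name component as a compact integer, then
# do longest-prefix match over tuples of codes instead of strings.
class EncodedPrefixTable:
    def __init__(self):
        self.codes = {}        # component string -> integer code
        self.prefixes = set()  # tuples of codes forming table prefixes

    def _encode(self, component, allocate):
        # Allocate one compact code per distinct component (assumption:
        # sequential allocation, not the paper's policy).
        if component not in self.codes:
            if not allocate:
                return None
            self.codes[component] = len(self.codes)
        return self.codes[component]

    def insert(self, name):
        comps = [self._encode(c, True) for c in name.strip("/").split("/")]
        self.prefixes.add(tuple(comps))

    def longest_prefix_match(self, name):
        comps = []
        for c in name.strip("/").split("/"):
            code = self._encode(c, False)
            if code is None:
                break          # an unseen component cannot extend a match
            comps.append(code)
        # Scan from the full encoded name down to one component.
        for end in range(len(comps), 0, -1):
            if tuple(comps[:end]) in self.prefixes:
                return end     # length of the match, in components
        return 0
```

Because comparisons happen on fixed-size integers rather than variable-length strings, each probe is cheap; the memory saving comes from storing each distinct component string only once.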
architectures for networking and communications systems | 2012
Huichen Dai; Bin Liu; Yan Chen; Yi Wang
The Internet has witnessed a paramount transition of its function from host-to-host communication to content dissemination. Named Data Networking (NDN) and Content-Centric Networking (CCN) emerge as clean-slate network architectures that embrace this shift. The Pending Interest Table (PIT) in NDN/CCN keeps track of Interest packets that have been received but not yet responded to, which gives NDN/CCN significant features, such as communication without knowledge of the source or destination, loop and packet loss detection, multipath routing, and better security. This paper presents the first thorough study of the PIT. Using an approximate, application-driven translation of a current IP-generated trace into an NDN trace, we first quantify the size and access frequencies of the PIT. Evaluation results on a 20 Gbps gateway trace show that the corresponding PIT contains 1.5 M entries, and the lookup, insert, and delete frequencies are 1.4 M/s, 0.9 M/s, and 0.9 M/s, respectively. To address this challenge and make the PIT more scalable, we further propose a Name Component Encoding (NCE) solution to shrink the PIT size and accelerate PIT access operations. With NCE, memory consumption can be reduced by up to 87.44%, and access performance is significantly improved, satisfying the access speed the PIT requires; moreover, the PIT exhibits good scalability with NCE. Finally, we propose to place the PIT on the egress channel of the outgoing line-cards of routers, which fits the NDN design and eliminates the cumbersome synchronization problem among multiple PITs on the line-cards.
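The PIT operations whose frequencies are quantified above (lookup and insert on Interest arrival, delete on Data arrival) can be illustrated with a toy table. The face identifiers and the Interest-aggregation policy below are illustrative assumptions, not the paper's measured implementation.

```python
# Toy Pending Interest Table: maps a content name to the set of faces
# (interfaces) still waiting for the matching Data packet.
class PIT:
    def __init__(self):
        self.table = {}  # name -> set of incoming faces

    def on_interest(self, name, face):
        # Returns True if the Interest should be forwarded upstream
        # (first request for this name), False if it was aggregated
        # into an existing entry.
        waiting = self.table.setdefault(name, set())
        forward = not waiting
        waiting.add(face)
        return forward

    def on_data(self, name):
        # Data consumes the entry; returns the faces to satisfy.
        return self.table.pop(name, set())
```

Aggregating the second Interest for the same name is what lets NDN suppress duplicate upstream traffic, and it is why the delete frequency tracks the Data arrival rate rather than the Interest rate.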
conference on computer communications workshops | 2013
Huichen Dai; Yi Wang; Jindou Fan; Bin Liu
The current Internet is reaching the limits of its capabilities due to its function transition from host-to-host communication to content dissemination. Named Data Networking (NDN), an instantiation of the Content-Centric Networking approach, embraces this shift by stressing the content itself rather than where it is located. NDN aims to provide better security and privacy than the current Internet does, and resilience to Distributed Denial of Service (DDoS) attacks is a significant issue. In this paper, we present a specific and concrete scenario of a DDoS attack in NDN, where perpetrators exploit NDN's packet forwarding rules to send out Interest packets with spoofed names as attacking packets. We then identify that the victims of NDN DDoS attacks include both hosts and routers, but the largest victim is not the hosts: it is the routers, more specifically, the Pending Interest Table (PIT) within each router. The PIT gives NDN many elegant features, but it is also vulnerable. We propose Interest traceback as a countermeasure against the studied NDN DDoS attacks; it traces back to the originator of the attacking Interest packets. Finally, we assess the harmful consequences of these NDN DDoS attacks and evaluate the Interest traceback countermeasure. Evaluation results reveal that the Interest traceback method effectively mitigates the NDN DDoS attacks studied in this paper.
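The traceback idea above can be sketched as follows: a lingering PIT entry for a spoofed name records which face (here, which downstream router) the attacking Interest arrived on, so routers can follow those records hop by hop toward the source. The topology encoding and the per-router record format are illustrative assumptions, not the paper's message design.

```python
# Hedged sketch of Interest traceback over recorded PIT incoming faces.
def trace_back(routers, start, name):
    """routers: router_id -> {name: downstream router the Interest came
    from, or None at the attacker-facing edge}. Follow the records from
    `start` toward the origin of the spoofed-name Interests."""
    path = [start]
    current = start
    while True:
        face = routers[current].get(name)
        if face is None or face not in routers:
            return path  # reached the edge router nearest the attacker
        current = face
        path.append(current)
```

The walk terminates at the edge router whose PIT entry has no further downstream record, which is where rate-limiting or filtering would be applied.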
international conference on computer communications | 2013
Yi Wang; Tian Pan; Zhian Mi; Huichen Dai; Xiaoyu Guo; Ting Zhang; Bin Liu; Qunfeng Dong
In this paper we design, implement, and evaluate NameFilter, a two-stage Bloom filter-based scheme for Named Data Networking name lookup, in which the first stage determines the length of a name prefix, and the second stage looks up the prefix in a narrowed group of Bloom filters based on the results from the first stage. Moreover, we optimize the hash value calculation of name strings, as well as the data structure that stores the multiple Bloom filters, which significantly reduces the number of memory accesses compared with non-optimized Bloom filters. We conduct extensive experiments on a commodity server to test NameFilter's throughput, memory occupation, name updates, and scalability. Evaluation results on a name prefix table with 10 M entries show that our proposed scheme achieves a lookup throughput of 37 million searches per second at a low memory cost of only 234.27 MB, a 12-times speedup and 77% memory savings compared with a traditional character trie. The results also demonstrate that NameFilter achieves 3 M incremental updates per second and exhibits good scalability to large-scale prefix tables.
international workshop on quality of service | 2014
Yi Wang; Boyang Xu; Dongzhe Tai; Jianyuan Lu; Ting Zhang; Huichen Dai; Beichuan Zhang; Bin Liu
The complex constitution of names, combined with the huge name routing table, makes wire-speed name lookup a challenging task in Named Data Networking. To overcome this challenge, we propose two techniques that significantly speed up the lookup process. First, we look up name prefixes in an order based on the distribution of prefix lengths in the forwarding table, which finds the longest match much faster than the linear search of the current prototype, CCNx. The search order can be dynamically adjusted as the forwarding table changes. Second, we propose a new near-perfect hash table data structure that combines many small, sparse perfect hash tables into a larger dense one while keeping a worst-case access time of O(1) and supporting fast updates. The hash table also stores the signature of a key instead of the key itself, which further improves lookup speed and reduces memory use.
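The first technique, probing prefix lengths in order of how common they are in the table, can be sketched as below. This simplified version keeps the longest match found across all probed lengths; the paper's mechanism for terminating the probe sequence early and its hash-table internals are not modeled.

```python
from collections import Counter

# Hedged sketch: probe prefix lengths by table popularity so the common
# case is found in the first one or two probes.
class OrderedLPM:
    def __init__(self):
        self.by_length = {}          # length -> set of prefixes
        self.length_counts = Counter()

    def insert(self, prefix):
        n = len(prefix.strip("/").split("/"))
        self.by_length.setdefault(n, set()).add(prefix)
        self.length_counts[n] += 1   # distribution drives search order

    def lookup(self, name):
        comps = name.strip("/").split("/")
        best, best_len = None, 0
        for n, _ in self.length_counts.most_common():
            if n <= len(comps):
                candidate = "/" + "/".join(comps[:n])
                if candidate in self.by_length[n] and n > best_len:
                    best, best_len = candidate, n
        return best
```

Because `length_counts` is updated on every insert, the probe order adapts automatically as the forwarding table changes, mirroring the dynamic adjustment the abstract describes.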
global communications conference | 2011
Yi Wang; Huichen Dai; Junchen Jiang; Keqiang He; Wei Meng; Bin Liu
Name-based route lookup is a key function of Named Data Networking (NDN). NDN names are hierarchical and have variable, unbounded lengths, much longer than IPv4/6 addresses, which makes fast name lookup a challenging problem. In this paper, we propose a parallel architecture for NDN name lookup called Parallel Name Lookup (PNL), which leverages hardware parallelism to achieve a high lookup speedup while keeping memory redundancy low and controllable. The core of PNL is an allocation algorithm that maps the logically tree-based structure onto physically parallel modules with low computational complexity. We evaluate PNL's performance and show that it dramatically accelerates the name lookup process. Furthermore, with certain knowledge of prior probabilities, the speedup can be improved significantly.
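The idea of mapping a tree-based structure onto parallel modules can be illustrated with a deliberately simple policy: assign each top-level subtrie to a module so sibling subtries can be probed concurrently. The round-robin policy below is an assumption for illustration only; PNL's actual allocation algorithm balances load and memory redundancy more carefully.

```python
# Hedged sketch of trie-to-module allocation for parallel lookup.
def allocate_modules(root_children, num_modules):
    """Assign each top-level subtrie (named by its first component)
    to a hardware module, round-robin over sorted names."""
    assignment = {}
    for i, subtrie in enumerate(sorted(root_children)):
        assignment[subtrie] = i % num_modules
    return assignment
```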
global communications conference | 2012
Huichen Dai; Jianyuan Lu; Yi Wang; Bin Liu
Routing is undoubtedly the foundation of NDN's data transmission service. We propose a two-layer routing protocol for NDN [1], [2], composed of a Topology Maintaining (TM) layer and a Prefix Announcing (PA) layer. The underlying TM layer maintains the full topology of an NDN network domain and calculates the shortest-path trees. The upper PA layer provides content in two ways: active publishing and passive serving. However, solely adopting either of them leads to scalability problems. We compare the efficiency and cost of the two methods; evaluation results show that active publishing is much more efficient than passive serving in terms of triggered traffic, but actively publishing all content leads to Forwarding Information Base (FIB) explosion. Therefore, we further propose a popularity-based active publishing policy that strikes a compromise between the active and passive methods. Moreover, we put forward several methods to aggregate FIB entries, and the FIB size shrinks effectively after aggregation. This routing protocol complies with NDN's characteristics and supports NDN multipath routing.
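One of the simplest FIB-aggregation ideas in the spirit of the abstract's entry-merging is: a child prefix whose next hop equals that of its longest covering ancestor is redundant under longest-prefix-match forwarding and can be dropped. The single-next-hop FIB model below is a simplifying assumption; the paper's aggregation methods are not reproduced here.

```python
# Hedged sketch of FIB aggregation under longest-prefix-match semantics.
def aggregate_fib(fib):
    """fib: dict mapping name prefix -> next hop; returns a shrunk FIB
    that forwards every name to the same next hop as the original."""
    kept = {}
    # Process shorter prefixes first so ancestors are decided before
    # their descendants.
    for prefix in sorted(fib, key=lambda p: p.count("/")):
        comps = prefix.strip("/").split("/")
        ancestor_hop = None
        # Find the longest already-kept ancestor of this prefix.
        for end in range(len(comps) - 1, 0, -1):
            parent = "/" + "/".join(comps[:end])
            if parent in kept:
                ancestor_hop = kept[parent]
                break
        if ancestor_hop != fib[prefix]:
            kept[prefix] = fib[prefix]  # child differs: must be kept
    return kept
```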
international conference on distributed computing systems | 2012
Tong Yang; Ruian Duan; Jianyuan Lu; Shenjiang Zhang; Huichen Dai; Bin Liu
Routing tables in backbone routers continue to grow rapidly, with some now reaching 400K entries [1]. An effective way to deflate these large tables is routing table compression. Meanwhile, there is an increasingly urgent demand for fast routing updates, mainly due to changes in network topology and newly emerging Internet functionalities. Furthermore, Internet link speeds have scaled up to 100 Gbps commercially and toward 400 Gbps Ethernet in laboratory experiments, resulting in a pressing need for ultra-fast routing lookup. To achieve high performance, backbone routers must gracefully handle three issues simultaneously: routing table Compression, fast routing Lookup, and fast incremental Update (CLUE), while previous works often concentrate on only one of these dimensions. To address these issues, we propose a complete set of solutions, CLUE, by improving previous works and adding a novel incremental update mechanism. CLUE consists of three parts: a routing table compression algorithm, an improved parallel lookup mechanism, and a new fast incremental update mechanism. The routing table compression algorithm is based on the ONRTC algorithm [2], a basis for fast TCAM parallel lookup and fast TCAM updates. The second part improves the logical caching scheme of the dynamic load-balancing parallel lookup mechanism [3]. The third is the conjunction of the trie, TCAM, and redundant-prefix update algorithm. We analyze the performance of CLUE by mathematical proof and conclude that the speedup factor is proportional to the hit rate of redundant prefixes in the worst case, which is also confirmed by experimental results. Large-scale experiments show that, compared with the mechanism in [3], CLUE needs only about 71% of the TCAM entries, 4.29% of the update time, and 3/4 of the dynamic redundant prefixes for the same throughput when using four TCAMs. In addition, CLUE has another advantage over the mechanism in [3]: it avoids the frequent interactions between the control plane and the data plane caused by redundant prefix updates.
international conference on computer communications | 2015
Huichen Dai; Jianyuan Lu; Yi Wang; Bin Liu
Named Data Networking (NDN), an instantiation of the Content-Centric Networking (CCN) approach, embraces the major shift of network function from host-to-host conversation to content dissemination. The NDN forwarding architecture consists of three tables, the Content Store (CS), the Pending Interest Table (PIT), and the Forwarding Information Base (FIB), as well as two lookup rules, Longest Prefix Match (LPM) and Exact Match (EM). A software-based implementation of this forwarding architecture would be low-cost and flexible and have rich memory resources, but it may also make the pipelining technique not readily applicable to table lookups. Forwarding a packet would therefore traverse multiple tables sequentially without pipelining, leading to high latency and low throughput. To take advantage of a software-based implementation while overcoming this shortcoming, we find that a single unified index supporting all three tables and both the LPM and EM lookup rules benefits forwarding performance. In this paper, we present such an index data structure called BFAST (Bloom Filter-Aided haSh Table). BFAST employs a Counting Bloom Filter to balance the load among hash table buckets, making the number of prefixes in each non-empty bucket close to 1 and thus enabling high lookup throughput and low latency. Evaluation results show that, for LPM lookup alone, BFAST reaches 36.41 million lookups per second (M/s) using 24 threads, with a latency of around 0.46 μs. When used to build the NDN forwarding architecture, BFAST achieves remarkable performance under various request compositions; for example, it reaches a lookup speed of 81.32 M/s on a synthetic request trace where 30% of requests hit the CS, another 30% hit the PIT, and the remaining 40% hit the FIB, while the lookup latency is only 0.29 μs.
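The counter-guided load balancing at the heart of the abstract can be sketched as follows: each key has k candidate buckets, a counter array tracks per-bucket load, insertion picks the least-loaded candidate, and lookup probes lightly loaded candidates first. The sizing, hashing, and probe-termination details are assumptions for illustration, not BFAST's exact design.

```python
import hashlib

# Hedged sketch of a counter-balanced hash table in the spirit of BFAST.
class BalancedHashTable:
    def __init__(self, m=64, k=3):
        self.m, self.k = m, k
        self.counters = [0] * m              # per-bucket load counters
        self.buckets = [[] for _ in range(m)]

    def _candidates(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def insert(self, key, value):
        # Place the key in its least-loaded candidate bucket, keeping
        # non-empty buckets short (ideally length 1).
        best = min(self._candidates(key), key=lambda b: self.counters[b])
        self.buckets[best].append((key, value))
        self.counters[best] += 1

    def lookup(self, key):
        # The key lives in one of its candidates; probe the lightly
        # loaded ones first, since insertion favors them.
        for b in sorted(self._candidates(key),
                        key=lambda b: self.counters[b]):
            for stored, value in self.buckets[b]:
                if stored == key:
                    return value
        return None
```

Keeping buckets near length 1 is what bounds the number of key comparisons per lookup, which is the property the abstract credits for BFAST's throughput and latency.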
measurement and modeling of computer systems | 2013
Yi Wang; Dongzhe Tai; Ting Zhang; Jianyuan Lu; Boyang Xu; Huichen Dai; Bin Liu
Unlike IP-based routers, Named Data Networking routers forward packets by content names, which consist of characters and have variable, unbounded lengths. This complex name constitution, plus the huge name routing table, makes wire-speed name lookup an extremely challenging task. We propose a greedy name lookup mechanism that speeds up lookup by dynamically adjusting the search path in response to changes in the prefix table. Meanwhile, we elaborate a string-oriented perfect hash table that stores the signature of a key in each entry instead of the key itself, reducing memory consumption. Extensive experimental results on a commodity PC server with 3 million name prefix entries demonstrate that the greedy name lookup mechanism achieves 57.14 million searches per second using only 72.95 MB of memory.
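The signature-in-entry idea described above can be sketched briefly: store a short fingerprint of each name rather than the variable-length string, trading a tiny false-positive probability for much less memory and cheaper comparisons. The 32-bit signature width and 256-bucket table are illustrative assumptions; the paper uses a perfect hash construction not modeled here.

```python
import hashlib

BUCKETS = 256  # illustrative table size, not the paper's sizing

def signature(name):
    # 32-bit fingerprint of the name string.
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")

# Hedged sketch: entries hold (signature, next_hop), never the name.
class SignatureTable:
    def __init__(self):
        self.slots = [[] for _ in range(BUCKETS)]

    def insert(self, name, next_hop):
        sig = signature(name)
        self.slots[sig % BUCKETS].append((sig, next_hop))

    def lookup(self, name):
        sig = signature(name)
        for s, hop in self.slots[sig % BUCKETS]:
            if s == sig:   # compare fixed-size signatures, not strings
                return hop
        return None
```

Comparing two 32-bit integers costs the same regardless of name length, which is why signature storage helps lookup speed as well as memory.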