Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jianyuan Lu is active.

Publication


Featured research published by Jianyuan Lu.


International Workshop on Quality of Service | 2014

Fast name lookup for Named Data Networking

Yi Wang; Boyang Xu; Dongzhe Tai; Jianyuan Lu; Ting Zhang; Huichen Dai; Beichuan Zhang; Bin Liu

The complex constitution of names, combined with a huge name routing table, makes wire-speed name lookup a challenging task in Named Data Networking. To overcome this challenge, we propose two techniques that significantly speed up the lookup process. First, we look up name prefixes in an order based on the distribution of prefix lengths in the forwarding table, which finds the longest match much faster than the linear search of the current prototype, CCNx. The search order can be dynamically adjusted as the forwarding table changes. Second, we propose a new near-perfect hash table data structure that combines many small, sparse perfect hash tables into a larger, dense one while keeping a worst-case access time of O(1) and supporting fast updates. In addition, the hash table stores the signature of a key instead of the key itself, which further improves lookup speed and reduces memory use.
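The signature idea in the second technique can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the bucket/signature hash split, table sizes, and chaining collision handling are all assumptions.

```python
import hashlib

class SignatureHashTable:
    """Sketch of a hash table that stores a short signature (fingerprint)
    of each key instead of the key itself, trading a tiny false-positive
    probability for less memory and cheaper comparisons."""

    def __init__(self, num_buckets=1024, sig_bits=32):
        self.num_buckets = num_buckets
        self.sig_mask = (1 << sig_bits) - 1
        self.buckets = [[] for _ in range(num_buckets)]

    def _hashes(self, key):
        # Derive bucket index and signature from one digest (an assumption).
        digest = hashlib.sha256(key.encode()).digest()
        idx = int.from_bytes(digest[:8], "little") % self.num_buckets
        sig = int.from_bytes(digest[8:16], "little") & self.sig_mask
        return idx, sig

    def insert(self, key, value):
        idx, sig = self._hashes(key)
        self.buckets[idx].append((sig, value))

    def lookup(self, key):
        idx, sig = self._hashes(key)
        for s, v in self.buckets[idx]:
            if s == sig:            # compare signatures, not full name strings
                return v
        return None
```

Because only the fixed-width signature is compared, a lookup never touches the variable-length name after hashing, which is where the speed and memory savings come from.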


Global Communications Conference | 2012

A two-layer intra-domain routing scheme for Named Data Networking

Huichen Dai; Jianyuan Lu; Yi Wang; Bin Liu

Routing is the foundation of NDN's data transmission service. We propose a two-layer routing protocol for NDN [1], [2], composed of a Topology Maintaining (TM) layer and a Prefix Announcing (PA) layer. The underlying TM layer maintains the full topology of an NDN network domain and calculates shortest-path trees. The upper PA layer provides content in two ways: active publishing and passive serving. However, adopting either method alone leads to scalability problems. We compare the efficiency and cost of the two methods; evaluation results show that active publishing triggers far less traffic than passive serving, but actively publishing all content would cause Forwarding Information Base (FIB) explosion. We therefore propose a popularity-based active publishing policy that strikes a compromise between the active and passive methods. Moreover, we put forward several methods to aggregate FIB entries, and the FIB shrinks effectively after aggregation. This routing protocol is compliant with NDN's characteristics and supports NDN multipath routing.


International Conference on Network Protocols | 2012

An ultra-fast universal incremental update algorithm for trie-based routing lookup

Tong Yang; Zhian Mi; Ruian Duan; Xiaoyu Guo; Jianyuan Lu; Shenjiang Zhang; Xianda Sun; Bin Liu

With the rapid growth of the Internet, update messages in backbone routers arrive more and more frequently, driven by ever-increasing dynamic changes in network topology and new Internet functionalities; updates also often come in bursts. An update interrupts packet lookup in the router's data plane, so an inefficient incremental update algorithm slows down IP lookup and can badly degrade system performance during bursty updates. Among trie-based routing lookup algorithms, the binary trie has the best update complexity, O(W) (where W is the maximum depth of the trie), but its lookup is too slow to forward tens of gigabits per second in backbone routers. Various improved lookup algorithms based on the binary trie therefore pursue high speed, but sacrifice incremental update performance. To minimize the interruption that updates incur, we propose the Blind Spot (BS) algorithm, which picks out the updating nodes that would otherwise produce a domino effect, achieving an update complexity of O(lookup + h) while keeping lookup speed almost unchanged. Blind Spot is a universal methodology, applicable to all trie-based lookup algorithms. To evaluate its performance, we applied it to the Lulea [1] and LC-trie [2] algorithms as two representatives. Extensive experimental results show that both Lulea+BS and LC+BS achieve much faster updates than the binary trie, while keeping the same lookup speed as the original Lulea and LC-trie algorithms.
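For reference, the O(W) binary-trie baseline that the improved algorithms build on can be sketched like this. This is a minimal illustration, not the BS algorithm itself; the dict-based node layout is an assumption.

```python
class BinaryTrie:
    """Baseline binary trie: both update and longest-prefix lookup walk at
    most W nodes, where W is the maximum prefix length. Faster lookup
    structures (Lulea, LC-trie) compress this trie, which is what makes
    their incremental updates hard."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix, next_hop):
        # O(W): walk/create one node per bit, then attach the next hop.
        node = self.root
        for bit in prefix:
            node = node.setdefault(bit, {})
        node["nh"] = next_hop

    def longest_prefix_match(self, addr_bits):
        # O(W): remember the deepest next hop seen along the path.
        node, best = self.root, None
        for bit in addr_bits:
            if "nh" in node:
                best = node["nh"]
            if bit not in node:
                return best
            node = node[bit]
        return node.get("nh", best)
```

Updating this trie touches only one root-to-leaf path, which is exactly the property the compressed structures lose and the BS algorithm works to restore.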


International Conference on Distributed Computing Systems | 2012

CLUE: Achieving Fast Update over Compressed Table for Parallel Lookup with Reduced Dynamic Redundancy

Tong Yang; Ruian Duan; Jianyuan Lu; Shenjiang Zhang; Huichen Dai; Bin Liu

Routing tables in backbone routers continue to grow rapidly, with some now exceeding 400K entries [1]. Routing table compression is an effective way to deflate these large tables. Meanwhile, there is an increasingly urgent demand for fast routing updates, driven mainly by changes in network topology and new Internet functionalities. Furthermore, Internet link speeds have reached 100 Gbps commercially, with 400 Gbps Ethernet in laboratory experiments, creating a pressing need for ultra-fast routing lookup. To achieve high performance, backbone routers must gracefully handle three issues simultaneously: routing table Compression, fast routing Lookup, and fast incremental Update (CLUE), whereas previous works typically concentrate on only one of these dimensions. We propose a complete set of solutions, CLUE, which improves on previous works and adds a novel incremental update mechanism. CLUE consists of three parts: a routing table compression algorithm, an improved parallel lookup mechanism, and a new fast incremental update mechanism. The compression algorithm builds on the ONRTC algorithm [2], a basis for fast TCAM parallel lookup and fast TCAM update. The second part improves the logical caching scheme of the dynamic load-balancing parallel lookup mechanism in [3]. The third combines the trie, TCAM, and a redundant-prefix update algorithm. We analyze CLUE's performance mathematically and conclude that, in the worst case, the speedup factor is proportional to the hit rate of redundant prefixes, which experimental results confirm. Large-scale experiments show that, compared with the mechanism in [3], CLUE needs only about 71% of the TCAM entries, 4.29% of the update time, and 3/4 of the dynamic redundant prefixes for the same throughput when using four TCAMs. In addition, CLUE avoids the frequent interactions between the control plane and the data plane that redundant-prefix updates cause in [3].
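The role that entry ordering plays in TCAM-based longest-prefix lookup can be seen in a toy model. This is purely illustrative: a real TCAM compares all entries in parallel and a priority encoder picks the first match, and CLUE's actual mechanisms are far more involved.

```python
def build_tcam(fib):
    """Order entries by decreasing prefix length so the first matching
    entry is automatically the longest prefix match, mimicking how a
    TCAM's priority encoder resolves multiple simultaneous matches."""
    return sorted(fib.items(), key=lambda e: len(e[0]), reverse=True)

def tcam_lookup(entries, addr_bits):
    """Return the next hop of the first (hence longest) matching prefix."""
    for prefix, nh in entries:
        if addr_bits.startswith(prefix):
            return nh
    return None
```

This ordering constraint is why TCAM updates are expensive: inserting a prefix may require shifting entries to keep lengths sorted, which is one motivation for update-friendly schemes like CLUE's.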


International Conference on Computer Communications | 2015

BFAST: Unified and scalable index for NDN forwarding architecture

Huichen Dai; Jianyuan Lu; Yi Wang; Bin Liu

Named Data Networking (NDN), an instantiation of the Content-Centric Networking (CCN) approach, embraces a major shift in network function from host-to-host conversation to content dissemination. The NDN forwarding architecture consists of three tables, the Content Store (CS), Pending Interest Table (PIT), and Forwarding Information Base (FIB), and two lookup rules, Longest Prefix Match (LPM) and Exact Match (EM). A software-based implementation of this forwarding architecture is low-cost and flexible and has rich memory resources, but pipelining is not readily applicable to its table lookups. Forwarding a packet therefore traverses multiple tables sequentially without pipelining, leading to high latency and low throughput. To retain the advantages of a software-based implementation while overcoming this shortcoming, we observe that a single unified index supporting all three tables and both LPM and EM lookup rules benefits forwarding performance. In this paper, we present such an index data structure, called BFAST (Bloom Filter-Aided haSh Table). BFAST employs a Counting Bloom Filter to balance load among hash table buckets, keeping the number of prefixes in each non-empty bucket close to 1 and thus enabling high lookup throughput and low latency. Evaluation results show that, for LPM lookup alone, BFAST reaches 36.41 million lookups per second (M/s) using 24 threads, with a latency of around 0.46 μs. When used to build the NDN forwarding architecture, BFAST delivers remarkable performance under various request compositions; for example, it achieves 81.32 M/s on a synthetic request trace where 30% of requests hit the CS, another 30% hit the PIT, and the remaining 40% hit the FIB, with a lookup latency of only 0.29 μs.
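The bucket-balancing idea described above can be sketched as follows: per-bucket counters steer each key to its least-loaded candidate bucket. This is a simplified illustration; BFAST's real structure differs, and the number of candidate buckets (d), table size, and hash derivation here are assumptions.

```python
import hashlib

class CBFBalancedTable:
    """Sketch of counter-steered bucket balancing: each key has d candidate
    buckets, and insertion picks the one whose counter is smallest, keeping
    occupancy of non-empty buckets close to 1."""

    def __init__(self, num_buckets=1024, d=3):
        self.num_buckets, self.d = num_buckets, d
        self.counters = [0] * num_buckets       # the counting-filter role
        self.buckets = [[] for _ in range(num_buckets)]

    def _candidates(self, key):
        # d candidate buckets derived from one digest (an assumption).
        digest = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "little")
                % self.num_buckets for i in range(self.d)]

    def insert(self, key, value):
        idx = min(self._candidates(key), key=lambda i: self.counters[i])
        self.counters[idx] += 1
        self.buckets[idx].append((key, value))

    def lookup(self, key):
        # A lookup only probes the d candidate buckets.
        for i in self._candidates(key):
            for k, v in self.buckets[i]:
                if k == key:
                    return v
        return None
```

The "power of d choices" effect keeps the longest chain very short even as the table fills, which is what bounds lookup latency.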


Measurement and Modeling of Computer Systems | 2013

Greedy name lookup for named data networking

Yi Wang; Dongzhe Tai; Ting Zhang; Jianyuan Lu; Boyang Xu; Huichen Dai; Bin Liu

Unlike IP-based routers, Named Data Networking routers forward packets by content names, which consist of characters and have variable, unbounded length. This complex name constitution, plus the huge name routing table, makes wire-speed name lookup an extremely challenging task. We propose a greedy name lookup mechanism that speeds up lookup by dynamically adjusting the search path as the prefix table changes. We also elaborate a string-oriented perfect hash table that stores the signature of a key in each entry instead of the key itself, reducing memory consumption. Extensive experiments on a commodity PC server with 3 million name prefix entries demonstrate that the greedy name lookup mechanism achieves 57.14 million searches per second using only 72.95 MB of memory.
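The underlying idea of ordering probes by the prefix-length distribution can be sketched as below. This hedged sketch simply probes every length present and keeps the longest hit; the actual mechanism maintains auxiliary state so frequent lengths are tried first and the search can stop early. Names are modeled as tuples of components, an assumption of this sketch.

```python
from collections import Counter

def greedy_lpm(fib, name):
    """Probe prefix lengths in descending order of how often each length
    occurs in the table, and return the longest match found. `fib` maps a
    tuple of name components to a next hop."""
    lengths = Counter(len(p) for p in fib)            # prefix-length histogram
    order = sorted(lengths, key=lambda l: -lengths[l])
    best = None
    for l in order:
        if l <= len(name):
            prefix = name[:l]                          # first l components
            if prefix in fib and (best is None or l > best[0]):
                best = (l, fib[prefix])
    return best[1] if best else None
```

When most table entries cluster around a few lengths, frequency-ordered probing makes the expected first hit come much sooner than probing from the longest possible length downward.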


International Workshop on Quality of Service | 2014

Towards Line-Speed and Accurate On-line Popularity Monitoring on NDN Routers

Huichen Dai; Yi Wang; Hao Wu; Jianyuan Lu; Bin Liu

NDN enables routers to cache received content for future requests, reducing upstream traffic. Various caching policies have been proposed to this end, typically based on some notion of content popularity (e.g., LFU), but these policies simply assume popularity information is available without explaining how routers obtain and maintain it. Toward line-speed and accurate on-line popularity monitoring on NDN routers, we propose a Bloom filter-based method that continuously captures content popularity with efficient memory usage. Multiple Bloom filters are employed, each responsible for a particular range of popularity; a content object whose popularity falls into a Bloom filter's range is inserted into that filter. Meanwhile, a sliding-window monitoring scheme enables more frequent, real-time popularity updates. We further put forward three optimization schemes to speed up the monitoring operations. Using a real trace stored in off-chip memory as input and a 30-minute monitoring window, the method achieves a monitoring speed of 20.92 million objects per second (M/s) with multiple threads. This is equivalent to 16.74 Gbps of throughput, assuming an average content length of 100 bytes, while consuming only around 32 MB of memory. Simulating the line-card environment with a synthetic trace generated in real time, the method even reaches 251.07 M/s (equivalent to 200.86 Gbps), because the trace is fetched from high-speed on-chip memory rather than off-chip DRAM. Furthermore, both theoretical and experimental analyses show that the method has very low relative error. Finally, a real trace-driven comparison shows that the LFU policy achieves a higher hit rate than LRU with far fewer unnecessary cache replacements.
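The multi-Bloom-filter arrangement can be sketched as follows, with filter i answering "seen at least i+1 times?". This is an illustrative simplification: filter sizes, hash choices, and the promote-on-access rule are assumptions, and the sliding window is omitted.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over an integer bitmask."""
    def __init__(self, m=4096, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        d = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(d[4 * i:4 * i + 4], "little") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class PopularityMonitor:
    """Sketch of popularity ranges as stacked Bloom filters: an access
    promotes the object into the first filter that does not yet hold it,
    so an object's level approximates its access count."""
    def __init__(self, levels=8):
        self.filters = [BloomFilter() for _ in range(levels)]

    def access(self, name):
        for f in self.filters:
            if name not in f:
                f.add(name)
                break

    def popularity(self, name):
        level = 0
        for f in self.filters:
            if name not in f:
                break
            level += 1
        return level
```

Only set-membership bits are stored, never per-object counters, which is why memory stays small even for very large object populations.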


Global Communications Conference | 2013

NDNBench: A benchmark for Named Data Networking lookup

Ting Zhang; Yi Wang; Tong Yang; Jianyuan Lu; Bin Liu

Content-Centric Networking (CCN) and the later-proposed Named Data Networking (NDN) have attracted wide attention in both academia and industry as clean-slate future Internet architectures. Wire-speed name lookup for packet forwarding is one of the most challenging tasks in CCN/NDN, and its feasibility, including achievable speed, scalability, and update performance, must be evaluated in depth. However, CCN/NDN is still at an early stage and no actual network has been deployed, so no real name routing tables or NDN traffic are available. To enable performance comparisons among innovative name lookup solutions and facilitate future name lookup research, we present NDNBench, a publicly available platform for evaluating, comparing, and experimenting with different name lookup approaches. NDNBench can generate Forwarding Information Bases (FIBs) and traces with diverse structures and sizes, with adjustable parameters for thorough testing, and provides a flexible simulation package for evaluating name lookup approaches. To verify its effectiveness, we benchmark several existing name lookup schemes, with very supportive results. NDNBench has been applied in recent work and is publicly available at http://s-router.cs.tsinghua.edu.cn/~zhangting/.


International Conference on Parallel and Distributed Systems | 2012

Effective Caching Schemes for Minimizing Inter-ISP Traffic in Named Data Networking

Jun Li; Hao Wu; Bin Liu; Jianyuan Lu

The Internet has evolved to be content-oriented, with its key usage focused on content dissemination and retrieval, while its architecture was designed for host-oriented services. To address this mismatch, Named Data Networking (NDN) has been proposed, in which in-network caching becomes a new research topic due to its dominant position in the NDN architecture. This work develops efficient caching schemes for Internet Service Providers (ISPs) to maximize inter-ISP traffic savings. With this goal, we design a caching system according to the NDN network model and present coordinated caching algorithms that dynamically determine cache placement along the forwarding path. Comprehensive simulation results show that our schemes outperform the widely used Leave Copies Everywhere (LCE) policy in both inter-ISP traffic savings and the average number of access hops, by up to 20%. In addition, we demonstrate the feasibility of the proposed caching algorithms in simulations spanning a wide range of parameter values.
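The resource trade-off against LCE can be seen in a toy path-cache simulation. This is purely illustrative: the paper's algorithms choose placement dynamically along the path, whereas this sketch only contrasts LCE with a fixed cache-at-the-client-side-router policy, and caches here have unlimited capacity.

```python
def simulate(requests, path_len, place):
    """Toy model: routers sit at hop distances 1..path_len from the client,
    with the origin server at distance path_len + 1. `place` picks which
    routers cache an object after a server fetch. Returns total hops
    traveled and total cached copies created."""
    caches = [set() for _ in range(path_len)]
    hops = copies = 0
    for obj in requests:
        dist = next((i + 1 for i in range(path_len) if obj in caches[i]),
                    path_len + 1)               # nearest cache hit, else server
        hops += dist
        if dist == path_len + 1:                # miss everywhere: fetch + place
            for i in place(path_len):
                caches[i].add(obj)
                copies += 1
    return hops, copies

def lce(n):
    """Leave Copies Everywhere: every router on the path caches."""
    return range(n)

def edge(n):
    """Coordinated sketch: cache only at the client-side router."""
    return [0]
```

With unlimited capacity both policies serve repeats from the nearest router, but LCE burns a cache slot at every hop per object; under real capacity limits those extra copies turn into evictions, which is where coordinated placement wins.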


International Workshop on Quality of Service | 2014

Power-proportional Router: Architectural Design and Experimental Evaluation

Bin Liu; Jianyuan Lu; Yi Kai; Yi Wang; Tian Pan

High-speed Internet routers are becoming increasingly powerful, and increasingly energy hungry. Yet they remain power-inefficient, because their design has historically pursued speed alone. In response, we present a power-efficient router architecture named GreenRouter. GreenRouter physically separates a line card into two parts, the network interface card (DB) and the packet processing card (MB), interconnected by a two-stage unidirectional switch fabric. Traffic from all DBs shares all MBs, so when traffic is light it can be aggregated onto a few active MBs and the inactive MBs can be shut down to save power. We give the detailed architectural design of GreenRouter. Real-trace-driven experiments show that GreenRouter saves about 50% of power compared with a conventional router when the average traffic load is 30%, while still providing quality-of-service guarantees.
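The arithmetic behind the aggregation idea can be sketched very simply: activate only as many MBs as the offered load requires. Normalized units and the ceiling rule are assumptions of this sketch, not GreenRouter's actual scheduler.

```python
import math

def active_mbs(total_mbs, offered_load, mb_capacity=1.0):
    """Return how many packet-processing cards (MBs) must stay powered on
    to carry `offered_load`, given each MB handles `mb_capacity` units.
    At least one MB stays on; the count never exceeds the cards installed."""
    needed = max(1, math.ceil(offered_load / mb_capacity))
    return min(total_mbs, needed)
```

At 30% average load, this sketch keeps 3 of 10 MBs active, which illustrates where savings of the reported magnitude can come from once idle cards are shut down.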

Collaboration


Dive into Jianyuan Lu's collaborations.

Top Co-Authors


Hao Wu (Tsinghua University)
