Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Shaoli Liu is active.

Publication


Featured research published by Shaoli Liu.


International Symposium on Microarchitecture | 2014

DaDianNao: A Machine-Learning Supercomputer

Yunji Chen; Tao Luo; Shaoli Liu; Shijin Zhang; Liqiang He; Jia Wang; Ling Li; Tianshi Chen; Zhiwei Xu; Ninghui Sun; Olivier Temam

Many companies are deploying services, either for consumers or industry, that are largely based on machine-learning algorithms for sophisticated processing of large amounts of data. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be both computationally and memory intensive. A number of neural network accelerators have recently been proposed that offer a high computational capacity/area ratio, but they remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the memory footprint of CNNs and DNNs, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the CNN/DNN algorithmic characteristics, can lead to high internal bandwidth and low external communication, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip system. We implement the node down to place and route at 28 nm, containing a combination of custom storage and computational units, with industry-grade interconnects.
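
To make the abstract's central claim concrete, below is a minimal back-of-the-envelope sketch of whether a large model's weights fit in the aggregate on-chip storage of a multi-chip system; the per-chip capacity and model size used here are illustrative assumptions, not figures from the paper.

# Hypothetical check of the abstract's key claim: a large NN's weights,
# while too big for a single chip, can fit in the aggregate on-chip storage
# of a multi-chip system. All capacities below are assumed illustrative
# values, not figures from the paper.

def fits_on_chip(num_weights, bytes_per_weight, chips, storage_per_chip_mib):
    """Return True if the weight footprint fits in the combined on-chip storage."""
    footprint_mib = num_weights * bytes_per_weight / 2**20
    total_storage_mib = chips * storage_per_chip_mib
    return footprint_mib <= total_storage_mib

# Example: 1 billion 16-bit weights (~1.9 GiB) on a 64-chip system with an
# assumed 36 MiB of on-chip storage per chip (~2.25 GiB in total).
print(fits_on_chip(1_000_000_000, 2, chips=64, storage_per_chip_mib=36))   # True
print(fits_on_chip(1_000_000_000, 2, chips=1,  storage_per_chip_mib=36))   # False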


Architectural Support for Programming Languages and Operating Systems | 2015

PuDianNao: A Polyvalent Machine Learning Accelerator

Daofu Liu; Tianshi Chen; Shaoli Liu; Jinhong Zhou; Shengyuan Zhou; Olivier Temam; Xiaobing Feng; Xuehai Zhou; Yunji Chen

Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy efficiency is limited by their excessive support for flexibility. Hardware accelerators may achieve better energy efficiency, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even setting aside learning accuracy, such an accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques: k-means, k-nearest neighbors, naive Bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2, and consumes only 596 mW. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.
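
As a quick sanity check on the quoted figures, the short calculation below converts the reported throughput, power, and area into efficiency ratios; the derived values are simple arithmetic on the abstract's numbers, not results reported by the paper.

# Simple ratios derived from the throughput/power/area figures quoted in the
# abstract, just to put them in familiar units.

peak_gops = 1056.0        # reported peak throughput, GOP/s
power_w   = 0.596         # reported power, 596 mW
area_mm2  = 3.51          # reported area, mm^2

print(f"efficiency: {peak_gops / power_w:.0f} GOP/s per watt")   # ~1772
print(f"density:    {peak_gops / area_mm2:.0f} GOP/s per mm^2")  # ~301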


International Symposium on Computer Architecture | 2016

Cambricon: An Instruction Set Architecture for Neural Networks

Shaoli Liu; Zidong Du; Jinhua Tao; Dong Han; Tao Luo; Yuan Xie; Yunji Chen; Tianshi Chen

Neural Networks (NN) are a family of models for a broad range of emerging machine learning and pattern recondition applications. NN techniques are conventionally executed on general-purpose processors (such as CPU and GPGPU), which are usually not energy-efficient since they invest excessive hardware resources to flexibly support various workloads. Consequently, application-specific hardware accelerators for neural networks have been proposed recently to improve the energy-efficiency. However, such accelerators were designed for a small set of NN techniques sharing similar computational patterns, and they adopt complex and informative instructions (control signals) directly corresponding to high-level functional blocks of an NN (such as layers), or even an NN as a whole. Although straightforward and easy-to-implement for a limited set of similar NN techniques, the lack of agility in the instruction set prevents such accelerator designs from supporting a variety of different NN techniques with sufficient flexibility and efficiency. In this paper, we propose a novel domain-specific Instruction Set Architecture (ISA) for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques. Our evaluation over a total of ten representative yet distinct NN techniques have demonstrated that Cambricon exhibits strong descriptive capacity over a broad range of NN techniques, and provides higher code density than general-purpose ISAs such as ×86, MIPS, and GPGPU. Compared to the latest state-of-the-art NN accelerator design DaDianNao [5] (which can only accommodate 3 types of NN techniques), our Cambricon-based accelerator prototype implemented in TSMC 65nm technology incurs only negligible latency/power/area overheads, with a versatile coverage of 10 different NN benchmarks.
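
As a rough illustration of what a load-store NN ISA of this kind can look like, the sketch below lowers a single fully connected layer into a short sequence of data-transfer, matrix, and vector instructions; the mnemonics and operand format are hypothetical and do not reproduce the actual Cambricon encoding.

# A minimal sketch of a fully connected layer expressed as a load-store
# instruction sequence in the spirit described by the abstract (data-transfer,
# matrix, vector, and scalar instructions). Mnemonics and operands below are
# hypothetical, not the real Cambricon ISA.

fc_layer_program = [
    # (mnemonic, destination, sources / immediates)
    ("VLOAD",  "V0",      ("mem_in",  "input_size")),          # load input neurons
    ("MLOAD",  "M0",      ("mem_wgt", "input_size*out_size")), # load synaptic weights
    ("MMV",    "V1",      ("M0", "V0")),                       # matrix-vector multiply
    ("VAV",    "V1",      ("V1", "V_bias")),                   # add bias vector
    ("VGTM",   "V1",      ("V1", 0.0)),                        # elementwise max with 0 (ReLU)
    ("VSTORE", "mem_out", ("V1", "out_size")),                 # store output neurons
]

for insn in fc_layer_program:
    print(insn)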


International Symposium on Microarchitecture | 2016

Cambricon-X: An Accelerator for Sparse Neural Networks

Shijin Zhang; Zidong Du; Lei Zhang; Huiying Lan; Shaoli Liu; Ling Li; Qi Guo; Tianshi Chen; Yunji Chen

Neural networks (NNs) have been demonstrated to be useful in a broad range of applications such as image recognition, automatic translation and advertisement recommendation. State-of-the-art NNs are known to be both computationally and memory intensive, due to the ever-increasing deep structure, i.e., multiple layers with massive neurons and connections (i.e., synapses). Sparse neural networks have emerged as an effective solution to reduce the amount of computation and memory required. Though existing NN accelerators are able to efficiently process dense and regular networks, they cannot benefit from the reduction of synaptic weights. In this paper, we propose a novel accelerator, Cambricon-X, to exploit the sparsity and irregularity of NN models for increased efficiency. The proposed accelerator features a PE-based architecture consisting of multiple Processing Elements (PE). An Indexing Module (IM) efficiently selects and transfers needed neurons to connected PEs with reduced bandwidth requirement, while each PE stores irregular and compressed synapses for local computation in an asynchronous fashion. With 16 PEs, our accelerator is able to achieve at most 544 GOP/s in a small form factor (6.38 mm2 and 954 mW at 65 nm). Experimental results over a number of representative sparse networks show that our accelerator achieves, on average, 7.23x speedup and 6.43x energy saving against the state-of-the-art NN accelerator.
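
The following is a minimal software analogue of the indexing idea described in the abstract: synapses are stored in compressed (value, index) form and only the needed input neurons are gathered, so zero weights cost neither bandwidth nor computation. It is an illustrative sketch, not the hardware design.

import numpy as np

# Software analogue of the indexing idea: each output neuron keeps its
# non-zero synapses as (value, input index) pairs, and only the needed
# input neurons are fetched before the local dot product.

def sparse_neuron(inputs, weight_values, weight_indices):
    """Compute one output neuron from compressed (value, index) synapses."""
    gathered = inputs[weight_indices]               # "indexing module": select needed neurons
    return float(np.dot(gathered, weight_values))   # PE-local computation on compressed synapses

inputs = np.array([0.5, 1.0, -2.0, 3.0, 0.25])
# Only 2 of the 5 synapses are non-zero for this neuron.
values  = np.array([0.8, -0.1])
indices = np.array([1, 3])
print(sparse_neuron(inputs, values, indices))   # 0.8*1.0 + (-0.1)*3.0 = 0.5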


IEEE Transactions on Computers | 2017

DaDianNao: A Neural Network Supercomputer

Tao Luo; Shaoli Liu; Ling Li; Yuqing Wang; Shijin Zhang; Tianshi Chen; Zhiwei Xu; Olivier Temam; Yunji Chen

Many companies are deploying services largely based on machine-learning algorithms for sophisticated processing of large amounts of data, either for consumers or industry. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be computationally and memory intensive. A number of neural network accelerators have recently been proposed that offer a high computational capacity/area ratio, but they remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the memory footprint of CNNs and DNNs, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the CNN/DNN algorithmic characteristics, can lead to high internal bandwidth and low external communication, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines, and evaluate its performance with electrical and with optical inter-chip interconnects separately. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 656.63× over a GPU, and reduce the energy by 184.05× on average for a 64-chip system. We implement the node down to place and route at 28 nm, containing a combination of custom storage and computational units, with electrical inter-chip interconnects.


IEEE Transactions on Parallel and Distributed Systems | 2015

FreeRider: Non-Local Adaptive Network-on-Chip Routing with Packet-Carried Propagation of Congestion Information

Shaoli Liu; Tianshi Chen; Ling Li; Xi Li; Mingzhe Zhang; Chao Wang; Haibo Meng; Xuehai Zhou; Yunji Chen

Non-local adaptive routing techniques, which utilize the statuses of both local and distant links to make routing decisions, have recently been shown to be effective solutions for improving the performance of Network-on-Chip (NoC). The essence of non-local adaptive routing is an additional network dedicated to propagating congestion information about distant links across the NoC. While the dedicated Congestion Propagation Network (CPN) helps routers make promising routing decisions, it incurs additional wiring and power costs and becomes unnecessary overhead when the NoC load is light. Moreover, the CPN has to be extended if one wants to utilize more sophisticated congestion information to enhance the performance of the NoC, bringing in even larger wiring and power costs. This paper proposes an innovative non-local adaptive routing technique called FreeRider, which does not use a dedicated CPN but instead leverages free bits in the head flits of existing packets to carry and propagate rich congestion information without introducing additional wires or flits. In order to balance the network load, FreeRider adopts a novel three-stage strategy of output link selection, which adequately utilizes the propagated information to make routing decisions. Experimental results on both synthetic traffic patterns and application traces show that FreeRider achieves better throughput, shorter latency, and smaller power consumption than a state-of-the-art adaptive routing technique with a dedicated CPN.
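
A minimal sketch of the packet-carried mechanism described in the abstract is shown below: spare bits in a head flit are packed with per-direction congestion levels that downstream routers can read back. The field layout (four bits per direction) is an assumption made for illustration only.

# Illustrative packing of per-direction congestion levels into spare bits of
# a head flit, so congestion information rides along with existing packets
# instead of a dedicated congestion network. Field widths are assumed.

DIRECTIONS = ("north", "east", "south", "west")
BITS_PER_FIELD = 4  # assumed width of each congestion field

def pack_congestion(levels):
    """Pack per-direction congestion levels (0-15) into an integer bitfield."""
    word = 0
    for i, d in enumerate(DIRECTIONS):
        word |= (levels[d] & 0xF) << (i * BITS_PER_FIELD)
    return word

def unpack_congestion(word):
    """Recover per-direction congestion levels from the head-flit bitfield."""
    return {d: (word >> (i * BITS_PER_FIELD)) & 0xF for i, d in enumerate(DIRECTIONS)}

flit_bits = pack_congestion({"north": 3, "east": 12, "south": 0, "west": 7})
print(unpack_congestion(flit_bits))
# {'north': 3, 'east': 12, 'south': 0, 'west': 7}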


ACM Transactions on Computer Systems | 2015

A Small-Footprint Accelerator for Large-Scale Neural Networks

Tianshi Chen; Shijin Zhang; Shaoli Liu; Zidong Du; Tao Luo; Yuan Gao; Junjie Liu; Dongsheng Wang; Chueh-Hung Wu; Ninghui Sun; Yunji Chen; Olivier Temam

Machine-learning tasks are becoming pervasive in a broad range of domains, and in a broad range of systems (from embedded systems to data centers). At the same time, a small set of machine-learning algorithms (especially Convolutional and Deep Neural Networks, i.e., CNNs and DNNs) are proving to be state-of-the-art across many applications. As architectures evolve toward heterogeneous multicores composed of a mix of cores and accelerators, a machine-learning accelerator can achieve the rare combination of efficiency (due to the small number of target algorithms) and broad application scope. Until now, most machine-learning accelerator designs have focused on efficiently implementing the computational part of the algorithms. However, recent state-of-the-art CNNs and DNNs are characterized by their large size. In this study, we design an accelerator for large-scale CNNs and DNNs, with a special emphasis on the impact of memory on accelerator design, performance, and energy. We show that it is possible to design an accelerator with high throughput, capable of performing 452 GOP/s (key NN operations such as synaptic weight multiplications and neuron output additions) in a small footprint of 3.02 mm^2 and 485 mW; compared to a 128-bit 2GHz SIMD processor, the accelerator is 117.87× faster, and it can reduce the total energy by 21.08×. The accelerator characteristics are obtained after layout at 65nm. Such a high throughput in a small footprint can open up the usage of state-of-the-art machine-learning algorithms in a broad set of systems and for a broad set of applications.
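
To put the quoted numbers in perspective, the short calculation below derives the implied SIMD-baseline throughput and the accelerator's efficiency per watt; these ratios are plain arithmetic on the abstract's figures, not numbers reported in the paper.

# Simple ratios derived from the figures quoted in the abstract, making the
# comparison with the 128-bit SIMD baseline concrete.

accel_gops  = 452.0      # accelerator throughput, GOP/s
accel_power = 0.485      # accelerator power, watts
speedup     = 117.87     # reported speedup over the SIMD baseline
energy_gain = 21.08      # reported total-energy reduction

implied_simd_gops = accel_gops / speedup
print(f"implied SIMD baseline: ~{implied_simd_gops:.1f} GOP/s")                    # ~3.8 GOP/s
print(f"accelerator efficiency: ~{accel_gops / accel_power:.0f} GOP/s per watt")   # ~932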


IEEE Transactions on Parallel and Distributed Systems | 2016

IMR: High-Performance Low-Cost Multi-Ring NoCs

Shaoli Liu; Tianshi Chen; Ling Li; Xiaoxue Feng; Zhiwei Xu; Haibo Chen; Frederic T. Chong; Yunji Chen

A ring topology is a common solution for network-on-chip (NoC) in industry, but is frequently criticized for poor scalability. In this paper, we present a novel type of multi-ring NoC called isolated multi-ring (IMR), which can support chip multiprocessors (CMPs) with as many as 1,024 cores. In IMR, any pair of cores is connected via at least one isolated ring, so that each packet can reach its destination without transferring from one ring to another. Therefore, IMR no longer needs expensive routers as a mesh does, which not only enhances network performance but also reduces hardware overheads. We utilize simulated evolution to design optimized IMR topologies. We compare these IMR topologies against nine representative NoCs (e.g., traditional mesh, multi-mesh, low-cost mesh, express-virtual-channels mesh (EVC), torus ring, and hierarchical ring). We observe from experiments that IMR significantly outperforms its competitors in both saturation throughput and latency across all scenarios considered. For example, in a 16 × 16 CMP, IMR improves the saturation throughput of a state-of-the-art mesh (EVC) by 265.29 percent on average, and reduces the average packet latency on SPLASH-2 application traces by 71.58 percent, while consuming 5.08 percent less area and 9.76 percent less power. In a 32 × 32 CMP, IMR improves the saturation throughput of EVC by 191.58 percent on average, and reduces the packet latency on SPLASH-2 application traces by 23.09 percent on average, while consuming 2.86 percent less area and 10.81 percent less power.
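
The defining constraint of IMR, that every pair of cores must share at least one ring so packets never transfer between rings, can be checked mechanically; the sketch below does so for a made-up four-core example, whereas real topologies are found by the paper's simulated-evolution search.

from itertools import combinations

# Check the IMR coverage property: every pair of cores must appear together
# on at least one ring. The tiny example rings are made up for illustration.

def covers_all_pairs(num_cores, rings):
    """Return True if every pair of cores shares at least one ring."""
    covered = set()
    for ring in rings:
        covered.update(frozenset(p) for p in combinations(ring, 2))
    needed = {frozenset(p) for p in combinations(range(num_cores), 2)}
    return needed <= covered

rings = [(0, 1, 2), (0, 2, 3), (1, 3)]
print(covers_all_pairs(4, rings))            # True: all 6 pairs are covered
print(covers_all_pairs(4, [(0, 1, 2)]))      # False: pairs involving core 3 are missing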


Journal of Computer Science and Technology | 2018

BENCHIP: Benchmarking Intelligence Processors

Jinhua Tao; Zidong Du; Qi Guo; Huiying Lan; Lei Zhang; Shengyuan Zhou; Lingjie Xu; Cong Liu; Haifeng Liu; Shan Tang; Allen Rush; Willian Chen; Shaoli Liu; Yunji Chen; Tianshi Chen

The increasing attention on deep learning has tremendously spurred the design of intelligence processing hardware. The variety of emerging intelligence processors requires standard benchmarks for fair comparison and system optimization (in both software and hardware). However, existing benchmarks are unsuitable for benchmarking intelligence processors due to their lack of diversity and representativeness. Also, the lack of a standard benchmarking methodology further exacerbates this problem. In this paper, we propose BenchIP, a benchmark suite and benchmarking methodology for intelligence processors. The benchmark suite in BenchIP consists of two sets of benchmarks: microbenchmarks and macrobenchmarks. The microbenchmarks consist of single-layer networks; they are mainly designed for bottleneck analysis and system optimization. The macrobenchmarks contain state-of-the-art industrial networks, so as to offer a realistic comparison of different platforms. We also propose a standard benchmarking methodology built upon an industrial software stack and evaluation metrics that comprehensively reflect various characteristics of the evaluated intelligence processors. BenchIP is utilized for evaluating various hardware platforms, including CPUs, GPUs, and accelerators. BenchIP will be open-sourced soon.


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2017

TuNao: A High-Performance and Energy-Efficient Reconfigurable Accelerator for Graph Processing

Jinhong Zhou; Shaoli Liu; Qi Guo; Xuda Zhou; Tian Zhi; Daofu Liu; Chao Wang; Xuehai Zhou; Yunji Chen; Tianshi Chen

Large-scale graph processing is now a crucial task for many commercial applications, and it is conventionally supported by general-purpose processors. These processors are designed to flexibly support highly diverse workloads with classic techniques such as on-chip caches and dynamic pipelining. Yet, it is difficult for the on-chip cache to exploit the irregular data locality of large-scale graph processing, even though there are a few high-degree vertices that are frequently accessed in real-world graphs; moreover, it is not efficient to perform the regular arithmetic operations of graph processing via sophisticated dynamic pipelining. In short, general-purpose processors may not be the ideal platforms for graph processing. In this paper, we design a reconfigurable graph processing accelerator with the purpose of providing an energy-efficient and flexible hardware platform for large-scale graph processing. This accelerator features two main components, i.e., the on-chip storage to exploit the data locality of graph processing, and the reconfigurable functional units to adapt to diversified operations in different graph processing tasks. On a total of 36 practical graph processing tasks, we demonstrate that, on average, our accelerator design achieves 1.58x better performance and 25.56x better energy efficiency than the GPU baseline.
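
The locality observation in the abstract, that a few high-degree vertices account for most accesses, suggests keeping just those vertices in on-chip storage; the sketch below picks such "hot" vertices for a toy graph, with the scratchpad size and graph being illustrative assumptions rather than the accelerator's configuration.

from collections import Counter

# Pick the highest-degree vertices to keep in a small on-chip scratchpad,
# mirroring the observation that a few hub vertices dominate accesses.
# The graph and scratchpad size are illustrative assumptions.

def pin_hot_vertices(edges, scratchpad_slots):
    """Return the highest-degree vertices, i.e., the ones worth keeping on chip."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return [v for v, _ in degree.most_common(scratchpad_slots)]

# Star-like toy graph: vertex 0 is a hub touched by almost every edge.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (3, 4)]
print(pin_hot_vertices(edges, scratchpad_slots=2))   # [0, ...] -- the hub is pinned first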

Collaboration


Dive into Shaoli Liu's collaborations.

Top Co-Authors

Tianshi Chen, Chinese Academy of Sciences
Yunji Chen, Chinese Academy of Sciences
Ling Li, Chinese Academy of Sciences
Tao Luo, Chinese Academy of Sciences
Shijin Zhang, Chinese Academy of Sciences
Zidong Du, Chinese Academy of Sciences
Huiying Lan, Chinese Academy of Sciences
Jinhua Tao, Chinese Academy of Sciences
Lei Zhang, Chinese Academy of Sciences