Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jinglan Liu is active.

Publication


Featured research published by Jinglan Liu.


International Conference on Computer-Aided Design | 2015

1-Bit Compressed Sensing Based Framework for Built-in Resonance Frequency Prediction Using On-Chip Noise Sensors

Tao Wang; Jinglan Liu; Cheng Zhuo; Yiyu Shi

Significant noise occurs when the load currents of a chip contain frequency components close to its resonance frequency, which is mainly determined by power delivery network (PDN) capacitance and package inductance. Yet with technology scaling, the wire parasitic capacitance, which suffers from large process variations, is becoming a dominant contributor to the PDN capacitance, leading to large resonance frequency variation across dies. It is thus important to know the resonance frequency of individual chips in order to effectively avoid resonance noise at runtime. Existing methods are mostly based on frequency sweeping, which is too expensive to apply to individual chips. In this paper, we propose a novel framework to predict the resonance frequency using existing on-chip noise sensors, based on the theory of 1-bit compressed sensing. Experimental results on industrial designs show that, compared with frequency sweeping, our proposed framework can achieve up to 7.6× measurement-time reduction at the same accuracy, with 15% resonance frequency variation. To the best of the authors' knowledge, this is the first work to identify the need for, and provide a practical solution to, resonance frequency prediction for individual chips.
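The 1-bit compressed sensing machinery this paper builds on can be illustrated with a minimal recovery loop. The sketch below uses binary iterative hard thresholding (BIHT), a standard 1-bit CS solver, on synthetic data; it is not the paper's sensor-based framework, and the signal, sensing matrix, and sparsity level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 100, 500, 5           # signal length, 1-bit measurements, sparsity
x = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x[idx] = rng.standard_normal(k)
x /= np.linalg.norm(x)          # 1-bit CS recovers the direction only

A = rng.standard_normal((m, n))
y = np.sign(A @ x)              # sign-only (1-bit) measurements

def biht(A, y, k, iters=200):
    """Binary Iterative Hard Thresholding: estimate a k-sparse unit
    vector from sign-only measurements y = sign(A x)."""
    m, n = A.shape
    tau = 1.0 / m
    xh = np.zeros(n)
    for _ in range(iters):
        g = xh + tau * A.T @ (y - np.sign(A @ xh))
        keep = np.argsort(np.abs(g))[-k:]   # keep k largest magnitudes
        xh = np.zeros(n)
        xh[keep] = g[keep]
        xh /= np.linalg.norm(xh) + 1e-12    # project back to unit sphere
    return xh

x_hat = biht(A, y, k)
print(abs(x_hat @ x))  # cosine similarity with the true direction
```

With enough sign measurements relative to the sparsity, the recovered direction aligns closely with the true signal, which is what makes coarse 1-bit sensor readings usable at all.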


International Conference on Computer-Aided Design | 2015

Effective CAD Research in the Sea of Papers

Jinglan Liu; Da-Cheng Juan; Yiyu Shi

In the past decade, there has been rapid growth in the number of journal, conference and workshop publications from academic research, and the growth appears to be accelerating. Accordingly, it has become increasingly difficult for researchers to efficiently identify papers related to a given topic, leading to missed important references or even repeated work. Moreover, even when these papers are found, it is very time-consuming to uncover their inherent relations. In this paper, using CAD research as a vehicle, we demonstrate a novel deep-learning-based framework that automatically searches for papers related to a given research abstract and suggests how they are correlated. We also provide an analysis and comparison of several classic machine-learning approaches. Experimental results show that the proposed approach consistently outperforms conventional keyword-based rankings in both accuracy and F1 score.
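The "conventional keyword-based rankings" used as the baseline here are typically TF-IDF style retrieval. A minimal sketch of such a baseline, with toy documents standing in for paper abstracts (the corpus and query are invented for illustration):

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank documents against a query by TF-IDF cosine similarity,
    the classic keyword-based baseline."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(t for doc in tokenized for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cos(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(query.lower().split())
    scores = [cos(q, vec(doc)) for doc in tokenized]
    return sorted(range(n), key=lambda i: -scores[i])

docs = [
    "compressed sensing for resonance frequency prediction",
    "deep learning for paper recommendation",
    "fpga implementation of cellular neural networks",
]
ranking = tfidf_rank("deep learning paper search", docs)
print(ranking)  # index of the best keyword match comes first
```

Such a baseline matches only surface tokens, which is why a learned representation that captures relations between papers can outperform it on accuracy and F1.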


Conference on Information and Knowledge Management | 2018

Optimizing Boiler Control in Real-Time with Machine Learning for Sustainability

Yukun Ding; Jinglan Liu; Jinjun Xiong; Meng Jiang; Yiyu Shi

In coal-fired power plants, improving the operational efficiency of boilers is critical for sustainability. In this work, we formulate real-time boiler control as an optimization problem that seeks the best distribution of temperature across zones and oxygen content in the flue to improve the boiler's stability and energy efficiency. We develop an efficient algorithm that integrates appropriate machine learning and optimization techniques. Using a large dataset collected from a real boiler over more than two months, provided by our industry partner, we conduct extensive experiments to demonstrate the effectiveness and efficiency of the proposed algorithm.
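The "learn, then optimize" pattern described here can be sketched in two steps: fit a surrogate model of efficiency from operational logs, then search the feasible control box for the setting the surrogate predicts to be best. Everything below (the synthetic log data, the quadratic surrogate, the variable names) is an illustrative assumption, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for boiler logs: controls u = (zone temperature,
# flue oxygen content), target = efficiency, with a known optimum at
# roughly (420, 4) so the sketch can be checked.
U = rng.uniform([300.0, 2.0], [500.0, 8.0], size=(1000, 2))
eff = 0.9 - 1e-6 * (U[:, 0] - 420) ** 2 - 0.004 * (U[:, 1] - 4) ** 2
eff += rng.normal(0, 0.002, 1000)

# Step 1 (learning): fit a quadratic surrogate of efficiency
# by least squares on polynomial features.
def features(U):
    t, o = U[:, 0], U[:, 1]
    return np.column_stack([np.ones_like(t), t, o, t * t, o * o, t * o])

w, *_ = np.linalg.lstsq(features(U), eff, rcond=None)

# Step 2 (optimization): grid-search the feasible control box for the
# setting the surrogate predicts to be most efficient.
t_grid, o_grid = np.meshgrid(np.linspace(300, 500, 201),
                             np.linspace(2, 8, 121))
cand = np.column_stack([t_grid.ravel(), o_grid.ravel()])
best = cand[np.argmax(features(cand) @ w)]
print(best)  # recommended (temperature, oxygen) setpoint
```

A real-time controller would repeat this loop as new data arrives, and would use a model class and optimizer suited to the plant's dynamics rather than a static quadratic fit.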


Proceedings of the Neuromorphic Computing Symposium | 2017

Efficient hardware implementation of cellular neural networks with powers-of-two based incremental quantization

Xiaowei Xu; Qing Lu; Tianchen Wang; Jinglan Liu; Yu Hu; Yiyu Shi

Cellular neural networks (CeNNs) have been widely adopted in image processing tasks. Recently, various hardware implementations of CeNNs have emerged in the literature, with Field Programmable Gate Arrays (FPGAs) being one of the most popular choices due to their high flexibility and low time-to-market. However, existing FPGA implementations of CeNNs are typically bounded by the limited number of embedded multipliers available, while the vast numbers of Logic Elements (LEs) and registers go unused. Such unbalanced resource utilization leads to sub-optimal CeNN performance and speed. To address this issue, in this paper we propose an incremental-quantization-based approach for the FPGA implementation of CeNNs. It quantizes the numbers in CeNN templates to powers of two, so that complex and expensive multiplications can be converted to simple and cheap shift operations, which require only a minimal number of registers and LEs. While a similar concept has been explored in hardware implementations of Convolutional Neural Networks (CNNs), CeNNs have completely different computation patterns that require different quantization and implementation strategies. Experimental results on FPGAs show that our approach significantly improves resource utilization, and as a direct consequence a speedup of up to 7.8× can be achieved with no performance loss compared with state-of-the-art implementations. We also find that, unlike CNNs, the optimal quantization strategies of CeNNs depend heavily on the application. We hope that this work can serve as a pioneering step in the hardware optimization of CeNNs.
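The core trick, replacing multipliers with shifters by restricting weights to powers of two, is easy to see in software. The sketch below quantizes a single template coefficient and performs the "multiplication" as a bit shift on a fixed-point value; it illustrates the hardware idea only, not the paper's staged incremental quantization of full CeNN templates (function names and the shift range are assumptions).

```python
import math

def quantize_pow2(w, max_shift=7):
    """Quantize a weight to the nearest signed power of two, so that
    multiply-by-w becomes a bit shift in hardware."""
    if w == 0:
        return 0.0, None
    sign = 1 if w > 0 else -1
    exp = round(math.log2(abs(w)))          # nearest power-of-two exponent
    exp = max(-max_shift, min(max_shift, exp))
    return sign * 2.0 ** exp, (sign, exp)

def shift_multiply(x_fixed, sign, exp):
    """Multiply a fixed-point integer by ±2^exp using shifts only,
    as a shifter (registers + LEs) would on an FPGA."""
    y = x_fixed << exp if exp >= 0 else x_fixed >> -exp
    return sign * y

w_q, (s, e) = quantize_pow2(0.26)           # 0.26 snaps to 0.25 = 2^-2
x = 128                                     # fixed-point input sample
print(w_q, shift_multiply(x, s, e))         # 0.25 32
```

On an FPGA this frees the scarce embedded multipliers entirely: each template coefficient becomes a fixed shift-and-sign, built from the otherwise idle LEs and registers.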


Integration | 2016

Selective body biasing for post-silicon tuning of sub-threshold designs

Hui Geng; Jianming Liu; Jinglan Liu; Pei-Wen Luo; Liang-Chia Cheng; Steven L. Grant; Yiyu Shi

Sub-threshold designs have become a popular option in many energy-constrained applications. However, a major bottleneck for these designs is the challenge of attaining timing closure. Most paths in sub-threshold designs can become critical paths due to purely random process variation in threshold voltage, which exponentially impacts gate delay. To address timing violations caused by process variation, post-silicon tuning through body biasing is widely used, but it incurs heavy power and area overhead. It is therefore imperative to select only a small group of gates for body biasing in post-silicon tuning. In this paper, we first formulate this problem as a linear semi-infinite program (LSIP). We then propose an efficient algorithm based on the novel concept of Incremental Hypercubic Sampling (IHCS), specially tailored to the problem structure, along with a convergence analysis. Compared with the state-of-the-art approach based on adaptive filtering, experimental results on industrial designs using a 65nm sub-threshold library demonstrate that our proposed IHCS approach can improve the pass rate by up to 7.3× with a speedup of up to 4.1×, using the same number of body-biased gates and about the same power consumption.


International Conference on Computer-Aided Design | 2016

Privacy protection via appliance scheduling in smart homes

Jie Wu; Jinglan Liu; Xiaobo Sharon Hu; Yiyu Shi


arXiv | 2018

On the Universal Approximability of Quantized ReLU Neural Networks.

Yukun Ding; Jinglan Liu; Yiyu Shi


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2018

Design-Phase Buffer Allocation for Post-Silicon Clock Binning by Iterative Learning

Grace Li Zhang; Bing Li; Jinglan Liu; Yiyu Shi; Ulf Schlichtmann


arXiv: Learning | 2018

On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks.

Yukun Ding; Jinglan Liu; Jinjun Xiong; Yiyu Shi


arXiv: Computer Vision and Pattern Recognition | 2018

PBGen: Partial Binarization of Deconvolution-Based Generators for Edge Intelligence

Jinglan Liu; Jiaxin Zhang; Yukun Ding; Xiaowei Xu; Meng Jiang; Yiyu Shi

Collaboration


Dive into Jinglan Liu's collaborations.

Top Co-Authors

Yiyu Shi, University of Notre Dame

Yukun Ding, University of Notre Dame

Xiaowei Xu, Huazhong University of Science and Technology

Meng Jiang, University of Notre Dame

Qing Lu, University of Notre Dame

Tianchen Wang, University of Notre Dame

Da-Cheng Juan, Carnegie Mellon University