
LightSeq: A High Performance Inference Library for Transformers

Abstract


Transformer and its variants have achieved great success in natural language processing. Since Transformer models are huge in size, serving these models is a challenge for real industrial applications. In this paper, we propose LightSeq, a highly efficient inference library for models in the Transformer family. LightSeq includes a series of GPU optimization techniques to both streamline the computation of Transformer layers and reduce memory footprint. LightSeq supports models trained using PyTorch and TensorFlow. Experimental results on standard machine translation benchmarks show that LightSeq achieves up to 14x speedup compared with TensorFlow and 1.4x speedup compared with FasterTransformer, a concurrent CUDA implementation. The code will be released publicly after the review.
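
One common way GPU inference libraries streamline Transformer layers and cut memory traffic is kernel fusion: combining fine-grained operations that frameworks normally launch as separate kernels into a single kernel, so the activation tensor is read and written once instead of several times. The CUDA sketch below is a minimal, hypothetical illustration of this idea, not code from LightSeq; the kernel name, sizes, and layout are assumptions made for the example. It fuses a bias add with a GELU activation.

#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Fused bias-add + GELU in one kernel launch. Unfused frameworks issue
// two kernels (add, then activation), each reading and writing the full
// tensor; fusing halves the global-memory traffic for this step.
// Illustrative sketch only, not LightSeq's actual kernel.
__global__ void bias_gelu_fused(float* x, const float* bias,
                                int hidden, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = x[i] + bias[i % hidden];        // bias add
        // tanh approximation of GELU
        float c = 0.7978845608f * (v + 0.044715f * v * v * v);
        x[i] = 0.5f * v * (1.0f + tanhf(c));      // activation, in place
    }
}

int main() {
    const int hidden = 1024, tokens = 512, n = hidden * tokens;
    float *x, *bias;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&bias, hidden * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 0.1f;
    for (int i = 0; i < hidden; ++i) bias[i] = 0.01f;

    bias_gelu_fused<<<(n + 255) / 256, 256>>>(x, bias, hidden, n);
    cudaDeviceSynchronize();
    printf("x[0] = %f\n", x[0]);  // sanity check

    cudaFree(x); cudaFree(bias);
    return 0;
}

Fusing the two steps removes one full round trip of the activation tensor through global memory, which is the kind of saving the abstract's "streamline the computation ... and reduce memory footprint" alludes to.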

Pages 113-120
DOI 10.18653/v1/2021.naacl-industry.15
Language English
