J. Supercomput. | 2021

Low precision matrix multiplication for efficient deep learning in NVIDIA Carmel processors

Abstract
We introduce a high performance, multi-threaded realization of the gemm kernel for the ARMv8.2 architecture that operates with 16-bit (half precision) floating point operands. Our code is especially designed for efficient machine learning inference (and, to a certain extent, also training) with deep neural networks. The results on the NVIDIA Carmel multicore processor, which implements the ARMv8.2 architecture, show considerable performance gains for the gemm kernel, close to the theoretical peak acceleration that could be expected when moving from 32-bit to 16-bit arithmetic and data. When the kernel is embedded in the convolution operators arising in convolutional neural networks, the speed-ups are more modest, though still relevant.
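To make the kernel's arithmetic concrete, below is a minimal, single-threaded sketch of a half-precision gemm loop written with the ARMv8.2 NEON FP16 intrinsics (compiled with -march=armv8.2-a+fp16). It is an illustration only, not the authors' optimized kernel: a production realization such as the one described in the paper would add cache blocking, operand packing, a hand-tuned micro-kernel, and multi-threading. The function name, the row-major layout, and the assumption that n is a multiple of 8 are choices made here for brevity.

#include <arm_neon.h>

/* C := C + A * B, all operands in IEEE half precision (float16_t).
 * A is m x k, B is k x n, C is m x n, row-major with leading
 * dimensions lda, ldb, ldc; n is assumed to be a multiple of 8. */
void gemm_fp16(int m, int n, int k,
               const float16_t *A, int lda,
               const float16_t *B, int ldb,
               float16_t *C, int ldc)
{
    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; j += 8) {   /* 8 halves per 128-bit NEON register */
            float16x8_t c = vld1q_f16(&C[i * ldc + j]);
            for (int p = 0; p < k; ++p) {
                /* Broadcast one element of A and fuse a multiply-add
                 * with 8 contiguous elements of a row of B. */
                float16x8_t a = vdupq_n_f16(A[i * lda + p]);
                float16x8_t b = vld1q_f16(&B[p * ldb + j]);
                c = vfmaq_f16(c, a, b);
            }
            vst1q_f16(&C[i * ldc + j], c);
        }
    }
}

The sketch already exhibits the source of the reported speed-up: each 128-bit NEON register holds 8 half-precision values instead of 4 single-precision ones, so each fused multiply-add instruction performs twice the work of its 32-bit counterpart, and memory traffic per element is halved.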

Volume 77
Pages 11257-11269
DOI 10.1007/s11227-021-03636-4
Language English
Journal J. Supercomput.
