
Novel Arithmetics to Accelerate Machine Learning Classifiers in Autonomous Driving Applications


Abstract


Autonomous driving techniques frequently require the clustering and classification of data coming from several input sensors, such as cameras, radars, and lidars. These sub-tasks must be implemented in real time on embedded on-board computing units. For data classification and clustering, the signal processing community is moving towards machine learning (ML) algorithms, among which the $k$-nearest neighbors ($k$-NN) algorithm plays a central role. To meet stringent requirements in terms of real-time computing capability and circuit/memory complexity, ML accelerators are needed. Innovation is required in the computing arithmetic, since classic integer numbers lead to classification accuracy that is too low for safety-critical applications such as autonomous driving, while floating-point numbers require too much circuit area and memory. To overcome these issues, this paper shows that the use of a new format, called Posit, implemented in a new cppPosit software library, can lead to a $k$-NN implementation with the same accuracy as floats, but with half the bit width. This means that a Posit Processing Unit (PPU) reduces the data transfer and storage complexity of ML accelerators by a factor greater than 2. We also prove that a complete LUT-based tabulated implementation of an 8-bit PPU requires just 64 kB of storage, making it compliant with memory-constrained devices.
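The 64 kB figure for a fully tabulated 8-bit PPU follows from the size of the operand space: a binary operation on two 8-bit posits has 2^8 x 2^8 = 65,536 operand pairs, each mapping to an 8-bit result. The following C++ sketch illustrates this storage argument only; posit8_add is a hypothetical placeholder, not the actual cppPosit API, and the table models a single binary operation rather than a full PPU.

```cpp
// Minimal sketch of the storage argument for a fully tabulated 8-bit posit operation:
// 2^8 * 2^8 = 65,536 operand pairs, one 8-bit result each, i.e. 64 KiB per operation LUT.
#include <array>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for a real 8-bit posit addition (e.g. from a posit library);
// only the table layout is modeled here, not actual posit arithmetic.
uint8_t posit8_add(uint8_t a, uint8_t b) { return static_cast<uint8_t>(a + b); }

int main() {
    // Complete tabulation: index = (a << 8) | b, value = op(a, b).
    static std::array<uint8_t, 1u << 16> add_lut{};
    for (unsigned a = 0; a < 256; ++a)
        for (unsigned b = 0; b < 256; ++b)
            add_lut[(a << 8) | b] = posit8_add(static_cast<uint8_t>(a),
                                               static_cast<uint8_t>(b));

    // At run time an arithmetic operation reduces to a single memory lookup.
    uint8_t x = 0x12, y = 0x34;
    uint8_t sum = add_lut[(static_cast<unsigned>(x) << 8) | y];

    std::printf("LUT size: %zu bytes (64 KiB), sample lookup result: 0x%02X\n",
                add_lut.size() * sizeof(uint8_t), static_cast<unsigned>(sum));
    return 0;
}
```

Under this tabulated scheme, each arithmetic operation in the $k$-NN distance computation becomes a single table lookup, which is the source of the claimed reduction in accelerator complexity.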

Pages 779-782
DOI 10.1109/ICECS46596.2019.8965031
Language English
Journal 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS)
