IEEE Access | 2021

Selective Fine-Tuning on a Classifier Ensemble: Realizing Adaptive Neural Networks With a Diversified Multi-Exit Architecture


Abstract


Adaptive neural networks that trade off computing cost against inference performance can be a crucial solution for edge artificial intelligence (AI) computing, where resources and energy consumption are severely constrained. Edge AI requires a fine-tuning technique that achieves target accuracy with little computation for models pre-trained on the cloud. However, a multi-exit network, which realizes adaptive inference costs, incurs a significant training cost because it contains many classifiers that must be fine-tuned. In this study, we propose a novel fine-tuning method for an ensemble of classifiers that efficiently retrains the multi-exit network. The proposed method exploits the individuality of the intermediate classifiers, which are trained on distinctly preprocessed data, by assembling their outputs. The evaluation results show that, although the results depend on the assumed edge environment, the proposed method achieved 0.2%-5.8% and 0.2%-4.6% higher accuracy with only 77%-93% and 73%-84% of the training computation of fine-tuning all classifiers on the pre-modified CIFAR-100 and ImageNet datasets, respectively.
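The full method is described in the paper; as a rough illustration of the general idea, below is a minimal PyTorch sketch of a multi-exit network and selective fine-tuning of its exit classifiers. The backbone split, layer sizes, and exit selection are illustrative assumptions, not the paper's architecture, and the sketch omits the paper's ensembling of classifiers trained on distinctly preprocessed data.

# A minimal, illustrative sketch of a multi-exit network with selective
# fine-tuning of its exit heads. All layer sizes and the exit selection
# are assumptions for illustration, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, num_classes=100):
        super().__init__()
        # Backbone split into stages; one classifier (exit) per stage.
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(ch, num_classes))
            for ch in (32, 64, 128)
        ])

    def forward(self, x):
        logits = []
        for stage, head in zip(self.stages, self.exits):
            x = stage(x)
            logits.append(head(x))
        return logits  # one prediction per exit

def selective_fine_tune(model, loader, exits_to_tune, epochs=1, lr=1e-3):
    """Freeze the whole model, then fine-tune only the chosen exits."""
    for p in model.parameters():
        p.requires_grad = False
    tuned = []
    for i in exits_to_tune:
        for p in model.exits[i].parameters():
            p.requires_grad = True
            tuned.append(p)
    opt = torch.optim.Adam(tuned, lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            outs = model(x)
            # Only the selected exits contribute to the loss.
            loss = sum(F.cross_entropy(outs[i], y) for i in exits_to_tune)
            loss.backward()
            opt.step()

# Hypothetical usage: retrain only the two earliest exits, e.g. for a
# low-power edge device ('train_loader' is assumed to exist).
# model = MultiExitNet(num_classes=100)
# selective_fine_tune(model, train_loader, exits_to_tune=[0, 1])

At inference time, a multi-exit network typically returns the prediction of the earliest exit whose confidence crosses a threshold, which is how the adaptive trade-off between computing cost and accuracy is realized.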

Volume 9
Pages 6179-6187
DOI 10.1109/ACCESS.2020.3047799
Language English
Journal IEEE Access
