2021 7th International Conference on Big Data Computing and Communications (BigCom) | 2021

Collaborative Deep Sensing by Dynamically Fusing Multiple Models


Abstract


Smart activity sensing has attracted increasing attention with the development of sensing devices and recognition techniques. In multi-modal sensing scenarios such as smart homes, fusing the results of multiple models offers an opportunity to achieve more comprehensive and accurate recognition, but also poses the challenge of coordinating a collection of models under strict resource limitations. In this paper, we first model the multi-modal sensing problem with strict orthogonal resource constraints. Then, for scenarios with fixed and with changeable resource limitations, we propose two corresponding online decision methods that optimize recognition accuracy by dynamically selecting models from a pre-trained model library and fusing their predictions. Specifically, using reward feedback in an actor-critic scheme, we handle fixed resources and accuracy optimization in one shot. For changeable resources, we further decouple resource allocation from model evaluation to support model portability with comparable accuracy. Three types of sensing devices and nine recognition models are investigated in our work. Experiments show that our method improves recognition accuracy over a single-modality model, and achieves higher accuracy at lower resource cost than an end-to-end multi-modal model.
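The core loop the abstract describes — selecting a subset of models within a resource budget, fusing their predictions, and updating the selection policy from reward feedback in an actor-critic fashion — can be illustrated with a minimal sketch. Everything below is hypothetical: the model names, costs, accuracies, greedy selection, and majority-vote fusion are illustrative stand-ins, not the paper's actual method or data.

```python
import random

# Hypothetical model library: per-model resource cost and an accuracy proxy.
# These names and numbers are illustrative, not taken from the paper.
MODEL_LIBRARY = {
    "audio_cnn":  {"cost": 2, "acc": 0.70},
    "wifi_rnn":   {"cost": 3, "acc": 0.75},
    "camera_net": {"cost": 5, "acc": 0.85},
}

def select_models(preferences, budget):
    """Actor step (sketch): greedily pick highest-preference models that fit the budget."""
    chosen, used = [], 0
    for name in sorted(preferences, key=preferences.get, reverse=True):
        cost = MODEL_LIBRARY[name]["cost"]
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen

def fuse(selected):
    """Toy fusion: simulate each model's correctness, then take a majority vote."""
    votes = [random.random() < MODEL_LIBRARY[m]["acc"] for m in selected]
    return sum(votes) > len(votes) / 2

def train(budget=5, episodes=2000, lr=0.05, seed=0):
    """Actor-critic-style loop: the critic is a running reward baseline,
    and the actor's preferences are nudged by the advantage (reward - baseline)."""
    random.seed(seed)
    prefs = {m: 0.0 for m in MODEL_LIBRARY}  # actor preferences per model
    baseline = 0.0                           # critic: running estimate of reward
    for _ in range(episodes):
        selected = select_models(prefs, budget)
        reward = 1.0 if fuse(selected) else 0.0
        advantage = reward - baseline
        baseline += lr * (reward - baseline)
        for m in selected:
            prefs[m] += lr * advantage
    return prefs, baseline
```

With a budget of 5, the greedy selector can afford the two cheaper models but not the expensive one alongside them, so the baseline converges toward the fused accuracy of that subset; a real system would replace the simulated votes with actual model predictions and a learned fusion rule.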

Pages 316-323
DOI 10.1109/BigCom53800.2021.00027
Language English
Journal 2021 7th International Conference on Big Data Computing and Communications (BigCom)
