
Improving Parallelism in System Level Models by Assessing PDES Performance

 
 

Abstract


For effective embedded system design, transaction level modeling (TLM) must explicitly expose any available parallelism in the application. Traditional TLM in SystemC uses channels for communication and synchronization between concurrent modules, whereas modern TLM-2.0 emphasizes address-accurate communication via explicit interconnects and memories. In both modeling styles, the choice of synchronization mechanisms has a significant impact on the parallelism available in the model, which can be exploited by parallel discrete event simulation (PDES). In this work, we propose and analyze a set of non-invasive, standard-compliant modeling techniques to increase parallelism in IEEE SystemC TLM-1 and TLM-2.0 models. We measure the performance of aggressive out-of-order PDES in the Recoding Infrastructure for SystemC (RISC) and analyze the parallelism in the models. Our case study on six modeling styles of a state-of-the-art deep neural network (DNN), namely the GoogLeNet image classification algorithm, demonstrates the impact of varying synchronization mechanisms, with simulator run time reduced by 38% compared to a synchronous parallel reference model on a 16-core host machine. Our study also suggests that increased parallel simulation performance indicates better models with more exposed parallelism.
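To illustrate the two communication styles contrasted in the abstract, the following minimal SystemC sketch (not taken from the paper; module names, addresses, and loop counts are illustrative assumptions) shows a TLM-1 style producer/consumer pair synchronizing through an sc_fifo channel, alongside a TLM-2.0 style initiator issuing an address-accurate b_transport call. The blocking channel accesses and the blocking transport call are the synchronization points that a parallel scheduler such as out-of-order PDES must reason about.

    // Illustrative sketch only; not the authors' benchmark code.
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>

    using namespace sc_core;

    // TLM-1 style: concurrent modules communicate and synchronize via a channel.
    SC_MODULE(Producer) {
        sc_fifo_out<int> out;
        SC_CTOR(Producer) { SC_THREAD(run); }
        void run() {
            for (int i = 0; i < 16; ++i)
                out.write(i);              // blocks while the FIFO is full
        }
    };

    SC_MODULE(Consumer) {
        sc_fifo_in<int> in;
        SC_CTOR(Consumer) { SC_THREAD(run); }
        void run() {
            for (int i = 0; i < 16; ++i) {
                int v = in.read();         // blocks while the FIFO is empty
                (void)v;
            }
        }
    };

    // TLM-2.0 style: address-accurate communication via a generic payload
    // sent through an initiator socket to an interconnect or memory
    // (the bound target module is omitted here for brevity).
    SC_MODULE(Initiator) {
        tlm_utils::simple_initiator_socket<Initiator> socket;
        SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }
        void run() {
            int data = 42;
            tlm::tlm_generic_payload trans;
            sc_time delay = SC_ZERO_TIME;
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(0x1000);     // explicit address, unlike TLM-1 channels
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(sizeof(data));
            socket->b_transport(trans, delay);
        }
    };

    int sc_main(int, char*[]) {
        // Only the channel-based pair is instantiated; the TLM-2.0 initiator
        // would additionally require a bound target socket.
        sc_fifo<int> fifo(4);
        Producer prod("prod");
        Consumer cons("cons");
        prod.out(fifo);
        cons.in(fifo);
        sc_start();
        return 0;
    }

In both variants, how often and at which simulated times the blocking calls occur determines how much of the model a parallel simulator can execute concurrently, which is the property the paper's modeling techniques aim to improve.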

Pages 01-07
DOI 10.1109/FDL53530.2021.9568385
Language English
Journal 2021 Forum on specification & Design Languages (FDL)
