2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | 2021

ObjectGraphs: Using Objects and a Graph Convolutional Network for the Bottom-up Recognition and Explanation of Events in Video


Abstract


In this paper, a novel bottom-up video event recognition approach, ObjectGraphs, is proposed, which utilizes a rich frame representation and the relations between objects within each frame. Following the application of an object detector (OD) on the frames, graphs are used to model the object relations and a graph convolutional network (GCN) is utilized to perform reasoning on the graphs. The resulting object-based frame-level features are then forwarded to a long short-term memory (LSTM) network for video event recognition. Moreover, the weighted in-degrees (WiDs) derived from the graph's adjacency matrix at frame level are used for identifying the objects that were considered most (or least) salient for event recognition and contributed the most (or least) to the final event recognition decision, thus providing an explanation for the latter. The experimental results show that the proposed method achieves state-of-the-art performance on the publicly available FCVID and YLI-MED datasets.
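The abstract outlines a pipeline of per-frame object detector features, a graph plus GCN for object-relation reasoning, an LSTM over the resulting frame features, and weighted in-degrees (WiDs) of the adjacency matrix for explanation. Below is a minimal PyTorch sketch of such a pipeline, assuming pre-extracted object features; the module names, feature dimensions, the similarity-based adjacency construction, and the 239-class output (the FCVID class count, used here for illustration) are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of an ObjectGraphs-style pipeline, assuming per-frame object
# features are already extracted by an object detector. All module names,
# feature sizes and the attention-style adjacency construction are
# illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn

class FrameGraphEncoder(nn.Module):
    """Builds a graph over the objects of one frame and applies one GCN layer."""
    def __init__(self, obj_dim=2048, hid_dim=512):
        super().__init__()
        self.query = nn.Linear(obj_dim, hid_dim)  # scores pairwise object relations
        self.key = nn.Linear(obj_dim, hid_dim)
        self.gcn = nn.Linear(obj_dim, hid_dim)    # GCN weight matrix

    def forward(self, objs):                       # objs: (num_objects, obj_dim)
        # Adjacency matrix from pairwise similarity of object features (assumed form).
        scores = self.query(objs) @ self.key(objs).t()
        adj = torch.softmax(scores, dim=-1)        # (num_objects, num_objects)
        # One round of graph convolution: propagate neighbor features, then pool.
        node_feats = torch.relu(adj @ self.gcn(objs))
        frame_feat = node_feats.mean(dim=0)        # object-based frame-level feature
        # Weighted in-degree (WiD) per object: column sum of the adjacency matrix,
        # usable as a per-object saliency score for explaining the event decision.
        wids = adj.sum(dim=0)
        return frame_feat, wids

class EventRecognizer(nn.Module):
    """LSTM over the per-frame graph features, followed by an event classifier."""
    def __init__(self, hid_dim=512, num_events=239):
        super().__init__()
        self.encoder = FrameGraphEncoder(hid_dim=hid_dim)
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.cls = nn.Linear(hid_dim, num_events)

    def forward(self, video_objs):                 # list of (num_objects, obj_dim) per frame
        frame_feats, frame_wids = zip(*(self.encoder(f) for f in video_objs))
        seq = torch.stack(frame_feats).unsqueeze(0)  # (1, num_frames, hid_dim)
        out, _ = self.lstm(seq)
        logits = self.cls(out[:, -1])              # classify from the last LSTM state
        return logits, frame_wids

# Usage with random stand-in detector features: 25 frames, 10 objects per frame.
model = EventRecognizer()
video = [torch.randn(10, 2048) for _ in range(25)]
logits, wids = model(video)
print(logits.shape, wids[0].shape)                 # (1, 239) and (10,)
```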

Pages 3370-3378
DOI 10.1109/CVPRW53098.2021.00376
Language English
Journal 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
