2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST)

Exploiting Spatiotemporal Features for Action Recognition

 
 
 

Abstract


Action recognition from videos is important due to its numerous applications in real-life scenarios. It is most commonly used for surveillance, but it also appears in other domains such as entertainment and gaming. Moreover, owing to the growing demand for automation and the easy availability of high processing power, researchers are seeking better ways to accurately classify human actions from videos. Different methods have been proposed in the literature to identify actions; however, most of them suffer from limitations such as implementation complexity and an inability to recognize complex actions. In this paper, we propose a novel action recognition technique that is easy to implement and robust to complex actions. While there are different ways to capture motion, trajectory-based methods have proven to be an excellent way to represent motion in videos. Inspired by this, we sample dense points across frames of the video and track their displacement to encode trajectories. We use the dense trajectories together with oriented gradients and motion boundary information as features. A codebook generated with a Gaussian mixture model and Fisher vector encoding is used to encode these features. Finally, after reducing the dimensionality of the data, a linear Support Vector Machine identifies the actions. Experimental evaluation on a recent benchmark action recognition dataset shows that our method achieves encouraging results compared to other competing methods.
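The abstract outlines a pipeline of local motion/appearance descriptors, a GMM codebook, Fisher vector encoding, dimensionality reduction, and a linear SVM. The sketch below, in Python with NumPy and scikit-learn, illustrates only the encoding and classification stages under stated assumptions: the random descriptors stand in for the dense-trajectory, oriented-gradient, and motion-boundary features the paper extracts, and the descriptor dimension, codebook size, and PCA settings are arbitrary placeholders rather than values from the paper.

```python
# Minimal sketch: GMM codebook -> Fisher vector encoding -> PCA -> linear SVM.
# Descriptors are synthetic stand-ins for the trajectory/HOG/MBH features;
# all dimensions and counts below are assumptions, not the paper's settings.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
D, K = 64, 8                 # descriptor dimension and codebook size (assumed)
n_videos = 40

def fisher_vector(descriptors, gmm):
    """Fisher vector of a bag of descriptors w.r.t. a diagonal-covariance GMM."""
    T = descriptors.shape[0]
    q = gmm.predict_proba(descriptors)                    # (T, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (descriptors[:, None, :] - mu[None]) / np.sqrt(var)[None]   # (T, K, D)
    # First- and second-order statistics per Gaussian component.
    g_mu = (q[..., None] * diff).sum(0) / (T * np.sqrt(w)[:, None])
    g_sig = (q[..., None] * (diff**2 - 1)).sum(0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_sig.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)              # L2 normalisation

# One synthetic bag of local descriptors per "video", with a fake class label.
labels = rng.integers(0, 3, n_videos)
videos = [rng.normal(size=(200, D)) + y for y in labels]

# Learn the codebook on all descriptors, then encode each video as one vector.
gmm = GaussianMixture(n_components=K, covariance_type="diag", random_state=0)
gmm.fit(np.vstack(videos))
X = np.array([fisher_vector(v, gmm) for v in videos])

# Dimensionality reduction followed by a linear SVM, as in the abstract.
X = PCA(n_components=32).fit_transform(X)
clf = LinearSVC(C=1.0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

The power and L2 normalisation steps follow the standard Fisher vector formulation; whether the paper applies them in exactly this form is not stated in the abstract.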

Pages 1-7
DOI 10.1109/IBCAST51254.2021.9393316
Language English
Journal 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST)
