Attention Boosted Deep Networks for Video Classification

Junyong You*, Jari Korhonen

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding, published conference contribution

8 Citations (Scopus)


Video classification can be performed by summarizing the image content of individual frames into one class using deep neural networks, e.g., CNNs and LSTMs. Human interpretation of video content is influenced by the attention mechanism: certain information contributes more to the decision about the video class than other information does. In this paper, we propose to integrate the attention mechanism into deep networks for video classification. The proposed framework employs 2D CNNs with ImageNet-pretrained weights to extract features from video frames, which are then fed to a bidirectional LSTM network for video classification. We have developed an attention block that can be added after the LSTM network in the proposed framework. Several 2D CNN architectures have been tested in the experiments. The results on two publicly available datasets demonstrate that integrating attention boosts the performance of deep networks in video classification compared to not applying the attention block. We also found that applying attention to the LSTM outputs of the VGG19-based variant yields the highest classification accuracy in the proposed framework.
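The attention block described in the abstract, applied to the per-frame outputs of the bidirectional LSTM, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the scoring vector `w` and the simple dot-product scoring are assumptions, as the abstract does not specify the exact parameterization.

```python
import numpy as np

def attention_pool(h, w):
    """Attention-weighted pooling over a sequence of frame features.

    h: (T, D) array of bidirectional LSTM outputs, one row per video frame.
    w: (D,) learned scoring vector (hypothetical parameterization).
    Returns a (D,) summary vector in which frames with higher attention
    weights contribute more to the video-level representation.
    """
    scores = h @ w                 # (T,) relevance score per frame
    scores = scores - scores.max() # shift for numerically stable softmax
    alpha = np.exp(scores)
    alpha = alpha / alpha.sum()    # attention weights, sum to 1 over frames
    return alpha @ h               # weighted sum of frame features
```

A video-level classifier would then operate on the pooled vector instead of, e.g., the last LSTM state; with a zero scoring vector the weights are uniform and the block reduces to mean pooling over frames.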

Original language: English
Title of host publication: Proceedings - International Conference on Image Processing (ICIP)
Publisher: IEEE Xplore
Number of pages: 5
Publication status: Published - 2020
Event: IEEE International Conference on Image Processing (ICIP)
Duration: 25 Sept 2020 - 28 Sept 2020


Conference: IEEE International Conference on Image Processing (ICIP)


Keywords:
  • Attention
  • bidirectional LSTM
  • CNN
  • video classification


