Journal article in Advanced Intelligent Systems, 2023

TASTA: Text‐Assisted Spatial and Temporal Attention Network for Video Question Answering

Tian Wang
Boyao Hou
Jiakun Li

Abstract

Video question answering (VideoQA) is a typical task that integrates language and vision. The key to VideoQA is extracting the visual information that is relevant and effective for answering a specific question. Because a video contains a large amount of irrelevant information, information selection is essential for this task, and explicitly learning an attention model is a reasonable and effective solution for that selection. Herein, a novel VideoQA model called the Text‐Assisted Spatial and Temporal Attention Network (TASTA) is proposed, which shows the great potential of explicitly modeling attention. TASTA is deliberately simple, small, clean, and efficient, allowing clear attribution of its performance and easy extension. Its success stems mainly from two new strategies for better exploiting textual information. Experimental results on TGIF‐QA, a large and highly representative dataset, show that TASTA significantly outperforms the state of the art, and ablation studies demonstrate the effectiveness of its key components.
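This record does not include the model's code, so the following is only a minimal PyTorch sketch of the general idea the abstract describes: question-guided selection first over space (regions within each frame) and then over time (frames within the clip). The module names, feature dimensions, additive attention form, and fusion by concatenation are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch (not the authors' code): text-conditioned spatial and
# temporal attention over pre-extracted video features. All names,
# dimensions, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextGuidedAttention(nn.Module):
    """Scores visual elements (regions or frames) against a question vector."""
    def __init__(self, vis_dim: int, txt_dim: int, hid_dim: int = 256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hid_dim)
        self.txt_proj = nn.Linear(txt_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # vis: (B, N, vis_dim) visual elements; txt: (B, txt_dim) question encoding
        joint = torch.tanh(self.vis_proj(vis) + self.txt_proj(txt).unsqueeze(1))
        attn = F.softmax(self.score(joint).squeeze(-1), dim=-1)   # (B, N)
        return torch.bmm(attn.unsqueeze(1), vis).squeeze(1)       # (B, vis_dim)

class SpatioTemporalVQA(nn.Module):
    def __init__(self, region_dim: int = 2048, txt_dim: int = 768,
                 n_answers: int = 1000):
        super().__init__()
        self.spatial = TextGuidedAttention(region_dim, txt_dim)
        self.temporal = TextGuidedAttention(region_dim, txt_dim)
        self.classifier = nn.Linear(region_dim + txt_dim, n_answers)

    def forward(self, regions: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # regions: (B, T, R, D) region features for T frames; question: (B, txt_dim)
        B, T, R, D = regions.shape
        # Spatial attention: pool regions within each frame, guided by the question.
        frames = self.spatial(regions.view(B * T, R, D),
                              question.repeat_interleave(T, dim=0)).view(B, T, D)
        # Temporal attention: pool frames across the clip, guided by the question.
        video = self.temporal(frames, question)
        return self.classifier(torch.cat([video, question], dim=-1))
```

The two stages mirror the "spatial and temporal" split in the abstract: the first attention pools regions within each frame and the second pools frames across time, both conditioned on the same question encoding, so irrelevant regions and frames receive low weight before the answer classifier is applied.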

Dates and versions

hal-04437553, version 1 (04-02-2024)

Identifiers

  • HAL Id: hal-04437553
  • DOI: 10.1002/aisy.202200131

Cite

Tian Wang, Boyao Hou, Jiakun Li, Peng Shi, Baochang Zhang, et al. TASTA: Text‐Assisted Spatial and Temporal Attention Network for Video Question Answering. Advanced Intelligent Systems, 2023, 5 (4). ⟨10.1002/aisy.202200131⟩. ⟨hal-04437553⟩