Video meets knowledge in visual question answering

Abstract

In this work, we address knowledge-based visual question answering in videos. First, we introduce KnowIT VQA, a video dataset with 24,282 human-generated question-answer pairs that combines visual, textual, and temporal coherence reasoning with knowledge-based questions. Second, we propose a video understanding model that combines the visual and textual video information with specific knowledge about the dataset. We find that incorporating knowledge yields substantial improvements for VQA in video. However, performance on KnowIT VQA still lags well behind human accuracy, indicating its usefulness for studying the limitations of current video models.
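As an illustrative sketch only, and not the authors' implementation, the snippet below shows one way visual, textual, and knowledge representations could be fused to score multiple-choice answers. The feature dimensions, fusion by concatenation, and the class name MultiModalAnswerScorer are all assumptions for illustration.

```python
# Hypothetical sketch of multimodal fusion for multiple-choice video QA.
# Feature extractors, dimensions, and concatenation-based fusion are assumptions,
# not the model described in the paper.
import torch
import torch.nn as nn

class MultiModalAnswerScorer(nn.Module):
    def __init__(self, vis_dim=512, txt_dim=768, kn_dim=768, hidden=256):
        super().__init__()
        # Project the concatenated visual, textual, and knowledge features.
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + txt_dim + kn_dim, hidden),
            nn.ReLU(),
        )
        # Score each candidate answer against the fused representation.
        self.score = nn.Linear(hidden + txt_dim, 1)

    def forward(self, vis, txt, kn, answers):
        # vis: (B, vis_dim) video frame features
        # txt: (B, txt_dim) subtitle/question encoding
        # kn:  (B, kn_dim)  retrieved knowledge encoding
        # answers: (B, K, txt_dim) encodings of K candidate answers
        joint = self.fuse(torch.cat([vis, txt, kn], dim=-1))      # (B, hidden)
        joint = joint.unsqueeze(1).expand(-1, answers.size(1), -1)
        logits = self.score(torch.cat([joint, answers], dim=-1))  # (B, K, 1)
        return logits.squeeze(-1)                                 # (B, K)

# Usage with random features: batch of 2 questions, 4 candidates each.
model = MultiModalAnswerScorer()
logits = model(torch.randn(2, 512), torch.randn(2, 768),
               torch.randn(2, 768), torch.randn(2, 4, 768))
print(logits.argmax(dim=-1))  # predicted answer index per question
```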

Publication
Proceedings of the Meeting on Image Recognition and Understanding (MIRU2019)
Noa Garcia
Specially-Appointed Assistant Professor

Her research interests lie in computer vision and machine learning applied to visual retrieval and joint models of vision and language for high-level understanding tasks.

Chenhui Chu
Guest Associate Professor
Yuta Nakashima
Associate Professor

Yuta Nakashima is an associate professor at the Institute for Datability Science, Osaka University. His research interests include computer vision, pattern recognition, natural language processing, and their applications.
