Visual Question Answering Workshop
at CVPR 2021, June 19



The goal of this workshop is two-fold. The first is to benchmark progress in Visual Question Answering.

    There will be three tracks in the Visual Question Answering Challenge this year.

  • VQA: This track is the 6th challenge on the VQA v2.0 dataset introduced in Goyal et al., CVPR 2017. The 2nd, 3rd, 4th and 5th editions were organised at CVPR 2017, CVPR 2018, CVPR 2019 and CVPR 2020 on the VQA v2.0 dataset, and the 1st edition was organised at CVPR 2016 on the VQA v1.0 dataset introduced in Antol et al., ICCV 2015. VQA v2.0 is more balanced than VQA v1.0, reducing the language biases present in the earlier dataset, and is about twice its size.

    Challenge link:
    Evaluation Server:
    Submission Deadline: Friday, May 7, 2021 23:59:59 GMT

  • TextVQA: This track is the 3rd challenge on the TextVQA dataset introduced in Singh et al., CVPR 2019. TextVQA requires models to read and reason about text in an image in order to answer questions about it. To perform well on this task, models first need to detect and read the text in the image, and then reason over it to answer the question. The 1st and 2nd editions of the TextVQA Challenge were organised at CVPR 2019 and CVPR 2020.

    Evaluation Server:
    Submission Deadline: May 14, 2021 23:59:59 GMT

  • TextCaps: This track is the 2nd challenge on the TextCaps dataset introduced in Sidorov et al., ECCV 2020. TextCaps requires models to read and reason about text in images in order to generate captions about them. Specifically, models need to incorporate a new modality, the text present in the image, and reason jointly over it and the visual content to generate image descriptions. The 1st edition of the TextCaps Challenge was organised at CVPR 2020.
    Challenge link: Coming Soon!
    Evaluation Server: Coming Soon!

The second goal of this workshop is to continue to bring together researchers interested in visually-grounded question answering, dialog systems, and language in general, to share state-of-the-art approaches, best practices, and future directions in multi-modal AI. In addition to invited talks from established researchers, we invite submissions of extended abstracts of at most 2 pages describing work in relevant areas, including: Visual Question Answering, Visual Dialog, (Textual) Question Answering, (Textual) Dialog Systems, Commonsense Knowledge, Vision + Language, etc. Submissions are not specific to any challenge track. All accepted abstracts will be presented as posters at the workshop to disseminate ideas. The workshop will be held on June 19, 2021, at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
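For context on how the evaluation servers above score the VQA track: submissions are compared against the 10 human answers collected per question using the consensus-based accuracy from Antol et al., ICCV 2015, where an answer counts as fully correct if at least 3 annotators gave it, averaged over all leave-one-out subsets of annotators. A minimal sketch (omitting the official answer-normalization preprocessing):

```python
def vqa_accuracy(pred, human_answers):
    """Consensus-based VQA accuracy (Antol et al., ICCV 2015).

    An answer is 100% correct if at least 3 of the human
    annotators gave it; the score is averaged over all
    leave-one-out subsets of annotators for robustness.
    """
    n = len(human_answers)
    subset_scores = []
    for i in range(n):
        # Score against the other n-1 annotators.
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(1 for a in others if a == pred)
        subset_scores.append(min(matches / 3.0, 1.0))
    return sum(subset_scores) / n
```

For example, a predicted answer matching 3 of 10 human answers scores 0.9 under this averaging, rather than exactly 1.0.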

Invited Speakers

Raquel Fernández
University of Amsterdam

He He
New York University

Anirudh Koul

Olga Russakovsky
Princeton University

Justin Johnson
University of Michigan

Damien Teney
Australian Institute for Machine Learning, University of Adelaide

Mohit Bansal
UNC Chapel Hill

Katerina Fragkiadaki
Carnegie Mellon University

Submission Instructions

We invite submissions of extended abstracts of at most 2 pages (excluding references) describing work in areas such as: Visual Question Answering, Visual Dialog, (Textual) Question Answering, (Textual) Dialog Systems, Commonsense Knowledge, Video Question Answering, Video Dialog, Vision + Language, and Vision + Language + Action (Embodied Agents). Accepted submissions will be presented as posters at the workshop. Extended abstracts should follow the CVPR formatting guidelines and be emailed as a single PDF to the email address mentioned below.

    Dual Submissions
    We encourage submissions of relevant work that has been previously published, or is to be presented at the main conference. The accepted abstracts will not appear in the official IEEE proceedings.

    Where to Submit?
    Please send your abstracts to


Schedule

March 2021: Challenge Announcements
May 14, 2021: Workshop Paper Submission
mid-May 2021: Challenge Submission Deadlines
May 28, 2021: Notification to Authors
June 19, 2021: Workshop


Organizers
Ayush Shrivastava
Georgia Tech

Yash Kant
Georgia Tech

Sashank Gondala
Georgia Tech

Satwik Kottur
Facebook AI

Dhruv Batra
Georgia Tech / Facebook AI Research

Devi Parikh
Georgia Tech / Facebook AI Research

Aishwarya Agrawal
University of Montreal / Mila / DeepMind


This work is supported by grants awarded to Dhruv Batra and Devi Parikh.