VQA Challenge and Visual Dialog Workshop
at CVPR 2018, June 18, Salt Lake City, Utah, USA
The purpose of this workshop is two-fold. The first is to benchmark progress in Visual Question Answering by hosting the 3rd edition of the VQA Challenge on the VQA v2.0 dataset introduced in Goyal et al., CVPR 2017. The 2nd edition of the VQA Challenge was organized at CVPR 2017 on the VQA v2.0 dataset, and the 1st edition at CVPR 2016 on the 1st edition (v1.0) of the VQA dataset introduced in Antol et al., ICCV 2015. VQA v2.0 is more balanced, reduces the language biases present in VQA v1.0, and is about twice its size.
Our idea in creating this new "balanced" VQA dataset is the following: for every (image, question, answer) triplet (I, Q, A) in the VQA v1.0 dataset, we identify a semantically similar image I' whose answer A' to Q differs from A. Both the original triplet (I, Q, A) and the new triplet (I', Q, A') are included in the VQA v2.0 dataset, balancing VQA v1.0 on a per-question basis. Since I and I' are semantically similar, a VQA model has to understand the subtle differences between them to answer correctly for both images; it cannot succeed as easily by making "guesses" based on language alone. This workshop will provide an opportunity to benchmark algorithms on VQA v2.0 and to identify state-of-the-art algorithms, which must truly understand the image content in order to perform well on this balanced dataset.
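As a concrete illustration, the sketch below captures this pairing scheme in plain Python. It is not the official VQA toolkit: the record layout and the `find_complement` helper (which retrieves a semantically similar image with a differing answer) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VQAExample:
    image_id: str
    question: str
    answer: str

def complementary_pair(example, find_complement):
    """Return both triplets of a balanced pair: the original (I, Q, A)
    and (I', Q, A'), where I' is a semantically similar image whose
    answer A' to the same question Q differs from A."""
    image2, answer2 = find_complement(example)  # hypothetical retrieval step
    assert answer2 != example.answer            # the two answers must differ
    return [example, VQAExample(image2, example.question, answer2)]

# Illustrative values only:
# complementary_pair(VQAExample("img_1", "Who is wearing glasses?", "man"),
#                    find_complement)
# -> [("img_1", ..., "man"), ("img_2", ..., "woman")]
```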
The second goal of this workshop is to continue to bring together researchers interested in visually grounded question answering and dialog systems to share state-of-the-art approaches, best practices, and future directions in multi-modal AI. In particular, this year the workshop committee will extend the scope of invited talks, submitted papers, and poster presentations to include work on the related topic of Visual Dialog, introduced by Das et al., CVPR 2017. The concrete task in Visual Dialog is the following -- given an image I, a dialog history consisting of a sequence of question-answer pairs (Q1: `How many people are in wheelchairs?', A1: `Two', Q2: `What are their genders?', A2: `One male and one female'), and a natural-language follow-up question (Q3: `Which one is holding a racket?'), the task for the machine is to answer the question in free-form natural language (A3: `The woman'). In addition to VQA-style reasoning, Visual Dialog requires rich language understanding, including co-reference resolution, as well as memory of the dialog history.
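For readers implementing against this task, the following minimal Python sketch spells out the input/output interface implied by the description above. The `answer_followup` signature is illustrative, not taken from the VisDial codebase.

```python
from typing import List, Tuple

DialogHistory = List[Tuple[str, str]]  # ordered (question, answer) rounds

def answer_followup(image, history: DialogHistory, question: str) -> str:
    """A Visual Dialog model maps (image, dialog history, follow-up
    question) to a free-form natural-language answer; doing so typically
    requires resolving co-references ('their', 'one') against earlier
    rounds as well as grounding the question in the image."""
    raise NotImplementedError("model-specific")

history = [
    ("How many people are in wheelchairs?", "Two"),
    ("What are their genders?", "One male and one female"),
]
# answer_followup(image, history, "Which one is holding a racket?")
# should return "The woman" for the example above.
```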
We invite submissions of extended abstracts of at most 2 pages describing work in relevant areas, including: Visual Question Answering, Visual Dialog, (Textual) Question Answering, (Textual) Dialog Systems, Commonsense Knowledge, Video Question Answering, Video Dialog, Vision + Language, and Vision + Language + Action (Embodied Agents). All accepted abstracts will be presented as posters at the workshop, which takes place on June 18, 2018 at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018. Extended abstracts should follow the CVPR formatting guidelines and be emailed as a single PDF to the address mentioned below. Please use the following LaTeX/Word templates.
We encourage submissions of relevant work that has been previously published or that will be presented at the main conference. Accepted abstracts will be posted on the workshop website and will not appear in the official IEEE proceedings.
If you need a decision before the CVPR early registration deadline (April 30), please let us know.
Where to Submit?
Please send your abstracts to email@example.com