VQA Challenge Workshop
Location: Room 301AB, Hawaii Convention Center
at CVPR 2017, July 26, Honolulu, Hawaii, USA

Accepted Abstracts

An Analysis of Visual Question Answering Algorithms
Kushal Kafle, Christopher Kanan

What's in a Question: Using Visual Questions as a Form of Supervision
Siddha Ganju, Olga Russakovsky, Abhinav Gupta

Reasoning about Fine-grained Attribute Phrases using Reference Games
Jong-Chyi Su*, Chenyun Wu*, Huaizu Jiang, Subhransu Maji

Visual Discriminative Question Generation
Yining Li, Chen Huang, Chen Change Loy, Xiaoou Tang

Zero-Shot Visual Question Answering
Damien Teney, Anton van den Hengel

MUTAN: Multimodal Tucker Fusion for Visual Question Answering
Hedi Ben-younes*, Remi Cadene*, Matthieu Cord, Nicolas Thome

End-to-end optimization of goal-driven and visually grounded dialogue systems
Florian Strub, Harm de Vries, Jeremie Mary, Bilal Piot, Aaron Courville, Olivier Pietquin

Visual Reference Resolution using Attention Memory for Visual Dialog
Paul Hongsuck Seo, Andreas Lehrmann, Bohyung Han, Leonid Sigal

FVQA: Fact-based Visual Question Answering
Peng Wang*, Qi Wu*, Chunhua Shen, Anthony Dick, Anton van den Hengel

The VQA-Machine: Learning How to Use Existing Vision Algorithms to Answer New Questions
Peng Wang*, Qi Wu*, Chunhua Shen, Anton van den Hengel

MemexQA: A Personal Question Answering Task
Lu Jiang, Liangliang Cao, Yannis Kalantidis, Sachin Farfade, Junwei Liang, Alexander Hauptmann

TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering
Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, Gunhee Kim

Vision and Reasoning based Image Riddles Answering through Probabilistic Soft Logic
Somak Aditya, Yezhou Yang, Chitta Baral, Yiannis Aloimonos

Towards Good Practices for Visual Question Answering
Zhe Wang, Xiaoyi Liu, Liangjian Chen, Limin Wang, Yu Qiao, Xiaohui Xie, Charless Fowlkes

Compact Tensor Pooling for Visual Question Answering
Yang Shi, Tommaso Furlanello, Anima Anandkumar

Attention Memory for Locating an Object through Visual Dialogue
Cheolho Han*, Yujung Heo*, Wooyoung Kang, Jaehyun Jun, Byoung-Tak Zhang

VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation
Chuang Gan, Haoxiang Li, Chen Sun, Boqing Gong

Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering
Zhou Yu, Jun Yu, Chenchao Xiang, Dalu Guo, Jianping Fan, Dacheng Tao

VQABQ: Visual Question Answering by Basic Questions
Jia-Hong Huang, Modar Alfadly, Bernard Ghanem

Hybrid Memory Enabled Neural Turing Machine for Visual Question Answering
Jun Zhang, Peng Xia, Yingxuan Zhu, Lifeng Liu, Xiaotian Yin, Jian Li

On the Importance of Location for VQA: Stacked Twin Attention Networks
Haoqi Fan, Jiatong Zhou

Creativity: Generating Diverse Questions using Variational Autoencoders
Unnat Jain*, Ziyu Zhang*, Alexander Schwing

A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models
Ilija Ilievski, Jiashi Feng

Don’t Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering
Aishwarya Agrawal, Dhruv Batra, Devi Parikh, Aniruddha Kembhavi