Paper, dataset and code list for multimodal dialogue.
Updated Aug 20, 2024
This framework provides out-of-the-box implementations of Referential Game variants for studying the emergence of artificial languages with deep learning, built on PyTorch (https://www.pytorch.org).
💬 Official PyTorch Implementation for CVPR'23 Paper, "The Dialog Must Go On: Improving Visual Dialog via Generative Self-Training"
Conversational AI Reading Materials
A curated publication list on visual dialog
Code for CVPR'19 "Recursive Visual Attention in Visual Dialog"
Code for the ACL'20 paper "History for Visual Dialog: Do We Really Need It?"
Starter code in PyTorch for the Visual Dialog challenge
✨ Official PyTorch Implementation for EMNLP'19 Paper, "Dual Attention Networks for Visual Reference Resolution in Visual Dialog"
🌈 PyTorch Implementation for EMNLP'21 Findings "Reasoning Visual Dialog with Sparse Graph Learning and Knowledge Transfer"
[EMNLP'22] Extending Phrase Grounding with Pronouns in Visual Dialogues.
Visual dialog agents with pre-trained vision-and-language encoders.
A list of research papers on knowledge-enhanced multimodal learning
(CVPR'19 Oral) Reasoning Visual Dialogs with Structural and Partial Observations
Recent Advances in Visual Dialog
Summary of Visual Dialogue Papers
Code repository for a final-year project on improving visual dialog by reducing modality biases.
Code for reproducing results in the paper "SeqDialN: Sequential Visual Dialog Networks in Joint Visual-Linguistic Representation Space."
Visual Dialog: Light-weight Transformer for Many Inputs (ECCV 2020)
PyTorch code for Reasoning Visual Dialogs with Structural and Partial Observations