
TikTalk: A Video-Based Dialogue Dataset for Multi-Modal Chitchat in Real World (2301.05880v3)

Published 14 Jan 2023 in cs.CL and cs.AI

Abstract: To facilitate research on intelligent and human-like chatbots with multi-modal context, we introduce a new video-based multi-modal dialogue dataset, called TikTalk. We collect 38K videos from a popular video-sharing platform, along with 367K conversations posted by users beneath them. Users engage in spontaneous conversations based on their multi-modal experiences from watching videos, which helps recreate real-world chitchat context. Compared to previous multi-modal dialogue datasets, the richer context types in TikTalk lead to more diverse conversations, but also increase the difficulty of capturing human interests from intricate multi-modal information to generate personalized responses. Moreover, external knowledge is evoked more frequently in our dataset. These facts reveal new challenges for multi-modal dialogue models. We quantitatively demonstrate the characteristics of TikTalk, propose a video-based multi-modal chitchat task, and evaluate several dialogue baselines. Experimental results indicate that models incorporating LLMs can generate more diverse responses, while the model utilizing knowledge graphs to introduce external knowledge performs best overall. Furthermore, no existing model solves all of the above challenges well. There is still large room for future improvement, even for LLMs with visual extensions. Our dataset is available at \url{https://ruc-aimind.github.io/projects/TikTalk/}.

Authors (11)
  1. Hongpeng Lin (3 papers)
  2. Ludan Ruan (7 papers)
  3. Wenke Xia (12 papers)
  4. Peiyu Liu (27 papers)
  5. Jingyuan Wen (5 papers)
  6. Yixin Xu (29 papers)
  7. Di Hu (88 papers)
  8. Ruihua Song (48 papers)
  9. Wayne Xin Zhao (196 papers)
  10. Qin Jin (94 papers)
  11. Zhiwu Lu (51 papers)
Citations (5)