Information Retrieval Meets Large Language Models: A Strategic Report from Chinese IR Community (2307.09751v2)
Abstract: The research field of Information Retrieval (IR) has evolved significantly, expanding beyond traditional search to meet diverse user information needs. Recently, large language models (LLMs) have demonstrated exceptional capabilities in text understanding, generation, and knowledge inference, opening up exciting avenues for IR research. LLMs not only facilitate generative retrieval but also offer improved solutions for user understanding, model evaluation, and user-system interactions. More importantly, the synergistic relationship among IR models, LLMs, and humans forms a new and more powerful technical paradigm for information seeking: IR models provide real-time, relevant information; LLMs contribute internal knowledge; and humans, as the demanders of information and the evaluators of its reliability, remain at the center of the process. Nevertheless, significant challenges remain, including computational costs, credibility concerns, domain-specific limitations, and ethical considerations. To thoroughly discuss the transformative impact of LLMs on IR research, the Chinese IR community conducted a strategic workshop in April 2023, yielding valuable insights. This paper summarizes the workshop's outcomes, including a rethinking of IR's core values, the mutual enhancement of LLMs and IR, the proposal of a novel IR technical paradigm, and open challenges.
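The three-way paradigm outlined in the abstract can be pictured as a simple interaction loop. The sketch below is purely illustrative and is not an implementation from the paper; `search_index`, `generate_answer`, and `human_feedback` are hypothetical stubs standing in for an arbitrary retriever, an arbitrary LLM, and a human judgment step, arranged only to mirror the roles the abstract describes.

```python
# Illustrative sketch of the IR + LLM + human paradigm described in the abstract.
# All function names below are hypothetical placeholders, not APIs from the paper.

from typing import List


def search_index(query: str, k: int = 5) -> List[str]:
    """IR component: return the top-k relevant passages for the query (stub)."""
    raise NotImplementedError("Plug in any retrieval backend, e.g. BM25 or a dense retriever.")


def generate_answer(query: str, passages: List[str]) -> str:
    """LLM component: synthesize an answer from internal knowledge plus retrieved evidence (stub)."""
    raise NotImplementedError("Plug in any large language model.")


def human_feedback(answer: str) -> bool:
    """Human component: judge whether the answer is reliable enough to accept (stub)."""
    raise NotImplementedError("Collect an explicit judgment from the user.")


def information_seeking_loop(query: str, max_rounds: int = 3) -> str:
    """One possible loop combining the three roles named in the abstract."""
    answer = ""
    for _ in range(max_rounds):
        passages = search_index(query)             # IR: real-time, relevant evidence
        answer = generate_answer(query, passages)  # LLM: evidence-grounded generation
        if human_feedback(answer):                 # Human: evaluate reliability
            break                                  # accepted; stop iterating
        query = f"{query} (please cite more reliable sources)"  # refine and retry
    return answer
```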
- Qingyao Ai
- Ting Bai
- Zhao Cao
- Yi Chang
- Jiawei Chen
- Zhumin Chen
- Zhiyong Cheng
- Shoubin Dong
- Zhicheng Dou
- Fuli Feng
- Shen Gao
- Jiafeng Guo
- Xiangnan He
- Yanyan Lan
- Chenliang Li
- Yiqun Liu
- Ziyu Lyu
- Weizhi Ma
- Jun Ma
- Zhaochun Ren