Generating Multi-Aspect Queries for Conversational Search (2403.19302v3)

Published 28 Mar 2024 in cs.IR

Abstract: Conversational information seeking (CIS) systems aim to model the user's information need within the conversational context and retrieve the relevant information. One major approach to modeling the conversational context aims to rewrite the user utterance in the conversation to represent the information need independently. Recent work has shown the benefit of expanding the rewritten utterance with relevant terms. In this work, we hypothesize that breaking down the information of an utterance into multi-aspect rewritten queries can lead to more effective retrieval performance. This is more evident in more complex utterances that require gathering evidence from various information sources, where a single query rewrite or query representation cannot capture the complexity of the utterance. To test this hypothesis, we conduct extensive experiments on five widely used CIS datasets where we leverage LLMs to generate multi-aspect queries to represent the information need for each utterance in multiple query rewrites. We show that, for most of the utterances, the same retrieval model would perform better with more than one rewritten query by 85% in terms of nDCG@3. We further propose a multi-aspect query generation and retrieval framework, called MQ4CS. Our extensive experiments show that MQ4CS outperforms the state-of-the-art query rewriting methods. We make our code and our new dataset of generated multi-aspect queries publicly available.
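The abstract's headline finding is stated in terms of nDCG@3. As a refresher, a minimal sketch of how nDCG@k compares two rankings for the same utterance (the relevance grades and runs below are made-up illustrations, not the paper's data):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the top-k graded relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal ranking."""
    idcg = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / idcg if idcg > 0 else 0.0

# Graded relevance of retrieved passages for one utterance, in rank order.
single_query_run = [0, 2, 1]   # single rewrite misses the best passage at rank 1
multi_query_run = [2, 2, 1]    # fused multi-query run surfaces it first

print(ndcg_at_k(single_query_run, 3))  # below 1.0
print(ndcg_at_k(multi_query_run, 3))   # ideal ordering, so 1.0
```

The paper's claim is that this kind of per-utterance improvement under multi-query retrieval holds for 85% of utterances.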

Generate then Retrieve: Enhancing Conversational Response Retrieval with LLMs

Methods Overview

The paper introduces novel approaches to improve conversational response retrieval by leveraging LLMs. It identifies the main limitation of existing retrieval systems, which typically employ a single rewritten query for passage retrieval, failing to address complex information needs that require reasoning over multiple facts. To overcome this, the authors propose three methods:

  1. Answer-driven Query Generation (AD): Using the LLM's generated answer as a single long query for retrieval.
  2. Query Generation (QD): Prompting the LLM to directly generate multiple queries from the conversational context.
  3. Answer and Query Generation (AQD): A two-step method where the LLM first generates an answer and then produces multiple queries to refine this answer.

An additional variant, AQDAnswer, re-ranks results based on predicted relevance to the LLM's generated response, aiming to improve the quality of retrieved passages. The paper compares these methods against standard approaches and evaluates them using LLMs including GPT-4 and Llama-2 in different settings.

Experimental Setup and Results

The experiments are conducted on the TREC Interactive Knowledge Assistance Track (iKAT) dataset, which showcases the complexity of conversational information seeking tasks. The proposed methods are evaluated against baselines that follow either generate-then-retrieve or retrieve-then-generate paradigms, using a variety of LLMs.

Results indicate that AQD and AD methods, particularly when utilizing GPT-4, significantly outperform the baselines. AQD shows superior performance over single-query rewriting approaches (QR) and even outpaces human-rewritten queries in certain metrics. Notably, AQDAnswer's re-ranking strategy based on the initial generated answer leads to further improvements, showcasing the potential of LLMs in enhancing retrieval through a nuanced understanding of the conversational context and the user's information need.
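The re-ranking step behind AQDAnswer can be illustrated as follows. The paper predicts relevance to the generated answer with an LLM; the cheap token-overlap score below is a stand-in used only to make the re-ranking step concrete, and the example strings are invented:

```python
def overlap_score(answer: str, passage: str) -> float:
    """Jaccard overlap between answer and passage token sets."""
    a, p = set(answer.lower().split()), set(passage.lower().split())
    return len(a & p) / len(a | p) if a | p else 0.0

def rerank_by_answer(answer: str, passages: list[str]) -> list[str]:
    """Order retrieved passages by estimated relevance to the generated answer."""
    return sorted(passages, key=lambda p: overlap_score(answer, p), reverse=True)

answer = "tea contains less caffeine than coffee"
passages = [
    "the history of tea ceremonies in japan",
    "coffee generally contains more caffeine than tea",
    "how espresso machines work",
]
print(rerank_by_answer(answer, passages)[0])
# the caffeine comparison passage now ranks first
```

Swapping the overlap score for an LLM-based relevance predictor recovers the spirit of the AQDAnswer variant.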

Implications and Future Work

This paper presents a significant shift towards utilizing the generative capabilities of LLMs for improving information retrieval in conversational systems. By demonstrating that multiple queries generated from LLMs' responses can lead to better retrieval outcomes, it opens up new avenues for research in conversational search systems. It also highlights the importance of leveraging LLMs not just for generating responses but as integral components of the information retrieval process.

One promising direction for future work is exploring the optimal number of queries to generate and the impact of query quality on retrieval effectiveness. Additionally, integrating user feedback into the generative process could further personalize and refine the retrieval outcomes, making the conversational system more responsive to the user's specific needs.

Ethical Considerations and Limitations

The reliance on LLMs introduces potential biases and errors inherent in these models, which can affect the quality of generated responses and queries. Moreover, the effectiveness of the proposed methods is contingent upon the quality of the LLM's initial response, highlighting a dependency that could be problematic if the LLM fails to understand the user's request accurately. Future research should address these challenges, ensuring that conversational systems remain reliable, unbiased, and user-centric in their approach to information retrieval.

Authors (3)
  1. Zahra Abbasiantaeb (11 papers)
  2. Mohammad Aliannejadi (85 papers)
  3. Simon Lupart (11 papers)
Citations (1)