
Search Arena: Analyzing Search-Augmented LLMs (2506.05334v1)

Published 5 Jun 2025 in cs.CL, cs.IR, and cs.LG

Abstract: Search-augmented LLMs combine web search with LLMs to improve response groundedness and freshness. However, analyzing these systems remains challenging: existing datasets are limited in scale and narrow in scope, often constrained to static, single-turn, fact-checking questions. In this work, we introduce Search Arena, a crowd-sourced, large-scale, human-preference dataset of over 24,000 paired multi-turn user interactions with search-augmented LLMs. The dataset spans diverse intents and languages, and contains full system traces with around 12,000 human preference votes. Our analysis reveals that user preferences are influenced by the number of citations, even when the cited content does not directly support the attributed claims, uncovering a gap between perceived and actual credibility. Furthermore, user preferences vary across cited sources, revealing that community-driven platforms are generally preferred and static encyclopedic sources are not always appropriate and reliable. To assess performance across different settings, we conduct cross-arena analyses by testing search-augmented LLMs in a general-purpose chat environment and conventional LLMs in search-intensive settings. We find that web search does not degrade and may even improve performance in non-search settings; however, the quality in search settings is significantly affected if solely relying on the model's parametric knowledge. We open-sourced the dataset to support future research in this direction. Our dataset and code are available at: https://github.com/lmarena/search-arena.

An Expert Analysis of "Search Arena: Analyzing Search-Augmented LLMs"

The paper "Search Arena: Analyzing Search-Augmented LLMs" presents an in-depth examination of LLMs equipped with web search, a capability intended to overcome their limitations on time-sensitive, emerging, and domain-specific queries. The research describes the construction and analysis of Search Arena, a large-scale, multi-turn human-preference dataset of over 24,000 paired user interactions with search-augmented LLMs, spanning diverse intents and languages and accompanied by roughly 12,000 preference votes.

Dataset and Methodology

The authors introduce the Search Arena platform, a crowd-sourced evaluation interface built as a dedicated, search-enabled track of Chatbot Arena. Through it, they collected a substantial volume of user interactions together with preference votes that gauge user satisfaction. The dataset is distinguished by its scale, its multilingual diversity, and its coverage of multifaceted user intents, extending well beyond the static fact-checking queries typical of earlier datasets such as SimpleQA and BrowseComp.
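For readers who want to explore the release, the records can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example only: the dataset identifier, split, and field names (`winner`, `messages_a`) are assumptions rather than the confirmed schema, so consult the repository linked above for the actual release details.

```python
# Minimal sketch for loading and inspecting the Search Arena release.
# NOTE: the dataset identifier, split, and field names are assumptions;
# see https://github.com/lmarena/search-arena for the actual schema.
from datasets import load_dataset

ds = load_dataset("lmarena-ai/search-arena-24k", split="train")  # hypothetical ID

battle = ds[0]
print(sorted(battle.keys()))              # inspect the real schema first
print(battle.get("winner"))               # e.g., "model_a", "model_b", or a tie label
for msg in battle.get("messages_a", []):  # full system trace for side A
    print(msg["role"], str(msg["content"])[:80])
```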

Key Findings

The analysis indicates that user preferences are not contingent on factual precision alone. A notable finding is users' affinity for responses with more citations, even when the cited content does not substantiate the attributed claims, highlighting a gap between perceived and actual credibility. The effect is further modulated by the nature of the cited source: community-driven platforms tend to be favored over static encyclopedic resources such as Wikipedia.
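One simple way to probe the citation-count effect with the released votes is an aggregate check: among battles where the two responses differ in citation count, how often does the more-cited side win? The sketch below is illustrative only; the field names (`num_citations_a`, `num_citations_b`, `winner`) are hypothetical, and the paper's own analysis is more careful, controlling for confounds such as response length and style.

```python
# Sketch: does the response with more citations win more often?
# Field names are hypothetical; this ignores confounds (e.g., length)
# that a controlled analysis would account for.
import pandas as pd

def more_cited_win_rate(battles: pd.DataFrame) -> float:
    diff = battles["num_citations_a"] - battles["num_citations_b"]
    decided = battles[(diff != 0) & battles["winner"].isin(["model_a", "model_b"])]
    more_cited_is_a = decided["num_citations_a"] > decided["num_citations_b"]
    won_a = decided["winner"] == "model_a"
    return float((more_cited_is_a == won_a).mean())

# Toy usage: a value well above 0.5 would reflect the preference
# for more heavily cited responses that the paper reports.
toy = pd.DataFrame({
    "num_citations_a": [5, 1, 3, 0],
    "num_citations_b": [2, 4, 3, 2],
    "winner": ["model_a", "model_b", "model_a", "model_b"],
})
print(more_cited_win_rate(toy))  # -> 1.0 on this toy data
```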

A cross-arena experimental deployment probes how search-augmented and conventional LLMs perform outside their home settings. Models equipped with search demonstrated strong performance in search-intensive contexts without degrading, and sometimes even improving, quality in general-purpose settings, whereas conventional models relying solely on parametric knowledge suffered significantly on search-intensive queries. This suggests the robustness and adaptability of search-augmented frameworks.
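At its core, the cross-arena comparison reduces to win rates conditioned on the evaluation setting. Below is a hedged sketch of that tabulation, again with assumed column names (`model_a`, `model_b`, `winner`, `arena`):

```python
# Sketch: per-model win rate in each arena setting (e.g., "search" vs "general").
# Column names are assumptions for illustration, not the released schema.
import pandas as pd

def win_rates_by_setting(battles: pd.DataFrame) -> pd.DataFrame:
    decided = battles[battles["winner"].isin(["model_a", "model_b"])]
    rows = []
    for side in ("model_a", "model_b"):
        part = decided[[side, "arena"]].rename(columns={side: "model"}).copy()
        part["won"] = decided["winner"] == side
        rows.append(part)
    long = pd.concat(rows, ignore_index=True)
    # Rows are models, columns are settings, cells are empirical win rates.
    return long.groupby(["model", "arena"])["won"].mean().unstack("arena")
```

Comparing a model's row across the two columns then shows directly whether search access helps, hurts, or is neutral in each environment.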

Implications and Speculations

The findings carry both practical and theoretical implications. Practically, the preference for copious citations raises questions about user trust mechanisms and underscores the need to improve citation accuracy and relevance in search-augmented models. Theoretically, this insight informs how future LLMs might be designed to earn trust appropriately, balancing citation recall against citation precision.
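For context, one standard formalization of that balance, taken from attribution-evaluation work (e.g., the ALCE benchmark) rather than from this paper, treats a response as a set of claims with attached citations:

```latex
% Background definitions (not this paper's metrics): citation recall and
% precision over a response's claims and its cited passages.
\[
\text{CitationRecall} =
  \frac{\#\{\text{claims entailed by their cited passages}\}}{\#\{\text{claims}\}},
\qquad
\text{CitationPrecision} =
  \frac{\#\{\text{citations that support the claim they are attached to}\}}{\#\{\text{citations}\}}
\]
```

Under these definitions, the paper's finding amounts to users rewarding a high citation count even when citation precision is low.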

Furthermore, the cross-setting evaluation points to an avenue for optimizing LLMs that integrate search seamlessly, serving queries that require dynamic, real-time retrieval without sacrificing response coherence or quality.

Contributions and Future Directions

The contributions of this research are manifold: the introduction and release of the Search Arena dataset mark a significant step towards understanding and evaluating LLMs in contexts beyond static knowledge retrieval. The analysis of user preference dynamics provides a foundational understanding of how real-world users interact with search-enabled AI, a valuable insight for future model iterations.

Future research avenues may include refining attribution mechanisms to align user perception with factual credibility, and expanding evaluation metrics to incorporate user engagement and satisfaction holistically. Furthermore, as LLMs become further intertwined with real-world applications across varied domains, this work prompts ongoing dialogue on the ethical and practical dimensions of deploying AI that engages with the dynamism of the web and human preference.

In conclusion, "Search Arena: Analyzing Search-Augmented LLMs" paves the way for future exploration of the performance and perceived credibility of search-augmented LLMs, providing a rich dataset and insightful findings for researchers seeking to improve the reliability and efficacy of AI in handling contemporaneous, domain-specific human-AI interactions.

Authors (11)
  1. Mihran Miroyan
  2. Tsung-Han Wu
  3. Logan King
  4. Tianle Li
  5. Jiayi Pan
  6. Xinyan Hu
  7. Wei-Lin Chiang
  8. Anastasios N. Angelopoulos
  9. Trevor Darrell
  10. Narges Norouzi
  11. Joseph E. Gonzalez