Text Encoders Lack Knowledge: Leveraging Generative LLMs for Domain-Specific Semantic Textual Similarity (2309.06541v1)

Published 12 Sep 2023 in cs.CL

Abstract: Amidst the sharp rise in the evaluation of LLMs on various tasks, we find that semantic textual similarity (STS) has been under-explored. In this study, we show that STS can be cast as a text generation problem while maintaining strong performance on multiple STS benchmarks. Additionally, we show generative LLMs significantly outperform existing encoder-based STS models when characterizing the semantic similarity between two texts with complex semantic relationships dependent on world knowledge. We validate this claim by evaluating both generative LLMs and existing encoder-based STS models on three newly collected STS challenge sets which require world knowledge in the domains of Health, Politics, and Sports. All newly collected data is sourced from social media content posted after May 2023 to ensure the performance of closed-source models like ChatGPT cannot be credited to memorization. Our results show that generative LLMs outperform the best encoder-only baselines by an average of 22.3% on STS tasks requiring world knowledge. These results suggest generative LLMs with STS-specific prompting strategies achieve state-of-the-art performance in complex, domain-specific STS tasks.
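A minimal sketch of the two paradigms the abstract contrasts, assuming an OpenAI-style chat API and a Sentence-Transformers encoder; the prompt wording, model names, and 0-5 score range below are illustrative assumptions, not the paper's exact setup:

```python
# Sketch of the two STS paradigms compared in the paper.
# Assumptions (not from the paper): OpenAI chat API, the prompt wording,
# the gpt-4o-mini / all-MiniLM-L6-v2 model choices, and a 0-5 score range.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util


def llm_sts_score(text1: str, text2: str) -> float:
    """Cast STS as text generation: ask the model to emit a similarity score."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "Rate the semantic similarity of the two texts on a scale from 0 "
        "(completely unrelated) to 5 (semantically equivalent). "
        "Reply with only the number.\n"
        f"Text 1: {text1}\n"
        f"Text 2: {text2}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Parsing assumes the model complies with the "number only" instruction.
    return float(resp.choices[0].message.content.strip())


def encoder_sts_score(text1: str, text2: str) -> float:
    """Encoder baseline: cosine similarity between sentence embeddings."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode([text1, text2], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()
```

The encoder baseline scores a pair purely from embedding geometry, while the generative route lets the model draw on world knowledge at inference time; that gap is what the Health, Politics, and Sports challenge sets are designed to expose.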

Authors (5)
  1. Joseph Gatto
  2. Omar Sharif
  3. Parker Seegmiller
  4. Philip Bohlman
  5. Sarah Masud Preum