Linguistically Conditioned Semantic Textual Similarity (2406.03673v1)

Published 6 Jun 2024 in cs.CL and cs.AI

Abstract: Semantic textual similarity (STS) is a fundamental NLP task that measures the semantic similarity between a pair of sentences. To reduce the inherent ambiguity posed by the sentences, a recent line of work called Conditional STS (C-STS) has been proposed to measure the sentences' similarity conditioned on a certain aspect. Despite the popularity of C-STS, we find that the current C-STS dataset suffers from various issues that could impede proper evaluation of this task. In this paper, we reannotate the C-STS validation set and observe annotator discrepancies on 55% of the instances, resulting from annotation errors in the original labels, ill-defined conditions, and a lack of clarity in the task definition. After a thorough dataset analysis, we improve the C-STS task by leveraging the models' capability to understand the conditions under a QA task setting. With the generated answers, we present an automatic error identification pipeline that identifies annotation errors in the C-STS data with an F1 score of over 80%. We also propose a new method that substantially improves performance over the baselines on the C-STS data by training the models with the answers. Finally, we discuss conditionality annotation based on the typed-feature structure (TFS) of entity types. We show through examples that TFS can provide a linguistic foundation for constructing C-STS data with new conditions.

Authors (6)
  1. Jingxuan Tu (8 papers)
  2. Keer Xu (1 paper)
  3. Liulu Yue (1 paper)
  4. Bingyang Ye (4 papers)
  5. Kyeongmin Rim (4 papers)
  6. James Pustejovsky (33 papers)

Summary

The paper "Linguistically Conditioned Semantic Textual Similarity" addresses the Semantic Textual Similarity (STS) task, which evaluates how semantically similar two sentences are. Recognizing that existing measures can be ambiguous, the authors delve into Conditional STS (C-STS), which assesses similarity conditioned on specific aspects. They identify numerous issues with the current C-STS dataset, such as annotation errors, ill-defined conditions, and ambiguous task definitions.

The authors undertake a reannotation of the C-STS validation set, revealing annotator discrepancies in 55% of cases. These discrepancies stem from errors in the original labels, unclear condition definitions, and an overall lack of task clarity. To address these issues, they recast the task in a Question-Answering (QA) framework.
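
A minimal sketch of this QA-style reformulation appears below: each C-STS condition is turned into a question that is asked of both sentences. The prompt template is a hypothetical placeholder; the paper's actual prompts and answer models may differ.

```python
# Recasting a C-STS instance as two condition-focused QA queries (illustrative sketch).

def condition_to_question(condition: str) -> str:
    """Turn a C-STS condition (e.g., 'the color of the animal') into a question."""
    return f"What is {condition} in the following sentence?"  # hypothetical template

def build_qa_prompts(sentence1: str, sentence2: str, condition: str) -> list[str]:
    """Build one QA prompt per sentence; each would be sent to an answer model."""
    question = condition_to_question(condition)
    return [f"{question}\nSentence: {s}\nAnswer:" for s in (sentence1, sentence2)]

prompts = build_qa_prompts(
    "A black dog runs across the field.",
    "A golden retriever chases a ball.",
    "the color of the animal",
)
for p in prompts:
    print(p, end="\n\n")
```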

The new methodology generates answers to the conditions for each sentence, and these answers are then used both to train models and to audit the data. The QA framework enables an automatic error identification pipeline that achieves an F1 score of over 80% in identifying annotation errors, underscoring the effectiveness of the approach in refining the evaluation process for C-STS.
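
One plausible shape for such a pipeline is sketched here: embed the two condition answers, map their similarity onto the C-STS label scale (assumed here to be 1-5), and flag instances whose gold label diverges beyond a tolerance. The similarity bands and tolerance are illustrative assumptions, not the paper's calibrated values.

```python
# Flagging likely annotation errors by comparing answer similarity to the gold label.
# The 1-5 label scale, similarity bands, and tolerance are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def predicted_label(answer1: str, answer2: str) -> int:
    """Map cosine similarity of the two condition answers onto an assumed 1-5 scale."""
    emb = model.encode([answer1, answer2], convert_to_tensor=True)
    sim = util.cos_sim(emb[0], emb[1]).item()
    for threshold, label in [(0.9, 5), (0.7, 4), (0.5, 3), (0.3, 2)]:  # hypothetical bands
        if sim >= threshold:
            return label
    return 1

def flag_error(answer1: str, answer2: str, gold_label: int, tolerance: int = 1) -> bool:
    """Flag the instance if the answer-based prediction diverges from the gold label."""
    return abs(predicted_label(answer1, answer2) - gold_label) > tolerance

# Likely flagged: the answers differ while the gold label claims near-identity.
print(flag_error("black", "golden", gold_label=5))
```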

Furthermore, the paper introduces a training method that leverages the generated answers, yielding marked performance gains over existing baselines. The authors also explore conditionality annotation using the typed-feature structure (TFS) of entity types, demonstrating through examples that TFS can provide a robust linguistic foundation for defining new conditions in C-STS data.
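
To make the TFS idea concrete, here is a toy typed-feature structure for an entity type, rendered as nested Python dicts with a feature-path lookup. The type name, feature inventory, and condition template are illustrative assumptions, not the paper's formalism.

```python
# Toy typed-feature structure (TFS) for an entity type (illustrative sketch).
# Each feature carries its own type, so new conditions can be derived systematically.
TFS_ANIMAL = {
    "type": "animal",
    "features": {
        "color": {"type": "color"},
        "species": {"type": "species"},
        "activity": {"type": "event"},
    },
}

def condition_from_feature(tfs: dict, feature: str) -> str:
    """Derive a C-STS-style condition string from one feature of the entity type."""
    if feature not in tfs["features"]:
        raise KeyError(f"{feature!r} is not a feature of type {tfs['type']!r}")
    return f"the {feature} of the {tfs['type']}"

print(condition_from_feature(TFS_ANIMAL, "color"))  # -> "the color of the animal"
```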

In summary, the paper not only highlights critical issues in the current C-STS dataset but also offers concrete solutions to improve dataset quality and model performance. The integration of a QA task setting and TFS-based annotation represents a significant advancement in the STS domain.
