Large Language Models on Lexical Semantic Change Detection: An Evaluation (2312.06002v1)

Published 10 Dec 2023 in cs.CL

Abstract: Lexical Semantic Change Detection stands out as one of the few areas where LLMs have not been extensively involved. Traditional methods like PPMI and SGNS remain prevalent in research, alongside newer BERT-based approaches. Although LLMs now cover a wide range of natural language processing domains, literature on their application to this particular task remains scarce. In this work, we seek to bridge this gap by introducing LLMs into the domain of Lexical Semantic Change Detection. Our work presents novel prompting solutions and a comprehensive evaluation that spans all three generations of LLMs, contributing to the exploration of LLMs in this research area.
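The abstract contrasts LLM prompting with traditional baselines such as SGNS. As a point of reference only, below is a minimal sketch of such a baseline (not the paper's prompting method): train skip-gram embeddings on two time-sliced corpora, align the spaces with orthogonal Procrustes, and rank shared-vocabulary words by cosine distance. The toy corpora, hyperparameters, and helper names (`train_sgns`, `change_scores`) are illustrative assumptions, not the paper's experimental setup.

```python
"""Sketch of an SGNS + orthogonal Procrustes baseline for lexical semantic
change detection. Corpora and hyperparameters are toy placeholders."""
import numpy as np
from gensim.models import Word2Vec              # skip-gram (SGNS) when sg=1
from scipy.linalg import orthogonal_procrustes  # alignment of the two spaces


def train_sgns(corpus, dim=50, seed=0):
    """Train skip-gram-with-negative-sampling embeddings on one time slice."""
    return Word2Vec(corpus, vector_size=dim, sg=1, negative=5,
                    min_count=1, epochs=50, seed=seed).wv


def change_scores(wv_old, wv_new):
    """Align the older space onto the newer one with orthogonal Procrustes
    and return cosine distances for the shared vocabulary (higher = more change)."""
    shared = sorted(set(wv_old.index_to_key) & set(wv_new.index_to_key))
    A = np.stack([wv_old[w] for w in shared])
    B = np.stack([wv_new[w] for w in shared])
    R, _ = orthogonal_procrustes(A, B)          # rotation mapping A onto B
    A_aligned = A @ R
    cos = np.sum(A_aligned * B, axis=1) / (
        np.linalg.norm(A_aligned, axis=1) * np.linalg.norm(B, axis=1))
    return dict(zip(shared, 1.0 - cos))


if __name__ == "__main__":
    # Tiny illustrative corpora; real experiments use large diachronic corpora.
    corpus_old = [["the", "awful", "majesty", "of", "the", "mountain"],
                  ["an", "awful", "and", "solemn", "sight"]]
    corpus_new = [["the", "traffic", "was", "awful", "today"],
                  ["an", "awful", "movie", "and", "a", "boring", "plot"]]
    scores = change_scores(train_sgns(corpus_old), train_sgns(corpus_new))
    for word, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{word}\t{score:.3f}")
```

With realistic corpora, the ranking produced by this baseline is what BERT-based and LLM-prompting approaches are typically evaluated against.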
