
Relation Extraction Datasets in the Digital Humanities Domain and their Evaluation with Word Embeddings (1903.01284v1)

Published 4 Mar 2019 in cs.CL

Abstract: In this research, we manually create high-quality datasets in the digital humanities domain for the evaluation of language models, specifically word embedding models. The first step comprises the creation of unigram and n-gram datasets for two fantasy novel book series, with two task types each: analogy and doesn't-match. This is followed by the training of models on the two book series with various popular word embedding model types such as word2vec, GloVe, fastText, and LexVec. Finally, we evaluate the suitability of word embedding models for such specific relation extraction tasks in a setting with comparably small corpus sizes. In the evaluations, we also investigate and analyze particular aspects such as the impact of corpus term frequencies and task difficulty on accuracy. The datasets, the underlying system, and the word embedding models are available on GitHub and can easily be extended with new datasets and tasks, be used to reproduce the presented results, or be transferred to other domains.
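
As a rough illustration of the pipeline the abstract describes (train an embedding model on a small book corpus, then score analogy and doesn't-match tasks), the following is a minimal sketch using gensim's word2vec implementation. The file path, hyperparameters, and fantasy-novel terms are illustrative assumptions, not taken from the paper or its repository.

```python
# Minimal sketch: train word2vec on a book-series corpus and run the two
# task types from the paper (analogy, doesn't-match). All names below are
# hypothetical placeholders, not the authors' actual setup.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Tokenize the corpus: one sentence per line, lowercased unigram tokens.
with open("book_series.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f if line.strip()]

# Small corpora tend to need more epochs and modest vector sizes;
# these values are illustrative, not the paper's configuration.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, epochs=20)

# Analogy task: "a is to b as c is to ?" — count a hit if the expected
# term is the top-ranked candidate.
predicted = model.wv.most_similar(positive=["harry", "gryffindor"],
                                  negative=["draco"], topn=1)

# Doesn't-match task: pick the term that does not belong to the group.
odd_one_out = model.wv.doesnt_match(["harry", "ron", "hermione", "voldemort"])
```

Accuracy over a dataset would then be the fraction of tasks where the model's prediction matches the gold answer; swapping in GloVe, fastText, or LexVec vectors only changes the training step, since the two evaluation queries operate on any keyed word-vector set.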

Authors (5)
  1. Gerhard Wohlgenannt (9 papers)
  2. Ekaterina Chernyak (4 papers)
  3. Dmitry Ilvovsky (7 papers)
  4. Ariadna Barinova (2 papers)
  5. Dmitry Mouromtsev (6 papers)
