
Extreme Multi-Label Skill Extraction Training using Large Language Models (2307.10778v1)

Published 20 Jul 2023 in cs.CL

Abstract: Online job ads serve as a valuable source of information for skill requirements, playing a crucial role in labor market analysis and e-recruitment processes. Since such ads are typically formatted in free text, NLP technologies are required to automatically process them. We specifically focus on the task of detecting skills (mentioned literally, or implicitly described) and linking them to a large skill ontology, making it a challenging case of extreme multi-label classification (XMLC). Given that no sizable labeled (training) dataset is available for this specific XMLC task, we propose techniques to leverage general LLMs. We describe a cost-effective approach to generate an accurate, fully synthetic labeled dataset for skill extraction, and present a contrastive learning strategy that proves effective in the task. Our results across three skill extraction benchmarks show a consistent increase of 15 to 25 percentage points in *R-Precision@5* compared to previously published results that relied solely on distant supervision through literal matches.
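The abstract reports gains in *R-Precision@5*, a standard ranking metric for extreme multi-label classification: for each example, precision is computed over the top min(5, R) ranked predictions, where R is the number of true labels. A minimal sketch of this metric (the function name and example labels are illustrative, not from the paper):

```python
def r_precision_at_k(true_labels, ranked_preds, k=5):
    """R-Precision@k: precision of the top min(k, |true_labels|)
    ranked predictions against the set of true labels."""
    cutoff = min(k, len(true_labels))
    if cutoff == 0:
        return 0.0
    true_set = set(true_labels)
    hits = sum(1 for pred in ranked_preds[:cutoff] if pred in true_set)
    return hits / cutoff

# Hypothetical job ad with 2 true skills; the top-2 ranked
# predictions contain 1 correct skill, giving 0.5.
score = r_precision_at_k({"python", "sql"}, ["python", "docker", "sql"])
print(score)  # 0.5
```

The min(k, R) cutoff prevents examples with fewer than k true labels from being unfairly penalized, which matters in skill extraction where ads vary widely in how many skills they mention.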

Authors (6)
  1. Jens-Joris Decorte (9 papers)
  2. Severine Verlinden (3 papers)
  3. Jeroen Van Hautte (9 papers)
  4. Johannes Deleu (29 papers)
  5. Chris Develder (59 papers)
  6. Thomas Demeester (76 papers)
Citations (12)
