
Towards Efficient Resume Understanding: A Multi-Granularity Multi-Modal Pre-Training Approach (2404.13067v1)

Published 13 Apr 2024 in cs.CL, cs.AI, and cs.LG

Abstract: In the contemporary era of widespread online recruitment, resume understanding has been widely acknowledged as a fundamental and crucial task, which aims to extract structured information from resume documents automatically. Compared to traditional rule-based approaches, recently proposed pre-trained document understanding models can greatly enhance the effectiveness of resume understanding. Existing approaches, however, disregard the hierarchical relations within the structured information presented in resumes, and have difficulty parsing resumes efficiently. To this end, in this paper, we propose a novel model, namely ERU, to achieve efficient resume understanding. Specifically, we first introduce a layout-aware multi-modal fusion transformer for encoding the segments in the resume with integrated textual, visual, and layout information. Then, we design three self-supervised tasks to pre-train this module on a large number of unlabeled resumes. Next, we fine-tune the model with a multi-granularity sequence labeling task to extract structured information from resumes. Finally, extensive experiments on a real-world dataset clearly demonstrate the effectiveness of ERU.
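The abstract describes fusing textual, visual, and layout signals per resume segment before sequence labeling. The sketch below illustrates the general shape of such a layout-aware fusion step; every function, dimension, and the additive fusion choice here are illustrative assumptions, not the authors' actual ERU implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
D = 16  # shared embedding size

def embed_text(segment_texts, dim=D):
    """Stand-in for a text encoder: hash each segment to a fixed vector."""
    vecs = []
    for text in segment_texts:
        local = np.random.default_rng(abs(hash(text)) % (2**32))
        vecs.append(local.standard_normal(dim))
    return np.stack(vecs)

def embed_layout(boxes, dim=D):
    """Project normalized (x0, y0, x1, y1) bounding boxes into the shared space."""
    boxes = np.asarray(boxes, dtype=float)      # shape (N, 4)
    W = rng.standard_normal((4, dim)) * 0.1     # untrained random projection
    return boxes @ W

def embed_visual(crops, dim=D):
    """Stand-in for a visual backbone: mean-pool each image crop, then project."""
    pooled = np.array([c.mean() for c in crops])[:, None]  # shape (N, 1)
    W = rng.standard_normal((1, dim)) * 0.1
    return pooled @ W

def fuse(text_e, layout_e, visual_e):
    """Simple additive fusion of the three modalities into one segment embedding."""
    return text_e + layout_e + visual_e         # shape (N, D)

# Toy resume segments: text, normalized bounding box, fake pixel crop.
texts = ["John Doe", "Education", "B.Sc. CS, 2018-2022", "Work Experience"]
boxes = [(0.1, 0.05, 0.9, 0.10),
         (0.1, 0.20, 0.4, 0.25),
         (0.1, 0.27, 0.9, 0.32),
         (0.1, 0.40, 0.5, 0.45)]
crops = [rng.random((8, 8)) for _ in texts]

fused = fuse(embed_text(texts), embed_layout(boxes), embed_visual(crops))
print(fused.shape)  # one D-dim embedding per segment: (4, 16)
```

In the paper's pipeline, these fused segment embeddings would feed a transformer and then a multi-granularity sequence labeling head; here the stand-in encoders are untrained, so only the data flow, not the quality of the representations, is meaningful.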

Authors (9)
  1. Feihu Jiang (2 papers)
  2. Chuan Qin (43 papers)
  3. Jingshuai Zhang (2 papers)
  4. Kaichun Yao (5 papers)
  5. Xi Chen (1035 papers)
  6. Dazhong Shen (22 papers)
  7. Chen Zhu (103 papers)
  8. Hengshu Zhu (66 papers)
  9. Hui Xiong (244 papers)
Citations (3)