Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction (2109.06415v1)

Published 14 Sep 2021 in cs.CL and cs.AI

Abstract: Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce. Existing works either utilize a self-training scheme to generate pseudo labels, which causes a gradual drift problem, or leverage a meta-learning scheme that does not solicit feedback explicitly. To alleviate the selection bias caused by the lack of feedback loops in existing LRE learning paradigms, we develop a Gradient Imitation Reinforcement Learning method that encourages pseudo-labeled data to imitate the gradient descent direction on labeled data and bootstraps its optimization capability through trial and error. We also propose a framework called GradLRE, which handles two major scenarios in low-resource relation extraction. Besides the scenario where unlabeled data is sufficient, GradLRE handles the situation where no unlabeled data is available by exploiting a contextualized augmentation method to generate data. Experimental results on two public datasets demonstrate the effectiveness of GradLRE on low-resource relation extraction when compared with baselines.
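
The gradient imitation idea can be illustrated with a minimal sketch: reward pseudo-labeled batches whose gradients point in the same direction as the gradient computed on labeled data. The reward form (cosine similarity between flattened gradient vectors), the helper names `flat_grad` and `gradient_imitation_reward`, and the toy linear model below are illustrative assumptions inferred from the abstract, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into a single vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([g.reshape(-1) for g in grads if g is not None])

def gradient_imitation_reward(model, labeled_loss, pseudo_loss):
    """Assumed reward: cosine similarity between the pseudo-label gradient and
    the labeled-data gradient; high when the pseudo-labeled batch pushes the
    model in the same direction as the labeled data."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_labeled = flat_grad(labeled_loss, params)
    g_pseudo = flat_grad(pseudo_loss, params)
    return F.cosine_similarity(g_labeled, g_pseudo, dim=0)

# Toy usage with a linear probe standing in for the relation encoder
# (hypothetical shapes and random data; the real model is a contextualized encoder).
model = torch.nn.Linear(16, 4)
x_lab, y_lab = torch.randn(8, 16), torch.randint(0, 4, (8,))
x_unl = torch.randn(8, 16)

labeled_loss = F.cross_entropy(model(x_lab), y_lab)
pseudo_labels = model(x_unl).argmax(dim=-1)               # pseudo labels from the current model
pseudo_loss = F.cross_entropy(model(x_unl), pseudo_labels)

reward = gradient_imitation_reward(model, labeled_loss, pseudo_loss)
print(float(reward))  # near 1.0 when pseudo-labeled gradients imitate the labeled direction
```

In a full training loop this scalar would feed a policy-gradient style update of the pseudo-label generation, which is the trial-and-error bootstrapping the abstract refers to; that outer loop is omitted here.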

Authors (7)
  1. Xuming Hu (120 papers)
  2. Chenwei Zhang (60 papers)
  3. Yawen Yang (7 papers)
  4. Xiaohe Li (8 papers)
  5. Li Lin (91 papers)
  6. Lijie Wen (58 papers)
  7. Philip S. Yu (592 papers)
Citations (57)