Unlocking Instructive In-Context Learning with Tabular Prompting for Relational Triple Extraction (2402.13741v1)

Published 21 Feb 2024 in cs.CL and cs.AI

Abstract: In-context learning (ICL) for relational triple extraction (RTE) has achieved promising performance, but it still faces two key challenges: (1) how to design effective prompts and (2) how to select proper demonstrations. Existing methods, however, fail to address these challenges appropriately. On the one hand, they usually recast the RTE task into text-to-text prompting formats, which is unnatural and creates a mismatch between the output format LLMs see at pre-training time and the one required at inference time. On the other hand, they rely only on surface natural-language features and ignore triple semantics when selecting samples. These issues hold back ICL performance on RTE, so we tackle the prompt-design and sample-selection challenges simultaneously. To this end, we devise a tabular prompting method for RTE (TableIE) that frames RTE as a table generation task, incorporating explicit structured information into ICL and making it straightforward to convert outputs back into RTE structures. We then propose instructive in-context learning (I²CL), which selects and annotates only a few samples from a large pool of unlabeled data by taking their internal triple semantics into account.
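
The tabular prompting idea can be illustrated with a minimal sketch. The pipe-separated table layout, the column names, the example sentences, and the helper names (build_tableie_prompt, parse_table_output) below are assumptions made for illustration; the exact TableIE format used in the paper may differ.

```python
# Minimal sketch of tabular prompting for RTE. The pipe-separated table
# layout, column names, and helper names are illustrative assumptions;
# the exact TableIE format in the paper may differ.
from typing import List, Tuple

Triple = Tuple[str, str, str]
TABLE_HEADER = "| head entity | relation | tail entity |"

def build_tableie_prompt(demos: List[Tuple[str, List[Triple]]], query: str) -> str:
    """Assemble an ICL prompt: each demonstration sentence is followed by
    its gold triples rendered as table rows; the query sentence ends with
    an empty table header for the model to complete."""
    blocks = []
    for sentence, triples in demos:
        rows = "\n".join(f"| {h} | {r} | {t} |" for h, r, t in triples)
        blocks.append(f"Sentence: {sentence}\n{TABLE_HEADER}\n{rows}")
    blocks.append(f"Sentence: {query}\n{TABLE_HEADER}")
    return "\n\n".join(blocks)

def parse_table_output(generation: str) -> List[Triple]:
    """Convert generated table rows back into (head, relation, tail) triples."""
    triples = []
    for line in generation.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) == 3 and cells[0].lower() != "head entity":
            triples.append((cells[0], cells[1], cells[2]))
    return triples

if __name__ == "__main__":
    demos = [("Barack Obama was born in Hawaii.",
              [("Barack Obama", "born in", "Hawaii")])]
    prompt = build_tableie_prompt(demos, "Marie Curie worked in Paris.")
    print(prompt)
    # A completion such as "| Marie Curie | worked in | Paris |" parses to:
    print(parse_table_output("| Marie Curie | worked in | Paris |"))
```

Compared with free-form text-to-text prompting, the table rows give the model an output format that maps one-to-one onto triples, which is the structural benefit the abstract attributes to TableIE.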

