
Prompt-Time Symbolic Knowledge Capture with Large Language Models (2402.00414v1)

Published 1 Feb 2024 in cs.CL and cs.AI

Abstract: Augmenting LLMs with user-specific knowledge is crucial for real-world applications such as personal AI assistants. However, LLMs inherently lack mechanisms for prompt-driven knowledge capture. This paper investigates how existing LLM capabilities can be leveraged to enable prompt-driven knowledge capture, with a particular emphasis on knowledge graphs. We address this challenge by focusing on prompt-to-triple (P2T) generation. We explore three methods, zero-shot prompting, few-shot prompting, and fine-tuning, and assess their performance on a specialized synthetic dataset. Our code and datasets are publicly available at https://github.com/HaltiaAI/paper-PTSKC.
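To make the task concrete, below is a minimal Python sketch of the few-shot prompting variant of P2T generation. The example sentences, the "(subject, predicate, object)" output format, and the build_p2t_prompt / parse_triple helpers are illustrative assumptions, not the paper's actual prompts or dataset; see the repository above for the authors' materials.

# Minimal sketch of few-shot prompt-to-triple (P2T) generation.
# The demonstrations and triple format below are illustrative
# assumptions, not the paper's actual prompts.

FEW_SHOT_EXAMPLES = [
    ("My sister Anna lives in Oslo.", "(Anna, livesIn, Oslo)"),
    ("I work at Acme as an engineer.", "(user, worksAt, Acme)"),
]


def build_p2t_prompt(utterance: str) -> str:
    """Assemble a few-shot prompt asking the model to emit one
    (subject, predicate, object) triple for the input sentence."""
    lines = ["Extract a knowledge-graph triple from each sentence."]
    for sentence, triple in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {sentence}\nTriple: {triple}")
    lines.append(f"Sentence: {utterance}\nTriple:")
    return "\n\n".join(lines)


def parse_triple(completion: str) -> tuple[str, str, str]:
    """Parse a '(subject, predicate, object)' completion into a 3-tuple.
    maxsplit=2 keeps any commas inside the object intact."""
    subject, predicate, obj = completion.strip().strip("()").split(",", 2)
    return subject.strip(), predicate.strip(), obj.strip()


if __name__ == "__main__":
    prompt = build_p2t_prompt("My brother Tom was born in 1990.")
    print(prompt)
    # In the paper's setting the completion would come from the LLM
    # under evaluation; here a plausible output is hard-coded.
    print(parse_triple("(Tom, bornIn, 1990)"))

Running the script prints the assembled prompt followed by the parsed tuple ('Tom', 'bornIn', '1990'). In this framing, the zero-shot variant would simply omit the demonstrations, while the fine-tuning variant would train the model on such sentence/triple pairs instead of showing them in the prompt.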

