PokeMQA: Programmable knowledge editing for Multi-hop Question Answering (2312.15194v2)

Published 23 Dec 2023 in cs.CL

Abstract: Multi-hop question answering (MQA) is a challenging task for evaluating a machine's comprehension and reasoning abilities, and LLMs have widely achieved human-comparable performance on it. Because knowledge facts change in the real world, knowledge editing has been explored to update models with up-to-date facts while avoiding expensive re-training or fine-tuning. Starting from an edited fact, the updated model needs to propagate cascading changes through the MQA reasoning chain. Prior work simply adopts a mix-up prompt to instruct LLMs to conduct multiple reasoning tasks sequentially, including question decomposition, answer generation, and conflict checking against edited facts. However, coupling these functionally diverse reasoning tasks inhibits LLMs' advantages in comprehending and answering questions while burdening them with the unfamiliar task of conflict checking. We therefore propose a framework, Programmable knowledge editing for Multi-hop Question Answering (PokeMQA), to decouple these jobs. Specifically, we prompt LLMs to decompose knowledge-augmented multi-hop questions while interacting with a detached, trainable scope detector that modulates LLM behavior based on an external conflict signal. Experiments on three LLM backbones and two benchmark datasets validate our superiority in knowledge editing for MQA, outperforming all competitors by a large margin in almost all settings and consistently producing a reliable reasoning process.
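To make the decoupling concrete, the following is a minimal sketch of a PokeMQA-style inference loop, written under assumptions rather than from the paper's actual code: `llm_decompose_next`, `scope_detector`, and `llm_answer` are hypothetical callables standing in for the prompted LLM decomposer, the detached trainable scope detector, and the base LLM answerer; the edit memory format and threshold are likewise illustrative.

```python
# Hypothetical sketch of decoupled multi-hop QA with an external scope detector.
# The LLM only decomposes and answers; conflict checking is delegated to a
# separate detector that scores whether a subquestion falls in an edit's scope.

from typing import Callable, List, Optional, Tuple

EditedFact = Tuple[str, str]  # (edited fact statement, new answer)

def pokemqa_answer(question: str,
                   edit_memory: List[EditedFact],
                   llm_decompose_next: Callable[[str, Optional[str]], Optional[str]],
                   scope_detector: Callable[[str, str], float],
                   llm_answer: Callable[[str], str],
                   max_hops: int = 4,
                   threshold: float = 0.5) -> Optional[str]:
    """Answer a multi-hop question, routing each hop through the edit memory."""
    answer: Optional[str] = None
    for _ in range(max_hops):
        # 1) The LLM proposes the next subquestion, given answers so far;
        #    returning None signals the reasoning chain is complete.
        subq = llm_decompose_next(question, answer)
        if subq is None:
            return answer
        # 2) The detached scope detector supplies the external conflict signal.
        scored = [(scope_detector(subq, fact), new_ans) for fact, new_ans in edit_memory]
        best_score, best_answer = max(scored, default=(0.0, None))
        if best_score >= threshold:
            answer = best_answer          # subquestion hits an edit scope: use the edited fact
        else:
            answer = llm_answer(subq)     # otherwise fall back to the LLM's own knowledge
    return answer
```

The design point this sketch illustrates is the separation of concerns described in the abstract: the LLM never sees the conflict-checking instruction, so its decomposition and answering prompts stay simple, while the scope detector alone decides when an edited fact overrides the model's parametric knowledge.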

Authors (6)
  1. Hengrui Gu (7 papers)
  2. Kaixiong Zhou (52 papers)
  3. Xiaotian Han (46 papers)
  4. Ninghao Liu (98 papers)
  5. Ruobing Wang (16 papers)
  6. Xin Wang (1306 papers)
Citations (17)