Enhancement of Subjective Content Descriptions by using Human Feedback (2405.15786v1)

Published 30 Apr 2024 in cs.IR and cs.AI

Abstract: An agent providing an information retrieval service may work with a corpus of text documents. The documents in the corpus may contain annotations such as Subjective Content Descriptions (SCDs) -- additional data associated with different sentences of the documents. Each SCD is associated with multiple sentences of the corpus, and the SCDs are related to one another. The agent uses the SCDs to create its answers in response to queries supplied by users. However, the SCDs the agent uses might reflect the subjective perspective of another user. Hence, answers may be considered faulty by the agent's user, because the SCDs may not exactly match that user's perceptions. A naive and very costly approach would be to ask each user to create all the SCDs themselves from scratch. To reuse existing knowledge, this paper presents ReFrESH, an approach for Relation-preserving Feedback-reliant Enhancement of SCDs by Humans. An agent's user can give the agent feedback about faulty answers, and ReFrESH uses this feedback to update the SCDs incrementally. However, human feedback is not always unambiguous. Therefore, this paper additionally presents an approach that decides how to incorporate the feedback and when to update the SCDs. Altogether, SCDs can be updated with human feedback, allowing users to obtain SCDs that are even more specific to their needs.
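
The abstract describes a feedback loop in which user judgements about faulty answers are collected and only folded into the SCDs once the feedback is sufficiently unambiguous. The following Python sketch illustrates one way such a feedback-gated, incremental update could look; the class, the per-sentence relevance votes, and the majority-with-threshold criterion are illustrative assumptions, not the actual ReFrESH decision procedure described in the paper.

```python
# Minimal sketch (assumptions, not the paper's algorithm): SCDs are modelled
# as labels attached to sentence ids, and user feedback on faulty answers is
# accumulated per SCD. An update is applied only once the collected feedback
# is sufficiently unambiguous (simple majority with a minimum vote count).

from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class SCDStore:
    """Maps each SCD label to the set of sentence ids it annotates."""
    scd_to_sentences: dict[str, set[int]] = field(default_factory=dict)
    # Pending feedback: scd -> list of (sentence_id, is_relevant) votes.
    pending: dict[str, list[tuple[int, bool]]] = field(
        default_factory=lambda: defaultdict(list))

    def add_feedback(self, scd: str, sentence_id: int, is_relevant: bool) -> None:
        """Record one user's judgement about an answer built from `scd`."""
        self.pending[scd].append((sentence_id, is_relevant))

    def maybe_update(self, scd: str, min_votes: int = 3, agreement: float = 0.7) -> bool:
        """Apply an incremental update only if the feedback is unambiguous enough."""
        votes = self.pending.get(scd, [])
        if len(votes) < min_votes:
            return False  # not enough evidence yet; keep the existing SCD
        per_sentence: dict[int, list[bool]] = defaultdict(list)
        for sentence_id, is_relevant in votes:
            per_sentence[sentence_id].append(is_relevant)
        changed = False
        sentences = self.scd_to_sentences.setdefault(scd, set())
        for sentence_id, judgements in per_sentence.items():
            ratio = sum(judgements) / len(judgements)
            if ratio >= agreement and sentence_id not in sentences:
                sentences.add(sentence_id)      # feedback says: associate
                changed = True
            elif ratio <= 1 - agreement and sentence_id in sentences:
                sentences.discard(sentence_id)  # feedback says: dissociate
                changed = True
        if changed:
            self.pending[scd] = []  # reset once an update has been accepted
        return changed
```

The threshold-based gate stands in for the paper's question of how to incorporate feedback and when to update; the relation-preserving aspect of ReFrESH (keeping the relations among SCDs consistent after an update) is not modelled in this sketch.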
