HELP ME THINK: A Simple Prompting Strategy for Non-experts to Create Customized Content with Models (2208.08232v2)

Published 17 Aug 2022 in cs.CL, cs.AI, cs.CV, cs.HC, and cs.LG

Abstract: Controlling the text generated by LLMs and customizing the content has been a long-standing challenge. Existing prompting techniques proposed in pursuit of control are task-specific and lack generality, leaving non-expert users with an overwhelming number of choices when searching for a method suited to their task. The effort these techniques demand, such as writing examples, explanations, and instructions, further limits their adoption among non-expert users. In this paper, we propose a simple prompting strategy, HELP ME THINK, in which we encourage GPT-3 to help non-expert users by asking a set of relevant questions and leveraging the user's answers to execute the task. We demonstrate the efficacy of HELP ME THINK on a variety of tasks, focusing on tasks that are hard for average humans and require significant thinking to perform. We hope our work will encourage the development of unconventional ways to harness the power of LLMs.
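The abstract describes an inverted prompting flow: the model generates the clarifying questions, the non-expert user only answers them, and the question-answer pairs are then fed back to the model to carry out the task. Below is a minimal sketch of that loop, assuming a generic text-completion callable (`llm`); the function name, prompt wording, and `ask_user` hook are illustrative assumptions, not the paper's exact prompts or code.

```python
# Illustrative sketch of the HELP ME THINK prompting loop (not the paper's code).
# `llm` stands in for any text-completion function, e.g. a GPT-3 API wrapper.
from typing import Callable, List, Tuple


def help_me_think(task: str, llm: Callable[[str], str],
                  ask_user: Callable[[str], str] = input) -> str:
    # Step 1: the model, not the user, writes the task-relevant questions.
    question_prompt = (
        f"I want to {task}, but I am not sure how to describe it.\n"
        "Ask me the questions you need answered to do this for me, one per line."
    )
    questions: List[str] = [
        q.strip() for q in llm(question_prompt).splitlines() if q.strip()
    ]

    # Step 2: collect the non-expert user's answers to each question.
    qa_pairs: List[Tuple[str, str]] = [(q, ask_user(q + " ")) for q in questions]

    # Step 3: feed the question-answer pairs back and let the model execute the task.
    answers_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    execution_prompt = (
        f"Using the information below, {task}.\n\n{answers_block}\n\nResult:"
    )
    return llm(execution_prompt)
```

In this reading, the user's effort is reduced to answering short questions rather than crafting examples or instructions, which is the property the paper highlights for non-expert users.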

