
Mechanism Design for Large Language Models (2310.10826v3)

Published 16 Oct 2023 in cs.GT and econ.TH

Abstract: We investigate auction mechanisms for AI-generated content, focusing on applications like ad creative generation. In our model, agents' preferences over stochastically generated content are encoded as LLMs. We propose an auction format that operates on a token-by-token basis, and allows LLM agents to influence content creation through single dimensional bids. We formulate two desirable incentive properties and prove their equivalence to a monotonicity condition on output aggregation. This equivalence enables a second-price rule design, even absent explicit agent valuation functions. Our design is supported by demonstrations on a publicly available LLM.
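To make the token-by-token auction concrete, here is a minimal Python sketch of a single auction round, assuming a linear (bid-weighted) mixture as the monotone aggregation rule. The function names, the specific aggregation rule, and the toy numbers are illustrative assumptions, not the paper's implementation, and the second-price payment computation is omitted.

import numpy as np

def aggregate(bids, dists):
    # Bid-weighted linear mixture of the agents' next-token
    # distributions. This is one example of a monotone aggregation
    # rule: raising agent i's bid moves the aggregate distribution
    # toward agent i's preferred distribution.
    bids = np.asarray(bids, dtype=float)
    dists = np.asarray(dists, dtype=float)   # shape (n_agents, vocab_size)
    weights = bids / bids.sum()
    return weights @ dists                   # distribution over the vocabulary

def sample_next_token(bids, dists, rng=None):
    # One round of the token-by-token auction: aggregate, then sample.
    rng = rng or np.random.default_rng()
    q = aggregate(bids, dists)
    return rng.choice(len(q), p=q)

# Toy demo: two agents over a three-token vocabulary.
dists = [[0.8, 0.1, 0.1],   # agent 1 strongly prefers token 0
         [0.1, 0.1, 0.8]]   # agent 2 strongly prefers token 2
print(aggregate([1.0, 1.0], dists))  # equal bids -> symmetric mixture
print(aggregate([3.0, 1.0], dists))  # higher bid -> tilted toward agent 1

Under this rule, increasing an agent's bid monotonically increases its weight in the mixture; monotonicity of this kind is what the abstract says the paper proves equivalent to its incentive properties, enabling a second-price-style payment rule even without explicit agent valuations.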

