
GPTs Window Shopping: An analysis of the Landscape of Custom ChatGPT Models (2405.10547v1)

Published 17 May 2024 in cs.SI

Abstract: OpenAI's ChatGPT initiated a wave of technical iterations in the space of LLMs by demonstrating the capability and disruptive power of LLMs. OpenAI has prompted large organizations to respond with their own advancements and models to push the LLM performance envelope. OpenAI's success in spotlighting AI can be partially attributed to decreased barriers to entry, enabling any individual with an internet-enabled device to interact with LLMs. What was previously relegated to a few researchers and developers with the necessary computing resources is now available to all. A desire to customize LLMs to better accommodate individual needs prompted OpenAI's creation of the GPT Store, a central platform where users can create and share custom GPT models. Customization comes in the form of prompt-tuning, analysis of reference resources, browsing, and external API interactions, alongside a promise of revenue sharing for created custom GPTs. In this work, we peer into the window of the GPT Store and measure its impact. Our analysis constitutes a large-scale overview of the store exploring community perception, GPT details, and the GPT authors, in addition to a deep-dive into a third-party storefront indexing user-submitted GPTs, exploring whether creators seek to monetize their creations in the absence of OpenAI's revenue sharing.

Authors (3)
  1. Benjamin Zi Hao Zhao (30 papers)
  2. Muhammad Ikram (38 papers)
  3. Mohamed Ali Kaafar (67 papers)
Citations (3)