
Proof of Quality: A Costless Paradigm for Trustless Generative AI Model Inference on Blockchains (2405.17934v2)

Published 28 May 2024 in cs.AI

Abstract: Generative AI models, such as GPT-4 and Stable Diffusion, have demonstrated powerful and disruptive capabilities in natural language and image tasks. However, deploying these models in decentralized environments remains challenging. Unlike traditional centralized deployment, systematically guaranteeing the integrity of AI model services in fully decentralized environments, particularly on trustless blockchains, is both crucial and difficult. In this paper, we present a new inference paradigm called proof of quality (PoQ) to enable the deployment of arbitrarily large generative models on blockchain architecture. Unlike traditional approaches based on validating inference procedures, such as ZKML or OPML, our PoQ paradigm focuses on the outcome quality of model inference. Using lightweight BERT-based cross-encoders as our underlying quality evaluation model, we design and implement PQML, the first practical protocol for real-world NLP generative model inference on blockchains, tailored for popular open-source models such as Llama 3 and Mixtral. Our analysis demonstrates that our protocol is robust against adversarial but rational participants: lazy or dishonest behavior yields lower payoffs than honest participation. The computational overhead of validating the quality evaluation is minimal, allowing quality validators to complete the quality check within a second, even using only a CPU. Preliminary simulation results show that PoQ consensus is generated in milliseconds, 1,000 times faster than any existing scheme.
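The core idea in the abstract — validators score the *quality* of each miner's output rather than re-verifying the inference computation — can be sketched in a few lines. This is only an illustration of the mechanism, not the paper's PQML protocol: the BERT cross-encoder is replaced here with a stand-in token-overlap scorer, and the reward rule (highest-scoring response wins) is a deliberate simplification.

```python
# Minimal sketch of one Proof-of-Quality round.
# Assumptions: quality_score is a stand-in for the paper's lightweight
# cross-encoder, and the winner-takes-reward rule simplifies PQML.

def quality_score(prompt: str, response: str) -> float:
    """Stand-in quality model: fraction of prompt tokens echoed in the response."""
    p = set(prompt.lower().split())
    r = set(response.lower().split())
    return len(p & r) / max(len(p), 1)

def poq_round(prompt: str, submissions: dict) -> tuple:
    """Validators score each miner's response; the highest score earns the reward."""
    scores = {miner: quality_score(prompt, resp)
              for miner, resp in submissions.items()}
    winner = max(scores, key=scores.get)
    return winner, scores

winner, scores = poq_round(
    "explain proof of quality on blockchains",
    {
        "honest_miner": "proof of quality scores model outputs on blockchains",
        "lazy_miner": "no idea",
    },
)
# A rational miner is incentivized to submit genuine output: the lazy
# miner's low-quality response scores below the honest one's.
```

Because scoring is a single forward pass over (prompt, response) pairs rather than a proof of the full inference trace, the check stays cheap — which is what lets the paper's validators finish within a second on a CPU.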
