Artemis: Efficient Commit-and-Prove SNARKs for zkML (2409.12055v2)
Abstract: Ensuring that AI models are both verifiable and privacy-preserving is important for trust, accountability, and compliance. To address these concerns, recent research has focused on developing zero-knowledge machine learning (zkML) techniques that enable the verification of various aspects of ML models without revealing sensitive information. However, while recent advances in zkML have significantly improved the efficiency of proving ML computations, they have largely overlooked the costly consistency checks on committed model parameters and input data, which have become a dominant performance bottleneck. To address this gap, this paper introduces a new Commit-and-Prove SNARK (CP-SNARK) construction, Artemis, that effectively addresses the emerging challenge of commitment verification in zkML pipelines. In contrast to existing approaches, Artemis is compatible with any homomorphic polynomial commitment, including those without trusted setup. We present the first implementation of this CP-SNARK, evaluate its performance on a diverse set of ML models, and show substantial improvements over existing methods, achieving significant reductions in prover costs and maintaining efficiency even for large-scale models. For example, for the VGG model, we reduce the overhead associated with commitment checks from 11.5x to 1.1x. Our results indicate that Artemis provides a concrete step toward practical deployment of zkML, particularly in settings involving large-scale or complex models.
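For orientation (this is the standard commit-and-prove formulation, not quoted from the paper), a CP-SNARK proves membership in a relation of roughly the following form, where cm commits to the witness part u (e.g., the model parameters) under a commitment key ck with randomness r, w is the remaining witness, and R encodes the ML computation being verified:

\[
\mathcal{R}_{\mathrm{cp}} \;=\; \Bigl\{ \bigl((x, \mathrm{cm}),\,(u, r, w)\bigr) \;:\; \mathrm{cm} = \mathrm{Com}_{\mathrm{ck}}(u; r) \;\wedge\; \bigl(x, (u, w)\bigr) \in \mathcal{R} \Bigr\}
\]

The homomorphic property of the polynomial commitment assumed by Artemis, i.e. \(\mathrm{Com}(p) + \mathrm{Com}(q) = \mathrm{Com}(p + q)\) for committed polynomials p and q, is plausibly what lets the consistency check \(\mathrm{cm} = \mathrm{Com}_{\mathrm{ck}}(u; r)\) be handled cheaply outside the main proving circuit; the precise protocol is given in the body of the paper.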