Optimization of Armv9 architecture general large language model inference performance based on Llama.cpp (2406.10816v1)
Published 16 Jun 2024 in cs.PL, cs.AI, cs.AR, and cs.PF
Abstract: This article optimizes the inference performance of the Qwen-1.8B model by applying Int8 quantization, vectorizing several operators in llama.cpp, and modifying the build script to raise the compiler optimization level. On the Yitian 710 experimental platform, prefill performance improves by 1.6x, decoding performance improves by 24x, memory usage drops to 1/5 of the original, and the accuracy loss is almost negligible.