Assess whether QLoRA matches full 16-bit finetuning at 33B and 65B scales
Establish whether QLoRA can match full 16-bit finetuning performance for 33B- and 65B-parameter models by running controlled comparisons against 16-bit baselines at those scales, as shown in the sketch below.
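As a minimal sketch of how the QLoRA arm of such a comparison could be configured, the snippet below uses HuggingFace transformers, peft, and bitsandbytes with 4-bit NF4 quantization, double quantization, and LoRA adapters on all linear projections, following the setup described in the paper. The checkpoint name and the LoRA hyperparameters (r, alpha, dropout) are illustrative assumptions, not values prescribed by the source.

```python
# Hedged sketch of the QLoRA arm of the comparison. Assumes transformers,
# peft, bitsandbytes, and accelerate are installed; the model id and LoRA
# hyperparameters are placeholders chosen for illustration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "huggyllama/llama-65b"  # assumed checkpoint; any 33B/65B base model works

# 4-bit NF4 quantization with double quantization, as used in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on all linear projections, which the paper reports is
# important for matching full-finetuning performance. The specific r,
# alpha, and dropout values here are assumptions.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# The 16-bit baseline arm would load the same checkpoint in bfloat16 with
# no quantization or adapters and finetune all weights, keeping the data,
# schedule, and evaluation identical so the comparison stays controlled.
```

Holding everything but the finetuning method fixed (base checkpoint, data, optimizer schedule, and evaluation benchmarks) is what makes the comparison controlled; the only free variable is QLoRA versus full 16-bit training.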
References
Despite this evidence, we did not establish that QLoRA can match full 16-bit finetuning performance at 33B and 65B scales. Due to the immense resource costs, we leave this study to future work.
— QLoRA: Efficient Finetuning of Quantized LLMs (arXiv:2305.14314, Dettmers et al., 2023), Section "Limitations and Discussion"