ComplexityNet: Increasing LLM Inference Efficiency by Learning Task Complexity (2312.11511v3)
Abstract: We present ComplexityNet, a streamlined LLM designed to assess task complexity. The model predicts the likelihood that various LLMs, each with different capabilities, will produce an accurate output for a given task. Our initial application of ComplexityNet involves the Mostly Basic Python Problems (MBPP) dataset, for which we created the first set of labels defining task complexity. ComplexityNet achieved 79% accuracy in determining task complexity, a significant improvement over the 34% accuracy of the original, non-fine-tuned model. Furthermore, ComplexityNet reduces computational resource usage by 90% compared to always using the highest-complexity model, while maintaining a high code generation accuracy of 86.7%. This study demonstrates that fine-tuning smaller models to categorize tasks by complexity can yield a better trade-off between accuracy and efficiency in the use of LLMs. Our findings suggest a promising direction for optimizing LLM applications, especially in resource-constrained environments.
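The routing idea behind the abstract can be illustrated with a minimal sketch: a small classifier assigns each prompt a complexity label, and the task is dispatched to the cheapest model tier expected to solve it. All names below (MODEL_TIERS, classify_complexity, route_task) are hypothetical illustrations, not the paper's implementation; in particular, the length-based placeholder classifier stands in for the fine-tuned ComplexityNet only so the example runs end to end.

```python
# Minimal sketch of complexity-based model routing (hypothetical names
# throughout; the paper's actual models, labels, and APIs may differ).

from typing import Callable

# Hypothetical model tiers, ordered from cheapest/least capable to most capable.
MODEL_TIERS = ["small-model", "medium-model", "large-model"]


def classify_complexity(task: str) -> int:
    """Stand-in for the fine-tuned complexity classifier.

    Returns an integer complexity label in {0, 1, 2}. A real system would
    invoke the fine-tuned small LLM here; this placeholder uses prompt
    length purely so the sketch is self-contained and runnable.
    """
    return min(len(task) // 200, 2)


def route_task(task: str, generate: Callable[[str, str], str]) -> str:
    """Dispatch a task to the cheapest model tier predicted to handle it."""
    tier = MODEL_TIERS[classify_complexity(task)]
    return generate(tier, task)


if __name__ == "__main__":
    # Dummy generator so the example needs no external services.
    def fake_generate(model: str, prompt: str) -> str:
        return f"[{model}] would handle: {prompt[:40]}..."

    print(route_task("Write a Python function to reverse a string.", fake_generate))
```

Under this scheme, the 90% resource saving reported above would come from most tasks never reaching the largest tier; only prompts classified at the highest complexity incur the cost of the most capable model.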