Selecting the Optimal Prompter LLM Under Generalizability and Cost Constraints

Determine which large language model should serve as the prompter in Booster's guided recommendation pipeline, balancing generalizability across environments (e.g., cross-schema transfer) against inference cost, and identify the performance–cost trade-offs associated with different LLM scales and capabilities.
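One way to make the trade-off concrete is to frame prompter selection as a Pareto problem over measured quality and inference cost, then pick the best non-dominated model that fits a budget. The sketch below is illustrative only: the model names, quality scores, and cost figures are hypothetical placeholders, and real values would have to come from benchmarking each embedder–prompter combination in the target (e.g., cross-schema) setting.

```python
from dataclasses import dataclass

@dataclass
class PrompterCandidate:
    name: str       # hypothetical model identifier
    quality: float  # e.g., cross-schema recommendation score (higher is better)
    cost: float     # e.g., estimated inference cost per 1K queries (lower is better)

def pareto_front(candidates):
    """Keep candidates not dominated by another with >= quality at <= cost."""
    front = []
    for c in candidates:
        dominated = any(
            o.quality >= c.quality and o.cost <= c.cost
            and (o.quality > c.quality or o.cost < c.cost)
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return sorted(front, key=lambda c: c.cost)

def select_under_budget(candidates, budget):
    """Pick the highest-quality Pareto-optimal prompter whose cost fits the budget."""
    feasible = [c for c in pareto_front(candidates) if c.cost <= budget]
    return max(feasible, key=lambda c: c.quality) if feasible else None

# Illustrative numbers only, not results from the paper.
candidates = [
    PrompterCandidate("small-llm", quality=0.62, cost=1.0),
    PrompterCandidate("medium-llm", quality=0.71, cost=4.0),
    PrompterCandidate("large-llm", quality=0.78, cost=15.0),
]
print(select_under_budget(candidates, budget=5.0))
```

A fuller treatment would also account for adaptation options (e.g., fine-tuning a smaller prompter) rather than choosing only among off-the-shelf models, which is part of what the open question asks.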

Background

The authors evaluate multiple embedder–prompter combinations and observe that larger LLMs can yield better configurations, especially for cross-schema scenarios, but at higher inference cost. They note the absence of a principled method for choosing an LLM given varying generalizability requirements and budget constraints.

The selection problem is left for future work, leaving a concrete need for a systematic way to choose or adapt LLMs for Booster across deployment contexts while controlling resource usage.

References

We leave the selection of the optimal LLM based on generalizability (e.g., cross-schema) and cost requirements for future work.