Optimal batch size for LLM-guided batch Bayesian optimization

Determine the optimal batch size q for the proposed LLM-guided multi-objective Bayesian optimization procedure—where q candidate designs are generated using batch acquisition (qLogNEHVI) and a large language model selects one candidate per iteration—so as to balance candidate diversity against interaction efficiency (i.e., latency and responsiveness) during cooperative design optimization via natural language interaction.

Background

The paper integrates an LLM into a batch multi-objective Bayesian optimization (BO) workflow. Each iteration samples q candidates using qLogNEHVI, after which the LLM selects a single candidate aligned with the designer's natural-language request. In the user studies, the authors fixed q=8 to balance candidate variety and interaction responsiveness.
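The per-iteration loop can be sketched as follows. Note that `sample_candidates` and `llm_select` are illustrative stand-ins (random designs and a mock heuristic), not the paper's actual qLogNEHVI acquisition or LLM call:

```python
import random

def sample_candidates(q: int) -> list[dict]:
    # Stand-in for qLogNEHVI batch acquisition: in the real workflow,
    # q candidates come from optimizing the batch acquisition function
    # over a surrogate model. Here we just draw random 2-D designs.
    return [{"x1": random.random(), "x2": random.random()} for _ in range(q)]

def llm_select(candidates: list[dict], request: str) -> dict:
    # Stand-in for the LLM choosing the candidate that best matches the
    # designer's natural-language request (mock keyword heuristic).
    key = "x1" if "bold" in request else "x2"
    return max(candidates, key=lambda c: c[key])

def run_iteration(q: int = 8, request: str = "make it bolder") -> dict:
    candidates = sample_candidates(q)          # batch of q diverse designs
    return llm_select(candidates, request)     # one candidate per iteration

print(run_iteration(q=8))
```

Larger q gives `llm_select` more options to match the request, at the cost of a more expensive `sample_candidates` step.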

They observe a trade-off: larger q increases the chance of matching specific user requests by providing more diverse candidates, but it also lengthens the time needed to generate the batch, slowing the interaction and potentially degrading the user experience. The authors explicitly state that finding the optimal q to balance these opposing factors remains unresolved.
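One hypothetical way to make the trade-off concrete: if each candidate independently matches a specific request with probability p, a batch of q candidates matches with probability 1 - (1 - p)^q, while generation latency grows roughly linearly in q. Both the independence assumption and the numeric parameters below are illustrative, not from the paper:

```python
def match_probability(q: int, p: float = 0.2) -> float:
    # Probability that at least one of q independent candidates
    # matches the request (illustrative independence assumption).
    return 1.0 - (1.0 - p) ** q

def latency_seconds(q: int, per_candidate: float = 0.5) -> float:
    # Assumed linear batch-generation cost (hypothetical rate).
    return per_candidate * q

for q in (2, 4, 8, 16):
    print(q, round(match_probability(q), 3), latency_seconds(q))
```

Under this toy model, match probability saturates (diminishing returns in q) while latency keeps growing, which is exactly why some intermediate q, such as the authors' q=8, can be a reasonable compromise.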

References

This highlights a trade-off between diversity and interaction efficiency. Determining the optimal batch size that balances this trade-off remains an open question and is left for future work.

Cooperative Design Optimization through Natural Language Interaction (2508.16077 - Niwa et al., 22 Aug 2025) in Section: Limitations and Future Work, Trade-off between Candidate Diversity and Interaction Efficiency