
Cooperative Design Optimization through Natural Language Interaction (2508.16077v1)

Published 22 Aug 2025 in cs.HC, cs.AI, and cs.LG

Abstract: Designing successful interactions requires identifying optimal design parameters. To do so, designers often conduct iterative user testing and exploratory trial-and-error. This involves balancing multiple objectives in a high-dimensional space, making the process time-consuming and cognitively demanding. System-led optimization methods, such as those based on Bayesian optimization, can determine for designers which parameters to test next. However, they offer limited opportunities for designers to intervene in the optimization process, negatively impacting the designer's experience. We propose a design optimization framework that enables natural language interactions between designers and the optimization system, facilitating cooperative design optimization. This is achieved by integrating system-led optimization methods with LLMs, allowing designers to intervene in the optimization process and better understand the system's reasoning. Experimental results show that our method provides higher user agency than a system-led method and shows promising optimization performance compared to manual design. It also matches the performance of an existing cooperative method with lower cognitive load.


Summary

  • The paper's main contribution is combining LLMs with Bayesian Optimization to enable designer-guided parameter selection through natural language interaction.
  • It demonstrates improved user agency and reduced cognitive load compared to fully automated and explicit constraint-based methods.
  • Empirical studies reveal competitive optimization performance and higher user satisfaction in complex, high-dimensional design tasks.

Cooperative Design Optimization through Natural Language Interaction: An Expert Analysis

Introduction and Motivation

The paper presents a novel framework for cooperative design optimization that leverages natural language interaction between designers and an optimization system. The approach integrates LLMs with Bayesian Optimization (BO), enabling designers to guide the optimization process and receive interpretable explanations for system-generated parameter suggestions. This method addresses the limitations of fully system-led optimization, which often restricts designer agency and fails to incorporate human intuition, and also improves upon prior cooperative approaches that require explicit, often cognitively demanding, parameter-space constraints.

System Architecture and Interaction Paradigm

The proposed system operates in iterative cycles in which designers can intervene using natural language requests. The system samples q candidate parameter sets via batch BO, then prompts the LLM to select the candidate best aligned with the designer's request and to provide a natural language rationale for its choice. This interaction paradigm is illustrated in a typical scenario in which a UI designer optimizes a restaurant map design by providing instructions and receiving parameter suggestions and explanations (Figure 1).

Figure 1: A typical interaction scenario of cooperative design optimization with LLMs, showing designer instructions, system proposals, and editable parameter sliders.

The system interface supports both manual parameter adjustment and natural language interaction, with real-time visualization of parameter history and objective values. Designers can alternate between direct manipulation and AI-guided exploration, fostering a flexible, collaborative workflow (Figure 2).

Figure 2: The design interface for the Cooperative: Natural Language condition, featuring sliders, natural language input, evaluation controls, and visualizations of objectives and parameter history.

Technical Implementation: LLM-Guided Bayesian Optimization

Problem Formulation

The optimization problem is multi-objective, seeking to maximize m performance metrics (e.g., speed, accuracy) over n design parameters. The system iteratively selects parameter sets, conducts user testing (simulated via synthetic functions in the paper), and updates the surrogate models.
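Using the summary's notation (m metrics, n parameters), the formulation can be written as a standard multi-objective maximization; since the objectives generally conflict, the system seeks Pareto-optimal solutions rather than a single maximizer:

```latex
% Multi-objective design optimization: m metrics over n design parameters
\max_{\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^{n}}
  \;\; \mathbf{f}(\mathbf{x}) = \bigl(f_1(\mathbf{x}), \dots, f_m(\mathbf{x})\bigr)
```

Optimization quality over such Pareto fronts is what the hypervolume metric in the studies below quantifies.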

Batch BO and Candidate Generation

A Gaussian Process (GP) surrogate is trained for each objective. The acquisition function used is qLogNEHVI, which supports batch sampling and diversity among candidates. In each iteration, q candidates are generated, and the LLM selects one based on the designer's request and the predicted performance/uncertainty (Figure 3).

Figure 3: Overview of the system procedure, showing batch candidate generation, LLM-guided selection, and reasoning.
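The procedure in Figure 3 can be sketched as a single interaction cycle. This is a minimal illustrative skeleton, not the paper's implementation: the real system uses GP surrogates with the qLogNEHVI acquisition function and an actual LLM, whereas `propose_batch`, `surrogate_stats`, and `llm_select` below are hypothetical stand-ins showing only the control flow.

```python
import random

rng = random.Random(0)

def propose_batch(q, n_params):
    """Hypothetical stand-in for batch BO: return q candidate parameter sets."""
    return [[rng.random() for _ in range(n_params)] for _ in range(q)]

def surrogate_stats(candidates):
    """Hypothetical stand-in for GP predictions (mean and variance per candidate)."""
    return [{"mean": sum(c) / len(c), "var": 0.1} for c in candidates]

def llm_select(candidates, stats, request):
    """Hypothetical stand-in for the LLM selector: pick the candidate with the
    highest predicted mean for a 'maximize' request and return a rationale."""
    idx = max(range(len(candidates)), key=lambda i: stats[i]["mean"])
    rationale = (f"Candidate {idx} has the highest predicted mean "
                 f"({stats[idx]['mean']:.2f}) for the request {request!r}.")
    return idx, rationale

# One cycle: generate q candidates, score them, let the "LLM" pick one.
q, n_params = 5, 3
candidates = propose_batch(q, n_params)
stats = surrogate_stats(candidates)
idx, rationale = llm_select(candidates, stats, "increase Objective 1")
chosen = candidates[idx]
```

In the actual system, the chosen parameter set would then be evaluated via user testing and fed back into the surrogate models before the next cycle.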

LLM Prompt Engineering

The LLM prompt includes the task context, parameter and objective ranges, candidate statistics (mean, variance, acquisition value), the evaluation history, and the designer's request. The LLM outputs the index of the selected candidate and a rationale, which is displayed to the designer. This prompt structure enables the LLM to balance exploitation and exploration, incorporate user intent, and communicate uncertainty (Figure 4).


Figure 4: Prompt for candidate selection and reasoning, with context variables and candidate statistics.
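A prompt of this shape might be assembled as below. This is a hedged sketch: the field names, wording, and example values are hypothetical and do not reproduce the paper's actual prompt, only the kinds of context it is described as containing.

```python
def build_prompt(task, param_ranges, objectives, candidates, history, request):
    """Assemble a candidate-selection prompt from task context, candidate
    statistics, evaluation history, and the designer's request."""
    lines = [f"Task: {task}", "Parameters (name: min..max):"]
    for name, (lo, hi) in param_ranges.items():
        lines.append(f"  {name}: {lo}..{hi}")
    lines.append(f"Objectives to maximize: {', '.join(objectives)}")
    lines.append("Candidates (index: mean, variance, acquisition value):")
    for i, c in enumerate(candidates):
        lines.append(f"  {i}: mean={c['mean']:.2f}, var={c['var']:.2f}, "
                     f"acq={c['acq']:.2f}")
    lines.append(f"Evaluation history: {history}")
    lines.append(f"Designer request: {request}")
    lines.append("Reply with the index of the best candidate and a short rationale.")
    return "\n".join(lines)

prompt = build_prompt(
    task="Optimize a restaurant map UI",          # illustrative scenario
    param_ranges={"font_size": (8, 24), "zoom": (0.5, 2.0)},
    objectives=["speed", "accuracy"],
    candidates=[{"mean": 0.6, "var": 0.05, "acq": 0.3},
                {"mean": 0.7, "var": 0.20, "acq": 0.5}],
    history=[(0.55, 0.60)],
    request="increase accuracy without hurting speed",
)
```

Packaging the acquisition values and uncertainties alongside the request is what lets the LLM trade off exploration against the designer's stated intent.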

Empirical Evaluation: User Studies

Study 1: Levels of Control

Three conditions were compared: Designer-led (manual), BO-led (system-led), and Cooperative: Natural Language (proposed). Participants optimized web app designs using simulated user testing.

  • Optimization Performance: BO-led achieved the highest relative hypervolume, but Cooperative matched or exceeded Designer-led performance in most cases.
  • Agency: Cooperative and Designer-led conditions yielded significantly higher agency scores than BO-led.
  • Preference: A majority of participants preferred the Cooperative condition (Figure 5).

    Figure 5: Visualization of obtained Pareto fronts in Study 1, showing performance across conditions.


    Figure 6: Boxplots of the relative hypervolume for the three conditions, quantifying optimization performance.


    Figure 7: Boxplot of agency scores, demonstrating higher agency in Cooperative and Designer-led conditions.

Study 2: Comparison with Explicit Constraint-Based Cooperation

The Cooperative: Natural Language approach was compared to a prior cooperative method (Explicit Constraint) that required designers to specify forbidden regions in parameter space.

  • Cognitive Load: Cooperative: Natural Language resulted in significantly lower NASA-TLX scores, indicating reduced mental demand.
  • Trust: No significant difference in trust metrics, though qualitative feedback suggested nuanced perceptions of system transparency.
  • Agency: Explicit Constraint condition yielded higher agency, likely due to more tangible control over parameter space.
  • Preference: Most participants preferred the natural language approach (Figure 8).

    Figure 8: Visualization of Pareto fronts in Study 2, showing comparable optimization outcomes for both cooperative methods.


    Figure 9: Boxplots of relative hypervolume for Explicit Constraint and Natural Language conditions.


    Figure 10: Boxplots of NASA-TLX scores, indicating lower cognitive load for Natural Language interaction.


    Figure 11: Boxplots of trust dimensions from the Multidimensional Trust Questionnaire.

Technical Validation and Request Analysis

A technical assessment demonstrated that alternating natural language requests (e.g., "increase Objective 1" vs. "increase Objective 2") successfully steered the optimization toward distinct regions of the search space, as evidenced by clustering and centroid separation in the parameter space (Figure 12).

Figure 12: Visualization of optimization results for different objective-focused requests, showing effective steering of the search.
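The centroid-separation check described above can be sketched as follows: collect the parameter sets visited under each request, compute the centroid of each set, and measure the distance between centroids. The data here is illustrative, not from the paper.

```python
import math

def centroid(points):
    """Component-wise mean of a list of parameter vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def distance(a, b):
    """Euclidean distance between two parameter vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical parameter sets explored under two alternating requests.
run_obj1 = [[0.8, 0.2], [0.9, 0.1], [0.7, 0.3]]  # "increase Objective 1"
run_obj2 = [[0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]  # "increase Objective 2"

c1, c2 = centroid(run_obj1), centroid(run_obj2)
separation = distance(c1, c2)  # large separation => the requests steered the search
```

A separation that is large relative to the within-cluster spread indicates the two requests drove the optimizer into distinct regions, which is the effect the assessment reports.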

Analysis of 187 designer requests revealed that 90.9% focused on outcomes ("what" to achieve) rather than specific parameter manipulations ("how"), and 20.6% expressed complex constraints or trade-offs not easily captured by GUIs. This underscores the expressive power and flexibility of natural language interaction.

Discussion and Implications

Mitigating Design Fixation

The system's ability to provide unexpected yet feasible suggestions helps mitigate design fixation, encouraging broader exploration while preserving designer agency. Contradictory rationales prompt designers to reconsider assumptions and expand their search.

Explanations and Acceptance

Natural language explanations facilitate acceptance of system suggestions, support planning, and reassure designers when system reasoning aligns with their own. However, explanations must be concise and contextually relevant to avoid cognitive overload.

Trade-offs and Limitations

There is a trade-off between candidate diversity (batch size q) and interaction efficiency. Larger batches improve alignment with specific requests but increase computational overhead. Early-stage accuracy is limited by surrogate model uncertainty; more initial seed points or improved uncertainty communication could enhance responsiveness.

Applicability and Generalization

The method is well-suited to real-world design tasks, especially in high-dimensional or complex spaces where manual constraint specification is impractical. The hybrid approach—combining BO's statistical guidance and LLM's adaptive learning—enables generalization to unfamiliar domains without prior knowledge.

Future Directions

  • Improved Explanation Generation: Layered explanations (summary + detail) and explicit hypothesis formulation.
  • Uncertainty Communication: Adaptive explanations based on predictive confidence.
  • Scalability: Application to higher-dimensional design spaces and real-world iterative testing.
  • Generalization: Validation in domains outside the LLM's training data.

Conclusion

The integration of LLMs with BO for cooperative design optimization via natural language interaction offers a flexible, interpretable, and cognitively efficient framework for human-AI collaboration. Empirical results demonstrate enhanced user agency, competitive optimization performance, and reduced cognitive load compared to manual and explicit constraint-based methods. The approach is particularly advantageous in complex, high-dimensional design tasks and holds promise for broader application and future refinement.
