
Using Large Language Models for Parametric Shape Optimization (2412.08072v1)

Published 11 Dec 2024 in cs.CE, cs.AI, and cs.LG

Abstract: Recent advanced LLMs have showcased their emergent capability of in-context learning, facilitating intelligent decision-making through natural language prompts without retraining. This new machine learning paradigm has shown promise in various fields, including general control and optimization problems. Inspired by these advancements, we explore the potential of LLMs for a specific and essential engineering task: parametric shape optimization (PSO). We develop an optimization framework, LLM-PSO, that leverages an LLM to determine the optimal shape of parameterized engineering designs in the spirit of evolutionary strategies. Utilizing the "Claude 3.5 Sonnet" LLM, we evaluate LLM-PSO on two benchmark flow optimization problems, specifically aiming to identify drag-minimizing profiles for 1) a two-dimensional airfoil in laminar flow, and 2) a three-dimensional axisymmetric body in Stokes flow. In both cases, LLM-PSO successfully identifies optimal shapes in agreement with benchmark solutions. Besides, it generally converges faster than other classical optimization algorithms. Our preliminary exploration may inspire further investigations into harnessing LLMs for shape optimization and engineering design more broadly.

Summary

  • The paper introduces the LLM-PSO framework, leveraging large language models and in-context learning to perform end-to-end parametric shape optimization, mimicking evolutionary strategies.
  • Numerical experiments on 2D airfoil optimization showed that LLM-PSO generated classical shapes and converged faster than prior reinforcement learning algorithms.
  • In 3D axisymmetric body drag minimization, the method achieved profiles matching analytical benchmarks and surpassed traditional genetic algorithms in efficiency.

Exploring LLMs for Parametric Shape Optimization

This paper investigates the application of LLMs to parametric shape optimization (PSO) in engineering design, building on contemporary advances in machine learning paradigms. Specifically, the authors introduce an optimization framework called LLM-PSO, leveraging the in-context learning (ICL) capabilities of LLMs, notably the Claude 3.5 Sonnet model, to perform end-to-end shape optimization in the spirit of evolutionary strategies. Two benchmark problems are explored: drag minimization for a two-dimensional airfoil in laminar flow and for a three-dimensional axisymmetric body in Stokes flow.

Methodology

The proposed LLM-PSO framework utilizes an LLM to iteratively suggest optimal design configurations. The PSO problem is framed using a parameter vector $\mathbf{x}$ that encodes the design variables. LLM-PSO optimizes this vector by evolving a population of designs generation by generation through ICL, distinguishing itself from traditional optimization methods by using natural language prompts to guide LLM-generated outputs.

The algorithm begins by initializing a few generations of design variables sampled from a Gaussian distribution. Each generation is evaluated for performance, and the results are stored in a record buffer. This buffer forms part of a multi-component prompt used to query the LLM: the prompt supplies the LLM with state vectors and performance metrics from selected previous generations, instructing it to propose an optimal mean for the next generation. This methodology effectively mimics evolutionary strategies, with the LLM contributing to exploration of the solution space.
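The loop described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact implementation: `query_llm` is a hypothetical placeholder for the call to the LLM (the real framework serializes the record buffer into a natural-language prompt and parses the suggested mean from the reply); here it is stubbed with a simple heuristic so the sketch runs, and the objective is a toy quadratic rather than a flow solver.

```python
import numpy as np

def evaluate(x):
    # Placeholder objective (stand-in for a CFD drag evaluation):
    # a toy quadratic maximized at x = 0.5 in every coordinate.
    return -np.sum((x - 0.5) ** 2)

def query_llm(record_buffer):
    # Hypothetical stand-in for the LLM call. The real framework would
    # build a natural-language prompt from the recorded generations and
    # parse the LLM's proposed mean vector from its reply. Here we just
    # average the best design of each recorded generation.
    best_per_gen = [max(gen, key=lambda d: d[1])[0] for gen in record_buffer]
    return np.mean(best_per_gen, axis=0)

def llm_pso(dim=4, pop_size=8, sigma=0.2, warmup=3, generations=20, seed=0):
    rng = np.random.default_rng(seed)
    record_buffer = []
    mean = rng.normal(0.0, 1.0, dim)          # initial mean from a Gaussian
    for g in range(generations):
        # Sample a population around the current mean and evaluate it.
        pop = mean + sigma * rng.normal(size=(pop_size, dim))
        scored = [(x, evaluate(x)) for x in pop]
        record_buffer.append(scored)          # store (design, score) pairs
        if g >= warmup:
            # After a few warm-up generations, ask the (stubbed) LLM for
            # the next generation's mean, conditioned on the buffer.
            mean = query_llm(record_buffer)
        else:
            mean = max(scored, key=lambda d: d[1])[0]
    best_x, best_f = max((d for gen in record_buffer for d in gen),
                         key=lambda d: d[1])
    return best_x, best_f

best_x, best_f = llm_pso()
print(best_f)  # approaches 0, the objective's maximum
```

In this sketch the LLM plays the role that the mean-update rule plays in a classical evolution strategy: it proposes where the next population should be centered, given the history of designs and their scores.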

Numerical Experiments

Two prominent case studies were conducted:

  1. Two-Dimensional Airfoil Optimization: This classic problem involves optimizing the shape of an airfoil to maximize its aerodynamic efficiency, quantified by the lift-to-drag ratio at a Reynolds number of 100. The airfoil profile was parameterized using Bézier curves, with a variable number of free control points dictating the degrees of freedom (DOFs). Results demonstrated that LLM-PSO generated airfoil shapes consistent with classical profiles and converged more quickly than reinforcement learning (RL) algorithms from prior studies.
  2. Drag Minimization of Axisymmetric Bodies: Here, the task was to minimize the drag of a body in Stokes flow under either a fixed surface area or a fixed volume constraint. The shape of the body was parameterized by tangent angles expressed via Legendre polynomials. LLM-PSO achieved drag-minimizing profiles matching analytical benchmarks, surpassing traditional genetic algorithms (GA) in both solution quality and computational efficiency.

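To make the airfoil parameterization concrete, the following is a minimal sketch of a Bézier-curve evaluation: the design vector supplies the free control-point coordinates, and the curve is evaluated via the Bernstein basis. The control-point layout shown is illustrative, not the paper's exact configuration.

```python
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    """Evaluate a Bézier curve defined by its control points.

    control_points: sequence of (x, y) pairs; returns an (n_samples, 2)
    array of curve coordinates.
    """
    P = np.asarray(control_points, dtype=float)   # (n+1, 2) control points
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    # Bernstein basis: B_{i,n}(t) = C(n, i) * t^i * (1 - t)^(n - i)
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)], axis=1)
    return basis @ P

# Example: the upper surface of a profile with two free interior control
# points; the endpoints are pinned at the leading and trailing edges, so
# only the interior points would enter the optimization vector.
ctrl = [(0.0, 0.0), (0.2, 0.1), (0.7, 0.08), (1.0, 0.0)]
curve = bezier(ctrl)
print(curve[0], curve[-1])  # prints [0. 0.] [1. 0.] (endpoints interpolated)
```

Varying the interior control points reshapes the profile smoothly, which is what makes Bézier curves a convenient low-dimensional encoding for the optimizer to act on.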
Implications and Future Work

The research underlines the potential of LLMs, aided by ICL, to support complex design tasks such as PSO, combining the interpretative strength of LLMs with rigorous numerical tasks traditionally left to more specialized algorithms. The results indicate that LLMs can achieve better or comparable performance in PSO tasks compared to classical methods, with faster convergence in some cases.

Future research could focus on refining the LLM-PSO framework to address high-DOF optimization challenges and on tuning hyperparameters to further enhance performance. Additionally, there is potential for integrating LLM-based optimization strategies with other advanced techniques to leverage their combined strengths. Fine-tuning LLMs specifically for engineering tasks could also expand their applicability and efficiency in optimization.

Overall, this investigation opens avenues for deploying LLMs in broader engineering contexts, encouraging further exploration of generative AI capabilities beyond traditional ML paradigms.