Gödel Test: Can Large Language Models Solve Easy Conjectures? (2509.18383v1)
Abstract: Recent announcements from frontier AI model labs have highlighted strong results on high-school and undergraduate math competitions. Yet it remains unclear whether LLMs can solve new, simple conjectures in more advanced areas of mathematics. We propose the Gödel Test: evaluating whether a model can produce correct proofs for very simple, previously unsolved conjectures. To this end, we study the performance of GPT-5 on five conjectures in combinatorial optimization. For each problem, we provided one or two source papers from which the conjecture arose, withheld our own conjecture, and then assessed the model's reasoning in detail. On the three easier problems, GPT-5 produced nearly correct solutions; for Problem 2 it even derived a different approximation guarantee that, upon checking, refuted our conjecture while providing a valid solution. The model failed on Problem 4, which required combining results from two papers. On Problem 5, a harder case without a validated conjecture, GPT-5 proposed the same algorithm we had in mind but failed in the analysis, suggesting the proof is more challenging than expected. Although our sample is small, the results point to meaningful progress on routine reasoning, occasional flashes of originality, and clear limitations when cross-paper synthesis is required. GPT-5 may represent an early step toward frontier models eventually passing the Gödel Test.
Explain it Like I'm 14
Overview
This paper asks a simple but important question: Can today’s strongest large language models (LLMs), like GPT‑5, write correct proofs for new, easy math ideas that haven’t been solved before? The authors call this challenge the “Gödel Test.” They try it on five fresh conjectures (educated guesses) in a math area called combinatorial optimization, focusing on “submodular maximization,” which shows up a lot in AI and decision‑making.
What questions does the paper explore?
- Can an LLM solve brand‑new, simple math conjectures without step‑by‑step guidance?
- How well does it reason through proofs when given only minimal background (like one or two related papers)?
- Where does it succeed or fail, especially when it needs to combine ideas from different sources?
How did they test the model?
The Gödel Test idea
Think of the Gödel Test like a true “problem‑solving pop quiz.” Instead of giving the model typical math contest problems, the authors give it:
- New, simple conjectures (not directly answered in existing papers),
- Minimal hints: one or two related source papers,
- A request to produce a full proof or solution by itself.
The math topic in simple terms
- Imagine you have a set of items (like apps for your phone), and a “value function” that tells you how good a set of items is.
- Sometimes items work better together (complementarity: left shoe + right shoe), and sometimes they overlap (substitution: tablet + laptop).
- Submodular functions are ones that don’t have complementarity; they act like “diminishing returns”—each extra item helps less when you already have many.
- The goal in submodular maximization is to pick a set (or choose amounts of items, if you work with continuous versions) that gets as much value as possible, under some rules (like only choosing a certain number, or staying inside a budget or a “convex” region—think: a flexible but bounded space where “averages” stay inside).
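To make “diminishing returns” concrete, here is a tiny illustration of our own (not from the paper): a coverage function, the textbook example of a submodular function, gives a smaller gain for the same new item once the current selection is larger.

```python
# Minimal illustration (ours, not from the paper): a coverage function is a
# classic submodular function and shows diminishing returns.

def coverage_value(chosen, item_sets):
    """Value = number of distinct elements covered by the chosen items."""
    covered = set()
    for name in chosen:
        covered |= item_sets[name]
    return len(covered)

item_sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}

small = {"A"}          # a small current selection
large = {"A", "C"}     # a larger current selection
gain_small = coverage_value(small | {"B"}, item_sets) - coverage_value(small, item_sets)
gain_large = coverage_value(large | {"B"}, item_sets) - coverage_value(large, item_sets)
print(gain_small, gain_large)  # 1 0 -- the same item "B" helps less once more is already chosen
```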
What the authors did
- They created five problems (with simple, realistic goals) in this area.
- For each problem, they gave GPT‑5 a short description and 1–2 related papers.
- They asked GPT‑5 to propose an algorithm and write a proof of how good it is.
- Then they checked the model’s math carefully, noting correct steps, mistakes, and how original or “lazy” (over‑copying source ideas) the reasoning was.
What did they find?
Here’s a compact summary of the five problems and GPT‑5’s performance:
Problem | What it was about (in brief) | How GPT‑5 did | Why it matters |
---|---|---|---|
1 | Maximize F = G + H, where G is monotone (always increases) and H can be non‑monotone; both are DR‑submodular (a smooth “diminishing returns” version), under a convex constraint. | Produced a nearly correct “split” guarantee: roughly (1 − 1/e) times the optimum’s G‑value plus (1/e) times its H‑value, minus a small additive error. The proof was mostly an adaptation of a known method and had minor inaccuracies. | Shows solid routine reasoning: it can adapt known proofs and get almost the right constants (1 − 1/e is a famous best‑possible bound for monotone cases). |
2 | Bicriteria maximization over a p‑system (a general constraint class): get value close to the best while allowing a controlled “blow‑up” in feasibility. | Proposed a simple multi‑pass greedy algorithm (sketched just after this table) and derived a guarantee with g_p(ε) = ⌈ln(1/ε)/ln((p+1)/p)⌉. This choice actually refuted the authors’ original conjecture and was correct up to a small inequality slip. | A “spark of originality”: it found a different (and sensible) bound that gets worse as p grows (which matches intuition), and overall gave a good solution. |
3 | Maximize a “weakly DR‑submodular” function (a relaxed version with parameter γ) over a convex set; show an approximation like (1 − e^{−γ}). | Gave a Frank–Wolfe‑style algorithm and showed roughly a (1 − e^{−γ}) fraction of the optimum, up to a small smoothness error. Later, when asked for a full proof, it delivered a detailed (though still source‑guided) argument with minor consistency issues. | Confirms the model can correctly adjust known analyses for relaxed conditions (scaling by γ) and produce a clean, usable guarantee. |
4 | Required combining results from two papers to get a proper solution. | Failed. The proof looked plausible but had serious logical gaps. | Shows a key weakness: “cross‑paper synthesis” and deeper integration of techniques are hard for the model. |
5 | Harder, open‑ended case without a clear conjecture; the authors had an algorithm idea in mind. | Proposed the same algorithm as the authors but couldn’t analyze it correctly. The proof fell apart on details. | Suggests the problem is harder than they thought, and that the model struggles when analysis is non‑routine or demands new creative proof steps. |
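For intuition on the bound in the Problem 2 row: with ε = 0.1, g_p(ε) = ⌈ln(1/ε)/ln((p+1)/p)⌉ works out to 4 when p = 1 and about 25 when p = 10, so the allowed feasibility blow‑up grows roughly like p·ln(1/ε), since ln((p+1)/p) ≈ 1/p for large p.

The sketch below is our own hedged illustration of the multi‑pass greedy idea, not the paper’s exact algorithm: it uses a cardinality constraint k (the simplest p‑system, with p = 1), the number of passes controls how far the output may exceed the constraint, and the helper names are hypothetical.

```python
# Hedged sketch (not the paper's algorithm): multi-pass greedy for bicriteria
# submodular maximization under a cardinality constraint k. Each pass greedily
# adds up to k more elements, so after g passes the output may hold up to g*k
# elements, i.e. the constraint is violated by at most a factor of g while the
# value keeps improving (assuming value is non-negative, monotone, submodular).

def multi_pass_greedy(value, ground_set, k, num_passes):
    selected = set()
    for _ in range(num_passes):
        for _ in range(k):
            best_elem, best_gain = None, 0.0
            for e in ground_set - selected:
                gain = value(selected | {e}) - value(selected)
                if gain > best_gain:
                    best_elem, best_gain = e, gain
            if best_elem is None:      # nothing adds value; stop early
                return selected
            selected.add(best_elem)
    return selected

# Toy usage with a coverage objective (submodular):
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {6, 7}}
value = lambda S: len(set().union(*(sets[s] for s in S))) if S else 0
print(multi_pass_greedy(value, set(sets), k=1, num_passes=3))  # up to 3 items despite k = 1
```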
General patterns the authors observed
- Strong on routine, step‑by‑step reasoning when there’s a standard path to follow.
- Tends to “adapt” proofs from source papers too closely—like a student who copies the style rather than rethinking more natural steps.
- Sometimes produces impressive, unexpected insights (Problem 2).
- Struggles when a solution needs weaving together methods from multiple papers (Problems 4 and 5).
- Prompting matters: when asked for full proofs, GPT‑5 includes more intermediate steps and is less likely to skip important details.
- Warning: Incorrect proofs can look convincing at first glance—human experts must check carefully.
Why is this important?
- It’s an early sign that AI models can do more than contest math—they can sometimes work with research‑level ideas, adapt known methods, and even produce new twists.
- However, they still miss big pieces when problems require combining ideas across different sources or inventing truly new proof strategies.
- This means AI can assist researchers with routine parts and occasionally spark new directions, but human oversight remains essential—especially to catch subtle but fundamental mistakes.
Limitations to keep in mind
- The paper used only five problems and only GPT‑5; larger, more diverse tests are needed for stronger conclusions.
- Even “simple” conjectures can be tricky to judge; some may exist in older literature, and verifying AI proofs is time‑consuming.
- Results depend on how you prompt the model; better prompts and integration with tools (like computer algebra or proof assistants) could improve performance.
Take‑home message
- GPT‑5 shows clear progress: it can tackle several routine research problems, produce near‑correct proofs, and occasionally offer original ideas.
- But it still has major gaps: combining techniques across papers and doing deep, creative proof synthesis is tough.
- With future improvements—and better tool integration—models may get closer to passing the Gödel Test: solving new, simple conjectures reliably and independently.
Knowledge Gaps
Knowledge gaps, limitations, and open questions
The following list captures what remains missing, uncertain, or unexplored in the paper, framed to guide actionable follow-up research:
- Formalize the Gödel Test: Provide an operational, reproducible definition of “very simple, previously unsolved conjectures,” including criteria for simplicity, novelty assurance, and scope across mathematical subfields.
- Benchmark design: Create a public, diverse suite of easy conjectures spanning multiple advanced domains (not only combinatorial optimization) with standardized prompts, references, difficulty tiers, and target solution types.
- Model comparisons: Evaluate multiple frontier models under identical conditions to ascertain generality of findings beyond GPT-5 (including variance across runs and versions).
- Tool integration: Systematically test the impact of integrating computer algebra systems, proof assistants (e.g., Lean/Coq/Isabelle), and formal verification pipelines on success rates and error reduction.
- Prompting methodology: Quantify the effect of prompt design (e.g., request for full proofs, hints, paper citations) via controlled ablation studies; derive best practices for eliciting complete and correct reasoning.
- Proof verification protocol: Develop and adopt a standardized, preferably mechanized, protocol for correctness checking, with an error taxonomy (missing conditions, invalid inequalities, misuse of monotonicity, unjustified generalizations).
- Training data contamination: Establish procedures to ensure conjecture novelty relative to model training data, including automated literature scans and leakage detection to validate “unknown” problem status.
- Replicability and variability: Measure time-to-solution, token usage, run-to-run variability, and robustness under different decoding strategies; report confidence intervals to support statistical conclusions.
- Cross-paper synthesis: Design targeted tasks explicitly requiring integration of multiple papers and techniques; analyze failure modes and interventions (e.g., retrieval augmentation, structured reading plans).
- Safety mechanisms: Build automated detectors for “plausible but wrong” proofs and uncertainty signaling; study how to prevent propagation of convincing yet flawed mathematical outputs.
- Human baseline: Compare GPT-5 against a “competent graduate student” baseline under identical constraints (time, references), quantifying accuracy, originality, and proof completeness.
- Data release: Publish prompts, attached references, raw model outputs, verification notes, and adjudication decisions to enable independent re-analysis and benchmarking.
- Problem 1 (monotone + non-monotone DR-submodular over down-closed convex P):
- Tightness of β with fixed α=1−1/e: Can β > 1/e be achieved without degrading α using simple projection-free methods? Clarify trade-offs against algorithmic complexity.
- Error term optimization: Precisely characterize err(ε, D, L_G, L_H), including optimal constants and dependence on norm choice and polytope geometry; assess whether step-size schedules can lower the additive term.
- Necessity of down-closedness: Determine whether guarantees extend to general compact convex sets without down-closedness; identify minimal structural assumptions on P.
- Discrete-to-continuous gap: Extend guarantees to discrete submodular maximization via continuous relaxations and rounding while preserving split guarantees on G and H.
- Alternative analyses: Explore proof techniques beyond adapting MGFW to reduce “lazy mirroring,” aiming for more natural arguments or improved constants.
- Problem 2 (bicriteria over p-systems):
- Tightness of g_p(ε)=⌈ln(1/ε)/ln((p+1)/p)⌉: Prove lower bounds showing this dependence on p is necessary, or construct algorithms achieving smaller bicriteria blow-up in p.
- Pass complexity: Investigate whether the multi-pass greedy’s number of passes can be reduced (e.g., via randomized or batched variants) without worsening β.
- Beyond monotone f: Extend bicriteria guarantees to non-monotone submodular functions under p-systems; quantify achievable value and infeasibility ratios.
- Oracle robustness: Analyze performance with approximate or noisy value/independence oracles, including practical implementability on large-scale instances.
- Problem 3 (γ-weakly DR-submodular over convex C):
- Definition alignment: Precisely relate the paper’s γ-weak DR-submodularity to prior “weak DR” definitions; provide equivalence/separation examples clarifying the hierarchy.
- Remove down-closedness: Prove guarantees without assuming C is down-closed; specify when monotonicity and smoothness suffice for the Frank–Wolfe variant.
- Step-size design: Determine optimal step-size schedules (fixed vs adaptive) that minimize additive error while preserving the (1−e^{−αγ}) factor; analyze convergence under line-search.
- Oracle approximation: Quantify how multiplicative (α) and additive (δ) oracle inaccuracies degrade the guarantee; propose practical oracle schemes with provable bounds.
- Gradient nonnegativity: State and verify conditions under which ∇F(x)≥0 holds for monotone differentiable F; if violated, adapt the analysis (e.g., masking or restricted directions).
- Noisy/stochastic settings: Extend analysis to stochastic gradient/oracle access with sample complexity guarantees and robustness bounds.
- Problem 4 (cross-paper combination of results): Specify the exact integrative task and formal reasons for failure; design scaffolding methods (e.g., structured theorem dependency extraction, multi-document retrieval with citation tracking) and evaluate their efficacy.
- Problem 5 (harder case without validated conjecture):
- Formalize the open problem (algorithm class and target guarantee) so it can be attempted by others; characterize obstacles in the analysis and potential avenues (e.g., new potential-function methods).
- Establish hardness baselines or lower bounds indicating whether target guarantees are feasible; explore relaxed objectives or constraints if necessary.
- Originality assessment: Define metrics to quantify when a model’s solution reflects genuine novelty vs adaptation of known proofs (e.g., structural similarity measures, citation-aware provenance analysis).
- Error-type cataloging: Systematically document observed error types (e.g., missing conditions like v≤1−y in inner-product bounds, misuse of monotonicity to justify gradients, incorrect exponential bounds) and develop automated checks tailored to submodular/DR analyses.
- Domain generalization: Test whether observed gains and limitations in combinatorial optimization transfer to other advanced areas (e.g., functional analysis, algebraic topology), using comparable “simple conjecture” benchmarks.
- Ethical guidance: Develop best practices for publishing AI-generated mathematical proofs, including verification standards, uncertainty disclosures, and procedures for corrections to mitigate the risks of plausible but incorrect outputs.
Practical Applications
Below are practical applications derived from the paper’s findings, methods, and observations, organized by deployment horizon. Each item notes the relevant sector(s), possible tools/products/workflows, and key assumptions or dependencies that affect feasibility.
Immediate Applications
- Gödel Test as an internal benchmark for model evaluation (software/AI evaluation)
- Tools/products/workflows: Curate a suite of “simple, previously unsolved” conjectures; build a test harness that accepts model-generated proofs; integrate automated proof-checkers (Lean, Coq, Isabelle) and CAS (SageMath, SymPy) for sanity checks; add a “GödelScore” in model cards.
- Assumptions/dependencies: Access to vetted conjecture sets; human experts for final validation; reliable formalization of problem statements to avoid leakage from literature; reproducible runs and logging.
- Prompt engineering playbooks for mathematical reasoning (education, software tooling)
- Tools/products/workflows: Standardized prompt templates that explicitly require full proofs and intermediate steps; checklists for assumptions and boundary conditions; linting for proof drafts (e.g., “mask bound” usage, step-size conditions).
- Assumptions/dependencies: Availability of high-quality prompts and exemplars; retrieval augmentation for relevant literature; tutors or TAs to review outputs.
- LLM-assisted routine reasoning in submodular/DR-submodular optimization (academia, operations research, software)
- Tools/products/workflows: A “Submodular Co-Pilot” that adapts known proofs to variants (e.g., monotone + non-monotone DR over down-closed polytopes, bicriteria p-systems, weak-DR Frank–Wolfe variants); Jupyter plugins that generate algorithmic sketches and error-term calculations.
- Assumptions/dependencies: Domain expertise to spot subtle inaccuracies; access to independence/feasibility oracles for combinatorial constraints; smoothness constants and convex set geometry available.
- Safe proof practices and red-teaming (AI governance, policy)
- Tools/products/workflows: “Proof Risk Labels” for outputs (plausible-but-wrong risk); mandatory formal verification pipelines for any public claims of novel results; internal red-team prompts targeting cross-paper synthesis failure modes.
- Assumptions/dependencies: Organizational buy-in; integration with formal methods; clear policies for disclosure and retraction if errors surface.
- Curriculum enhancements using Gödel Test-style problems (education)
- Tools/products/workflows: Course modules where students compare LLM-generated proofs against ground truth; assignments that practice error detection (e.g., missing conditions like v ≤ 1−y in gradient bounds); “Gödel Classroom Kits” with grading rubrics.
- Assumptions/dependencies: Instructor oversight; accessible formal proof tools; appropriately scoped conjectures for course level.
- Practical optimization with bicriteria multi-pass greedy on p-systems (marketing, logistics, IoT)
- Tools/products/workflows: Implement multi-pass greedy workflows to plan campaigns or select assets under overlapping independence constraints; treat feasibility vectors via convex hull P of independent sets; use value + independence oracles.
- Assumptions/dependencies: Objective is non-negative monotone submodular; constraints can be modeled as p-systems; performance degrades with larger p; oracle availability.
- Continuous resource allocation via weak-DR Frank–Wolfe (advertising, resource allocation, analytics)
- Tools/products/workflows: Projection-free continuous-greedy for convex, down-closed feasible sets; control the additive error via the step-size schedule (e.g., K steps of size 1/K); deploy to budget-splitting across channels or fractional coverage models (a minimal sketch appears after this list).
- Assumptions/dependencies: Function is L-smooth and γ-weakly DR-submodular; linear optimization oracle over feasible region; monotonicity and non-negativity hold; accurate estimation of L and γ.
- Publication hygiene improvements (academia)
- Tools/products/workflows: Require machine-checkable proof artifacts; clearly indicated reliance on prior literature; report error terms and parameter dependencies (ε, D, smoothness constants) when stating guarantees.
- Assumptions/dependencies: Community standards and venues willing to enforce; tooling support for formal submissions.
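The continuous-greedy / Frank–Wolfe items above follow a common projection-free pattern. The sketch below shows only that pattern; the oracle names (grad, lmo) and the toy concave objective are our own assumptions rather than the paper’s pseudocode, and the formal guarantees discussed in the paper require its DR-submodularity, monotonicity, and smoothness conditions.

```python
# Hedged sketch of a projection-free continuous-greedy / Frank-Wolfe loop.
# Assumed (hypothetical) oracles: grad(x) returns the objective's gradient at x;
# lmo(g) returns the maximizer of <g, v> over the feasible region P. K is the
# number of steps (step size 1/K), which controls the additive discretization error.
import numpy as np

def continuous_greedy(grad, lmo, dim, K=100):
    x = np.zeros(dim)              # start at the origin (assumes 0 is feasible)
    for _ in range(K):
        v = lmo(grad(x))           # linear optimization step: best feasible direction
        x = x + (1.0 / K) * v      # final x is an average of feasible points, so x stays in P
    return x

# Toy usage: maximize the concave function <(1, 2), x> - 0.5*||x||^2 over the
# region {x in [0,1]^2 : x1 + x2 <= 1}.
def grad(x):
    return np.array([1.0, 2.0]) - x

def lmo(g):
    v = np.zeros(2)
    if g.max() > 0:
        v[int(np.argmax(g))] = 1.0  # put the whole budget on the best coordinate
    return v

print(continuous_greedy(grad, lmo, dim=2))  # approximately [0., 1.]: the budget goes to the better coordinate
```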
Long-Term Applications
- Standardized Gödel Test benchmark and leaderboard beyond Olympiad-style metrics (software/AI evaluation)
- Tools/products/workflows: Open-source benchmark covering multiple mathematical domains with “simple but novel” conjectures, graded by proof correctness, originality, and cross-paper synthesis; public GödelScore.
- Assumptions/dependencies: Broad community curation; licensing of source materials; objective definitions of “simple” and “novel.”
- Autonomous theorem-proving assistants that reliably generate and verify new results (software, academia)
- Tools/products/workflows: Tight integration of LLMs with proof assistants and CAS; retrieval-enhanced reasoning over a knowledge graph of lemmas; iterative refinement with formal verification loops.
- Assumptions/dependencies: Robust cross-paper synthesis; scalable formalization of advanced math; compute budgets for verification.
- Automated algorithm discovery for combinatorial and continuous optimization (logistics, telecom, ML systems)
- Tools/products/workflows: “AlgoDesigner” agents that propose algorithms and approximation bounds (e.g., bicriteria over p-systems, DR/weak-DR variants), accompanied by formal guarantees and error terms; deployment in scheduling, routing, network design.
- Assumptions/dependencies: Reliable correctness checking; mapping of industrial constraints to mathematically tractable formulations; domain-specific oracles.
- Verified optimization as a service with provable guarantees (marketing, IoT, smart cities, energy)
- Tools/products/workflows: End-to-end platforms applying submodular and DR-submodular maximization with certificates of approximation/bicriteria guarantees; real-time adaptation to changing constraints.
- Assumptions/dependencies: Accurate models of value functions; constraints captured as matroids/p-systems or convex sets; data quality and stationarity.
- Cross-paper synthesis engines for mathematical reasoning (academia, software)
- Tools/products/workflows: “ProofGraph” systems that stitch techniques from multiple papers, track dependencies, and suggest integrative approaches where current models fail; provenance tracking for each proof step.
- Assumptions/dependencies: Structured literature ingestion; semantic parsing of proofs; reliable citation and lemma resolution.
- Governance standards for AI-generated mathematical claims (policy/regulation)
- Tools/products/workflows: Certification schemes requiring formal verification; disclosure rules that include Gödel Test performance; reproducibility or audit trails for proofs and conjecture selection.
- Assumptions/dependencies: Standards bodies participation; academia–industry cooperation; audit-friendly tooling.
- Transformative graduate education with formal, AI-augmented proof practice (education)
- Tools/products/workflows: Interactive Lean/Coq tutors powered by LLM guidance; “Gödel Labs” where students design and test conjectures; automated feedback on proof gaps and assumptions.
- Assumptions/dependencies: Instructor training; accessible infrastructure; curricular alignment.
- Conjecture mining pipelines for R&D (academia, corporate research)
- Tools/products/workflows: Agents that scan literature to identify tractable gaps and propose “easy conjectures”; triage systems that route promising ones to human experts; tracking of resolution outcomes.
- Assumptions/dependencies: High-quality retrieval; novelty detection; incentive structures for follow-up.
- Finance applications under submodular-like constraints (finance)
- Tools/products/workflows: Portfolio construction framed as monotone submodular selection with p-system constraints (e.g., sector caps); engines proposing bicriteria solutions with explicit feasibility blow-up and value guarantees.
- Assumptions/dependencies: Validity of submodular approximations for diversification; reliable estimation of marginal gains; regulatory acceptance of approximation-based methods.
- Robotics and sensing coverage planning (robotics, environmental monitoring)
- Tools/products/workflows: Multi-robot sensor placement/coverage modeled with submodular objectives; algorithmic frameworks that trade off value vs. constraint violations (bicriteria) under complex feasibility families.
- Assumptions/dependencies: Accurate coverage models; independence structure expressible as p-systems or matroids; operational constraints (battery, communication) incorporated.
Notes on assumptions common across applications:
- Many guarantees rely on monotonicity, non-negativity, submodularity/DR-submodularity (or γ-weak variants), smoothness (L), and down-closed or convex feasibility regions; mis-specification breaks these guarantees (see the sanity-check sketch after these notes).
- Value and independence/membership oracles are often assumed; real systems need engineering proxies or learned oracles.
- The paper’s findings highlight current limits in cross-paper synthesis and the danger of plausible-but-wrong proofs; workflows must include formal verification and expert review until those limits are mitigated.
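Because mis-specified assumptions silently void the guarantees above, a cheap empirical spot-check can precede deployment. The helper below is a sketch of our own (not from the paper): it samples coordinate-wise ordered pairs and looks for violations of monotonicity or diminishing returns; a failure disproves the property, while passing is only weak evidence.

```python
# Hedged sketch (ours): empirically spot-check monotonicity and diminishing
# returns of a continuous objective F on random pairs x <= y before relying on
# DR-submodular guarantees.
import numpy as np

def spot_check_dr(F, dim, trials=1000, delta=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.uniform(0, 1, dim)
        y = np.minimum(x + rng.uniform(0, 1, dim) * (1 - x), 1.0)  # y >= x coordinate-wise
        i = rng.integers(dim)
        step = np.zeros(dim)
        step[i] = delta
        if F(y) < F(x) - 1e-9:
            return "monotonicity violated"
        # DR property: the same small increase should help no more at the larger point y.
        if F(y + step) - F(y) > F(x + step) - F(x) + 1e-9:
            return "diminishing returns violated"
    return "no violation found"

# Toy usage: a sum of square roots is monotone with diminishing returns.
print(spot_check_dr(lambda z: np.sqrt(z).sum(), dim=3))  # "no violation found"
```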
Glossary
- Antitonicity: Property that a function (or its gradient) is order-reversing; for DR-submodular functions, the gradient decreases as inputs increase coordinate-wise. Example: "the antitonicity of gives"
- Approximation guarantee: A bound comparing an algorithm’s solution value to the optimal value, often with explicit constants or factors. Example: "measuring the quality of the approximation guarantee with respect to each component of the objective function."
- Bicriteria approximation: An algorithmic guarantee that simultaneously achieves near-optimal objective value while allowing a controlled violation (or relaxation) of constraints. Example: "The bicriteria approximation guarantee of the algorithm should be of the form "
- Cardinality constraint: A feasibility condition limiting the number of selected elements. Example: "Often the constraint is just a cardinality constraint (i.e., any set whose size is smaller than some given value is allowed)"
- Continuous greedy: A continuous-time or projection-free method (akin to Frank–Wolfe) used to optimize submodular objectives over convex relaxations. Example: "continuous-greedy/Frank--Wolfe style guarantee"
- Convex hull: The smallest convex set containing a given collection of points; denoted conv(·). Example: "$P=\operatorname{conv}\{1_I : I\in\mathcal{I}\}$"
- Down-closed: A property of a feasible region where every point dominated coordinate-wise by a feasible point is also feasible. Example: "The down-closedness of $P$ means that if $\vx \in P$ and $\vy$ is a vector that is coordinate-wise dominated by $\vx$, then $\vy$ also belongs to $P$."
- Down-closed polytope: A polytope that is down-closed, often enabling greedy or Frank–Wolfe–type methods. Example: "over a solvable down-closed polytope $P$."
- DR-submodular function: A continuous analogue of submodular set functions exhibiting diminishing returns in each coordinate (gradient is antitone). Example: "a non-negative monotonically increasing DR-submodular function "
- Frank–Wolfe algorithm: A projection-free first-order method for constrained optimization that iteratively moves toward a linear minimization oracle’s solution. Example: "a variant of the Frank-Wolfe-like algorithm"
- Ground set: The universe of elements from which subsets are selected in set function optimization. Example: "Given a ground set $\cN$ of elements,"
- Independence oracle: A subroutine that decides whether a set is independent (feasible) under a combinatorial constraint system. Example: "an independence/membership oracle for "
- L-smoothness: A differentiability condition where the gradient is Lipschitz with constant L, used to control discretization or step-size error. Example: "Let $G$ and $H$ be $L_G$- and $L_H$-smooth, respectively"
- Linear optimization oracle: A routine that solves a linear objective over a feasible region, often used within Frank–Wolfe methods. Example: "admitting a linear optimization oracle."
- Malliavin–Stein framework: A probabilistic method combining Malliavin calculus with Stein’s method to obtain quantitative central limit theorems. Example: "Malliavin–Stein framework for central limit theorems"
- Matroid: A combinatorial structure generalizing linear independence, often used to model independence constraints. Example: "Matroid constraints are a well-studied class of combinatorial constraints"
- Measured Greedy Frank–Wolfe (MGFW): A variant of Frank–Wolfe tailored to submodular/DR-submodular objectives with a measured (masked) greedy step. Example: "Measured Greedy Frank--Wolfe (MGFW)"
- Modular function: An additive set function where the value of a set equals the sum of singleton values. Example: "Such functions are called modular functions."
- Monotone submodular function: A submodular set function that is non-decreasing under set inclusion. Example: "given a non-negative monotone submodular function $f\colon 2^\cN \to \mathbb{R}_{\geq 0}$"
- p-system: A generalization of matroid constraints characterized by a bounded ratio between sizes of bases within any subset, parameterized by p. Example: "a $p$-system $(\cN, \cI)$"
- Pipage rounding: A rounding technique converting fractional solutions to integral ones while preserving submodular value properties. Example: "Measured Continuous Greedy + pipage rounding"
- Submodular function: A set function with diminishing returns, satisfying f(A)+f(B) ≥ f(A∪B)+f(A∩B). Example: "Such functions are called submodular functions."
- Submodular maximization: The problem of maximizing a submodular objective subject to constraints. Example: "In a submodular maximization problem, the goal is to maximize a set function subject to a constraint ."
- Value oracle: A black-box that returns the value of a function on any queried set. Example: "uses only the value oracle for "
- Weak submodularity: A relaxation of submodularity where the diminishing returns property holds approximately with a parameter γ. Example: "One popular such weakened assumption is known as $\gamma$-weak submodularity."
- γ-weakly DR-submodular: A continuous relaxation of DR-submodularity parameterized by γ, scaling the marginal gain inequality. Example: "is $\gamma$-weakly DR-submodular if"