Type Prediction With Program Decomposition and Fill-in-the-Type Training (2305.17145v1)

Published 25 May 2023 in cs.SE, cs.LG, and cs.PL

Abstract: TypeScript and Python are two programming languages that support optional type annotations, which are useful but tedious to introduce and maintain. This has motivated automated type prediction: given an untyped program, produce a well-typed output program. LLMs are promising for type prediction, but there are challenges: fill-in-the-middle performs poorly, programs may not fit into the context window, generated types may not type check, and it is difficult to measure how well-typed the output program is. We address these challenges by building OpenTau, a search-based approach for type prediction that leverages LLMs. We propose a new metric for type prediction quality, give a tree-based program decomposition that searches a space of generated types, and present fill-in-the-type fine-tuning for LLMs. We evaluate our work with a new dataset for TypeScript type prediction, and show that 47.4% of files type check (14.5% absolute improvement) with an overall rate of 3.3 type errors per file. All code, data, and models are available at: https://github.com/GammaTauAI/opentau.

Summary

  • The paper introduces OpenTau, a system that combines tree-based program decomposition with fill-in-the-type training to automate type prediction in TypeScript.
  • The methodology partitions code into manageable segments and fine-tunes models specifically for type annotation, addressing context-window limitations and the shortcomings of fill-in-the-middle (FIM) prompting.
  • Empirical results show a 47.4% type-check success rate, a 14.5 percentage-point absolute improvement over the baseline, demonstrating the system’s practical value for automated type imputation.

Type Prediction with Program Decomposition and Fill-in-the-Type Training

The paper introduces a novel approach to automated type prediction, geared towards languages such as TypeScript that support optional type annotations. These annotations are useful but tedious to introduce and maintain in large codebases, motivating automated solutions for type imputation. The authors confront several established challenges in applying LLMs to this task, including the poor performance of fill-in-the-middle (FIM) prompting, programs that exceed the context window, and the difficulty of validating that the predicted types actually type check.

Methodological Innovations

The proposed solution, named OpenTau, leverages a search-based approach that integrates LLMs with a tree-based program decomposition technique. This strategy involves several key innovations:

  1. Tree-Based Program Decomposition: The program is parsed into a hierarchical structure that reflects its syntactic composition. This structure allows OpenTau to cope with limited context sizes by partitioning the code into smaller, manageable sections. Each node in the tree is processed bottom-up, with the inferred types of inner code segments folded into the surrounding context as the traversal moves upwards.
  2. Fill-in-the-Type (FIT) Fine-Tuning: The authors fine-tune a model specifically for predicting type annotations using a modified fill-in-the-middle approach. FIT trains the LLM on type-specific fill tasks, enabling it to predict type annotations in place without generating extraneous code.
  3. Typedness Metric: To evaluate the quality of type predictions, a new metric, "typedness," quantifies how informative the produced annotations are, rather than relying on type check pass rates alone; this avoids the bias introduced by trivial annotations such as any. A minimal sketch of all three ideas follows this list.
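The following is a minimal TypeScript sketch of these three ideas, not the authors' implementation: it collects function declarations in bottom-up order with the TypeScript compiler API, builds a FIM-style "fill-in-the-type" prompt with an explicit hole where the missing return type belongs, and scores the output with a toy typedness heuristic that discounts trivial any annotations. The prompt layout, hole placement, and scoring weights are illustrative assumptions; OpenTau's actual prompt format and metric definition are given in the paper.

```typescript
import * as ts from "typescript";

// Visit children before the node itself, so leaf functions are collected
// ahead of the code that encloses them (a bottom-up traversal order).
function collectFunctionsBottomUp(root: ts.SourceFile): ts.FunctionDeclaration[] {
  const fns: ts.FunctionDeclaration[] = [];
  const visit = (node: ts.Node): void => {
    ts.forEachChild(node, visit);
    if (ts.isFunctionDeclaration(node)) fns.push(node);
  };
  visit(root);
  return fns;
}

// Build a FIM-style prompt around the missing return-type annotation:
// everything before the hole is the prefix, everything after is the suffix,
// and the model is asked to generate only the type that fills the gap.
// (Splitting on the first ")" is a simplification for this sketch.)
function fitPrompt(fn: ts.FunctionDeclaration): { prefix: string; suffix: string } {
  const text = fn.getText();
  const holeAt = text.indexOf(")") + 1;
  return { prefix: text.slice(0, holeAt) + ": ", suffix: text.slice(holeAt) };
}

// Toy typedness score: the fraction of parameter/return slots that carry an
// annotation other than the trivial `any`. A real metric would weight
// annotation sites and penalize `any` more carefully.
function typedness(fns: ts.FunctionDeclaration[]): number {
  let slots = 0;
  let informative = 0;
  for (const fn of fns) {
    for (const t of [...fn.parameters.map(p => p.type), fn.type]) {
      slots += 1;
      if (t !== undefined && t.getText().trim() !== "any") informative += 1;
    }
  }
  return slots === 0 ? 1 : informative / slots;
}

// Example: one untyped function yields one prompt and a typedness of 0.
const source = "function add(a, b) { return a + b; }";
const file = ts.createSourceFile("example.ts", source, ts.ScriptTarget.Latest, true);
const fns = collectFunctionsBottomUp(file);
console.log(fitPrompt(fns[0]));
console.log("typedness:", typedness(fns));
```

Because inner functions are visited first, their predicted types can be spliced back into the surrounding code before the enclosing scope is sent to the model, which is what keeps each individual query within the context window.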

Empirical Evaluation

The work is empirically validated on a newly constructed dataset of TypeScript files. The results demonstrate that OpenTau, combining FIT fine-tuning with tree-based decomposition, substantially increases the percentage of files that pass type checking compared to baseline approaches. The strongest configuration type checks 47.4% of files, a 14.5 percentage-point absolute improvement over the baseline, with an average of 3.3 type errors per file. The paper also examines the influence of context window size on these outcomes and shows that program decomposition effectively mitigates the difficulties posed by long-range context dependencies.
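As a concrete reading of these two numbers, a type-check success rate and an errors-per-file average can be computed by running the TypeScript checker over each annotated output file. The sketch below does this with the TypeScript compiler API; the file paths and compiler options are placeholders, not the paper's evaluation harness.

```typescript
import * as ts from "typescript";

// Type-check each annotated output file and report (a) the fraction of files
// with zero type errors and (b) the mean number of errors per file.
// Compiler options and the file list are illustrative assumptions.
function typeCheckStats(files: string[]): { checkRate: number; errorsPerFile: number } {
  let passing = 0;
  let totalErrors = 0;
  for (const file of files) {
    const program = ts.createProgram([file], { noEmit: true });
    const errors = ts
      .getPreEmitDiagnostics(program)
      .filter(d => d.category === ts.DiagnosticCategory.Error);
    if (errors.length === 0) passing += 1;
    totalErrors += errors.length;
  }
  return {
    checkRate: passing / files.length,
    errorsPerFile: totalErrors / files.length,
  };
}

// Example usage (hypothetical paths):
// console.log(typeCheckStats(["out/annotated-0001.ts", "out/annotated-0002.ts"]));
```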

Implications and Future Directions

The development and application of OpenTau illustrate how targeted adaptation of LLMs can overcome domain-specific challenges such as type prediction. Practically, the approach could streamline type migration in large codebases where maintaining optional types by hand is laborious. Theoretically, it underscores the utility of hierarchical program decomposition, which may inform future work in other domains where structured interpretation of long inputs is pivotal.

However, the paper acknowledges several limitations. The approach does not currently support the inference of generic types or accommodate highly dynamic language features such as eval; these are identified as avenues for future research. Additionally, as context windows in LLMs continue to grow, OpenTau's methodology may be well positioned to exploit those enhancements further.

Overall, this work contributes a refined lens on static type prediction, aligning computational logic with machine learning advances to address a critical gap in programming language tool support. As AI models evolve, integrating these insights with generational improvements in models promises ongoing refinements in automated code intelligence systems.
