Package Hallucination Rate (PHR)

Updated 11 December 2025
  • PHR is a quantitative metric that measures the ratio of hallucinated package references to total recommendations across domains like code generation and summarization.
  • PHR evaluation involves systematically extracting package mentions from AI outputs and verifying them against authoritative registries in ecosystems such as Python, Go, and JavaScript.
  • Empirical studies show that PHR is influenced by model size, quantization, and prompt design, with higher rates posing significant risks to software supply chain security.

Package Hallucination Rate (PHR) is a quantitative metric for characterizing the tendency of generative AI systems, including LLMs, to recommend, cite, or reference non-existent or unsupported “packages.” Across code generation, shell command synthesis, scientific summarization, and in-context learning, PHR has become the de facto standard for measuring the frequency of such fact-conflicting errors, especially as they pertain to supply chain security, code reliability, and trustworthy automated knowledge synthesis. Although originally developed in the context of LLM-generated package dependencies, the metric has been adapted for rigorous evaluation in domains including code recommendation, summarization, and Bayesian in-context reasoning.

1. Formal Definition and Mathematical Formulation

The canonical definition of Package Hallucination Rate is the ratio of hallucinated (non-existent or unsupported) package references to the total number of package recommendations or claims. The mathematical formalization and operationalization of PHR depend on domain and granularity:

  • Code Generation Context: For code samples indexed by $i=1,\ldots,N$, let $r_i$ denote the number of packages recommended and $h_i$ the number that are hallucinated. Then

R = \sum_{i=1}^N r_i,\qquad H = \sum_{i=1}^N h_i,\qquad \mathrm{PHR} = \frac{H}{R}\times 100\%

  • Shell Command/Go Ecosystem: Given $G$ as the multiset of generated package references and $H \subset G$ the subset failing existence checks,

\mathrm{PHR} = \frac{|H|}{|G|}

  • Language-Agnostic, Multi-Model Context: For a model $m$, languages $l\in P$, and coding prompts $Q$, each repeated $K$ times, with $G_l$ as the “known-good” package set for language $l$,

\mathrm{PHR}(m,l) = \frac{1}{|Q|\,K} \sum_{q\in Q} \sum_{i=1}^K H_{m,q,i}

where $H_{m,q,i}=1$ iff any generated import $s\notin G_l$.

  • Summarization/Knowledge Synthesis: For $N$ abstract “packages”, each with $m_j$ claims, where $H_{j,i}\in\{0,1\}$ marks a hallucinated claim,

\mathrm{PHR} = \frac{1}{N} \sum_{j=1}^N \sum_{i=1}^{m_j} H_{j,i}

  • Bayesian In-Context Learning: Letting $y$ denote a generated prediction, $f$ the latent mechanism, and $Q_\epsilon(f,D)$ the $\epsilon$-quantile threshold,

h_\epsilon(D) = \mathbb{E}_{f}\,\mathbb{E}_y \left[ \mathbf{1}\{ \log p(y \mid D,f) < Q_\epsilon(f,D) \} \right]

In all cases, the quantity is typically reported as a percentage or mean for comparability; a minimal computational sketch of the code-generation and multi-model forms follows below.
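As a concrete instance of these definitions, the following minimal Python sketch (function and variable names are illustrative, not drawn from any cited implementation) computes PHR from per-sample counts of recommended and hallucinated packages, along with the per-generation binary variant used in the multi-model formulation:

```python
def phr_from_counts(recommended_counts, hallucinated_counts):
    """PHR = (sum_i h_i / sum_i r_i) * 100, following the code-generation definition."""
    R = sum(recommended_counts)   # total package recommendations across samples
    H = sum(hallucinated_counts)  # total hallucinated references across samples
    return 100.0 * H / R if R else 0.0

def phr_binary(hallucination_flags):
    """Multi-model variant: fraction of generations containing >= 1 unknown import.

    hallucination_flags[q][i] = 1 if the i-th generation for prompt q imports
    any package outside the known-good set G_l, else 0.
    """
    flat = [flag for trials in hallucination_flags for flag in trials]
    return sum(flat) / len(flat)

# Example: three samples with (r_i, h_i) = (4, 1), (2, 0), (5, 2) -> PHR ~= 27.3%
print(phr_from_counts([4, 2, 5], [1, 0, 2]))
# Example: two prompts, two repetitions each -> PHR(m, l) = 0.25
print(phr_binary([[1, 0], [0, 0]]))
```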

2. Domain-Specific PHR Measurement Protocols

PHR measurement requires systematic extraction and verification of candidate “package” references. Specific protocols vary:

  • Code-Generating LLMs: Packages are extracted from code snippets via package-installation commands (e.g., pip install, npm install) and module import patterns. Package names are cross-referenced against authoritative registries (PyPI, npm) as of the model’s training date; hallucinations are defined as names not present in these registries (Spracklen et al., 12 Jun 2024).
  • Shell Command/Go Ecosystem: Shell commands such as go get … are parsed for URL-based Go module paths. Existence is checked by resolving HTTP queries or invoking package managers; unresolved paths are counted as hallucinations (Haque et al., 9 Dec 2025).
  • Security-Focused PHR across Languages: Package names are extracted from code via language-specific regular expressions. The generated names are compared against historical indexes per language and cutoff (PyPI/NPM/crates.io), marking as hallucinations any package not found. Both natural and adversarial (“induced”) hallucinations can be tested (Krishna et al., 31 Jan 2025).
  • Scientific Summarization: Summaries are subdivided at the claim level (typically sentences with citations). Each claim is checked via an automated or model-based “Factored Verification” procedure to determine whether it is supported by the cited source material. Unsupported claims increment the hallucination count per package (George et al., 2023).
  • ICL/Generative Modeling: In Bayesian settings, sampled responses with log-likelihood below a quantile threshold, conditioned on a latent mechanism, are classed as hallucinations. Monte Carlo estimators use repeated sampling to approximate the posterior-averaged PHR (Jesson et al., 11 Jun 2024).

Each protocol imposes domain-specific caveats: registries are assumed to be complete and authoritative, extraction heuristics may produce false negatives or positives, and prompt-context mismatch can skew rates.
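A minimal sketch of the code-generation extraction and verification step, assuming a local snapshot of registry names (here a small in-memory set standing in for a dated PyPI index) and the kind of regex heuristics whose limitations are noted above:

```python
import re

# Hypothetical snapshot of registry names as of the model's training cutoff;
# a real evaluation would load a dated PyPI/npm index instead.
KNOWN_PACKAGES = {"requests", "numpy", "flask"}

IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_][\w\-]*)", re.MULTILINE)
PIP_RE = re.compile(r"pip\s+install\s+([A-Za-z_][\w\-]*)")

def extract_packages(code: str) -> set:
    """Heuristically extract package references from a generated snippet."""
    return set(IMPORT_RE.findall(code)) | set(PIP_RE.findall(code))

def hallucinated_packages(code: str) -> set:
    """Names absent from the registry snapshot are counted as hallucinations."""
    return {p for p in extract_packages(code) if p.lower() not in KNOWN_PACKAGES}

sample = "pip install fastjsonx\nimport requests\nimport fastjsonx"
print(hallucinated_packages(sample))  # -> {'fastjsonx'}
```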

3. Empirical Findings and Patterns

Large-scale empirical evaluation reveals PHR is non-negligible and exhibits strong systematic patterns:

  • Typical Rates:
    • Commercial code LLMs (OpenAI GPTs) show an average PHR of 5.2%; open-source LLMs reach 21.7% (Spracklen et al., 12 Jun 2024).
    • For LLMs generating Go code, full-precision models show 30%–46% PHR, with aggressive quantization (4-bit) yielding up to 96% (Haque et al., 9 Dec 2025).
    • A security-focused survey finds per-language means of 14.7% for JavaScript, 24.7% for Rust, and 23.1% for Python, with the best models below 2% (Krishna et al., 31 Jan 2025).
    • In summarization, ChatGPT and GPT-4 produce 0.62–0.84 hallucinations per summary, decreasing with critique-enhanced workflows (George et al., 2023).
  • Dependency on Model Family, Size, and Precision: Larger parameter counts reduce PHR, as does access to up-to-date training data. Quantization increases PHR, with 8-bit models showing a moderate (+2%–4%) increase, while 4-bit models induce catastrophic hallucination frequencies except for top-scale models (Haque et al., 9 Dec 2025).
  • Sampling Temperature and Prompt Recency: Increased temperature exacerbates PHR sharply, more than doubling rates at extreme values. Prompts referencing recent or esoteric packages see roughly 10% higher PHR (Spracklen et al., 12 Jun 2024).
  • Persistence and Specificity: Hallucinations are often persistent across generations from the same prompt and model: 43% of hallucinated packages reappear in repeated samples (Spracklen et al., 12 Jun 2024). Most hallucinated names arise in only a single model.
  • String Structure: In code, hallucinated packages are rarely typo-variants of real names; 48.6% differ from every real package by an edit distance of at least 6 (Spracklen et al., 12 Jun 2024). For Go, over 80% of hallucinated packages take a plausible URL form with a correct domain but a non-existent user or subpath (Haque et al., 9 Dec 2025). A reproduction sketch follows this list.
  • Correlation to Code Quality: There is a strong negative correlation (ρ = −0.79) between HumanEval code correctness and PHR: higher code quality begets lower hallucination rates (Krishna et al., 31 Jan 2025).
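The string-structure analysis can be reproduced by computing each hallucinated name's minimum edit distance to the set of real package names; the sketch below uses a standard Levenshtein implementation and purely illustrative names:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def min_distance_to_registry(name: str, registry: set) -> int:
    """Minimum edit distance from a hallucinated name to any real package name."""
    return min(levenshtein(name, real) for real in registry)

# Illustrative names only: a small distance suggests a typo-variant,
# a large distance suggests a wholly invented name.
registry = {"requests", "numpy", "pandas"}
print(min_distance_to_registry("requestz", registry))       # 1 -> typo-variant
print(min_distance_to_registry("hypercloudkit", registry))  # large -> novel name
```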

4. Mitigation Strategies and Trade-offs

Multiple approaches have been evaluated for PHR reduction, each with trade-offs:

  • Retrieval-Augmented Generation (RAG): Augment prompts with package-to-task facts, e.g., drawn from vector-indexed corpora. Observed 24%–49% relative reduction in PHR across open LLM baselines (Spracklen et al., 12 Jun 2024).
  • Self-Refinement: Post-hoc LLM self-validation of package recommendations, yielding up to a 19% reduction. Effectiveness varies by model (Spracklen et al., 12 Jun 2024).
  • Supervised Fine-Tuning: Retraining on prompt-to-valid-package data can lower PHR by 83% (DeepSeek) and 61% (CodeLlama), but at the expense of code-generation quality (HumanEval pass@1 drops by half in some cases) (Spracklen et al., 12 Jun 2024).
  • Quantization-Aware Practices: 8-bit quantization is generally safe with a minor PHR cost; 4-bit demands aggressive post-filtering and validation layers to block “slopsquatting” attacks, i.e., malicious registration of hallucinated package names (Haque et al., 9 Dec 2025).
  • Deployment Safeguards: Integrate registry existence checks, prompt hardening, explicit dependency provisioning, and internal “sinkholing” of high-risk names. Pre-deployment integration of PHR detection into code-completion platforms can provide early warning (Krishna et al., 31 Jan 2025).

Combining these techniques in an ensemble reduces PHR by up to 85% (DeepSeek) (Spracklen et al., 12 Jun 2024).
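The sketch below illustrates, at a schematic level, how retrieval augmentation, self-refinement, and a registry gate might be composed; the `llm` callable, the prompt wording, and all helper names are hypothetical assumptions rather than any cited paper's implementation, and `extract_packages` refers to the measurement sketch in Section 2:

```python
def generate_with_mitigations(task: str, llm, fact_index: dict, known_packages: set) -> str:
    """Hypothetical pipeline combining RAG, self-refinement, and a registry gate.

    `llm` is any callable mapping a prompt string to generated code; `fact_index`
    maps package names to short, verified descriptions.
    """
    # 1. Retrieval-augmented prompt: prepend verified package-to-task facts
    #    (a real system would query a vector index over package documentation).
    facts = [fact for pkg, fact in fact_index.items() if pkg in task.lower()][:3]
    code = llm("Known, verified packages:\n" + "\n".join(facts) + "\n\nTask: " + task)

    # 2. Self-refinement: ask the model to re-validate its own dependencies.
    code = llm("Using only the packages listed above, rewrite this code so that it "
               "imports no package that might not exist on the public registry:\n" + code)

    # 3. Deployment gate: reject output that still references unknown names
    #    (extract_packages is the helper sketched in Section 2).
    unknown = {p for p in extract_packages(code) if p.lower() not in known_packages}
    if unknown:
        raise ValueError(f"Unverified package references: {sorted(unknown)}")
    return code
```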

5. Security, Reliability, and Broader Impact

Elevated PHR presents a significant software supply chain security risk. Hallucinated names, especially those not yet registered in public package indices, create “zero-day” attack surfaces: adversaries can publish malicious code under them (“slopsquatting”), which is then consumed by downstream developers acting on LLM suggestions.

PHR is directly actionable as a security metric:

  • Model Selection: High PHR models, even if otherwise performant, should be disfavored for security-critical code synthesis (Krishna et al., 31 Jan 2025).
  • Continuous Monitoring: Registry operators and security teams may proactively reserve or monitor common hallucinated names (Krishna et al., 31 Jan 2025).
  • Automated Vetting: Tooling can flag, require explicit approval for, or auto-remediate suspicious package suggestions.
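One plausible shape for such monitoring and vetting tooling, again with hypothetical names and reusing `extract_packages` from the Section 2 sketch, is a tally of hallucinated names across generations so that frequently recurring ones can be reserved, sinkholed, or blocked:

```python
from collections import Counter

class HallucinationMonitor:
    """Tallies hallucinated package names seen across many LLM generations.

    Frequently recurring names are candidates for proactive reservation
    ("sinkholing") by registry operators or for blocking in a completion platform.
    """

    def __init__(self, known_packages: set, alert_threshold: int = 5):
        self.known = known_packages
        self.alert_threshold = alert_threshold
        self.counts = Counter()

    def observe(self, generated_code: str) -> list:
        """Record hallucinated names from one generation; return newly alerting ones."""
        names = extract_packages(generated_code)  # helper from the Section 2 sketch
        bad = [n for n in names if n.lower() not in self.known]
        self.counts.update(bad)
        return [n for n in bad if self.counts[n] == self.alert_threshold]
```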

There is no evidence that current model-development best practices consistently optimize for low PHR in tandem with code quality: the Pareto-optimal region of the error–hallucination space is sparsely populated (Krishna et al., 31 Jan 2025). A plausible implication is that coding benchmarks and hallucination-centric benchmarks should be considered jointly when shaping future model architectures and training-set curation.

6. Extensions, Limitations, and Adaptability

PHR has been adapted from code to summarization and generative in-context learning:

  • Summarization: Factored Verification measures the per-package hallucination count at the claim level (mean per summary/package, or the proportion with $\ge 1$ hallucination), with further correction for verifier accuracy (George et al., 2023).
  • Bayesian Modeling: The Posterior Hallucination Rate tracks the probability of generating a response whose log-likelihood falls below the $(1-\epsilon)$ region for the latent data-generating mechanism, estimated via black-box Monte Carlo sampling (Jesson et al., 11 Jun 2024).
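A schematic nested Monte Carlo estimator for the Bayesian formulation is sketched below; the `sample_mechanism`, `sample_from_mechanism`, `sample_prediction`, and `log_likelihood` callables are placeholders to be supplied by the user, and the routine is a simplified rendering of the estimator described by Jesson et al. rather than their exact algorithm:

```python
import numpy as np

def posterior_hallucination_rate(D, sample_mechanism, sample_from_mechanism,
                                 sample_prediction, log_likelihood,
                                 eps=0.05, n_mech=64, n_ref=256, n_pred=64):
    """Nested Monte Carlo estimate of
       h_eps(D) = E_f E_y [ 1{ log p(y | D, f) < Q_eps(f, D) } ].

    sample_mechanism(D)          -> f drawn from the posterior over latent mechanisms
    sample_from_mechanism(D, f)  -> y' ~ p(y' | D, f), used to estimate the quantile
    sample_prediction(D)         -> y  ~ p(y | D), the model's actual predictive
    log_likelihood(y, D, f)      -> log p(y | D, f)
    """
    per_mechanism_rates = []
    for _ in range(n_mech):
        f = sample_mechanism(D)
        # Estimate the eps-quantile threshold Q_eps(f, D) from in-support responses.
        ref_lls = np.array([log_likelihood(sample_from_mechanism(D, f), D, f)
                            for _ in range(n_ref)])
        threshold = np.quantile(ref_lls, eps)
        # Fraction of the model's own predictions falling below the threshold.
        pred_lls = np.array([log_likelihood(sample_prediction(D), D, f)
                             for _ in range(n_pred)])
        per_mechanism_rates.append(float(np.mean(pred_lls < threshold)))
    return float(np.mean(per_mechanism_rates))
```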

Practical limitations include:

  • Lower Bound Bias: Registry-index-based measurement only establishes a lower bound; actors may have registered hallucinated names post-model-training (Spracklen et al., 12 Jun 2024).
  • Extraction and Classification Error: Regex-based or heuristics-based extraction can misclassify modules as packages, or miss certain classes entirely (Spracklen et al., 12 Jun 2024).
  • Domain Specificity: Results are contingent on model, ecosystem (Python, JavaScript, Go, Rust), prompt set, and date of registry snapshots. Rates and optimal mitigations may differ significantly elsewhere (Spracklen et al., 12 Jun 2024, Krishna et al., 31 Jan 2025).
  • Adaptive Attack Risk: Adversarial prompts can sharply increase hallucination frequency, especially for code-specialized and small models (Krishna et al., 31 Jan 2025).

Despite these limitations, PHR offers a transferable, interpretable, and robust metric for quantifying hallucination-related threats and guiding the design of secure, supply-chain–conscious AI systems.

7. Cross-Domain and Methodological Evolution

The principles underlying PHR have influenced evaluation standards across LLM-centric research, including but not limited to supply chain security, code recommendation reliability, factual correctness in summarization, and ICL trustworthiness. Algorithmic refinements—including claim weighting, token- or entity-level granularity, and confidence-threshold–modulated PHR—extend its utility in matching the statistical structure of various generative tasks (George et al., 2023).

This suggests that PHR, once narrowly tailored for code package hallucination, has become integral to a broader epistemic and security paradigm for generative AI evaluation.

