
Vendor Postures on AI Vulnerabilities

Updated 14 September 2025
  • Vendor postures toward AI vulnerabilities are a range of organizational stances on disclosure and risk management, broadly categorized as proactive, silent, or restrictive.
  • Quantitative analysis reveals only 64% of vendors provide clear reporting channels and just 18% address AI-specific risks, highlighting significant policy gaps.
  • The study underscores a misalignment between vendor policies and real-world AI incidents, urging the integration of academic insights and timely policy revisions.

Vendor postures toward AI vulnerabilities refer to the range of disclosure, remediation, and risk management strategies adopted by organizations that develop and deploy artificial intelligence technologies. As AI becomes integral to products and critical systems, the willingness and capacity of vendors to accept, triage, and resolve reports of AI-specific vulnerabilities directly shapes the security, trustworthiness, and resilience of the ecosystem. The variability of these postures—ranging from proactive engagement to restrictive exclusion or complete silence—has measurable effects on incident response capacity, research community engagement, and alignment with both academic advances and real-world AI failures.

1. Disclosure Policies: Structure and Scope

Quantitative assessment of 264 AI vendors demonstrates substantial heterogeneity in public-facing vulnerability disclosure mechanisms. Only 64% of vendors provide a clear reporting channel, such as a dedicated security email, vulnerability disclosure program, or formal bug bounty initiative, while the remaining 36% offer no channel at all. Furthermore, only 18% of vendors explicitly mention AI-specific risks in their policies, indicating that most treat these issues under the umbrella of generic IT or software vulnerabilities rather than as a distinct class requiring dedicated consideration.
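These headline percentages are simple proportions over the sampled vendor population. A minimal sketch of the computation, assuming hypothetical vendor records with boolean policy attributes (the field names and sample entries are illustrative stand-ins, not the paper's coded data):

```python
# Minimal sketch: headline prevalence figures as simple proportions over
# a vendor sample. Records and field names are hypothetical stand-ins
# for the paper's coded policy data, not the actual dataset.
from dataclasses import dataclass

@dataclass
class VendorPolicy:
    name: str
    has_channel: bool  # any public reporting channel (email, VDP, bug bounty)
    mentions_ai: bool  # policy explicitly addresses AI-specific risks

vendors = [
    VendorPolicy("VendorA", has_channel=True, mentions_ai=True),
    VendorPolicy("VendorB", has_channel=True, mentions_ai=False),
    VendorPolicy("VendorC", has_channel=False, mentions_ai=False),
]

n = len(vendors)
p_dc = sum(v.has_channel for v in vendors) / n  # paper reports 64% over n = 264
p_ai = sum(v.mentions_ai for v in vendors) / n  # paper reports 18% over n = 264
print(f"P_dc = {p_dc:.0%}, P_ai = {p_ai:.0%}")
```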

Policies that do address AI often display greater coverage of best-practice elements: a “Scope-In” definition is present in 88% of explicitly AI-mentioning policies, compared to much lower coverage among generic policies. Additional provisions such as safe harbor clauses, response timelines, and formalized evaluation criteria appear more frequently in AI-specific documentation.

The determination of in-scope versus out-of-scope vulnerabilities reveals clear eligibility boundaries. Vendors generally accept vulnerabilities mapped to classical security guarantees, such as data access (in-scope rating 9), authorization (8), and model extraction (90% acceptance among policies that mention AI), whereas issues such as jailbreaking (27% eligibility) and hallucination (14–17%) are typically excluded. This demarcation suggests that the industry regards certain AI failure modes as "inherent" or not amenable to patching, rather than as treatable security flaws.

Vulnerability Type    Typical Inclusion Ratio (IR)    Scope Status
Data access           High                            In-scope
Model extraction      0.90                            In-scope
Jailbreaking          0.27                            Out-of-scope
Hallucination         ~0.15                           Out-of-scope

2. Classification of Vendor Postures

Qualitative analysis groups vendor postures toward AI vulnerabilities into three principal categories: proactive clarification (n = 46), silent (n = 115), and restrictive (n = 103).

  • Proactive clarification includes:
    • Active supporters: Vendors such as Google, Microsoft, and Meta offer detailed in-scope declarations, severity matrices, dedicated submission channels, or even separate reward structures for AI vulnerabilities.
    • Integrationists: These vendors embed AI-specific categories within existing vulnerability programs but provide minimal additional structure beyond enumerating covered attack types.
    • Back-channel: Vendors providing non-public reporting (e.g., dedicated emails) but excluding AI vulnerabilities from formal bug bounty eligibility or structured triage.
  • Silent vendors lack any explicit AI-related provisions, whether through self-hosted policies that make only generic references or through delegation to hosted disclosure platforms that now allow an "AI" classification, though with little or no guidance.
  • Restrictive vendors either (a) maintain no disclosure channel whatsoever, or (b) explicitly disqualify AI vulnerabilities—especially related to LLMs or content safety—within their public bug bounty exclusions.

This taxonomy is supported by policy excerpts and a comparative statistical summary of the sampled vendor population.
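To make the category boundaries concrete, the sketch below encodes one plausible reading of the definitions as decision rules; the rule ordering and field names are an illustrative assumption, not the paper's actual coding procedure (for instance, it folds the back-channel subtype into the nearest coarse category).

```python
# Sketch: one plausible rule-based reading of the three-posture taxonomy.
# The check order mirrors the category definitions: no channel at all or an
# explicit AI exclusion reads as restrictive; explicit AI provisions read as
# proactive; a generic policy with no AI guidance reads as silent.
def classify_posture(has_channel: bool, mentions_ai: bool, excludes_ai: bool) -> str:
    if not has_channel or excludes_ai:
        return "restrictive"  # no disclosure channel, or AI explicitly out of scope
    if mentions_ai:
        return "proactive"    # AI-specific scope, channels, or reward structures
    return "silent"           # generic policy, no AI-specific guidance

assert classify_posture(has_channel=True, mentions_ai=True, excludes_ai=False) == "proactive"
assert classify_posture(has_channel=True, mentions_ai=False, excludes_ai=False) == "silent"
assert classify_posture(has_channel=False, mentions_ai=False, excludes_ai=False) == "restrictive"
```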

3. Policy-Research-Incident Alignment and Lag

Analysis of vendor policies against a corpus of 1,130 AI incidents and 359 academic publications reveals a persistent mismatch between what is formally "accepted" and the types of real-world harms and research advances occurring in the AI security landscape. While vendor policies prioritize technical vulnerabilities (e.g., model extraction, adversarial attacks, and prompt injection, which together account for approximately 29% of observed incident types), 47% of publicly reported incidents pertain to content safety issues such as toxicity, misinformation, or discrimination, which are rarely acknowledged as in-scope.

Moreover, the temporal lag between academic breakthroughs or incident discovery and subsequent vendor policy revisions is substantial. Seminal academic work on model stealing, adversarial machine learning, and prompt injection has existed for years, whereas policy updates only began scaling in 2023–2024, often induced by highly publicized incidents rather than proactive alignment. This lag can be formulated as:

$$\text{Policy Revision Time} \approx T_{\text{incident/academic research}} + \Delta$$

where $\Delta$ is the delay from research or incident to policy update, often several years.
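As a worked instance of this relation (the years here are illustrative, not figures from the paper): if foundational model-stealing research appeared in 2016 and a vendor's policy first listed model extraction as in-scope in 2024, then

$$\Delta = 2024 - 2016 = 8 \text{ years},$$

placing the policy revision roughly eight years behind the underlying research.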

4. Security and Development Implications

These postures have direct impacts on the security, robustness, and future trajectory of AI system development and deployment:

  • Proactive postures encourage submission of vulnerability reports, clarify eligibility, and often support structured remediation and patching workflows—enabling more robust identification and correction of technical risks to system integrity (e.g., model theft, data leakage, adversarial exploitation).
  • Silent or restrictive postures can create reporting ambiguity, leaving researchers uncertain as to what constitutes a valid AI vulnerability. This can deter responsible disclosure and result in unresolved exposures persisting into production. The exclusion of jailbreaking/hallucination (common vectors for real-world harm) from eligibility further increases undetected risk.
  • The overall lag in updating policies suggests that as AI systems grow in complexity—especially with the introduction of new technical architectures and content generation paradigms—legacy approaches may prove inadequate. This may necessitate a shift from “bolt-on” after-the-fact models to more deeply integrated, “built-in” safety and security-by-design paradigms, potentially accompanied by more rigorous post-incident regulation or mandatory reporting standards.

5. Quantitative Summary

The paper provides summary statistics that capture the key trends:

  • Disclosure channel prevalence: $P_{dc} = 64\%$
  • Explicit AI mention: $P_{ai} = 18\%$
  • No disclosure channel: $P_{ex\_cl} = 36\%$
  • Inclusion ratio for model extraction: $IR_{\text{model extraction}} = 0.90$

These ratios can be extended to other vulnerability categories, capturing the difference in eligibility for various AI-specific issues across the vendor landscape.
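A minimal sketch of how such per-category ratios could be tabulated, assuming IR is the fraction of policies addressing a category that accept it as in-scope (that definition and the sample data are assumptions for illustration, not the paper's method):

```python
# Sketch: tabulating a per-category inclusion ratio. Here IR is assumed to be
# the fraction of policies addressing a category that treat it as in-scope;
# the sample decisions below are illustrative, not the paper's dataset.
from collections import defaultdict

# (vulnerability category, accepted as in-scope?) pairs coded from policies
decisions = [
    ("model extraction", True), ("model extraction", True),
    ("jailbreaking", True), ("jailbreaking", False), ("jailbreaking", False),
    ("hallucination", False), ("hallucination", False),
]

counts = defaultdict(lambda: [0, 0])  # category -> [in-scope count, total]
for category, in_scope in decisions:
    counts[category][0] += in_scope
    counts[category][1] += 1

for category, (accepted, total) in counts.items():
    print(f"IR_{category.replace(' ', '_')} = {accepted / total:.2f}")
```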

6. Broader Ecosystem and Accountability Considerations

The observed vendor postures reflect both the immaturity and divergence of AI security culture compared to classical IT/infosec environments. Proactive vendors may set a precedent for industry-wide best practices, encouraging transparency, more rapid incident response, and the institutionalization of AI-specific security expertise in product life cycles. By contrast, a preponderance of vendors with silent or restrictive policies may leave the ecosystem underprepared for emerging systemic risks, challenge the efficacy of collaborative vulnerability research, and potentially expose both users and organizations to greater liability and harm.

A plausible implication is that persistent gaps between incident types, academic knowledge, and vendor policies will push regulatory bodies to demand greater clarity, responsiveness, and technical specificity in vendor disclosure programs as AI continues to proliferate across critical domains.

Conclusion

Vendor postures toward AI vulnerabilities currently exhibit substantial heterogeneity, both in disclosure policy coverage and in eligibility for reporting and remediation. While leading organizations develop detailed, transparent frameworks for AI-specific risk, the majority of vendors either provide minimal guidance or explicitly exclude key vulnerability categories, contributing to persistent gaps between incident realities and formal handling mechanisms. The field is thus characterized by lagging policy adaptation, incomplete incident alignment, and highly variable accountability—factors with profound implications for the security, resilience, and societal trust of AI-integrated systems as they scale in complexity and ubiquity (Piao et al., 7 Sep 2025).
