AI Exposure Scores

Updated 9 December 2025
  • AI Exposure Scores are quantitative metrics designed to assess how AI systems may impact human activities and labor markets.
  • They incorporate capabilities-based, diffusion/adoption-based, and risk-based methodologies to quantify AI substitution and augmentation.
  • These metrics combine data from sources like O*NET, patents, 10-K filings, and NLP tools to support workforce planning, financial analytics, and regulatory compliance.

AI Exposure Scores are quantitative metrics designed to assess the extent to which AI systems, technologies, or products have the potential to impact, substitute, or complement human activities, especially in labor markets, operational decision-making, finance, information access, and risk management. Exposure scores span methodologies rooted in technical feasibility, observed market activity, regulatory obligations, and realized economic outcomes. These measures are central to academic, corporate, and policy analysis of AI’s diffusion, risk profile, and socioeconomic consequences.

1. Conceptual Foundations and Scope

AI exposure scores are built on the premise that AI’s impact is multi-dimensional, contingent not only on theoretical capabilities but also on actual system deployment, task structure, market demand, and regulatory environments. The core objective is to map the overlap between AI system functionality and human expertise or assets, with various indices quantifying exposure at the level of occupations, tasks, documents, firms, or technical vulnerabilities.

Broadly, exposure scoring systems fall into three methodological categories:

  • Capabilities-based: Mapping technical feasibility of AI substituting or augmenting human activity, e.g., via LLM self-assessment, expert annotation, or skill-task-algorithm matching.
  • Diffusion/adoption-based: Quantifying actual market activity, such as venture-backed startup targeting rates, patent diffusion, or textual signals from company filings.
  • Risk/regulatory-based: Assessing exposure as the risk profile induced or altered by AI system deployment, including operational, compliance, technical, and environmental factors.

These complementary perspectives enable robust multi-factor quantification of AI’s societal and economic footprint (Fenoaltea et al., 6 Dec 2024, Chopra et al., 29 Oct 2025, Dominski et al., 11 Jul 2025, Muhammad et al., 24 Aug 2025, Huwyler, 26 Nov 2025, Meindl et al., 2021, Ante et al., 3 Jan 2025).

2. Definitions and Formal Index Construction

Each family of exposure scores embodies distinct formalisms and data sources:

Occupational and Task-Level Indices

Startup-Targeted Exposure (AISE)

  • For occupation $i$ and $S$ "AI-tagged" startups, the AI Startup Exposure Index (AISE) is:

$$\mathrm{AISE}_{i} = \frac{1}{S} \sum_{s=1}^{S} E_{i,s}$$

where $E_{i,s} \in \{0,1\}$ reflects LLM (Llama 3) assessment of direct substitution feasibility. Weighted averages permit industry and regional aggregation using employment shares (Fenoaltea et al., 6 Dec 2024).
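A minimal sketch of the computation, with hypothetical occupations and binary judgments standing in for the paper's Llama 3 annotations:

```python
# Sketch of the AISE computation: the mean of binary substitution judgments
# E[i, s] over S startups. Judgments here are invented stand-ins for the
# paper's LLM assessments.
import numpy as np

S = 5  # number of AI-tagged startups
# E[occupation][s] = 1 if startup s is judged able to directly substitute it
E = {
    "data_entry_clerk": np.array([1, 1, 0, 1, 1]),
    "surgeon":          np.array([0, 0, 0, 0, 0]),
}

aise = {occ: judgments.mean() for occ, judgments in E.items()}
print(aise)  # {'data_entry_clerk': 0.8, 'surgeon': 0.0}
```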

Task/Model-Based Exposure (OAIES, Theory-Based)

  • The Occupational AI Exposure Score (OAIES) employs LLMs to estimate, for each O*NET task $t$ within an occupation $o$ and model $m$, what share of the task can be performed at each AI development "stage" ($\eta^{m}_{t,s}$). The occupation score is a relevance-weighted sum:

$$\mathrm{Exp}^{m}_{o,s} = \sum_{t \in T_o} w_{o,t}\, \eta^{m}_{t,s}$$

where $w_{o,t}$ are normalized task-importance weights (Dominski et al., 11 Jul 2025).
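A sketch under assumed values, where `eta` stands in for LLM-estimated task shares and `importance` for O*NET task-importance ratings:

```python
# Sketch of the OAIES weighted sum: task-level estimates eta[t] of the share
# of each task performable at a given AI stage, combined with normalized
# task-importance weights. All values are hypothetical.
tasks = ["draft reports", "schedule meetings", "negotiate contracts"]
eta = {"draft reports": 0.9, "schedule meetings": 0.7, "negotiate contracts": 0.2}
importance = {"draft reports": 4.5, "schedule meetings": 3.0, "negotiate contracts": 4.0}

total_importance = sum(importance.values())
w = {t: importance[t] / total_importance for t in tasks}  # normalize weights
exposure = sum(w[t] * eta[t] for t in tasks)
print(round(exposure, 3))  # ~0.604
```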

  • The theory-driven index (grounded in Moravec's Paradox) aggregates task-level scores across theoretical/empirical dimensions:

$$E_j = \frac{\sum_i w_i \left[\frac{1}{4} (PV_i + DA_i + TK_i + AG_i)\right]}{\sum_i w_i}$$

where $PV_i$ is performance variance, $DA_i$ data abundance, $TK_i$ tacit knowledge, $AG_i$ algorithmic gap, and $w_i$ task importance (Schaal, 15 Oct 2025).
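An illustrative computation of $E_j$ with invented dimension scores:

```python
# Sketch of the Moravec's-Paradox index: each task's four dimension scores
# (performance variance, data abundance, tacit knowledge, algorithmic gap)
# are averaged, then importance-weighted across tasks. Values are invented.
tasks = [
    # (importance w_i, PV_i, DA_i, TK_i, AG_i), each dimension in [0, 1]
    (4.0, 0.8, 0.9, 0.2, 0.7),
    (2.5, 0.3, 0.4, 0.8, 0.2),
]
num = sum(w * (pv + da + tk + ag) / 4 for w, pv, da, tk, ag in tasks)
den = sum(w for w, *_ in tasks)
E_j = num / den
print(round(E_j, 3))  # ~0.563
```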

Patent-Based Exposure

  • For occupation $\text{occ}$, AI-patent exposure is:

$$E_{\text{occ},AI} = \sum_{t} w_{t,\text{occ}} \cdot \log(1 + N_{t,AI})$$

with $N_{t,AI}$ the count of AI patents linked to task $t$ and $w_{t,\text{occ}}$ the task-importance weights (Meindl et al., 2021).
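A sketch with hypothetical patent counts and task weights:

```python
# Sketch of patent-based exposure: task-importance-weighted log counts of
# AI patents linked to each task. Counts and weights are invented.
import math

tasks = {
    # task: (importance weight w, count of linked AI patents N)
    "image classification": (0.5, 120),
    "report writing":       (0.3, 15),
    "physical assembly":    (0.2, 2),
}
E_occ = sum(w * math.log1p(n) for w, n in tasks.values())  # log1p(n) = log(1 + n)
print(round(E_occ, 3))  # ~3.449
```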

Skill-Value Exposure (Iceberg Index)

  • Measures the percent of wage value tied to AI-automatable skills per occupation:

$$I_o = \sum_{s} m_{o,s}\, a_s$$

where $a_s = 1$ iff some AI tool covers skill $s$, and $m_{o,s}$ is the skill's normalized importance (Chopra et al., 29 Oct 2025).
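A sketch with invented skills, importances, and coverage flags:

```python
# Sketch of the Iceberg skill-value index: the share of an occupation's
# normalized skill importance tied to skills covered by at least one AI
# tool. Skill names, importances, and flags are hypothetical.
skills = {
    # skill: (normalized importance m, covered-by-AI flag a in {0, 1})
    "data analysis":     (0.40, 1),
    "client relations":  (0.35, 0),
    "report generation": (0.25, 1),
}
I_o = sum(m * a for m, a in skills.values())
print(round(I_o, 2))  # 0.65: 65% of skill value is AI-coverable
```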

Document and System-Level Indices

Fair Document Exposure (T-Retrievability)

  • For IR model $\theta$ and topical query cluster $\mathcal{Q}_i$:

$$r(D, \mathcal{C}, \mathcal{Q}_{i}, \theta) = \frac{1}{|\mathcal{Q}_i|} \sum_{Q \in \mathcal{Q}_i} \frac{1}{\log(1 + \rho(D;Q,\theta))}$$

where $\rho(D;Q,\theta)$ is the rank at which document $D$ is retrieved for query $Q$. Local Gini coefficients on these distributions are aggregated to yield system-wide exposure/fairness metrics (Chang et al., 29 Aug 2025).
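A sketch computing one document's retrievability from hypothetical ranks, plus a Gini coefficient over per-document scores; the paper's exact query clustering and aggregation are not reproduced here:

```python
# Sketch of T-Retrievability for one document over a topical query cluster,
# then a Gini coefficient over a (hypothetical) per-document distribution.
import math

ranks_for_doc = [3, 10, 1, 25]  # rank of document D for each query in the cluster
r = sum(1 / math.log(1 + rho) for rho in ranks_for_doc) / len(ranks_for_doc)

def gini(values):
    """Gini coefficient of a non-negative score distribution."""
    v = sorted(values)
    n = len(v)
    cum = sum((i + 1) * x for i, x in enumerate(v))
    return (2 * cum) / (n * sum(v)) - (n + 1) / n

scores = [r, 0.9, 0.1, 0.4]  # invented retrievability scores for a collection
print(round(r, 3), round(gini(scores), 3))
```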

Firm and Market Exposure

AI Engagement Scores from 10-K NLP

  • For firm $i$ and year $t$, the composite normalized TF-IDF over AI keywords is:

$$S_{i,t} = \sum_{k \in K} \text{TF-IDF}^{\text{norm}}_{k,d(i,t)}$$

where $K$ is the term set {"artificial intelligence*", "AI", "A.I."}. Binary exposure (presence/absence) and various weighted indices (AII, SAII, TAII) are constructed for market analytics (Ante et al., 3 Jan 2025).
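A sketch using scikit-learn's `TfidfVectorizer` as a stand-in for the paper's NLP pipeline, with a simplified keyword set and toy filings:

```python
# Sketch of the 10-K engagement score: summed TF-IDF weights for AI keywords
# over a firm-year filing. Texts and keyword set are simplified examples.
from sklearn.feature_extraction.text import TfidfVectorizer

filings = [
    "We invest in artificial intelligence and AI-driven analytics.",
    "Our stores sell groceries and household goods.",
]
keywords = {"artificial intelligence", "ai"}  # simplified version of K

vec = TfidfVectorizer(ngram_range=(1, 2), norm="l2")  # unigrams + bigrams
X = vec.fit_transform(filings)
vocab = vec.vocabulary_

for i, _ in enumerate(filings):
    s = sum(X[i, vocab[k]] for k in keywords if k in vocab)
    print(f"firm {i}: S = {s:.3f}, exposed = {s > 0}")
```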

AI System Risk Exposure

CORTEX Composite Risk Scoring

  • Multi-dimensional score for operational AI vulnerabilities:

$$S = \alpha\, U(L, I) + \gamma C + \delta G + \theta T + \lambda E + \rho R$$

where $U(L,I) = 1-\exp[-k\,(L \times I)]$ is a saturating likelihood-impact term, $C$ is context, $G$ governance, $T$ technical surface, $E$ environmental exposure, and $R$ residual risk, with Bayesian and Monte Carlo aggregation for uncertainty (Muhammad et al., 24 Aug 2025).
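A sketch of the composite with hypothetical weights and factor values; the paper's calibrated parameters and Bayesian aggregation are not reproduced:

```python
# Sketch of the CORTEX-style composite: a saturating likelihood-impact
# utility U(L, I) = 1 - exp(-k * L * I) plus linearly weighted context,
# governance, technical-surface, environmental, and residual-risk factors.
# All weights and factor values below are invented for illustration.
import math

def composite_score(L, I, C, G, T, E, R,
                    k=2.0, alpha=0.4, gamma=0.15, delta=0.15,
                    theta=0.1, lam=0.1, rho=0.1):
    U = 1 - math.exp(-k * L * I)  # saturating likelihood x impact term
    return alpha * U + gamma * C + delta * G + theta * T + lam * E + rho * R

# Example: moderately likely (L=0.6), high-impact (I=0.8) vulnerability
print(round(composite_score(L=0.6, I=0.8, C=0.5, G=0.3, T=0.7, E=0.4, R=0.2), 3))
```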

Risk-Adjusted Intelligence Dividend

  • Defines the AI Exposure Score as annualized expected AI-induced loss relative to system cost:

$$\text{AI Exposure Score} = \frac{\text{ALE}^{\text{intro}}}{\text{TCO}}$$

where $\text{ALE}^{\text{intro}}$ is the expected annual AI-specific loss and TCO is total cost of ownership. Net risk, controls, and compliance costs are simulated via Monte Carlo (Huwyler, 26 Nov 2025).
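A Monte Carlo sketch with an assumed Poisson incident frequency and lognormal severity; the actual loss distributions and TCO are scenario-specific:

```python
# Sketch of the ALE/TCO exposure ratio: simulate annual AI-specific loss as
# Poisson event counts times lognormal severities, then divide the mean
# annual loss by total cost of ownership. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N, freq = 100_000, 2.0   # simulation runs; expected AI incidents per year
mu, sigma = 10.0, 1.0    # lognormal severity parameters (log-dollars)
tco = 5_000_000          # total cost of ownership of the AI system

events = rng.poisson(freq, size=N)                     # incidents per simulated year
losses = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in events])
ale = losses.mean()                                    # annualized loss expectancy
print(f"ALE = {ale:,.0f}, AI Exposure Score = {ale / tco:.4f}")
```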

3. Data Sources, Computation, and Aggregation

Exposure scores leverage sector-specific and cross-domain datasets, including O*NET task and skill descriptors, patent corpora, venture-capital databases, SEC 10-K filings, IR test collections, and LLM annotation pipelines.

Aggregation schemes consistently use weighted sums (by employment, wage, document length, firm size, or scenario impact), allowing exposure to be compared across granularities (task, occupation, region, sector, firm, system).
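For example, occupation-level scores roll up to a regional score via employment-share weights (all numbers invented):

```python
# Employment-share-weighted aggregation of occupation-level exposure scores
# to a single regional score, mirroring the weighted-sum pattern above.
exposure = {"analyst": 0.8, "nurse": 0.2, "cashier": 0.5}
employment = {"analyst": 10_000, "nurse": 25_000, "cashier": 15_000}

total = sum(employment.values())
regional_score = sum(exposure[o] * employment[o] / total for o in exposure)
print(round(regional_score, 3))  # 0.41
```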

4. Empirical Patterns and Comparative Insights

AI exposure scores reveal substantial heterogeneity in impact:

  • Occupational Exposure Rankings: Highest AISE and patent-based exposure in data-intensive, routine information processing, management, STEM, and analytics roles; lowest in highly embodied, high-stakes, or ethical/regulated domains (e.g., judges, surgeons, athletes, manual trades) (Fenoaltea et al., 6 Dec 2024, Schaal, 15 Oct 2025, Meindl et al., 2021).
  • Geographic and Sectoral Distribution: AISE and Iceberg metrics show highest exposure in major tech hubs (San Francisco Bay, Seattle), professional services, finance, and information sectors; lowest in agriculture, construction, and some healthcare roles (Chopra et al., 29 Oct 2025, Fenoaltea et al., 6 Dec 2024).
  • Document Exposure Inequality: T-Retrievability shows that fairness (exposure equality) in document retrieval is highly topic- and model-sensitive, with neural rerankers displaying lower inequality on average but some topics facing severe exposure skew (Chang et al., 29 Aug 2025).
  • Temporal and Skill-Based Heterogeneity: OAIES and patent lag analyses indicate that current exposure can be decoupled from near-term displacement: occupations with high underlying automatability may show low present startup or patenting attention, and labor displacement lags technical exposure by 10–20 years (Dominski et al., 11 Jul 2025, Meindl et al., 2021).
  • Risk and Regulatory Sensitivity: Risk/ROI-focused scores demonstrate the need to account for new sources of exposure from compliance, adversarial risk, and operational volatility, not just productivity gains (Muhammad et al., 24 Aug 2025, Huwyler, 26 Nov 2025).

Correlations among different exposure scores are often moderate, indicating overlap but also that each measure captures distinct facets of AI’s impact. Ensembles of multiple indices provide superior predictive validity for unemployment and labor flow outcomes (Frank et al., 2023).

5. Policy, Governance, and Practical Applications

Exposure scores have critical implications for:

  • Workforce and Regional Planning: Targeting training, reskilling, and mobility policies toward occupations or locations with high technical or market-validated exposure (Fenoaltea et al., 6 Dec 2024, Chopra et al., 29 Oct 2025, Schaal, 15 Oct 2025, Dominski et al., 11 Jul 2025).
  • Risk Management and Audit: Embedding composite risk/exposure scores (e.g., CORTEX, ALE-based, Gini/T-Retrievability) in organizational governance, compliance dashboards, conformity assessment (EU AI Act, ISO/IEC 42001), and executive reporting (Muhammad et al., 24 Aug 2025, Huwyler, 26 Nov 2025).
  • Financial Analytics: Constructing AI-exposed investment indices for portfolio management, event studies, and risk-return modeling; NLP-based measures outperforming many existing thematic ETFs (Ante et al., 3 Jan 2025).
  • Dynamic Monitoring and Policy Simulation: LPM and scenario tools (e.g., Project Iceberg) enable ex ante simulation of exposure diffusion under alternative adoption and policy shocks (Chopra et al., 29 Oct 2025).

A common finding is that adoption is selective and gradual, concentrated in routine and information-processing domains, with high-stakes, high-skill occupations less exposed in the near term than technical feasibility scores suggest. Policy designs must consider both technical and societal constraints on exposure.

6. Limitations, Critiques, and Future Directions

Key limitations and considerations include:

  • Representation Bias: Metrics derived from venture, patent, or public tool datasets may under-represent sectors (e.g., manufacturing, in-house enterprise AI) or over-privilege certain application types (e.g., generative AI) (Fenoaltea et al., 6 Dec 2024, Meindl et al., 2021, Chopra et al., 29 Oct 2025).
  • Task/O*NET Homogeneity: Socio-occupational heterogeneity is averaged out; local, firm, or demographic variations in exposure are suppressed (Fenoaltea et al., 6 Dec 2024, Schaal, 15 Oct 2025).
  • LLM Uncertainty and Prompt Variance: Automated text-based assessments (via LLMs) may introduce classifier noise and bias, requiring continual model benchmarking and potential human validation (Fenoaltea et al., 6 Dec 2024, Dominski et al., 11 Jul 2025).
  • Patent/Keyword Proxy Issues: Patents and term frequencies do not guarantee active deployment; measures may misestimate impact for slow-to-commercialize innovations (Meindl et al., 2021, Ante et al., 3 Jan 2025).
  • Risk Metric Specification: For risk-based scores, parameter choice (e.g., scenario specification, loss distribution, curvature constants) and regulatory landscape shifts can meaningfully alter exposure estimates (Muhammad et al., 24 Aug 2025, Huwyler, 26 Nov 2025).
  • Empirical Predictive Validity: Individual scores are often weak predictors of unemployment or displacement; ensemble approaches and continuous contextual updating are empirically superior (Frank et al., 2023).

Ongoing development emphasizes integrating AI capability evolution, labor mobility networks, sectoral demand, and regulatory events into unified, dynamically updated indices, aligned with real-time economic and risk outcomes.

