The Foundation Model Transparency Index (2310.12941v1)

Published 19 Oct 2023 in cs.LG and cs.AI

Abstract: Foundation models have rapidly permeated society, catalyzing a wave of generative AI applications spanning enterprise and consumer-facing contexts. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies (e.g. social media). Reversing this trend is essential: transparency is a vital precondition for public accountability, scientific innovation, and effective governance. To assess the transparency of the foundation model ecosystem and help improve transparency over time, we introduce the Foundation Model Transparency Index. The Foundation Model Transparency Index specifies 100 fine-grained indicators that comprehensively codify transparency for foundation models, spanning the upstream resources used to build a foundation model (e.g. data, labor, compute), details about the model itself (e.g. size, capabilities, risks), and the downstream use (e.g. distribution channels, usage policies, affected geographies). We score 10 major foundation model developers (e.g. OpenAI, Google, Meta) against the 100 indicators to assess their transparency. To facilitate and standardize assessment, we score developers in relation to their practices for their flagship foundation model (e.g. GPT-4 for OpenAI, PaLM 2 for Google, Llama 2 for Meta). We present 10 top-level findings about the foundation model ecosystem: for example, no developer currently discloses significant information about the downstream impact of its flagship model, such as the number of users, affected market sectors, or how users can seek redress for harm. Overall, the Foundation Model Transparency Index establishes the level of transparency today to drive progress on foundation model governance via industry standards and regulatory intervention.
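
As a rough illustration of how an indicator-based index like this can be tallied, the sketch below aggregates binary indicator scores into per-domain and overall totals for each developer. The indicator names, domain labels, and scores are hypothetical placeholders for illustration, not the paper's actual rubric.

```python
# Minimal sketch of tallying a transparency index; indicator names,
# domains, and scores below are hypothetical, not the paper's rubric.
from collections import defaultdict

# Each indicator belongs to one of the three domains described in the abstract.
INDICATORS = {
    "data sources disclosed": "upstream",
    "compute disclosed": "upstream",
    "model size disclosed": "model",
    "capabilities documented": "model",
    "usage policy published": "downstream",
    "affected geographies reported": "downstream",
}

# 1 = indicator satisfied, 0 = not satisfied (illustrative values only).
scores = {
    "Developer A": {"data sources disclosed": 1, "model size disclosed": 1},
    "Developer B": {"usage policy published": 1},
}

def tally(developer_scores):
    """Sum indicator scores per domain and overall for one developer."""
    per_domain = defaultdict(int)
    for indicator, domain in INDICATORS.items():
        per_domain[domain] += developer_scores.get(indicator, 0)
    total = sum(per_domain.values())
    return dict(per_domain), total

for dev, dev_scores in scores.items():
    domains, total = tally(dev_scores)
    print(dev, domains, f"total={total}/{len(INDICATORS)}")
```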

Holistic Evaluation of LLMs

The paper "Holistic Evaluation of LLMs" from the Center for Research on Foundation Models (CRFM) at Stanford University introduces a methodical framework for evaluating LLMs, known as HELM. By adopting a top-down approach, this framework contrasts with prior bottom-up evaluation methodologies: it begins by clearly defining the evaluation objectives through carefully selected scenarios and metrics, which together form a taxonomy. This taxonomy also highlights areas that remain under-explored or lack adequate evaluation metrics.

A significant contribution of this paper is its emphasis on a multi-metric evaluation strategy. Traditional LLM benchmarks predominantly prioritize accuracy, delegating other critical desiderata to separate, specialized datasets. HELM instead integrates multiple metrics within a single framework, emphasizing that criteria beyond accuracy are equally significant and context-dependent. This promotes a more comprehensive view of model performance across varying contexts, better reflecting real-world applicability.
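
To make the idea concrete, here is a minimal sketch of what a multi-metric evaluation loop might look like. The scenario format, metric functions, and model interface are illustrative assumptions, not HELM's actual API.

```python
# Illustrative multi-metric evaluation loop; scenario format, metrics,
# and the model interface are assumptions, not HELM's actual API.
from typing import Callable, Dict, List

def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip() == reference.strip())

def length_ratio(prediction: str, reference: str) -> float:
    # Stand-in for a non-accuracy metric such as efficiency or verbosity.
    return len(prediction) / max(len(reference), 1)

METRICS: Dict[str, Callable[[str, str], float]] = {
    "exact_match": exact_match,
    "length_ratio": length_ratio,
}

def evaluate(model: Callable[[str], str],
             scenario: List[Dict[str, str]]) -> Dict[str, float]:
    """Average every metric over all (prompt, reference) pairs in a scenario."""
    totals = {name: 0.0 for name in METRICS}
    for example in scenario:
        prediction = model(example["prompt"])
        for name, metric in METRICS.items():
            totals[name] += metric(prediction, example["reference"])
    return {name: value / len(scenario) for name, value in totals.items()}

# Toy usage with a trivial "model" that always answers "4".
toy_scenario = [{"prompt": "2+2=", "reference": "4"}]
print(evaluate(lambda p: "4", toy_scenario))
```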

The framework also endeavors to standardize the evaluation processes of LLMs. Prior to this framework, the evaluation of LLMs was inconsistent, with numerous core scenarios lacking any model evaluation. The proposed framework ensures that LLMs are evaluated uniformly across numerous scenarios. This consistency improves the comparability of results, facilitating a deeper understanding of the capabilities and limitations of LLMs under a standardized set of conditions.

Additionally, the paper discusses "evaluation primitives" which define the essential components of each evaluation run. These primitives include the scenario (what is to be evaluated), the model and its adaptation process (the method of obtaining results), and the associated metrics (measures of result quality). By clearly delineating these components, the framework provides a structured and repeatable process for LLM evaluation.
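
The sketch below expresses these primitives as simple data structures. The field names and the RunSpec wrapper are hypothetical conveniences chosen for illustration, not the framework's actual types.

```python
# Hypothetical data structures for the evaluation primitives described above;
# field names and the RunSpec wrapper are illustrative, not HELM's own types.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str              # what is being evaluated, e.g. a QA or summarization task
    instances: List[dict]  # (input, reference) pairs drawn from the scenario

@dataclass
class Adaptation:
    model_name: str        # which model produces the outputs
    strategy: str          # how it is prompted, e.g. "zero-shot" or "few-shot"

@dataclass
class Metric:
    name: str
    compute: Callable[[str, str], float]  # maps (prediction, reference) to a score

@dataclass
class RunSpec:
    """One evaluation run: a scenario, an adapted model, and the metrics applied."""
    scenario: Scenario
    adaptation: Adaptation
    metrics: List[Metric]
```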

This holistic evaluation procedure holds significant theoretical and practical implications. Theoretically, it encourages a more nuanced understanding of LLM capabilities, moving beyond simplistic accuracy-driven assessments. Practically, it enables a thorough assessment that can inform model deployment in real-world scenarios. The top-down evaluation strategy may serve as a crucial step forward in standardizing LLM assessment, thereby enhancing the reliability of research findings and technology applications in the field.

Future research may refine the taxonomy, extend it to incorporate emerging desiderata, or bridge gaps in evaluation across new or evolving scenarios. Further investigation into task- or context-specific adaptation processes could also broaden the applicability of LLM evaluations within the HELM framework. Such advances could offer sharper insights and support more robust development of AI methodologies.

Authors (8)
  1. Rishi Bommasani (28 papers)
  2. Kevin Klyman (17 papers)
  3. Shayne Longpre (49 papers)
  4. Sayash Kapoor (23 papers)
  5. Nestor Maslej (10 papers)
  6. Betty Xiong (5 papers)
  7. Daniel Zhang (34 papers)
  8. Percy Liang (239 papers)
Citations (49)