A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions (2103.03610v1)

Published 5 Mar 2021 in cs.AI

Abstract: Increased adoption of AI systems into scientific workflows will result in increasing technical debt as the distance grows between the data scientists and engineers who develop AI system components and the scientists, researchers and other users who rely on them. This could quickly become problematic, particularly where guidance or regulations change and once-acceptable best practice becomes outdated, or where data sources are later discredited as biased or inaccurate. This paper presents a novel method for deriving a quantifiable metric capable of ranking the overall transparency of the process pipelines used to generate AI systems, such that users, auditors and other stakeholders can gain confidence that they will be able to validate and trust the data sources and contributors in the AI systems that they rely on. The methodology for calculating the metric, and the types of criteria that could be used to judge the visibility of contributions to systems, are evaluated through models published at ModelHub and PyTorch Hub, popular archives for sharing science resources. The approach is found to be helpful in driving consideration of the contributions made to generating AI systems and in encouraging effective documentation and improved transparency in machine learning assets shared within scientific communities.
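
The abstract does not reproduce the metric's formula, so the Python sketch below is purely illustrative of how a visibility-based transparency score over pipeline contributions might be computed. The criteria names, the Contribution class, and the simple averaging scheme are assumptions made for demonstration, not the paper's methodology.

```python
# Illustrative sketch only: the criteria names and the averaging scheme
# below are assumptions for demonstration, not the metric defined in the paper.
from dataclasses import dataclass, field

# Hypothetical visibility criteria a contribution might be judged against.
CRITERIA = (
    "contributor_identified",
    "data_source_documented",
    "licence_stated",
    "provenance_recorded",
)

@dataclass
class Contribution:
    name: str
    # Visibility criteria this contribution satisfies.
    satisfied: set = field(default_factory=set)

def contribution_visibility(c: Contribution) -> float:
    """Fraction of visibility criteria satisfied by one contribution."""
    return sum(1 for crit in CRITERIA if crit in c.satisfied) / len(CRITERIA)

def pipeline_transparency(contributions: list) -> float:
    """Average visibility across all contributions in a process pipeline."""
    if not contributions:
        return 0.0
    return sum(contribution_visibility(c) for c in contributions) / len(contributions)

if __name__ == "__main__":
    pipeline = [
        Contribution("training data", {"data_source_documented", "licence_stated"}),
        Contribution("model weights", {"contributor_identified"}),
    ]
    print(f"Transparency score: {pipeline_transparency(pipeline):.2f}")
```

A real scoring scheme would likely weight criteria differently per contribution type and per stakeholder; the equal-weight average here is only to make the ranking idea concrete.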

Authors (6)
  1. Iain Barclay (10 papers)
  2. Harrison Taylor (5 papers)
  3. Alun Preece (41 papers)
  4. Ian Taylor (20 papers)
  5. Dinesh Verma (8 papers)
  6. Geeth De Mel (7 papers)
Citations (11)
