Open questions on granular, perspective-sensitive FAIRness assessment

Develop a granular and perspective-sensitive framework for assessing FAIRness (Findable, Accessible, Interoperable, Reusable) at multiple levels of information granularity in datasets structured as semantic units and nested FAIR Digital Objects. In particular:

- Determine which parts of a dataset are FAIR for which stakeholders, and why.
- Devise methods to improve FAIRness within specified use scenarios.
- Establish principled rules for how FAIRness scores propagate through nested modular data and knowledge structures.
- Derive the relationship between a FAIRness score and a dataset's granular complexity score.
- Define procedures for calculating cumulated FAIRness across granularity levels whose FAIRness differs.
- Ascertain criteria for task-dependent sufficiency of FAIRness, i.e., whether a dataset is FAIR enough for a given task.
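The last objective, task-dependent sufficiency, can be made concrete with a minimal sketch. Note that the paper poses this as an open question and prescribes no criteria: the per-principle scores, the threshold rule, and the example tasks below are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: task-dependent FAIRness sufficiency modeled as
# per-principle minimum thresholds. All names and numbers are assumptions.
FAIR_PRINCIPLES = ("F", "A", "I", "R")

def is_fair_enough(scores: dict[str, float],
                   task_thresholds: dict[str, float]) -> bool:
    """A dataset is 'FAIR enough' for a task if every principle's score
    meets that task's minimum threshold (missing thresholds default to 0)."""
    return all(scores.get(p, 0.0) >= task_thresholds.get(p, 0.0)
               for p in FAIR_PRINCIPLES)

# Illustrative tasks: text mining is assumed to stress Accessibility and
# Interoperability; long-term archiving, Findability and Reusability.
dataset_scores = {"F": 0.9, "A": 0.7, "I": 0.5, "R": 0.8}
text_mining    = {"F": 0.5, "A": 0.7, "I": 0.7, "R": 0.4}
archiving      = {"F": 0.8, "A": 0.5, "I": 0.3, "R": 0.7}

print(is_fair_enough(dataset_scores, text_mining))  # False (I: 0.5 < 0.7)
print(is_fair_enough(dataset_scores, archiving))    # True
```

The same dataset can thus be sufficiently FAIR for one task but not another, which is exactly what makes a single dataset-level verdict uninformative.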

Background

The paper introduces the Grammar of FAIR, proposing semantic units and nested FAIR Digital Objects (FDOs) as granular, machine-actionable carriers of meaning. In this framework, FAIRness should be evaluated not only at the dataset level but also at the level of individual statement units and compound/nested units, reflecting differences in granular richness and depth.

Within this granular paradigm, the authors identify unresolved issues about how to operationalize FAIRness assessment: identifying which components are FAIR for which stakeholders, improving FAIRness for specific use scenarios, defining propagation of FAIRness across nested units, relating FAIRness to granular complexity, aggregating FAIRness across levels, and determining task-dependent sufficiency. Addressing these open questions is framed as necessary to realize the Internet of FAIR Data and Services (IFDS).
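To make the propagation and aggregation questions tangible, the sketch below models semantic units as a recursive structure and applies one candidate rule: cumulated FAIRness as the mean of the own scores of all units in a subtree, with subtree size standing in for granular complexity. This rule is an assumption chosen for illustration; defining the actual propagation and aggregation rules is precisely the open question the paper raises.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticUnit:
    """A semantic unit (statement or compound unit) with an own FAIRness
    score in [0, 1] and optional nested child units. Structure and scoring
    are illustrative assumptions, not the paper's specification."""
    name: str
    own_score: float
    children: list["SemanticUnit"] = field(default_factory=list)

    def size(self) -> int:
        # Number of units in this subtree: a crude proxy for granular complexity.
        return 1 + sum(c.size() for c in self.children)

    def cumulated_score(self) -> float:
        """Candidate aggregation rule: average the own scores of every unit
        in the subtree, so each granular unit counts equally."""
        total = self.own_score + sum(
            c.cumulated_score() * c.size() for c in self.children
        )
        return total / self.size()

# Example: a compound dataset-level unit with two statement units of
# differing FAIRness at the finer granularity level.
dataset = SemanticUnit("dataset", 0.9, [
    SemanticUnit("statement-1", 0.6),
    SemanticUnit("statement-2", 0.9),
])
print(round(dataset.cumulated_score(), 2))  # 0.8
```

Even this simple rule exposes the design space: one could instead weight units by depth, by stakeholder relevance, or per FAIR principle, each yielding a different cumulated score for the same nested structure.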

References

Rather than asking "is this dataset FAIR?", we should ask some other questions: What parts of a dataset are FAIR, for whom, and why? What would it take to improve its FAIRness within a given use scenario? How should FAIRness scores propagate through nested modular data and knowledge structures? How should a FAIRness score relate to the granular complexity score of a dataset? How are cumulated FAIRness scores calculated for a dataset, if the FAIRness differs across its granularity levels? Is it possible to assess whether a given dataset is FAIR enough for a given task at hand (task-dependent FAIRness assessment)? All these represent open questions that have to be addressed at some point, given that we want to realise the IFDS.

The Grammar of FAIR: A Granular Architecture of Semantic Units for FAIR Semantics, Inspired by Biology and Linguistics  (2509.26434 - Vogt et al., 30 Sep 2025) in Section 'FAIR Semantics and the Granularity of FAIRness' (end of section)