Open questions on granular, perspective-sensitive FAIRness assessment
Develop a granular, perspective-sensitive framework for assessing FAIRness (Findable, Accessible, Interoperable, Reusable) at multiple levels of information granularity in datasets structured as semantic units and nested FAIR Digital Objects:
- Determine which parts of a dataset are FAIR for which stakeholders, and why.
- Devise methods to improve FAIRness within specified use scenarios.
- Establish principled rules for how FAIRness scores propagate through nested modular data and knowledge structures.
- Derive the relationship between FAIRness scores and a dataset's granular complexity.
- Define procedures for calculating cumulated FAIRness across granularity levels that differ in FAIRness.
- Ascertain criteria for deciding whether a dataset's FAIRness is sufficient for a given task.
Rather than asking "Is this dataset FAIR?", we should ask more nuanced questions: Which parts of a dataset are FAIR, for whom, and why? What would it take to improve its FAIRness within a given use scenario? How should FAIRness scores propagate through nested, modular data and knowledge structures? How should a FAIRness score relate to a dataset's granular complexity score? How is a cumulated FAIRness score calculated for a dataset whose FAIRness differs across its granularity levels? Can we assess whether a given dataset is FAIR enough for the task at hand (task-dependent FAIRness assessment)? All of these are open questions that must be addressed if we want to realise the IFDS.
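To make the score-propagation question concrete, the following sketch illustrates one naive candidate rule on a toy nested structure. The data, structure, and aggregation rule (unweighted mean per FAIR principle) are purely illustrative assumptions, not a proposed answer: which propagation rule is appropriate is exactly the open question.

```python
from statistics import mean

# Hypothetical nested digital object: leaves carry per-principle scores
# in [0, 1]; branches contain child objects. All names are illustrative.
dataset = {
    "metadata": {"F": 1.0, "A": 1.0, "I": 0.8, "R": 0.9},
    "measurements": {
        "raw": {"F": 0.6, "A": 0.9, "I": 0.4, "R": 0.3},
        "processed": {"F": 0.8, "A": 0.9, "I": 0.7, "R": 0.6},
    },
}

def is_leaf(node):
    # A leaf holds numeric scores for the four FAIR principles.
    return all(isinstance(v, (int, float)) for v in node.values())

def propagate(node):
    """Naive propagation rule: a branch's score for each principle is
    the unweighted mean of its children's scores. A real framework
    might instead weight children by their granular complexity or by
    their relevance to a given use scenario."""
    if is_leaf(node):
        return node
    child_scores = [propagate(child) for child in node.values()]
    return {p: mean(c[p] for c in child_scores) for p in "FAIR"}

print(propagate(dataset))
```

Even this toy example surfaces the open issues: an unweighted mean lets highly FAIR metadata mask poorly FAIR measurement data, and it ignores both task-dependence and the differing granularity of the children being averaged.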