...
In this pre-FAIRification phase you assess whether your (meta)data already contains FAIR features, such as persistent unique identifiers for data elements and rich metadata, by using FAIRness assessment tooling [Generic].
By quantifying the level of FAIRness of the data based on its current characteristics and environment, the assessment outcomes can help shape the necessary steps and requirements needed to achieve the desired FAIRification objectives [FAIRInAction].
[Jolanda] Different assessment tools are available; the How-to section describes a variety of assessment tools based on the FAIR principles.
...
This step will help you assess the current FAIRness level of your data. Comparing the current FAIRness level to the previously defined FAIRification objectives will help you shape the necessary steps and requirements needed to achieve your FAIRification goals [FAIRInAction].
Furthermore, the outcomes of this assessment can serve as a baseline to compare against in the Assess FAIRness step, to track the progress of your data towards FAIRness. [Hannah; copied from above]
...
This section could describe the expertise required. Perhaps the Build Your Team step could then aggregate all the “Expertise requirements for this step” sections that someone needs to fulfil their FAIRification goals.
How to
Many assessment tools are available for a pre-FAIR assessment of your (meta)data. Based on the 2022 publication FAIR assessment tools: evaluating use and performance and x, y, z, the following tools could be considered:
Online self-assessment surveys
These tools allow you to fill in an online form and then provide, for example, a score indicating the FAIRness of your (meta)data.
Tool | Description [check whether the paper has something suitable]
---|---
 | Provided by the Australian Research Data Commons, this 12-question online survey gives a visual indication of the FAIRness of your (meta)data and points to resources on how to improve it.
 | Provided by DANS, this online survey gives a FAIRness score. Furthermore, it provides advice on how to improve the FAIRness of your (meta)data.
 | Provided by DANS, this online survey gives a FAIRness score. Furthermore, it provides advice on how to improve the FAIRness of your (meta)data.
 | Allows you to automatically assess digital objects as well as add a new project to their repository. [It seems to check digital objects (automatically?); is the survey filled in automatically?]
Online (Semi-) automated
These tools perform an automatic assessment by reading the metadata available at a given URI.
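To illustrate the kind of checks such (semi-)automated tools perform, the sketch below inspects a dataset landing page for a few simple FAIR indicators. This is a toy example, not the logic of any specific tool: the regular expression, the chosen indicators, and the example page are all illustrative assumptions.

```python
import re

def assess_metadata(html: str, uri: str) -> dict:
    """Toy checks resembling what (semi-)automated assessment tools
    look for in the metadata available at a URI. Illustrative only."""
    return {
        # F1: does the URI use a well-known persistent-identifier scheme?
        "persistent_identifier": bool(
            re.search(r"doi\.org|purl\.org|w3id\.org|hdl\.handle\.net", uri)
        ),
        # F2/I1: is machine-readable metadata (JSON-LD) embedded in the page?
        "machine_readable_metadata": "application/ld+json" in html,
        # R1.1: is a licence declared in the embedded metadata?
        "licence_declared": '"license"' in html,
    }

# Hypothetical landing page with embedded schema.org JSON-LD metadata.
landing_page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Dataset",
 "license": "https://creativecommons.org/licenses/by/4.0/"}
</script>
</head><body>Example dataset landing page</body></html>
"""

result = assess_metadata(landing_page, "https://doi.org/10.1234/example")
print(result)  # all three toy checks pass for this page
```

Real tools such as those listed below run many more tests (content negotiation, vocabulary checks, registry lookups) and aggregate them into a score or report.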
Offline self-assessment
GARDIAN [the link in the paper is dead; it may be somewhere around here, but we could not find it]
The paper offers guidance for researchers to pick a tool that fits their needs and to be aware of its strengths and weaknesses.
The selected tools were split into four categories: online self-assessment/survey, (semi-)automated, offline self-assessment and other types of tools. The tool selection was based on online searches in June 2020.
They compare the tools in terms of: 1) the prerequisite knowledge needed to run them; 2) the ease and effort needed to use them; and 3) their output, with respect to the information it contains and its consistency between tools. This should help users, e.g. in the nanosafety domain, to improve their methods for storing, publishing and providing research data.
The FAIR Data Maturity Model
FAIR assessment tools vary greatly in their outcomes. The FAIR Data Maturity Model aims to harmonise the outcomes of FAIR assessment tools to make them comparable.
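As a rough illustration of such harmonisation, the sketch below maps the heterogeneous outputs of two hypothetical tools onto one common 0–4 maturity scale. The tool names, score ranges, and the linear mapping are illustrative assumptions, not part of the FAIR Data Maturity Model's actual specification.

```python
def to_maturity_level(score: float, max_score: float) -> int:
    """Map a tool-specific score onto a common 0-4 maturity level
    (a simple linear rescaling, assumed for illustration)."""
    return round(4 * score / max_score)

# Hypothetical outputs from two different assessment tools.
tool_outputs = {
    "survey_tool": {"score": 9, "max": 12},       # e.g. 9 of 12 questions positive
    "automated_tool": {"score": 0.8, "max": 1.0}, # e.g. fraction of tests passed
}

harmonised = {
    tool: to_maturity_level(out["score"], out["max"])
    for tool, out in tool_outputs.items()
}
print(harmonised)  # both tools land on level 3 of 4
```

Once scores sit on a shared scale, assessments made with different tools (or repeated over time) become directly comparable.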
FAIR maturity evaluation system
FAIR Implementation Profiles (FIPs)
Potentially, you can compare the community FIP with your own FIP fingerprint; this gives an indication of whether you meet principle R1.3.
‘The FAIR Principle R1.3 states that “(Meta)data meet domain-relevant Community standards”. This is the only explicit reference in the FAIR Principles to the role played by domain-specific communities in FAIR. It is interesting to note that an advanced, online, automated, FAIR maturity evaluation system [22] did not attempt to implement a maturity indicator for FAIR Principle R1.3. It was not obvious during the development of the evaluator system how to test for “domain-relevant Community standards” as there exists, in general, no venue where communities publicly and in machine-readable formats declare data and metadata standards, and other FAIR practices. We propose the existence of a valid, machine-actionable FIP be adopted as a maturity indicator for FAIR Principle R1.3.’
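A minimal sketch of such a comparison: each FIP is represented as a mapping from FAIR principles to the resources chosen to implement them, and overlap with the community FIP hints at compliance with R1.3. The principle keys and resource names below are hypothetical examples, not any community's actual declarations.

```python
# A community's declared implementation choices (hypothetical).
community_fip = {
    "F1": "DOI",          # identifier scheme
    "I1": "RDF",          # knowledge representation language
    "R1.1": "CC-BY-4.0",  # usage licence
}

# Your own FIP "fingerprint" (hypothetical).
my_fip = {
    "F1": "DOI",
    "I1": "JSON",         # differs from the community choice
    "R1.1": "CC-BY-4.0",
}

matches = {p for p in community_fip if my_fip.get(p) == community_fip[p]}
mismatches = set(community_fip) - matches
print(sorted(matches))     # principles where you follow the community standard
print(sorted(mismatches))  # principles where your choices diverge
```

The mismatches point to where your implementation diverges from the domain-relevant community standards that R1.3 refers to.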
[Hannah] There are also these tools [Mijke: these are the ones Nivel used in a recent project - have them write the community example?]:
FIP Mini Questionnaire from GO-FAIR: https://www.go-fair.org/how-to-go-fair/fair-implementation-profile/fip-mini-questionnaire/
Data Maturity Model: https://zenodo.org/records/3909563#.YGRNnq8za70
[Mijke: RDMkit has a page on this → https://rdmkit.elixir-europe.org/compliance_monitoring#how-can-you-measure-and-document-data-management-capabilities ]
[Sander]
FAIRCookbook
Assessment Chapter in the FAIRCookbook. It currently has recipes for two tools [we do not yet know how they work]:
...
A checklist produced for use at the EUDAT summer school to discuss how FAIR the participants' research data were and what measures could be taken to improve FAIRness:
...
[Sander]
Hannah mentions the Data Maturity Model. This is also on FAIRplus. There is also this GitHub repository from FAIRplus, and the sheet for the actual assessment is here. A possible concern: the last update was a year ago.
...