...
Data stewards: can help with filling out the surveys and questionnaires
Research Software Engineers: can help with running some of the specialised software
ELSI experts: can help with answering the ELSI-related questions in surveys and questionnaires
How to
There are many assessment tools which can help you gain insight into the FAIRness of your (meta)data before you commence the FAIRification process. Broadly, they fall into two categories: online self-assessment surveys and (semi-)automated tests. The first category presents the user with an online form, which the user fills in manually, whereas the second category performs (semi-)automated tests on a dataset, for example when provided with a link to the dataset; automated tests are often only applicable to datasets that are already public and have a persistent identifier, such as a DOI. In both cases, the result gives an indication of the FAIRness of the (meta)data, and tools may additionally give advice on how to improve it. Bear in mind that outcomes may vary between tools, due to differences in the tests performed and the subjectivity of self-assessment surveys.
Based on FAIRassist, a website with a manually created collection of various assessment tools, and the 2022 publication FAIR assessment tools: evaluating use and performance, which compared a number of these tools, an overview of the more popular tools can be found below.
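To make the two categories concrete, the checking logic behind a survey or automated test can be sketched in a few lines of Python. This is a toy illustration only; the checklist fields and scoring rule are assumptions made for the sketch and do not correspond to the metric set of any specific tool listed below.

```python
# Toy FAIRness checklist: scores a metadata record by the presence of a
# few FAIR-relevant fields. Illustrative only -- the field names and the
# per-principle scoring are assumptions, not any real tool's metrics.

CHECKS = {
    "F": ["identifier", "title"],   # Findable: persistent ID, rich metadata
    "A": ["access_url"],            # Accessible: retrievable via its ID
    "I": ["metadata_format"],       # Interoperable: formal representation
    "R": ["license", "provenance"], # Reusable: clear licence and provenance
}

def fair_score(metadata: dict) -> dict:
    """Return, per FAIR principle, the fraction of checklist fields present."""
    return {
        principle: sum(field in metadata for field in fields) / len(fields)
        for principle, fields in CHECKS.items()
    }

record = {
    "identifier": "https://doi.org/10.1234/example",  # placeholder DOI
    "title": "Example dataset",
    "license": "CC-BY-4.0",
}
print(fair_score(record))  # {'F': 1.0, 'A': 0.0, 'I': 0.0, 'R': 0.5}
```

A survey asks a person these questions; an automated test answers them by inspecting the (meta)data itself, which is why the latter usually requires the dataset to be public and resolvable.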
Online self-assessment surveys
...
Tool | Description | Quick user guide |
---|---|---|
Provided by the Australian Research Data Commons, this 12-question online survey gives a visual indication of the FAIRness level of your (meta)data and points to resources on how to improve it. | Fill in the 12 questions in the survey, potentially with help of a FAIR expert/data steward. | |
Provided by DANS, this online survey gives a FAIRness score. Furthermore, it provides advice on how to improve the FAIRness of your (meta)data. | Fill in the 12 questions in the survey, potentially with assistance of a FAIR expert/data steward. | |
The FAIR Data Maturity Model aims to harmonise the outcomes of FAIR assessment tools to make them comparable. Based on the FAIR principles and sub-principles, the working group has created a list of universal 'maturity indicators'. Their work resulted in a checklist (with extensive descriptions of all maturity indicators), which can be used to assess the FAIRness of your (meta)data. | Download the Excel file from Zenodo and, in the ‘FAIR Indicators_v0.05’ tab, give a score to the 41 different ‘maturity indicators’ by selecting the level from the drop-down menu in the ‘METRIC’ column that best fits the status of your (meta)data. Potentially perform this with assistance of a FAIR expert/data steward. View the results in the ‘LEVELS’ tab. Detailed definitions and examples for all ‘maturity indicators’ can be found in the documentation on Zenodo. | |
The FAIRplus dataset maturity indicators were created based on previous work by the Research Data Alliance (RDA, see FAIR Data Maturity Model above) and the FAIRsFAIR project. This model evaluates FAIRness of data in three categories (Content related, Representation and format, Hosting environment capabilities) and five levels of maturity per category (ranging from Single Use Data to Managed Data Assets). For each category, indicators have been defined that describe the requirements to reach a certain level of maturity in that category. | The spreadsheet used to assess the maturity of your dataset can be found on GitHub. In the 'FAIR-DSM Assessment Sheet v1.2' tab, a pre- and post-FAIR assessment can be performed, potentially with assistance from a FAIR expert/data steward. |
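The scoring step behind maturity-style checklists such as the two above can be sketched in a few lines of Python. Note that the indicator IDs, the 0–4 levels, and the aggregation rule (taking the minimum level per principle) below are illustrative assumptions for the sketch, not the actual formulas of the RDA or FAIRplus models.

```python
# Toy aggregation of maturity-indicator scores into per-principle levels.
# Indicator IDs, levels and the min-aggregation rule are illustrative
# assumptions, not any specific model's definition.
from collections import defaultdict

# Each indicator: (FAIR principle it belongs to, assessed level 0-4)
assessment = {
    "F1-01": ("F", 4), "F1-02": ("F", 3),
    "A1-01": ("A", 2),
    "I1-01": ("I", 1), "I2-01": ("I", 3),
    "R1-01": ("R", 0),
}

def principle_levels(scores: dict) -> dict:
    """Aggregate indicator levels per principle, taking the minimum
    (an unmet indicator caps the level of the whole principle)."""
    grouped = defaultdict(list)
    for _, (principle, level) in scores.items():
        grouped[principle].append(level)
    return {p: min(levels) for p, levels in grouped.items()}

print(principle_levels(assessment))  # {'F': 3, 'A': 2, 'I': 1, 'R': 0}
```

The real checklists encode this kind of roll-up in their spreadsheet tabs (e.g. the 'LEVELS' tab of the RDA Excel file); the sketch only shows why a single low-scoring indicator can hold back an entire principle.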
Online (semi-)automated tests
Tool | Description | Quick user guide |
---|---|---|
The FAIR Evaluator assesses each FAIR metric automatically and returns a quantitative score for the FAIRness of the metadata of a resource. It provides feedback on how to further improve FAIRness. | “With the goal of providing an objective, automated way of testing (meta)data resources against the Maturity Indicators, the Authorship Group have created the FAIR Evaluator, which is running as a demonstration service at https://w3id.org/FAIR_Evaluator. The Evaluator provides a registry and execution functions for: […]”
The public version of the FAIR Evaluator has been used to assess >5500 datasets. | A guide on how to use the FAIR Evaluator can be found in the FAIR Cookbook. |
“The FAIRshake toolkit was developed to enable the establishment of community-driven FAIR metrics and rubrics paired with manual and automated FAIR assessments. FAIR assessments are visualized as an insignia that can be embedded within digital-resources-hosting websites. Using FAIRshake, a variety of biomedical digital resources can be manually and automatically evaluated for their level of FAIRness.“ (FAIRshake documentation) The FAIRshake website currently shows the results for 132 projects and offers 65 rubrics for reuse. | The extensive documentation (including YouTube tutorials) can be found here. More information is also available in the FAIR Cookbook. | |
“FAIR-Checker is a web interface to evaluate FAIR metrics and to provide developers with technical FAIRification hints. It's also a Python framework aimed at easing the implementation of FAIR metrics.” (FAIRassist) FAIR-Checker performs over 18,000 metric evaluations per month. | On the ‘Check' page, paste a URL or DOI and click on 'Test all metrics’. The assessment will run automatically and return a score for 12 FAIR sub-principles. If a sub-principle does not reach the highest score, you can view recommendations on how to improve it. |
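Under the hood, automated tools of this kind typically start by resolving the identifier you paste in and asking for machine-readable metadata via HTTP content negotiation. A minimal sketch of that first step, using only the Python standard library (the Accept media types and the example DOI are illustrative, not what any particular tool sends):

```python
# Minimal sketch of the first step of an automated FAIR test: ask the
# DOI resolver for machine-readable metadata via content negotiation.
# The Accept types and the example DOI are illustrative assumptions.
import urllib.request

def metadata_request(doi: str) -> urllib.request.Request:
    """Build a request asking doi.org for machine-actionable metadata."""
    return urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={
            "Accept": "application/ld+json, "
                      "application/vnd.citationstyles.csl+json"
        },
    )

req = metadata_request("10.1234/example")  # placeholder DOI
print(req.full_url)              # https://doi.org/10.1234/example
print(req.get_header("Accept"))
# A real tool would now fetch the response (urllib.request.urlopen(req))
# and test the returned metadata against its FAIR metrics.
```

If no machine-readable metadata comes back at all, most identifier-based metrics fail immediately, which is why these tools work best on public datasets with a persistent identifier.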
...