...
These tools let you fill in an online form. The result of such a survey can be, for example, a score that indicates the FAIRness of your (meta)data. Some tools additionally provide advice on how to improve FAIRness. Well-known online surveys include:
Tool | Description | Quick user guide |
---|---|---|
FAIR self-assessment tool (ARDC) | Provided by the Australian Research Data Commons, this 12-question online survey gives a visual indication of the FAIRness level of your (meta)data and provides resources on how to improve it. | Fill in the 12 questions in the survey, potentially with the help of a FAIR expert/data steward. |
SATIFYD (DANS) | Provided by DANS, this online survey gives a FAIRness score. Furthermore, it provides advice on how to improve the FAIRness of your (meta)data. | Fill in the 12 questions in the survey, potentially with the help of a FAIR expert/data steward. |
FAIR Data Maturity Model | The FAIR Data Maturity Model aims to harmonise the outcomes of FAIR assessment tools to make them comparable. Based on the FAIR principles and sub-principles, its authors have created a list of universal 'maturity indicators'. Their work resulted in a checklist (with an extensive description of all maturity indicators), which can be used to harmonise outcomes as well as directly assess the FAIRness of your (meta)data. | Download the Excel file from Zenodo and, in the 'FAIR Indicators_v0.05' tab, give a score to the 41 different 'maturity indicators' by selecting the level from the drop-down menu in the 'METRIC' column that best fits the status of your (meta)data. Potentially perform this with the help of a FAIR expert/data steward. View the results in the 'LEVELS' tab. Detailed definitions and examples for all 'maturity indicators' can be found in the documentation on Zenodo. |
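The checklist workflow above (score each indicator, then read off levels per principle) can be mimicked in a few lines of code. The sketch below is illustrative only: the indicator names and comments are hypothetical stand-ins, not the official list of 41 maturity indicators from the Zenodo spreadsheet.

```python
# Sketch of checklist-style FAIR maturity scoring. Indicator IDs and
# levels here are illustrative examples, not the official indicator list.
# Scores roughly follow a 0-4 scale (0 = not applicable ... 4 = fully
# implemented).

def summarise(scores):
    """Average the indicator scores per FAIR principle (F, A, I, R)."""
    by_principle = {}
    for indicator, level in scores.items():
        principle = indicator[0]  # e.g. "F1-01" -> "F"
        by_principle.setdefault(principle, []).append(level)
    return {p: sum(v) / len(v) for p, v in by_principle.items()}

# Hypothetical assessment of one dataset:
scores = {
    "F1-01": 4,  # identifier is globally unique
    "F4-01": 3,  # metadata offered for indexing
    "A1-01": 4,  # resolvable via a standard protocol
    "I1-01": 2,  # knowledge representation under consideration
    "R1-01": 3,  # licence attached to the data
}
print(summarise(scores))  # {'F': 3.5, 'A': 4.0, 'I': 2.0, 'R': 3.0}
```

In the real spreadsheet the per-principle roll-up happens in the 'LEVELS' tab; the averaging rule used here is a simplification.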
Online (semi-)automated
Tool | Description | Quick user guide |
---|---|---|
FAIR Evaluator | The FAIR Evaluator assesses each FAIR metric automatically and returns a quantitative score for the FAIRness of the metadata of a resource. It provides feedback on how to further improve FAIRness. The public version of the FAIR Evaluator has been used to assess more than 5,500 datasets. | |
FAIRshake | "The FAIRshake toolkit was developed to enable the establishment of community-driven FAIR metrics and rubrics paired with manual and automated FAIR assessments. FAIR assessments are visualized as an insignia that can be embedded within digital-resources-hosting websites. Using FAIRshake, a variety of biomedical digital resources can be manually and automatically evaluated for their level of FAIRness." (FAIRshake documentation) The FAIRshake website currently shows the results for 132 projects and offers 65 rubrics for reuse. | The extensive documentation (including YouTube tutorials) can be found here. |
FAIR-Checker | "FAIR-Checker is a web interface to evaluate FAIR metrics and to provide developers with technical FAIRification hints. It's also a Python framework aimed at easing the implementation of FAIR metrics." (FAIRassist) FAIR-Checker performs over 18,000 metrics evaluations per month. | On the 'Check' page, paste a URL or DOI and click 'Test all metrics'. The assessment will run automatically and return a score for 12 FAIR sub-principles. If a sub-principle does not reach the highest score, you can view recommendations on how to improve. |
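To make the idea of an automated metric concrete, the sketch below classifies the HTTP response obtained when resolving an identifier: does it resolve at all, and does it serve machine-readable metadata? The pass/fail rules and the function name are illustrative assumptions, not the actual metrics implemented by FAIR-Checker or the FAIR Evaluator.

```python
# Sketch of the kind of automated check FAIRness evaluators run:
# the rules below are illustrative, not any tool's actual metric.

# Content types commonly treated as machine-readable metadata formats.
MACHINE_READABLE = {"application/json", "application/ld+json",
                    "text/turtle", "application/rdf+xml"}

def check_resolution(status_code, content_type):
    """Classify one HTTP response from resolving an identifier."""
    if status_code != 200:
        return "fail: identifier does not resolve"
    # Strip parameters such as "; charset=utf-8" before comparing.
    if content_type.split(";")[0].strip() in MACHINE_READABLE:
        return "pass: machine-readable metadata served"
    return "partial: resolves, but metadata is not machine-readable"

print(check_resolution(200, "application/ld+json"))
print(check_resolution(200, "text/html; charset=utf-8"))
print(check_resolution(404, "text/html"))
```

A real evaluator would first perform the network request (often with content negotiation) and then apply many such rules, one per FAIR sub-principle.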
For even more surveys and (semi-)automated tools, see FAIRassist.
...