...
Tool | Description | Work on your side
---|---|---
FAIR self-assessment tool (ARDC) | Provided by the Australian Research Data Commons, this 12-question online survey gives a visual indication of the FAIRness level of your (meta)data and points to resources on how to improve it. | Fill in the 12 questions in the survey, potentially with the help of a FAIR expert/data steward. |
FAIR Data Maturity Model (RDA) | Based on the FAIR principles and sub-principles, the Research Data Alliance created a list of universal 'maturity indicators'. These indicators are designed for reuse in evaluation approaches and are accompanied by guidelines intended to assist evaluators in implementing the indicators in the evaluation approach or tool they manage. Their work resulted in a checklist (with an extensive description of all maturity indicators), which can be used to assess the FAIRness of your (meta)data. The FAIR Data Maturity Model is recommended by, amongst others, HL7. | Download the Excel file from Zenodo and, in the 'FAIR Indicators_v0.05' tab, give a score to the 41 'maturity indicators' by selecting the level from the drop-down menu in the 'METRIC' column that best fits the status of your (meta)data, potentially with the assistance of a FAIR expert/data steward. View the results in the 'LEVELS' tab (or tally them programmatically, as in the sketch after this table). Detailed definitions and examples for all maturity indicators can be found in the documentation on Zenodo. |
FAIR Implementation Profile (FIP) | A FAIR Implementation Profile (FIP) is a collection of FAIR implementation choices made for all FAIR Principles by a community (for example a research project or an institute). It was developed by the GO FAIR Foundation. | Fill in the 10 questions in the Mini Questionnaire or create an account on the Data Stewardship Wizard for a more user-friendly experience. |
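
If you prefer not to read the 'LEVELS' tab by hand, the scored FAIR Data Maturity Model workbook can be summarised with a short script. The snippet below is a minimal sketch only: the file name and the column positions of the indicator ID and the 'METRIC' drop-down value are assumptions rather than taken from the model's documentation, so adjust them to match the spreadsheet you downloaded.

```python
# Minimal sketch: summarise the maturity levels selected in the RDA FAIR Data
# Maturity Model workbook. Assumptions (not from the source): the downloaded
# file is named "FAIR_data_maturity_model.xlsx", indicator IDs such as
# "RDA-F1-01M" sit in the first column, and the value chosen in the 'METRIC'
# drop-down sits in the fourth column of the 'FAIR Indicators_v0.05' tab,
# with data starting at row 2.
from collections import Counter

from openpyxl import load_workbook

wb = load_workbook("FAIR_data_maturity_model.xlsx", data_only=True)
sheet = wb["FAIR Indicators_v0.05"]

per_level = Counter()      # how often each maturity level was selected
per_principle = Counter()  # counts grouped by FAIR area: F, A, I, R

for row in sheet.iter_rows(min_row=2, values_only=True):
    indicator_id, level = row[0], row[3]  # column positions are assumptions
    if not indicator_id or level is None:
        continue
    per_level[level] += 1
    parts = str(indicator_id).split("-")  # e.g. "RDA-F1-01M" -> ["RDA", "F1", "01M"]
    if len(parts) > 1:
        per_principle[parts[1][0]] += 1   # group by FAIR area letter

print("Indicators per selected level:", dict(per_level))
print("Indicators per FAIR area:", dict(per_principle))
```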
...
Tool | Description | Work on your side
---|---|---
FAIR-Checker | FAIR-Checker provides a web interface to automatically evaluate FAIR metrics. It gives users hints on how to further improve the FAIRness of their resources. FAIR-Checker performs over 18,000 metric evaluations per month. | On the 'Check' page, paste a URL or DOI and click 'Test all metrics'. The assessment runs automatically and returns a score for 12 FAIR sub-principles. If a sub-principle does not reach the highest score, you can view recommendations on how to improve. The assessment can also be scripted, as in the sketch after this table. |
FAIR Evaluator | The FAIR Evaluator provides an online service to test (meta)data resources against the Maturity Indicators in an objective, automated way. The public version of the FAIR Evaluator has been used to assess more than 5,500 datasets. | A guide on how to use the FAIR Evaluator can be found in the FAIR Cookbook. |
FAIRshake | Using FAIRshake, a variety of biomedical digital resources can be manually and automatically evaluated for their level of FAIRness. It provides a variety of rubrics with test metrics that can be reused, including those proposed by the FAIR Data Maturity Model. The FAIRshake website currently shows the results for 132 projects and offers 65 rubrics for reuse. | The extensive documentation (including YouTube tutorials) can be found here. More information is also available in the FAIR Cookbook. |
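
For FAIR-Checker, the same assessment that runs on the 'Check' page can in principle be driven from a script. The sketch below is illustrative only: the API endpoint path, query parameter, and response fields are assumptions rather than documented behaviour, so consult the FAIR-Checker documentation for the actual interface, and treat the DOI as a placeholder.

```python
# Illustrative sketch of scripting a FAIR-Checker assessment instead of using
# the web 'Check' page. The endpoint path, query parameter and JSON fields
# are assumptions made for this example; check the FAIR-Checker documentation
# for the real API before relying on this.
import requests

# Assumed endpoint that runs all metrics for a given URL or DOI.
CHECKER_API = "https://fair-checker.france-bioinformatique.fr/api/check/metrics_all"


def assess(resource: str) -> list[dict]:
    """Ask the service to evaluate all FAIR metrics for a URL or DOI."""
    response = requests.get(CHECKER_API, params={"url": resource}, timeout=120)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Placeholder DOI, used purely as example input.
    for metric in assess("https://doi.org/10.1234/example-dataset"):
        # Assumed fields: each entry reports a metric name and its score.
        print(metric.get("metric"), metric.get("score"))
```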
For even more surveys and (semi-)automated tools, see FAIRassist. Also, see the RDMkit for a discussion of several solutions.
...