STATUS: IN DEVELOPMENT
Short Description
In this pre-FAIRification phase you assess whether your (meta)data already contains FAIR features, such as persistent unique identifiers for data elements and rich metadata. By using FAIRness assessment tooling, you can quantify the level of FAIRness of the data based on its current characteristics and environment. The assessment outcomes can help shape the steps and requirements needed to achieve the desired FAIRification objectives (see A Generic Workflow for the Data FAIRification Process and the FAIR in Action Framework by FAIRplus).
The how-to section describes a variety of assessment tools based on the FAIR principles.
Why is this step important
This step will help you assess the current FAIRness level of your data. Comparing the current FAIRness to the previously defined FAIRification objectives will help you shape the steps and requirements needed to achieve your FAIRification goals and create your solution plan. Furthermore, the outcomes of this assessment can serve as a baseline to compare against in the Assess FAIRness step, allowing you to track the progress of your data towards FAIRness.
How to
There are many tools that can help you gain insight into the FAIRness of your (meta)data before you commence the FAIRification process. Broadly, they fall into two categories: online self-assessment surveys and (semi-)automated tests. The first category presents the user with an online form, which the user fills in manually, whereas the second category performs (semi-)automated tests on a dataset, for example after providing the tool with a link to an already published dataset. In both cases, the result gives an indication of the FAIRness of the (meta)data. Additionally, tools may give advice on how to improve FAIRness. It is important to bear in mind that outcomes may vary between tools due to differences in the tests performed and the subjectivity of the self-assessment surveys.
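To make the distinction concrete, the snippet below sketches the kind of check an automated test performs: does a dataset identifier resolve, and is machine-readable metadata offered alongside the human-readable landing page? This is a minimal illustration, not one of the tools listed below; the DOI used is a hypothetical placeholder.

```python
# A minimal sketch (not one of the tools listed below) of the kind of check an
# automated FAIR test performs. The DOI is a hypothetical placeholder.
import requests

DATASET_URL = "https://doi.org/10.5281/zenodo.0000000"  # hypothetical identifier

def quick_fair_signals(url: str) -> dict:
    """Collect a few simple FAIRness signals for a (meta)data resource."""
    signals = {}

    # Findability/Accessibility: does the identifier resolve over a standard protocol?
    landing = requests.get(url, timeout=30, allow_redirects=True)
    signals["identifier_resolves"] = landing.ok

    # Interoperability: is machine-readable metadata offered via content negotiation?
    ld = requests.get(url, timeout=30, headers={"Accept": "application/ld+json"})
    signals["json_ld_via_content_negotiation"] = (
        ld.ok and "json" in ld.headers.get("Content-Type", "")
    )

    # Rich metadata: is JSON-LD embedded in the landing page?
    signals["embedded_json_ld"] = "application/ld+json" in landing.text

    return signals

if __name__ == "__main__":
    print(quick_fair_signals(DATASET_URL))
```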
Based on FAIRassist, a website with a manually curated collection of assessment tools, and the publication FAIR assessment tools: evaluating use and performance, several of the more popular tools are listed below. Generally, performing a pre-FAIR assessment requires the expertise of a data steward to obtain a reliable result.
Online self-assessment surveys
Tool | Description | Work on your side |
---|---|---|
ARDC FAIR self-assessment tool | Provided by the Australian Research Data Commons (ARDC), this 12-question online survey gives a visual indication of the FAIRness level of your (meta)data and provides resources on how to improve it. [Pending: usage statistics] | Fill in the 12 questions in the survey, potentially with the help of a FAIR expert/data steward. |
FAIR Data Maturity Model (RDA) | Based on the FAIR principles and sub-principles, the Research Data Alliance (RDA) created a list of universal 'maturity indicators'. These indicators are designed for reuse in evaluation approaches and are accompanied by guidelines intended to assist evaluators in implementing the indicators in the evaluation approach or tool they manage. Their work resulted in a checklist (with an extensive description of all maturity indicators) that can be used to assess the FAIRness of your (meta)data. The FAIR Data Maturity Model is recommended by, amongst others, HL7. | Download the Excel file from Zenodo and, in the 'FAIR Indicators_v0.05' tab, score the 41 'maturity indicators' by selecting the level from the drop-down menu in the 'METRIC' column that best fits the status of your (meta)data, potentially with the assistance of a FAIR expert/data steward. View the results in the 'LEVELS' tab. Detailed definitions and examples for all 'maturity indicators' can be found in the documentation on Zenodo. A small scoring sketch follows this table. |
FAIR Implementation Profile (FIP) | A FAIR Implementation Profile (FIP) is a collection of FAIR implementation choices made for all FAIR Principles by a community (for example a research project or an institute). It was developed by the GO FAIR Foundation. [Pending: asked Kristina questions about how to use the FIPs to assess in practice. TBD: should the FIPs be here?] | Fill in the 10 questions in the Mini Questionnaire or create an account on the Data Stewardship Wizard for a more user-friendly experience. |
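For the FAIR Data Maturity Model row above, the sketch below shows one way to summarise the filled-in indicator scores per FAIR area once you have them outside the Excel sheet. It is a minimal illustration: the indicator IDs follow the model's naming pattern, but the scores are invented examples.

```python
# A minimal sketch (not part of the RDA tooling) of summarising filled-in
# maturity-indicator scores per FAIR area. The indicator IDs follow the
# model's naming pattern; the levels shown here are invented examples.
from collections import defaultdict

scores = {
    "RDA-F1-01M": 4,
    "RDA-F2-01M": 2,
    "RDA-A1-02M": 3,
    "RDA-I1-01M": 1,
    "RDA-R1-01M": 2,
}

# Group the scores by the FAIR area encoded in the indicator ID (F, A, I or R).
per_area = defaultdict(list)
for indicator, level in scores.items():
    area = indicator.split("-")[1][0]
    per_area[area].append(level)

# Report an average level per area as a rough baseline for later comparison.
for area in "FAIR":
    levels = per_area.get(area, [])
    if levels:
        print(f"{area}: average level {sum(levels) / len(levels):.1f} "
              f"over {len(levels)} indicator(s)")
```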
Online (semi-)automated tests
Tool | Description | Work on your side |
---|---|---|
FAIR Evaluator | The FAIR Evaluator provides an online service to test (meta)data resources against the Maturity Indicators in an objective, automated way. The public version of the FAIR Evaluator has been used to assess more than 5,500 datasets. | A guide on how to use the FAIR Evaluator can be found in the FAIR Cookbook. |
FAIRshake | Using FAIRshake, a variety of biomedical digital resources can be manually and automatically evaluated for their level of FAIRness. It provides a variety of rubrics with test metrics that can be reused, including those proposed by the FAIR Data Maturity Model. The FAIRshake website currently shows the results for 132 projects and offers 65 rubrics for reuse. | Extensive documentation (including YouTube tutorials) is available on the FAIRshake website. More information is also available in the FAIR Cookbook. |
FAIR-Checker | FAIR-Checker provides a web interface that automatically evaluates FAIR metrics and gives users hints on how to further improve the FAIRness of their resources. FAIR-Checker performs over 18,000 metric evaluations per month. | On the 'Check' page, paste a URL or DOI and click 'Test all metrics'. The assessment runs automatically and returns a score for 12 FAIR sub-principles. If a sub-principle does not reach the highest score, you can view recommendations on how to improve. A generic request/response sketch follows this table. |
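Several of these services can also be queried programmatically. The sketch below only illustrates the general request/response pattern of such an evaluation service; the endpoint, query parameter and response fields are hypothetical, so consult the documentation of the tool you choose (for example via the FAIR Cookbook guides mentioned above) for its actual API.

```python
# Purely illustrative: the endpoint, query parameter and response structure are
# hypothetical stand-ins for whatever API the assessment service you choose exposes.
import requests

ASSESSMENT_API = "https://fair-assessment.example.org/api/evaluate"  # hypothetical
RESOURCE = "https://doi.org/10.5281/zenodo.0000000"                  # hypothetical DOI

response = requests.get(ASSESSMENT_API, params={"url": RESOURCE}, timeout=60)
response.raise_for_status()

# Assume the service returns one entry per tested FAIR (sub-)principle.
for result in response.json():
    print(f"{result['metric']}: {result['score']}")
```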
For even more surveys and (semi-)automated tools, see FAIRassist. Also see RDMkit, which discusses several solutions.
Based on the outcomes of the FAIR assessment, you can start to build your solution plan.
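As a minimal illustration of that step, the sketch below compares the assessment outcome per FAIR principle with the level defined in your FAIRification objectives to highlight where work is needed; all names and numbers are invented.

```python
# A minimal sketch of turning assessment outcomes into input for a solution plan:
# compare the current score per FAIR principle with the level defined in your
# FAIRification objectives. All names and numbers are invented for illustration.
current = {"F": 2.0, "A": 3.0, "I": 1.0, "R": 1.5}   # from the pre-FAIR assessment
target = {"F": 4.0, "A": 3.0, "I": 3.0, "R": 3.0}    # from the FAIRification objectives

for principle, goal in target.items():
    gap = goal - current.get(principle, 0.0)
    if gap > 0:
        print(f"{principle}: improve from {current[principle]} to {goal} (gap {gap:.1f})")
    else:
        print(f"{principle}: objective already met")
```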
Expertise requirements for this step
The expertise required may depend on the assessment tool you want to use. Experts that may need to be involved, as described in Metroline Step: Build the Team, include:
Data stewards: can help fill out the surveys and questionnaires
Research software engineers: can help run some of the specialised software
ELSI experts: can help with the ELSI-related questions in surveys and questionnaires
Practical Examples from the Community
Nivel Example
[Mijke: Nivel has done a pre-assessment in a recent project - have them write the community example? The ZonMw program have written FAIR Improvement Plans, we can contact some of those and ask for example]
PRISMA Example (todo)
References & Further reading
[FAIRopoly] https://www.ejprarediseases.org/fairopoly/
[FAIRinAction] https://www.nature.com/articles/s41597-023-02167-2
[Generic] https://direct.mit.edu/dint/article/2/1-2/56/9988/A-Generic-Workflow-for-the-Data-FAIRification
Authors / Contributors
Experts who contributed to this step and whom you can contact for further information