Short description
‘FAIR evaluation results can serve as a pointer to where your FAIRness can be improved.’ (FAIRopoly)
In this pre-FAIRification phase you assess whether your (meta)data already meets FAIR criteria, such as persistent unique identifiers for data elements and rich metadata, by using FAIRness assessment tooling [Generic]. By quantifying the level of FAIRness of the data based on its current characteristics and environment, the assessment outcomes can help shape the necessary steps and requirements needed to achieve the desired FAIRification objectives [FAIRInAction] (see A Generic Workflow for the Data FAIRification Process and the FAIR in Action Framework by FAIRplus).
The how-to section describes a variety of assessment tools based on the FAIR principles.
Why is this step important
This step will help you assess the current FAIRness level of your data. Comparing the current FAIRness level to the previously defined FAIRification objectives will help you shape the necessary steps and requirements needed to achieve your FAIRification goals [FAIRInAction].
Expertise requirements for this step
The expertise required may depend on the assessment tool you want to use. Experts that may need to be involved, as described in Metroline Step: Build the Team, include:
Data stewards: can help fill out the surveys and questionnaires.
Research software engineers: can help run some of the specialised software.
ELSI experts: can help answer the ELSI-related questions in the surveys and questionnaires.
How to
Step 1
There are many tools that can help you assess the FAIRness of your (meta)data before you start the FAIRification process. These include manual questionnaires and checklists, as well as automated tests; the latter are often only applicable to datasets that are already public and have a persistent identifier, such as a DOI. The tools help users understand how to achieve a state of "FAIRness", and how this can be measured and improved:
for an overview of available tools, see FAIRassist, a manually curated collection;
several tools are evaluated and compared in FAIR assessment tools: evaluating use and performance;
RDMkit discusses several solutions.
While we focus specifically on the FAIRness of (meta)data in this step, it is also possible to assess general FAIR awareness, for example by using the FAIR Aware tool provided by DANS.
Step 2
Decide which tool fits your goal(s) the best. Broadly, the tools fall into the two categories described below.
Online self-assessment surveys. Here, the user is presented with an online form, which is filled in manually.
(Semi-)automated tests. Here, (semi-)automated tests are performed on a dataset by providing the tool with, for example, a link to an already published dataset.
In both cases, the result gives an indication of the FAIRness of the (meta)data. Additionally, tools may give advice on how to improve FAIRness. It is important to bear in mind that the outcomes of tools may vary due to, for example, differences in the tests performed and the subjectivity of self-assessment surveys. See EOSC's FAIR Assessment Tools: Towards an "Apples to Apples" Comparisons for more information on this.
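To make the difference between the two categories concrete, below is a minimal sketch of the kind of checks a (semi-)automated tool performs, using only the Python standard library. It tests whether a DOI resolves and whether machine-readable metadata can be retrieved via content negotiation (which DataCite and Crossref DOIs support). This is an illustration only; real tools such as those in the tables below run far more extensive test suites, and the DOI used here (the FAIRsFAIR metrics deposit on Zenodo) is just an example.

```python
# Minimal sketch of the kind of checks a (semi-)automated FAIR assessment
# tool performs. Illustrative only; real tools test many more
# (sub-)principles and handle many more identifier types.
import urllib.request

def basic_doi_checks(doi: str) -> dict:
    """Two simple checks on a DOI: does it resolve (findability),
    and is machine-readable metadata available via content negotiation?"""
    results = {}

    # Check 1: a persistent identifier should resolve to the resource.
    try:
        with urllib.request.urlopen(f"https://doi.org/{doi}", timeout=15) as resp:
            results["identifier_resolves"] = resp.status == 200
    except Exception:
        results["identifier_resolves"] = False

    # Check 2: DataCite and Crossref DOIs support content negotiation,
    # so metadata can be retrieved in a structured, machine-readable form.
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            results["machine_readable_metadata"] = resp.status == 200
    except Exception:
        results["machine_readable_metadata"] = False

    return results

# Example: the FAIRsFAIR Data Object Assessment Metrics deposit on Zenodo.
print(basic_doi_checks("10.5281/zenodo.4081213"))
```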
The tables below provide an overview of some of the more popular tools from both categories.
Online self-assessment surveys
These tools allow you to fill in an online form. The result of the survey can be, for example, a score indicating the FAIRness of your (meta)data. Some tools additionally provide advice on how to improve FAIRness. Well-known online surveys include:
Tool | Description | Quick user guide
---|---|---
FAIR self assessment tool (ARDC) | Provided by the Australian Research Data Commons, this 12-question online survey provides a visual indication of the FAIRness level of your (meta)data and offers resources on how to improve it. | Fill in the 12 questions in the survey, potentially with the help of a FAIR expert/data steward.
SATIFYD (DANS) | Provided by DANS, this online survey gives a FAIRness score and provides advice on how to improve the FAIRness of your (meta)data. From October 2023 until May 2024, the site had around 2500 visitors who actively interacted with the page. | Fill in the survey, potentially with the help of a FAIR expert/data steward.
FAIR Data Maturity Model (RDA) | The FAIR Data Maturity Model aims to harmonise the outcomes of FAIR assessment tools to make them comparable. Based on the FAIR principles and sub-principles, the Research Data Alliance created a list of universal 'maturity indicators'. Their work resulted in a checklist (with extensive descriptions of all maturity indicators) that can be used to assess the FAIRness of your (meta)data. The FAIR Data Maturity Model is recommended by, amongst others, HL7. | Download the Excel file from Zenodo and, in the 'FAIR Indicators_v0.05' tab, give a score to the 41 'maturity indicators' by selecting the level from the drop-down menu in the 'METRIC' column that best fits the status of your (meta)data, potentially with the assistance of a FAIR expert/data steward. View the results in the 'LEVELS' tab. Detailed definitions and examples for all 'maturity indicators' can be found in the documentation on Zenodo.
FAIRplus Dataset Maturity (DSM) Model | The FAIRplus dataset maturity indicators were created based on previous work by the Research Data Alliance (RDA, see the FAIR Data Maturity Model above) and the FAIRsFAIR project. This model evaluates the FAIRness of data in three categories (Content related; Representation and format; Hosting environment capabilities) and five levels of maturity per category (ranging from Single Use Data to Managed Data Assets). For each category, indicators define the requirements to reach a certain level of maturity; in the definitions of the DSM indicators you will find a link to the corresponding RDA or FAIRsFAIR indicator where they are related. | The spreadsheet used to assess the maturity of your dataset can be found on GitHub. In the 'FAIR-DSM Assessment Sheet v1.2' tab, a pre- and post-FAIR assessment can be performed, potentially with assistance from a FAIR expert/data steward.
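As an illustration of how maturity-indicator scoring works, the sketch below aggregates per-indicator scores into an average level per FAIR area. The indicator IDs and the 0-4 levels follow the RDA naming scheme (e.g. RDA-F1-01M), but the scores shown are invented and the aggregation is a simplification for illustration, not the official calculation performed in the Excel sheet's 'LEVELS' tab.

```python
# Simplified illustration of scoring FAIR maturity indicators, loosely
# following the RDA FAIR Data Maturity Model (indicators scored 0-4,
# from "not applicable" to "fully implemented"). The official Excel
# sheet computes its own views in the 'LEVELS' tab; this sketch only
# shows the general idea of aggregating indicator scores per FAIR area.
from collections import defaultdict

# Hypothetical assessment: indicator ID -> maturity level (0-4).
# IDs follow the RDA naming scheme, e.g. RDA-F1-01M = Findability,
# principle F1, indicator 01, applies to Metadata.
assessment = {
    "RDA-F1-01M": 4,  # metadata has a persistent identifier
    "RDA-F1-01D": 2,  # data identifier still under consideration
    "RDA-A1-02M": 3,  # metadata retrievable by a standard protocol
    "RDA-I1-01M": 1,  # knowledge representation not yet considered
    "RDA-R1-01M": 3,  # plurality of accurate attributes
}

def summarise(scores: dict) -> dict:
    """Average the 0-4 maturity levels per FAIR area (F, A, I, R)."""
    per_area = defaultdict(list)
    for indicator, level in scores.items():
        area = indicator.split("-")[1][0]  # 'F', 'A', 'I' or 'R'
        per_area[area].append(level)
    return {a: sum(v) / len(v) for a, v in sorted(per_area.items())}

print(summarise(assessment))
# e.g. {'A': 3.0, 'F': 3.0, 'I': 1.0, 'R': 3.0}
```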
Online (semi-)automated tests
Tool | Description | Quick user guide
---|---|---
FAIR-Checker | FAIR-Checker provides a web interface to automatically evaluate FAIR metrics and gives users hints on how to further improve the FAIRness of their resources. It is also a Python framework aimed at easing the implementation of FAIR metrics. FAIR-Checker performs over 18,000 metrics evaluations per month. | On the 'Check' page, paste a URL or DOI and click 'Test all metrics'. The assessment runs automatically and returns a score for 12 FAIR sub-principles. If a sub-principle does not reach the highest score, you can view recommendations on how to improve.
FAIR Evaluator | The FAIR Evaluator provides an online service to test (meta)data resources against maturity indicators in an objective, automated way, using a set of 22 fully automatable, second-generation FAIR metrics that explicitly describe what is being tested, which FAIR principle it applies to, and what counts as a successful result. It offers a registry and execution functions for maturity indicator tests and community-defined collections of such tests; a public demonstration service runs at https://w3id.org/FAIR_Evaluator and has been used to assess over 5500 datasets. For an applied example, see Applying the FAIR principles to data in a hospital: challenges and opportunities in a pandemic. | A guide on how to use the FAIR Evaluator can be found in the FAIR Cookbook.
FAIRshake | "The FAIRshake toolkit was developed to enable the establishment of community-driven FAIR metrics and rubrics paired with manual and automated FAIR assessments. FAIR assessments are visualized as an insignia that can be embedded within digital-resources-hosting websites. Using FAIRshake, a variety of biomedical digital resources can be manually and automatically evaluated for their level of FAIRness." (FAIRshake documentation). FAIRshake provides a variety of rubrics with test metrics that can be reused, including those proposed by the FAIR Data Maturity Model. The FAIRshake website currently shows the results for 132 projects and offers 65 rubrics for reuse. | The extensive documentation (including YouTube tutorials) can be found here. More information is also available in the FAIR Cookbook.
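As an impression of what such automated tests look for under the hood, the sketch below checks whether a dataset landing page embeds machine-actionable JSON-LD (e.g. schema.org) metadata, one of the features tools in this category commonly test. It uses only the Python standard library; the Zenodo record used is just an example, and real tools run far more extensive and robust test suites.

```python
# Sketch of one test an automated FAIR tool might run: does a dataset's
# landing page embed machine-actionable JSON-LD (e.g. schema.org)
# metadata? Illustrative only.
import json
import urllib.request
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks.append(data)

def embedded_metadata(url: str) -> list:
    """Return all parseable JSON-LD blocks found on a landing page."""
    # Some sites reject requests without a recognisable User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": "fair-check-sketch/0.1"})
    with urllib.request.urlopen(req, timeout=15) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = JSONLDExtractor()
    parser.feed(html)
    found = []
    for block in parser.blocks:
        try:
            found.append(json.loads(block))
        except json.JSONDecodeError:
            pass  # ignore malformed blocks
    return found

# Zenodo record pages embed schema.org JSON-LD; used here as an example.
for doc in embedded_metadata("https://zenodo.org/records/1065991"):
    print(doc.get("@type"), "-", doc.get("name"))
```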
For even more surveys and (semi-)automated tools, see FAIRassist.
FAIR Implementation Profiles (FIPs)
Another promising development is the FAIR Implementation Profile (FIP), developed by the GO FAIR Foundation. Once published, a FIP can be reused by others, thus acting as a recipe for making data FAIR by a community, for example a research project or an institute, based on agreements and standards within that community. A FIP can be used to compare your currently used FAIR implementation choices, such as the standards used in your dataset, to those used by your community, thus providing a pre-FAIR score. This is particularly relevant for FAIR principle R1.3 ("(meta)data meet domain-relevant community standards"), for which the existence of a valid, machine-actionable FIP has been proposed as a maturity indicator, since there is otherwise no general venue where communities publicly declare their data and metadata standards in machine-readable form. FIPs and their usage are currently still under active development. For more information, see Creating a FAIR Implementation Profile (FIP), the FIP Mini Questionnaire and the FIP Data Stewardship Wizard.
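A FIP comparison can be pictured as matching two "fingerprints": the implementation choices a community has declared per FAIR principle versus your own. The sketch below illustrates this idea with invented example choices; it is not an official FIP tool, and actual FIPs are machine-actionable nanopublications rather than simple dictionaries.

```python
# Illustrative sketch (not an official FIP tool): a FIP can be seen as a
# mapping from FAIR principles to the implementation choices a community
# has declared. Comparing your own "fingerprint" against the community
# FIP gives a rough pre-FAIR indication of alignment, e.g. for R1.3.
community_fip = {  # invented example community declarations
    "F1": {"DOI"},
    "F2": {"DataCite metadata schema"},
    "I1": {"RDF", "SKOS"},
    "R1.3": {"MIABIS"},
}

my_fingerprint = {  # invented example of your own current choices
    "F1": {"DOI"},
    "F2": {"Dublin Core"},
    "I1": {"RDF"},
    "R1.3": set(),
}

def fip_alignment(mine: dict, community: dict) -> float:
    """Fraction of community-declared choices that you also implement."""
    declared = sum(len(v) for v in community.values())
    matched = sum(len(community[p] & mine.get(p, set())) for p in community)
    return matched / declared if declared else 0.0

for principle in community_fip:
    shared = community_fip[principle] & my_fingerprint.get(principle, set())
    missing = community_fip[principle] - my_fingerprint.get(principle, set())
    print(f"{principle}: shared={sorted(shared)} missing={sorted(missing)}")

print(f"alignment score: {fip_alignment(my_fingerprint, community_fip):.0%}")
# -> alignment score: 40%
```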
Step 3
To successfully do a pre-FAIR assessment, do the following:
learn from examples (see the practical examples section);
familiarise yourself with the tool you intend to use;
involve the necessary experts (see the expertise requirements section);
perform the assessment.
The final evaluation will give insight into the current FAIRness of your data. Depending on the tool used, you may receive feedback on how to improve the FAIRness of your data. Thus, the outcome of the pre-FAIR assessment helps you determine the next steps towards achieving your FAIRification goals. Note that assessment is often iterative: the FAIR Cookbook, for example, describes FAIRification as repeated cycles of assessment, design and implementation, typically run in short three-month sprints.
Practical examples from the community
Amsterdam University of Applied Sciences has a "FAIR enough checklist", which it describes as follows:
"The first checklist describes the minimum effort for Urban Vitality (UV) research projects and can be applied by researchers with minimal assistance from a data steward. Following this checklist makes the research data quite FAIR to people and somewhat FAIR to machines (computers). The checklist should be used immediately after obtaining research funding."
Source: https://www.amsterdamuas.com/uv-openscience/toolkit/open-science/fair/fair-data.html
A further example from Nivel, which performed a pre-FAIR assessment in a recent project, is pending.
References & Further reading
[FAIRopoly] https://www.ejprarediseases.org/fairopoly/
[FAIRInAction] https://www.nature.com/articles/s41597-023-02167-2
[Generic] https://direct.mit.edu/dint/article/2/1-2/56/9988/A-Generic-Workflow-for-the-Data-FAIRification
FAIR Data Maturity Model Working Group. (2020). FAIR Data Maturity Model: Specification and Guidelines (1.0). https://doi.org/10.15497/rda00050
Devaraju, A., Huber, R., Mokrane, M., Herterich, P., Cepinskas, L., de Vries, J., L'Hours, H., Davidson, J., & White, A. (2020). FAIRsFAIR Data Object Assessment Metrics (0.4). Zenodo. https://doi.org/10.5281/zenodo.4081213
FAIR Guidance (EJP RD): https://www.ejprarediseases.org/fair_guidance/
EUDAT summer school FAIR checklist: https://zenodo.org/records/1065991
Authors / Contributors
...
Training
Relevant training will be added in the future if available.
Suggestions
Visit our How to contribute page for information on how to get in touch if you have any suggestions about this page.