
STATUS: READY FOR REVIEW

Short description

“FAIR evaluation results can serve as a pointer to where your FAIRness can be improved.” [FAIRopoly]

In this pre-FAIRification phase you assess whether your (meta)data already meets FAIR criteria, such as persistent unique identifiers for data elements and rich metadata. By using FAIRness assessment tooling you can quantify the level of FAIRness of your data based on its current characteristics and environment [Generic]. The assessment outcomes can help shape the necessary steps and requirements needed to achieve the desired FAIRification objectives [FAIRinAction].

The how-to section describes a variety of assessment tools based on the FAIR principles.   

See also the RDMkit page on measuring data management capabilities: https://rdmkit.elixir-europe.org/compliance_monitoring#how-can-you-measure-and-document-data-management-capabilities

Why is this step important 

This step will help you assess the current FAIRness level of your data. Comparing the current FAIRness level to the previously defined FAIRification objectives will help you shape the necessary steps and requirements needed to achieve your FAIRification goals [FAIRinAction] and create your solution plan. Furthermore, this assessment can be repeated in the Assess FAIRness step, allowing you to compare the results and track the progress of your data towards FAIRness.

Expertise requirements for this step 

The expertise required may depend on the assessment tool you want to use. Experts that may need to be involved, as described in Metroline Step: Build the Team, include:

  • Data stewards: can help fill out the surveys and questionnaires.

  • Research software engineers: can help run some of the specialised software.

  • ELSI experts: can help fill out the ELSI-related questions in surveys and questionnaires.

How to

Step 1

There are many tools that can help you assess the FAIRness of your (meta)data before starting the FAIRification process.

While this step focuses specifically on the FAIRness of (meta)data, it is also possible to assess general FAIR awareness, for example with the FAIR Aware tool provided by DANS.

Step 2

Decide which tool best fits your goal(s). Broadly, the tools fall into the two categories described below.

  • Online self-assessment surveys. Here, the user is presented with an online form, which is filled in manually.

  • (Semi-)automated tests. Here, (semi-)automated tests are performed on a dataset by providing the tool with, for example, a link to an already published dataset.

In both cases, the result gives an indication of the FAIRness of the (meta)data. Additionally, tools may give advice on how to improve FAIRness. It is important to bear in mind that the outcomes of different tools may vary due to, for example, differences in the tests performed and the subjectivity of self-assessment surveys. See EOSC’s FAIR Assessment Tools: Towards an “Apples to Apples” Comparisons for more information on this.
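To make the second category more concrete, below is a minimal, hypothetical sketch of the kind of check an automated test might perform: it inspects the HTML of a dataset landing page for embedded machine-readable (JSON-LD) metadata, and for an identifier and licence within it. The function name and checks are illustrative assumptions only and do not reproduce the logic of any tool listed below.

```python
import json
import re

def check_machine_readable_metadata(html: str) -> dict:
    """Toy FAIR check: does a landing page embed JSON-LD metadata,
    and does that metadata carry an identifier and a license?

    Illustrative sketch only -- real tools such as FAIR-Checker or
    the FAIR Evaluator run far more elaborate tests.
    """
    report = {"has_jsonld": False, "has_identifier": False, "has_license": False}
    # Look for embedded schema.org JSON-LD blocks, a common way to
    # expose machine-readable metadata on dataset landing pages.
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html, flags=re.DOTALL | re.IGNORECASE,
    )
    for block in blocks:
        try:
            meta = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed block: skip it, keep checking others
        report["has_jsonld"] = True
        report["has_identifier"] = "identifier" in meta or "@id" in meta
        report["has_license"] = "license" in meta
    return report

page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Dataset",
 "identifier": "https://doi.org/10.1234/example", "license": "CC-BY-4.0"}
</script></head></html>"""
print(check_machine_readable_metadata(page))
# → {'has_jsonld': True, 'has_identifier': True, 'has_license': True}
```

Real tools combine dozens of such tests (identifier resolution, protocol checks, vocabulary lookups) and aggregate the results into a score.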

The tables below provide an overview of some of the more popular tools from both categories.

Online self-assessment surveys

Tool

Description

Work on your side

ARDC FAIR self assessment

Provided by Australian Research Data Commons, this 12-question online survey gives a visual indication of the FAIRness level of your (meta)data and provides resources on how to improve it.

Fill in the 12 questions in the survey, potentially with the help of a FAIR expert/data steward.

SATIFYD

Provided by DANS, this online survey gives a FAIRness score. Furthermore, it provides advice on how to improve the FAIRness of your (meta)data.

From October 2023 until May 2024, the site had around 2500 visitors who actively interacted with the page.

Fill in the survey, potentially with the help of a FAIR expert/data steward.

The FAIR Data Maturity Model

Based on the FAIR principles and sub-principles, the Research Data Alliance created a list of universal 'maturity indicators', accompanied by guidelines for their use. Their work resulted in a checklist (with extensive descriptions of all maturity indicators) that can be used to assess the FAIRness of your (meta)data.

The FAIR Data Maturity Model is recommended by, amongst others, HL7.

Download the Excel file from Zenodo and, in the ‘FAIR Indicators_v0.05’ tab, give a score to the 41 ‘maturity indicators’ by selecting the level from the drop-down menu in the ‘METRIC’ column that best fits the status of your (meta)data. Potentially perform this with the assistance of a FAIR expert/data steward.

View the results in the ‘LEVELS’ tab. Detailed definitions and examples for all ‘maturity indicators’ can be found in the documentation on Zenodo.

FIP Mini Questionnaire & FIP Data Stewardship Wizard

A FAIR Implementation Profile (FIP) is a collection of FAIR implementation choices made for all FAIR Principles by a community (for example a research project or an institute). It was developed by the GO FAIR Foundation.  
Once published, a FIP can be reused by others, thus acting as a recipe for making data FAIR by a community based on agreements and standards within that community. Therefore, a FIP aids in achieving FAIR principle R1.3, which states that “(Meta)data meet domain-relevant community standards." 


Fill in the 10 questions in the Mini Questionnaire or create an account on the Data Stewardship Wizard for a more user-friendly experience.
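To illustrate how scores from a checklist such as the FAIR Data Maturity Model might be summarised, the sketch below averages indicator scores per FAIR area. The indicator names and scores are hypothetical, and the aggregation is a simplified stand-in: the model's own 'LEVELS' tab applies its own rules, which also take indicator priorities into account.

```python
from statistics import fmean

# Hypothetical 0-4 scores for a handful of maturity indicators;
# the real model defines 41 indicators, each with a priority.
scores = {
    "F1-01M": 4, "F1-02M": 3,   # Findability indicators
    "A1-01M": 2,                 # Accessibility indicator
    "I1-01M": 1, "I2-01M": 0,    # Interoperability indicators
    "R1-01M": 3,                 # Reusability indicator
}

def area_averages(scores: dict) -> dict:
    """Average the 0-4 indicator scores per FAIR area (F, A, I, R)."""
    areas = {}
    for indicator, score in scores.items():
        # The leading letter of the indicator ID names its FAIR area.
        areas.setdefault(indicator[0], []).append(score)
    return {area: fmean(vals) for area, vals in areas.items()}

print(area_averages(scores))
# → {'F': 3.5, 'A': 2.0, 'I': 0.5, 'R': 3.0}
```

Low-scoring areas (here Interoperability) point to where FAIRification effort is most needed.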


Online (semi-)automated tests

Tool

Description

Work on your side

FAIR-Checker

FAIR-Checker provides a web interface to automatically evaluate FAIR metrics. It provides users with hints on how to further improve the FAIRness of the resources.

FAIR-Checker performs over 18,000 metric evaluations per month.

On the ‘Check’ page, paste a URL or DOI and click ‘Test all metrics’. The assessment runs automatically and returns a score for each of 12 FAIR sub-principles. If a sub-principle does not reach the highest score, you can view recommendations on how to improve.

The FAIR Evaluator

The FAIR Evaluator provides an online service to test (meta)data resources against the Maturity Indicators in an objective, automated way. For an applied example, see Applying the FAIR principles to data in a hospital: challenges and opportunities in a pandemic.

The public version of The FAIR Evaluator has been used to assess more than 5,500 datasets.

A guide on how to use the FAIR Evaluator can be found in the FAIR Cookbook.

FAIRshake

Using FAIRshake, a variety of biomedical digital resources can be manually and automatically evaluated for their level of FAIRness. It provides a variety of rubrics with test metrics that can be reused, including those proposed by the FAIR Data Maturity Model.

The FAIRshake website currently shows the results for 132 projects and offers 65 rubrics for reuse.

The extensive documentation (including YouTube tutorials) can be found here.

More information is also available in the FAIR Cookbook.


For even more surveys and (semi-) automated tools, see FAIRassist.

Practical Examples from the Community 



Amsterdam University of Applied Sciences has a “FAIR enough checklist”. A community example from Nivel is pending.

References & Further reading

[FAIRopoly] https://www.ejprarediseases.org/fairopoly/

[FAIRinAction] https://www.nature.com/articles/s41597-023-02167-2

[Generic] https://direct.mit.edu/dint/article/2/1-2/56/9988/A-Generic-Workflow-for-the-Data-FAIRification

Authors / Contributors 

...

Implementation profiles

Another promising development is the FAIR Implementation Profile (FIP), developed by the GO FAIR Foundation. Once published, a FIP can be reused by others, thus acting as a recipe for making data FAIR by a community, for example a research project or an institute, based on agreements and standards within that community. A FIP can be used to compare your currently used FAIR implementation choices, such as standards used in your dataset, to those used by your community, thus providing a Pre-FAIR score. FIPs and their usage are currently still under active development. For more information, see Creating a FAIR Implementation Profile (FIP), FIP Mini Questionnaire and the FIP Data Stewardship Wizard.
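The comparison idea behind a Pre-FAIR score can be sketched as follows. The example below models a community FIP and your own implementation choices as simple question-to-choice mappings and computes the fraction that match. All names and values are hypothetical, real FIPs are machine-readable nanopublications with a much richer structure, and no official scoring formula is implied.

```python
# Hypothetical community FIP: for each FAIR question, the standard or
# resource the community has agreed on.
community_fip = {
    "metadata_identifiers": "DOI",
    "data_identifiers": "DOI",
    "metadata_schema": "DCAT",
    "knowledge_representation": "RDF",
}

# Your dataset's current implementation choices (also hypothetical).
my_choices = {
    "metadata_identifiers": "DOI",
    "data_identifiers": "local accession number",
    "metadata_schema": "DCAT",
    "knowledge_representation": "JSON (no semantics)",
}

def pre_fair_score(mine: dict, community: dict) -> float:
    """Fraction of community FIP choices your dataset already matches:
    a toy stand-in for a 'Pre-FAIR score'."""
    matches = sum(1 for q, choice in community.items() if mine.get(q) == choice)
    return matches / len(community)

print(f"Pre-FAIR score: {pre_fair_score(my_choices, community_fip):.0%}")
# → Pre-FAIR score: 50%
```

The mismatched questions (here, data identifiers and knowledge representation) indicate where aligning with community standards would improve FAIRness.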

Step 3

To successfully do a pre-FAIR assessment, do the following:

  • learn from examples (see the practical examples section);

  • familiarise yourself with the tool you intend to use;

  • involve the necessary experts (see expertise requirements section);

  • perform the assessment.

The final evaluation will give insight into the current FAIRness of your data. Depending on the tool used, you may receive feedback on how to improve the FAIRness of your data. Thus, the outcome of the pre-FAIR assessment helps you determine the next steps to achieve your FAIRification goals.



Training

Relevant training will be added in the future if available.

Suggestions

Visit our How to contribute page for information on how to get in touch if you have any suggestions about this page.