Metroline Step: Assess FAIRness

Status: In development

Now that you’ve FAIRified your data, it’s time to check the resulting FAIRness and decide if you’ve reached your goals. Use tools and other methods to assess if your data is truly Findable, Accessible, Interoperable, and Reusable. If needed, adjust or improve things so your data stays FAIR in the long run. 

Short description 

In this step, you assess whether the objectives of your FAIRification process have been achieved. As outlined in A Generic Workflow for the Data FAIRification Process, this may include evaluating the extent to which the original FAIRification objectives have been met and assessing the FAIR status of your data and metadata, for example, by using FAIR assessment tools. Depending on your solution plan, this step may mark the completion of your FAIRification journey or serve as an intermediate checkpoint to review progress and, if needed, refine your objectives.

Why is this step important 

Assessing FAIRness after FAIRification ensures that your dataset truly meets your original goals and reveals where further work is needed. This step is key to: 

  • Verifying FAIR compliance. Confirms whether your data is truly Findable, Accessible, Interoperable and Reusable, rather than just improved. 

  • Ensuring long-term usability. Prevents obsolescence by checking if your data remains understandable and accessible over time. 

  • Pinpointing gaps. Identifies remaining issues—like missing metadata or access barriers—supporting ongoing improvement. 

  • Building trust and transparency. Clarifies access policies and validates metadata availability, even if the data itself is restricted later. 

  • Boosting reuse and impact. Well-assessed FAIR datasets are more likely to be shared, cited and reused in future research. 

  • Meeting external expectations. A FAIRness assessment helps ensure your dataset complies with mandates and expectations from institutions, funders and funding programmes (such as the NIH or Horizon Europe), journals, and the wider scientific community.

Aligning with these key points not only supports accountability but also reinforces the credibility and acceptance of your work.

How to 

Step 1 – Check if you reached your (original) FAIRification objectives 

You set out by defining FAIRification objectives. You should now check if you reached these objectives. Keep in mind: 

  • How important was the objective? 

    • For example, if one goal is to meet specific funder, institutional, and/or journal requirements, not reaching it may not be an option.

      • For example, ZonMw requires you to fill in their so-called “kerntabel” (core table), which should provide information on where the data is stored and how it can be accessed. This information should be published and have a persistent identifier (DOI).

    • Which goals were nice-to-have? Some goals may be optional or dependent on available resources.

  • Do you have the resources for further improvement? 

    • If funding runs out, further FAIR enhancements may not be realistic and you may have to settle for less ambitious goals. Sometimes “good enough” is acceptable given time, budget, or resource limits.

    • Identify any blockers, like missing expertise or tooling, and whether they’re worth addressing now or later. 

  • Did any new requirements emerge during the project?  

    • FAIRification is an iterative process; your goals may have evolved. 

Step 2 – Consider (re)running an assessment tool 

  • In the Pre-FAIR assessment step, various tools are discussed that can be used to assess the FAIRness of data. By running such a tool at this stage, you can objectively assess how FAIR your data is right now.

  • If you did a pre-FAIR assessment, rerunning the same tool in this phase is a great way to compare results and demonstrate progress (a scripted example is sketched below this list).
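
A rerun can often be scripted so that its output is easy to archive and compare. The sketch below is a minimal Python illustration of calling an assessment tool that exposes a REST API, using F-UJI as an example; the endpoint URL, credentials, payload fields and response keys are assumptions based on a typical local deployment and should be checked against the documentation of your own installation.

```python
import json

import requests

# Assumed local F-UJI deployment; endpoint, credentials and payload fields are
# illustrative assumptions -- verify them against your installation's documentation.
FUJI_ENDPOINT = "http://localhost:1071/fuji/api/v1/evaluate"

payload = {
    # Persistent identifier of the dataset to assess (placeholder value)
    "object_identifier": "https://doi.org/10.xxxx/example-dataset",
}

response = requests.post(
    FUJI_ENDPOINT,
    json=payload,
    auth=("username", "password"),  # replace with your instance's credentials
    timeout=300,
)
response.raise_for_status()
result = response.json()

# Keep the raw output so it can be compared with the earlier pre-FAIR run.
with open("post_fairification_assessment.json", "w") as handle:
    json.dump(result, handle, indent=2)

# Print the aggregated scores, if the tool returns a summary section.
print(json.dumps(result.get("summary", {}), indent=2))
```

Storing the raw output next to the pre-FAIR assessment result makes it straightforward to show progress between the two runs.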

Step 3 – Other assessment options 

  • Conduct a peer review. Engage external reviewers or peers to conduct an independent assessment for objectivity. 

  • Conduct tests. If you, for example, set out to create machine-actionable metadata, run actual tests to verify whether this is the case.

    • Try accessing the dataset as if you were an external user. Can you find it? Understand it? Reuse it?

    • Test whether machine-actionable metadata actually works by using real scripts or applications to access and interpret it (see the sketch after this list).

    • Ask intended users or colleagues to test usability and accessibility in the ways you intended.
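
As a concrete example of such a test, the sketch below tries to retrieve machine-actionable metadata for a dataset via HTTP content negotiation and inspects a few basic fields. The identifier is a placeholder, and the assumption that the repository returns JSON-LD (for example schema.org metadata) for this request does not hold for every repository.

```python
import requests

# Placeholder identifier; replace with your dataset's landing page or persistent identifier.
DATASET_URL = "https://doi.org/10.xxxx/example-dataset"

# Ask for machine-actionable metadata via content negotiation. Many repositories
# serve JSON-LD this way, but support varies, so treat this as an illustration.
headers = {"Accept": "application/ld+json"}
response = requests.get(DATASET_URL, headers=headers, allow_redirects=True, timeout=60)
response.raise_for_status()

try:
    metadata = response.json()
except ValueError:
    raise SystemExit("No machine-readable JSON(-LD) returned; the metadata may only be human-readable.")

# Minimal checks: is the record typed as a Dataset, and does it carry a licence and description?
print("Type:", metadata.get("@type"))
print("License:", metadata.get("license"))
print("Description present:", bool(metadata.get("description")))
```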

Step 4 - Document the results 

  • Record any gaps or deviations from the initial FAIRification goals. 

  • Provide recommendations for further improvements if needed (a minimal machine-readable record of such results is sketched below).
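
A lightweight way to document these results is to keep a small machine-readable record alongside the assessment output. The sketch below is purely illustrative; the field names, values and file name are assumptions rather than a prescribed format.

```python
import json
from datetime import date

# Illustrative structure only; adapt the fields to your project's reporting needs.
assessment_record = {
    "dataset": "https://doi.org/10.xxxx/example-dataset",  # placeholder identifier
    "assessed_on": date.today().isoformat(),
    "tool": "F-UJI (rerun of the pre-FAIR assessment)",
    "objectives_met": ["machine-actionable metadata published"],
    "gaps": ["no licence statement in the metadata record"],
    "recommendations": ["add a standard licence (e.g. CC BY 4.0) to the metadata"],
}

with open("fairness_assessment_report.json", "w") as handle:
    json.dump(assessment_record, handle, indent=2)
```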

Step 5 - Continuous monitoring and iterative improvement 

Even if your FAIRification efforts are complete for now, evolving requirements, such as changes in metadata standards, may require future updates. It is therefore essential to continue monitoring your dataset to ensure it remains FAIR over time. 
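
Part of this monitoring can be automated. The sketch below, which uses placeholder URLs, checks that the dataset's landing page and metadata record still resolve, and could be run periodically (for example from a cron job or CI pipeline); it is a minimal illustration rather than a complete monitoring set-up.

```python
import datetime

import requests

# Placeholder URLs to monitor; replace with your own identifiers and endpoints.
CHECKS = {
    "landing_page": "https://doi.org/10.xxxx/example-dataset",
    "metadata_record": "https://repository.example.org/api/datasets/example/metadata",
}


def resolves(url: str) -> bool:
    """Return True if the URL resolves with an OK status within a short timeout."""
    try:
        # Some servers reject HEAD requests; fall back to GET if that applies to yours.
        response = requests.head(url, allow_redirects=True, timeout=30)
        return response.ok
    except requests.RequestException:
        return False


timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
for name, url in CHECKS.items():
    status = "OK" if resolves(url) else "FAILED"
    # Append-style log line; a scheduled job could collect these over time.
    print(f"{timestamp}\t{name}\t{status}\t{url}")
```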

Expertise requirements for this step 

The expertise required may depend on the assessment tool you want to use. Experts who may need to be involved, as described in Metroline Step: Build the Team, are listed below.

  • Researcher. Understands the data content and how it should be used. 

  • FAIR data stewards. Specialists who can help fill out the FAIR assessment tools (see also Metroline Step: Pre-FAIR assessment).

  • Research software engineers. Specialists who can help run some of the specialised software.

Practical examples from the community 

For an applied example of the FAIR Evaluator tool, see Applying the FAIR principles to data in a hospital: challenges and opportunities in a pandemic.

Training

Relevant training will be added soon. 

Suggestions

This page is under construction. Learn more about the contributors here and explore the development process here. If you have any suggestions, visit our How to contribute page to get in touch.

Related content