...

This section could describe the expertise required. Perhaps the Build Your Team step could then be an aggregation of all the “Expertise requirements for this step” sections that someone needs in order to fulfil their FAIRification goals.

[Hannah: I would say expertise depends a bit on which tool you use; most checklists and questionnaires are pretty low effort and self-explanatory. But some of the more automated tools require some (programming) skills]

How to 

There are many assessment tools with which to do a pre-FAIR assessment of your (meta)data. FAIRassist holds a (manually created) collection of various tools, including manual questionnaires and checklists as well as automated tests, that help users understand how to achieve a state of "FAIRness" and how this can be measured and improved. Furthermore, a 2022 publication (FAIR assessment tools: evaluating use and performance) compared a number of tools. Of these and the tools listed on FAIRassist, we suggest that the following can be considered for your pre-FAIR assessment:

[Hannah: Fieke pointed out something important; there are basically two kinds of tools for the FAIR assessment. One group assesses (often in a semi-automated way) the FAIRness of (meta)data which already has a persistent identifier (such as a DOI). The other group assesses FAIRness (often in the form of a survey, questionnaire or checklist) of (meta)data without a persistent identifier.]

Online self-assessment surveys

These tools allow you to fill in an online form. The result of the survey can be, for example, a score indicating the FAIRness of your (meta)data. Some tools additionally provide advice on how to improve FAIRness at different levels.

Tool

Description [to do: check whether the paper has something nice]

ARDC FAIR self assessment

Provided by the Australian Research Data Commons, this 12-question online survey gives a visual indication of the FAIRness level of your (meta)data and provides resources on how to improve it.

FAIRaware

Provided by DANS, this online survey gives a FAIRness score. Furthermore, it provides advice on how to improve the FAIRness of your (meta)data.

[Hannah: according to the review paper, this tool ‘assesses the user's understanding of the FAIR principles rather than the FAIRness of his/her dataset. FAIR-aware is not further considered in this paper’. Maybe throw it out as well?]

SATIFYD

Provided by DANS, this online survey gives a FAIRness score. Furthermore, it provides advice on how to improve the FAIRness of your (meta)data.

FAIRshake

Allows you to assess digital objects, as well as add a new project to their repository. [Open question: does it check digital objects automatically, i.e. is the survey filled in automatically?]

...

Also has a Chrome browser plugin to automatically check FAIR assessments for available projects

[Hannah: I don’t know how useful this is in the context of our metroline; also the paper describes it as quite a time investment.]

Online (semi-)automated

These tools do an automatic assessment by reading the metadata available at a certain URI; a minimal sketch of the idea follows the list below.

Offline self-assessment

  • RDA- [Hannah: this links to another page on the confluence] and this one [Hannah: these are Jupyter notebooks to use for data from specific databases; they can be extended/adjusted with your own dataset. It seems a bit of a large effort to use for a ‘quick’ FAIR assessment of your (meta)data]

  • FAIR Evaluator software

  • FAIRchecker: this tool automatically provides a score for all aspects of FAIR, starting from a URI
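To make concrete what these (semi-)automated tools do, below is a minimal, hypothetical Python sketch of the general idea: request machine-readable metadata (e.g. JSON-LD) from a URI via content negotiation and check a few FAIR-relevant properties. This is an illustration only, not the implementation of FAIRchecker or any other tool listed here; the property-to-principle mapping and the example URI are assumptions.

```python
# Minimal sketch (assumption: the URI serves schema.org-style JSON-LD via
# content negotiation). Illustrates the idea behind (semi-)automated FAIR
# assessment tools; it is not the implementation of any specific tool.
import json
import urllib.request

def fetch_jsonld(uri: str) -> dict:
    """Ask the server for machine-readable metadata (JSON-LD)."""
    req = urllib.request.Request(uri, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

# A toy checklist: presence of these properties hints at FAIR compliance.
CHECKS = {
    "identifier": "F1: globally unique, persistent identifier",
    "description": "F2: rich metadata",
    "license": "R1.1: clear usage license",
}

def toy_fair_report(uri: str) -> None:
    """Print which FAIR-relevant properties the metadata contains."""
    meta = fetch_jsonld(uri)
    for prop, principle in CHECKS.items():
        status = "present" if prop in meta else "MISSING"
        print(f"{principle}: {status}")

# toy_fair_report("https://example.org/dataset/123")  # hypothetical URI
```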

Offline self-assessment

[Hannah: the 2022 paper does not recommend using offline tools, and I kind of agree. So maybe we don’t include this category at all? Especially because one of the links is dead anyway.]

  • The paper reviews ten FAIR assessment tools that have been evaluated and characterized using two datasets from the nanomaterials and microplastics risk assessment domain.

  • The authors evaluated the FAIR assessment tools in terms of 1) the prerequisite knowledge needed to run the tools, 2) the ease and effort needed to use them and 3) the output of the tools, with respect to the information it contains and the consistency between tools. This should help users, e.g. in the nanosafety domain, to improve their methods of storing, publishing and providing research data, and provides guidance for researchers to pick a tool for their needs and be aware of its strong points and weaknesses.

  • The selected tools were split up into four different sections, namely online self-assessment/survey, (semi-)automated, offline self-assessment and other types of tools. The tool selection was based on online searches in June 2020.

  • They compare:

The FAIR Data Maturity Model

...


More checklists and tools:

  • A checklist produced for use at the EUDAT summer school to discuss how FAIR the participants’ research data were and what measures could be taken to improve FAIRness.

[Sander: Hannah mentions the Data Maturity Model. This is also here on FAIRplus. There is also this GitHub repository from FAIRplus, and the sheet for the actual assessment is here. Could be worrying: the last update was last year.]
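For context on how such an assessment sheet works: the RDA FAIR Data Maturity Model scores a set of indicators (grouped per FAIR principle and given a priority such as essential, important or useful) on maturity levels from 0 to 4. The toy Python sketch below shows how scores like these might be aggregated per FAIR area; the indicator IDs, priorities and scores are illustrative assumptions, not the official indicator list.

```python
# Toy sketch of a maturity-model style assessment. Indicator IDs, priorities
# and scores below are illustrative assumptions, loosely modelled on the
# RDA FAIR Data Maturity Model; they are not the official indicator list.
from dataclasses import dataclass

@dataclass
class Indicator:
    identifier: str  # e.g. "F1-01M" (hypothetical ID)
    priority: str    # "essential", "important" or "useful"
    level: int       # maturity level, 0 (not considered) .. 4 (fully implemented)

def area(indicator: Indicator) -> str:
    """Group indicators by FAIR area via the first letter of the ID."""
    return indicator.identifier[0]  # "F", "A", "I" or "R"

def summarise(indicators: list[Indicator]) -> dict[str, float]:
    """Average maturity level per FAIR area, as an assessment sheet might."""
    totals: dict[str, list[int]] = {}
    for ind in indicators:
        totals.setdefault(area(ind), []).append(ind.level)
    return {a: sum(levels) / len(levels) for a, levels in totals.items()}

# Hypothetical scores for a dataset under assessment.
scores = [
    Indicator("F1-01M", "essential", 4),  # persistent identifier assigned
    Indicator("F2-01M", "essential", 3),  # rich metadata in implementation
    Indicator("A1-01M", "important", 2),  # access protocol under consideration
    Indicator("R1-01M", "essential", 1),  # license not yet considered
]
print(summarise(scores))  # e.g. {'F': 3.5, 'A': 2.0, 'R': 1.0}
```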

...