'Metadata is the descriptor, and data is the thing being described' (https://doi.org/10.1162/dint_r_00024)

In short: metadata is the information that describes your data, such as who collected it, when, where and how. In this step you find out where that information about your resource is stored, check whether it is complete and correct, and fix any problems you find. Good metadata makes your data easier to find, understand and reuse.

Short description 

Metadata refers to the contextual information about a resource (e.g. a dataset), often described as “data about data”. Metadata can come in many different types and forms.
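For illustration, such a metadata record can be written down in a structured, machine-readable form. The Python sketch below is only an example: the field names and values are hypothetical and do not represent a prescribed schema.

  # Illustrative only: a minimal descriptive metadata record for a dataset,
  # written as a Python dictionary. Field names are hypothetical examples.
  dataset_metadata = {
      "title": "Questionnaire study on smoking behaviour",
      "description": "Self-reported smoking habits of adult participants.",
      "creator": "Radboudumc research team",
      "keywords": ["smoking", "questionnaire", "public health"],
      "license": "CC-BY-4.0",
  }

  print(dataset_metadata["title"])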

In this step, the focus is on assessing the availability of your metadata. This involves identifying and collecting all types of metadata gathered for your resource, checking their quality, and ensuring they are as accurate and complete as possible. This step is a good starting point and a common first step for many FAIRification objectives (see also the Metroline Step: Define FAIRification objectives).

Why is this step important 

To be able to register resource-level metadata (for instance, in a repository or catalogue), you need to make sure you have collected the appropriate and correct metadata.

Furthermore, complete and correct metadata offers several benefits:

Beneficial for you and your team: Having comprehensive and detailed metadata ensures that anyone, including yourself, can understand and work with the data effectively, even when some time has passed since collection. This is good data management practice: it keeps data usable and meaningful over time and saves time when setting up new projects.

Beneficial for the organisation: Complete and error-free metadata makes it easier for organisations to migrate information about their projects between systems, for instance when newer software versions become available.

Promotes higher research impact: Good metadata records reflect well on the researcher's outputs, whereas potential data reusers might be put off by documentation issues and decide not to use the data.

Improves the quality of your data: Good metadata should describe the data accurately and unambiguously, which in turn improves the overall quality of the data and enhances transparency and reproducibility. This enables others to verify results and build upon them.

Helps with data discovery: Complete metadata makes it easier for you and your team to locate and retrieve data quickly. Additionally, if this metadata is published, it can boost reuse of the data, lead to new collaborations and enhance recognition of existing work.

Complies with funders’ and journals’ requirements: Many funding agencies and publishers now require metadata to be published to increase the efficiency and visibility of the research they support.

Regarding the National Health Data catalogue:

Health-RI is in the process of defining a metadata scheme for adding metadata (onboarding) to the Health-RI metadata portal. To allow for onboarding of a resource, the minimal metadata set must be provided. It is therefore essential that you assess whether this minimal set is already available or whether additional metadata needs to be collected.
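As a rough illustration of such an assessment, the sketch below checks a metadata record against a minimal set of required fields. The required field names are placeholders, not the actual Health-RI minimal metadata set; consult the Health-RI onboarding documentation for the authoritative list.

  # Sketch: check whether a metadata record covers a minimal metadata set.
  # The required fields below are placeholders, NOT the official Health-RI set.
  REQUIRED_FIELDS = ["title", "description", "publisher", "contact_point", "license", "theme"]

  def missing_fields(record: dict) -> list[str]:
      """Return the required fields that are absent or empty in the record."""
      return [field for field in REQUIRED_FIELDS if not record.get(field)]

  record = {"title": "Questionnaire study on smoking behaviour", "license": "CC-BY-4.0"}
  print(missing_fields(record))  # ['description', 'publisher', 'contact_point', 'theme']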

 

How to 

Step 1: Identify where information about your resource is stored 

Start by considering where information about your resource is already stored. Typically, institutions have systems that require a certain level of documentation. Investigate these systems. 

Example: Eva, a researcher at Radboudumc, wants to assess what metadata is available about her project. She starts by consulting her Data Management Plan (DMP). She then remembers that she added metadata about her project to the PaNaMa registry and the Radboud Data Repository. 

Step output: A list of the systems and documents where metadata is stored (for instance, the DMP, a research management system such as PaNaMa, and (local) data repositories). 
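If useful, this step output can also be recorded in a simple machine-readable form. The sketch below uses the systems from the example above; adapt the entries to your own situation.

  # Sketch: a simple inventory of where metadata about a resource is stored.
  metadata_locations = [
      {"system": "Data Management Plan (DMP)", "type": "document"},
      {"system": "PaNaMa registry", "type": "research management system"},
      {"system": "Radboud Data Repository", "type": "data repository"},
  ]

  for location in metadata_locations:
      print(f"{location['system']} ({location['type']})")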

 

Step 2: Extract and evaluate your metadata  

Once you've identified where your metadata might reside, it's time to extract and evaluate it. Errors and inconsistencies can naturally creep into your records over time, especially when many people are involved. Guidelines and project contexts can also change. This step helps ensure that the metadata is still understandable and accurate. Use these questions to guide you: 

  1. Are there typos in the metadata? 

  2. Is there missing information due to accidental omissions? 

  3. Are vocabularies used properly? Is the language outdated or no longer accurate? 

  4. Are metadata terms used consistently? (e.g., Radboudumc vs rumc; see the sketch after this list) 
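Some of these checks can be partly automated. The sketch below flags empty fields and mixed use of term variants in a single metadata record; the variant groups and field names are illustrative assumptions, not a standard vocabulary.

  import re

  # Sketch: flag empty fields and mixed use of term variants in a metadata record.
  # The variant groups below are illustrative; extend them for your own project.
  TERM_VARIANTS = {
      "Radboudumc": {"radboudumc", "rumc"},
      "sex": {"sex", "gender"},
  }

  def find_issues(record: dict) -> list[str]:
      issues = [f"Field '{field}' is empty" for field, value in record.items() if not value]
      # Tokenise all text values and look for mixed use of term variants.
      text = " ".join(str(value) for value in record.values() if value).lower()
      tokens = set(re.findall(r"[a-z]+", text))
      for preferred, variants in TERM_VARIANTS.items():
          used = variants & tokens
          if len(used) > 1:
              issues.append(f"Mixed terms {sorted(used)}; consider consistently using '{preferred}'")
      return issues

  record = {
      "institution": "rumc",
      "description": "Sex and gender of participants, collected at Radboudumc",
      "keywords": "",
  }
  for issue in find_issues(record):
      print(issue)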

Example: After reviewing her metadata across various platforms, Eva realizes some information is outdated. The abstract of her Data Management Plan no longer aligns with her adjusted research question. Her data collection protocol has changed due to a new data collection system recently implemented by Radboudumc. She also notices that the PaNaMa entry has many blank recommended fields, and that the Radboud Data Repository keywords include terms that might not facilitate discoverability of her resource (e.g., using the term "neoplasm" instead of the more widely searched "cancer" or "tumor"). Additionally, terms like "gender" and "sex" are used interchangeably across the descriptions in all those systems. 

Step output: A list of identified issues in the metadata to be resolved/updated. 

 

Step 3: Make the necessary corrections 

Resolve the issues identified in Step 2 by correcting and completing the metadata in the relevant systems.

Tip: Prioritize the systems with the highest impact. While assessing metadata is beneficial, it might require organizational support and can be labor-intensive, especially if you're involved in multiple complex projects. 

Example: Eva decides to update her Data Management Plan because it's crucial for her PhD thesis. She also updates and fills out missing fields in the Radboud Data Repository to make her dataset available for reuse by others. 

Step output: Metadata is updated, based on the output of Step 2. 
 

You are now ready to take the next step with your metadata: 

 

Step 4 (Bonus Step!): Enhance Your Metadata 

Consider what else might be missing from your metadata. Is it sufficient for others to understand the context of your resource and how to use it? The FAIR data principles suggest describing your resource with various attributes to help others find potential uses that you might not be aware of. Think about the questions your current metadata can't answer and consult your data steward for solutions, if needed. 

 

Example: Eva collects a lot of data from questionnaires but doesn't know how to include this in the metadata. Such information could help others discover her dataset based on specific questions (e.g., whether participants smoke) and understand the possible values and whether any data are missing (e.g., incomplete diagnosis dates). 
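One way to capture this kind of information is a small data dictionary that describes each variable, its possible values and how missing data are coded. The sketch below is illustrative; the variable names, values and missing-data codes are hypothetical and should be adapted to the actual questionnaire.

  # Sketch: a small data dictionary describing questionnaire variables.
  # Variable names, allowed values and missing-data codes are hypothetical examples.
  data_dictionary = {
      "smoking_status": {
          "question": "Do you currently smoke?",
          "allowed_values": {"yes", "no", "former"},
          "missing_code": "unknown",
      },
      "diagnosis_date": {
          "question": "Date of first diagnosis",
          "format": "YYYY-MM-DD",
          "missing_code": "",  # an empty string means the date was not recorded
      },
  }

  def is_valid(variable: str, value: str) -> bool:
      """Check a recorded value against the data dictionary (illustrative only)."""
      spec = data_dictionary[variable]
      allowed = spec.get("allowed_values")
      return value == spec["missing_code"] or allowed is None or value in allowed

  print(is_valid("smoking_status", "former"))     # True
  print(is_valid("smoking_status", "sometimes"))  # False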

Expertise requirements for this step 

The experts that may need to be involved in this step are described in the Metroline Step: Build the Team (https://health-ri.atlassian.net/wiki/spaces/FSD/pages/273350662/Metroline+Step+Build+the+Team).

Practical examples from the community 

Examples of this step applied in real projects, including links to demonstrator projects, will be added here. 

Training

https://carpentries-incubator.github.io/scientific-metadata/instructor/data-metadata.html#metadata

https://howtofair.dk/how-to-fair/metadata/#what-are-metadata

More relevant training materials will be added soon.

Suggestions

This page is under construction. Learn more about the contributors here and explore the development process here. If you have any suggestions, visit our How to contribute page to get in touch.