...

[Fieke] The data steward profile is often described according to three roles (policy, research and infrastructure) and eight task areas (policy & strategy; compliance; FAIR data; services; infrastructure; knowledge management; network; data archiving). A single data steward can be responsible for all task areas, but tasks can also be divided among central and embedded/domain data stewards. Each task area requires different competencies. The EMBL-EBI competency hub describes activities, KSAs (knowledge, skills & abilities) and learning objectives for each role and task area.

[Sander] Would it make sense that, if we mention roles from this section on other pages, those roles are actually specified in this page’s How to? We could even create hyperlinks to this page.

How to 

[Sander] Because a FAIRification steward is essential for reaching the FAIRification goals, a full page has been dedicated to this role. See “Metroline Step: Have a FAIRification steward on board” for details.

RDMkit has a nice section about Roles in Data Management (with more details than I copied below):

In this section, information is organised based on the different roles a professional can have in research data management. You will find:

  • A description of the main tasks usually handled by each role.

  • A collection of research data management responsibilities for each role.

  • Links to RDMkit guidelines and advice with useful information for getting started with data management, specific to each role.

Roles:

  • Data Steward: Data stewardship is a relatively new profession and a catch-all term for numerous support functions, roles and activities. It implies professional and careful treatment of data throughout all stages of a research process.

  • Policy maker: As a policy maker, you are responsible for the development of a strategic data management framework and the coordination and implementation of research data management guidelines and practices.

  • Principal Investigator: As a Principal Investigator (PI), you may have recently acquired project funding. More and more funders require data management plans (DMP), stimulating the researcher to consider, from the beginning of a project, all relevant aspects of data management.

  • Researcher: Your research data is a major output of your research project: it supports your research conclusions and guides you and others towards future research. Therefore, managing the data well throughout the project, and sharing it, is a crucial aspect of research.

  • Research Software Engineer: Research software engineers (RSE) in the life sciences design, develop and maintain software systems that help researchers manage their software and data. The RSE’s software tools and infrastructure are critical in enabling scientific research to be conducted effectively.

  • Trainer: As a trainer, you design and deliver training courses in research data management with a focus on bioinformatics data. Your audience is mainly people in biomedical sciences: PhD students, postdocs, researchers, technicians and PIs.

[Generic] 

Data FAIRification requires different types of expertise and should therefore be carried out in a multidisciplinary team guided by FAIR data steward(s). The required sets of expertise cover:

  • the data to be FAIRified and how they are managed;

  • the domain and the aims of the data resource within it;

  • architectural features of the software that is (or will be) used for managing the data;

  • access policies applicable to the resource;

  • the FAIRification process (guiding and monitoring it);

  • FAIR software services and their deployment;

  • data modelling;

  • global standards applicable to the data resource;

  • global standards for data access.

A good working approach is to organise a team that contains or has access to the required expertise. The core of such a team may be formed by data stewards with, at a minimum, expertise in the local environment and in the FAIRification process in general.

...

FAIR principles and example resources:

F1: Globally unique and persistent identifiers
  • DOI, ORCID, EUPID

F2: Metadata about data (see the metadata sketch after this list)
  • DCAT (standard)
  • FAIR Data Point (former DTL metadata editor) (tool)
  • ISA Framework

F3: The metadata clearly and explicitly include the identifier of the data they describe
  • FAIRifier (tool)
  • FAIR Data Point

F4: Metadata and data are indexed or registered in a searchable resource
  • FAIR Data Point
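
To make F1–F3 concrete, the sketch below (Python with the rdflib library, chosen here only as an illustration) builds a minimal DCAT description of a dataset in which the dataset’s globally unique identifier is stated explicitly in the metadata. The identifier, title and landing page are hypothetical placeholders, not references to a real resource.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

# Hypothetical persistent identifier for the dataset (placeholder, not a real PID).
DATASET_IRI = URIRef("https://w3id.org/example/dataset/42")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

# F2: metadata about the data, expressed with the DCAT vocabulary.
g.add((DATASET_IRI, RDF.type, DCAT.Dataset))
g.add((DATASET_IRI, DCTERMS.title, Literal("Example registry dataset", lang="en")))
g.add((DATASET_IRI, DCTERMS.description,
       Literal("Minimal metadata record used as a FAIRification example.", lang="en")))

# F1 + F3: the globally unique, persistent identifier is stated explicitly in the metadata.
g.add((DATASET_IRI, DCTERMS.identifier, Literal(str(DATASET_IRI))))

# A human-readable entry point (placeholder URL).
g.add((DATASET_IRI, DCAT.landingPage, URIRef("https://example.org/datasets/42")))

# Serialise as Turtle; such a record could then be published, e.g. in a FAIR Data Point.
print(g.serialize(format="turtle"))
```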

A1: Metadata and data can be retrieved by their identifier via a standardised communication protocol (make the protocol for accessing the data explicit; see the retrieval sketch after this list)
  • HTTP / FTP
  • For sensitive data, add to the metadata the contact information (email / telephone) of the person to discuss data access with, and a clear protocol for such access requests.

A1.1: The protocol is open, free and universally implementable
  • Email / phone
  • HTTP / FTP / SMTP

A1.2: The protocol allows for authentication and authorisation where necessary
  • Set user rights, register users in the repository

A2: Metadata remain accessible even when the data are no longer available (see F4)
  • FAIR Data Point
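
As an illustration of A1, the sketch below retrieves a metadata record by its identifier over plain HTTP, asking for Turtle via content negotiation (Python with the requests library). The record IRI is a hypothetical placeholder, and the sketch assumes the metadata are served openly; A1.2 only requires that the protocol supports authentication and authorisation where the data themselves are restricted.

```python
import requests

# Hypothetical identifier of a metadata record, e.g. published in a FAIR Data Point.
RECORD_IRI = "https://w3id.org/example/dataset/42"

# A1: retrieve the (meta)data by identifier over an open, standardised protocol (HTTP).
# Content negotiation asks for a machine-readable serialisation (Turtle).
response = requests.get(RECORD_IRI, headers={"Accept": "text/turtle"}, timeout=30)
response.raise_for_status()

print(response.headers.get("Content-Type"))
print(response.text)

# A1.2: if the data themselves are sensitive, the metadata stay openly retrievable like
# this, while access to the data goes through authentication/authorisation (e.g. a token
# or the access-request procedure described in the metadata).
```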

I1: Metadata and data use a formal language for knowledge representation, including (1) commonly used controlled vocabularies, ontologies and thesauri (with resolvable, globally unique and persistent identifiers, see F1) and (2) a good data model (a well-defined framework to describe and structure (meta)data)
  • RDF (Turtle, RDFS, RDF/XML, ShEx, SHACL)
  • Dublin Core / DCAT
  • OWL
  • DAML+OIL
  • JSON-LD
  • Semantic data models

I2: The controlled vocabularies used to describe datasets are documented and resolvable using globally unique and persistent identifiers; this documentation needs to be easily findable and accessible by anyone who uses the dataset
  • FAIR Data Point

I3: Create as many meaningful links as possible between (meta)data resources to enrich the contextual knowledge about the data (see the linking sketch below)
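
A small sketch of I1 and I3, again using rdflib as an illustrative choice: two dataset IRIs are linked to each other and one is annotated with a controlled-vocabulary term. All IRIs are hypothetical placeholders; in practice you would use the resolvable identifiers of your own resources and of the ontology term you need.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDFS

# Hypothetical identifiers of two related (meta)data resources.
DATASET_A = URIRef("https://w3id.org/example/dataset/42")
DATASET_B = URIRef("https://w3id.org/example/dataset/43")

# Hypothetical IRI of a controlled-vocabulary term describing the subject of the data.
SUBJECT_TERM = URIRef("https://example.org/vocab/ExampleTopic")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

# I1: annotate the dataset with a term from a controlled vocabulary (resolvable IRI).
g.add((DATASET_A, DCAT.theme, SUBJECT_TERM))

# I3: qualified links between (meta)data resources enrich the context of the data.
g.add((DATASET_A, DCTERMS.relation, DATASET_B))
g.add((DATASET_A, RDFS.seeAlso, DATASET_B))

print(g.serialize(format="turtle"))
```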

 

R1

R1.1

R1.2

R1.3

Resource glossary

Tool/Standard # can be used to #

  • Goal Modelling (see link) is a standard that can be used to represent goals that are connected to each other; it helps define clear FAIRification objectives from both the research question and the process perspective.

  • FAIR Data Point (see link) is a tool that supports many of the FAIR principles. It can be used to describe metadata in accordance with the DCAT standard; metadata created and published in a FAIR Data Point become part of a searchable and indexable resource (see the FAIR Data Index: every FAIR Data Point is indexed in the FAIR Data Index).

  • DCAT (see link) is a standard for describing metadata at three levels, from detailed to general: distribution, dataset and catalogue (a sketch of this hierarchy follows below).

  • RDF (see link) is an extensible knowledge representation model that can be used to describe and structure datasets.

  • Smart Guidance (see link) is a tool that defines the specific steps for FAIRification of data in RD registries.
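
The glossary above mentions DCAT’s three levels of description. The sketch below (rdflib again, with placeholder IRIs) shows how a catalogue, a dataset and a distribution fit together; it is an illustration of the hierarchy under assumed example identifiers, not a complete DCAT record.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

# Hypothetical IRIs for the three DCAT levels.
CATALOG = URIRef("https://w3id.org/example/catalog")
DATASET = URIRef("https://w3id.org/example/dataset/42")
DISTRIBUTION = URIRef("https://w3id.org/example/dataset/42/distribution/csv")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

# Catalogue: the most general level, listing one or more datasets.
g.add((CATALOG, RDF.type, DCAT.Catalog))
g.add((CATALOG, DCTERMS.title, Literal("Example catalogue", lang="en")))
g.add((CATALOG, DCAT.dataset, DATASET))

# Dataset: describes the data themselves.
g.add((DATASET, RDF.type, DCAT.Dataset))
g.add((DATASET, DCTERMS.title, Literal("Example dataset", lang="en")))
g.add((DATASET, DCAT.distribution, DISTRIBUTION))

# Distribution: the most detailed level, a concrete way to obtain the data.
g.add((DISTRIBUTION, RDF.type, DCAT.Distribution))
g.add((DISTRIBUTION, DCAT.downloadURL, URIRef("https://example.org/data/42.csv")))

print(g.serialize(format="turtle"))
```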

...