Making FAIR data real - The community experience #12
Dear Frank, many thanks in advance!
Dear Jasmin, in German we have a report and user stories online.
The BioITWorld FAIR Hackathon http://www.bio-itworldexpo.com/fair-data-hackathon/ also focused on FAIR approaches to pharmaceutical data. Incidentally, the latest IMI call is about FAIRification.
Presentation Title: FAIRShake: Toolkit to Enable the FAIRness Assessment of Biomedical Digital Objects
Abstract: While it is clear that there will be a benefit in making biomedical digital objects more FAIR, the FAIR principles are abstract and high level. FAIRShake brings these principles into practice by encouraging digital object producers to make their products more FAIR. The FAIRShake toolkit is designed to enable the biomedical research community to assess the FAIRness of biomedical research digital objects. These include: repositories, databases, tools, journal and book publications, courses, scientific meetings, and more. The FAIRShake toolkit uses the FAIR insignia to display the results of FAIR assessments. The insignia symbolizes the FAIRness of a digital object according to 16 FAIR metrics. Each square on the insignia represents the average answer to a FAIR metric question. The FAIRShake Chrome extension inserts the insignia into websites that list biomedical digital objects. Users can see the insignia and also contribute evaluations by clicking on the insignia. It is also possible to embed the insignia without the need for a Chrome extension and to initiate FAIR evaluation projects using the FAIRShake website directly. Currently, the FAIRShake website enlists four projects: evaluation of the LINCS tools and datasets, evaluation of the MOD repositories, evaluations of over 5,000 bioinformatics tools and databases, and evaluations of the repositories listed on DataMed. The project is at an early prototyping phase, so it is not ready for broad use.
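The abstract describes the insignia as one square per metric, each showing the average answer to that metric's question across assessments. A minimal sketch of that aggregation step (the metric names and data structure here are illustrative assumptions, not FAIRShake's actual schema or API):

```python
# Hypothetical sketch: aggregating crowd assessments into per-metric
# averages, as the FAIR insignia does (one square per metric).
# Metric names and the input format are illustrative, not FAIRShake's schema.

def insignia_scores(assessments):
    """Average the answers (0.0-1.0) for each FAIR metric across assessments."""
    totals = {}
    counts = {}
    for assessment in assessments:
        for metric, answer in assessment.items():
            totals[metric] = totals.get(metric, 0.0) + answer
            counts[metric] = counts.get(metric, 0) + 1
    return {metric: totals[metric] / counts[metric] for metric in totals}

# Two crowd assessments of the same digital object (made-up example data).
crowd = [
    {"persistent_identifier": 1.0, "machine_readable_metadata": 0.5},
    {"persistent_identifier": 1.0, "machine_readable_metadata": 1.0},
]
print(insignia_scores(crowd))
# {'persistent_identifier': 1.0, 'machine_readable_metadata': 0.75}
```

Averaging per metric rather than per object is what lets each square of the insignia be shaded independently, so a reader can see at a glance which FAIR aspects a digital object satisfies.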
Frank raises a couple of different aspects; some have been commented on by others. Let me try to find my way. I should add here that I have some knowledge of what is being done at KIT, and we have had some collaborations, also on the questions Frank is raising.
So thanks for your great input, which we need to consider in the report.
Thanks for the FAIRShake reference, @CaroleGoble. I've found a link to a short video on YouTube, but if you have other literature references we should follow up, that would be great.
I just looked at the FAIRShake video, and it is indeed pretty cool. If I understood it correctly, it is ultimately the crowd's view on the FAIRness of digital objects. This makes it complementary to approaches such as DSA/WDS, where people do self-assessment based on rule sets.
You may already know all of this, but a brief overview of what I have learned from our survey of almost 800 scientists may still help answer your questions on how to make FAIR data real.
To what extent are the FAIR principles alone sufficient to reduce fragmentation and increase interoperability? The principles have great potential to influence the minds of stakeholders towards more efficient data sharing and reuse, but perhaps additional measures and more specifics are needed to guide implementation.
So the question must be answered: "What is the data that needs to be FAIR?" 45% of our researchers said they could benefit "much" or "very much" from some kind of "negative data", but I do not see that coming automatically from being FAIR alone. The disciplines know in principle what could be needed, but are not able to change their "credit system". I think there should be funding offers to disciplines to think about their data and work on such things as a whole.
What are the necessary components of a FAIR data ecosystem in terms of technologies, standards, legal frameworks, skills, etc.?
What existing components can be built on, and are there promising examples of joined-up architectures and interoperability around research data such as those based on Digital Objects?
Do we need a layered approach to tackle the complexity of building a global data infrastructure ecosystem, and if so, what are the layers?
Which global initiatives are working on relevant architectural frameworks to put FAIR into practice?
So please make clear among all players what exactly universities, project funders, and disciplinary or EU solutions should offer their scientists (e.g. as a condition for participation/cooperation).
A large proportion of data-driven research has been shown not to be reproducible. Do we need to turn to automated processing guided by documented workflows, and if so, how should this be organised?
What kind of roles and professions are required to put the FAIR principles into place?
Let us try the analogy with industry: these professorships for "data replication science" are comparable to a specialised "quality control", but for the good "data". This differs from the current approach, where the production lines somehow check each other. The new people would focus on checking only the data. They could give valuable feedback and have real impact on the data science field. Science is an industry with high-quality and sensitive products.