05. How do you document your trust in an open source solution to satisfy a third-party inquiry? #5
-
riskmetric score: https://pharmar.github.io/riskmetric/
valtools: https://phuse-org.github.io/valtools/
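For anyone who hasn't tried it, here is a minimal sketch of the riskmetric workflow, following the pkg_ref() → pkg_assess() → pkg_score() pipeline from its documentation. It assumes riskmetric is installed and the packages are available locally or on CRAN:

```r
library(riskmetric)
library(dplyr)  # for %>%

# Reference the packages, run the metric assessments, then score them.
pkg_ref(c("dplyr", "ggplot2")) %>%
  pkg_assess() %>%
  pkg_score()
```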
-
Do people have any webinars/presentations to link on this one?
-
You at least need the following, with appropriately tracked review/sign-off as per internal QA policies:

This concerns the review of the language and packages. Internal processes should govern any additional documentation required to qualify/validate the environment. You could potentially bypass human engagement for some packages if they meet some risk threshold.

The R Validation Hub invited several companies to share their experiences via case studies. Several were using the riskmetric score either as a way to quickly pass through the obvious (packages like dplyr, which score very highly) or as a guide within an overall assessment. I think it's theoretically fine to rely on a score for packages that score 'low risk' (so long as 'low risk' is clearly defined), although the riskmetric package itself still has a lot of open issues to address (91 as I look today), including some important metrics that are still to be added.

So, as things stand, my advice would be to use riskmetric to gather information, but don't rely too heavily on the score itself; let a human review the data it gathers. This is where the Risk Assessment App comes in. Or else get involved in the development of the package and improve it!
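To make the threshold idea concrete, here is a rough sketch (my own illustration, not an established process) of score-based triage: packages at or above an agreed 'low risk' cutoff pass automatically, and everything else is flagged for human review. The cutoff, package names, and scores below are all placeholders; in practice the scores would come from riskmetric, whose output columns may differ.

```r
library(dplyr)

# Placeholder scores; in practice these would come from
# pkg_ref() %>% pkg_assess() %>% pkg_score().
scores <- tibble::tribble(
  ~package,  ~score,
  "dplyr",    0.93,
  "ggplot2",  0.91,
  "somePkg",  0.41   # hypothetical low-scoring package
)

# Placeholder cutoff; 'low risk' must be clearly defined by internal policy.
low_risk_cutoff <- 0.8

triaged <- scores %>%
  mutate(disposition = if_else(score >= low_risk_cutoff,
                               "auto-accept",
                               "human review"))
print(triaged)
```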
-
Comments from the PhUSE EU Connect OSTCDA workshop (not captured in this export).
-
For consideration... (my own views) Different companies are likely to apply different weightings to the package attributes to create their own "risk scores" for packages. This may make it difficult to contribute to a regulatory R repository with a definitive score.

But maybe we could work together to gather the attributes of packages and provide these as metadata for packages of interest. For example, code coverage, time since the last release, and package license type are attributes that can then be "scored". Instead of each party pulling this information to create our own scores, if it were readily available as metadata in a repository, it would be easier to simply apply our own scoring rules, provided we can agree on a common format and source for that metadata.
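As a strawman for that idea: if a repository published the raw attributes in an agreed metadata format, each company could apply its own weights locally. Everything below (field names, weights, example values) is hypothetical.

```r
library(dplyr)

# Hypothetical shared metadata, as a repository might publish it in an
# agreed common format (all field names and values are made up).
pkg_metadata <- tibble::tribble(
  ~package,  ~code_coverage, ~days_since_release, ~license,
  "dplyr",    0.89,           45,                 "MIT",
  "somePkg",  0.20,           1200,               "GPL-3"
)

# Company-specific scoring rules applied to the shared attributes.
weights    <- c(coverage = 0.5, recency = 0.3, license = 0.2)
permissive <- c("MIT", "Apache-2.0", "BSD-3-Clause")

scored <- pkg_metadata %>%
  mutate(
    recency = pmax(0, 1 - days_since_release / 365),   # newer = higher
    lic     = as.numeric(license %in% permissive),
    score   = weights["coverage"] * code_coverage +
              weights["recency"]  * recency +
              weights["license"]  * lic
  )
print(scored)
```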
-
E.g., what if an organization is GCP-audited by a regulatory body on the integrity of its data handling?