🙋 Feature Request
An option (toggle) to correct observed case counts for the widespread testing problems in many countries. It could use either deaths at t+18 days divided by the IFR (~1%), or test-positivity data where available.
🔦 Context
The tool is very important for allowing the public to generate scenarios, especially where locally available modelling capacity is limited. The exponential nature of the problem makes it hard to create scenarios that aren't widely divergent based on minute differences in assumptions. Tuning the scenario to the actual local evolution of the epidemic is vital, and that's why I proposed adding Google Mobility data as a baseline mitigation intervention in another suggestion.
Fitting the tool's predicted case numbers to actual numbers is made harder by the significant measurement error in different countries' case count data, which is the only benchmark available within the tool to fit the model. (The analyst might use their knowledge of ICU usage, testing limitations and underascertainment, and other indicators, but these can't be imported into the tool and must therefore be applied "by eye".) One might try to correct the model predictions by adding finely tuned interventions given what is known about test positivity, recorded deaths and other indicators, but perhaps the tool could help in this regard without a significant increase in complexity.
😯 Describe the feature
A toggle on the results pane would swap observed case counts in the main chart for corrected case counts.
It could use recorded deaths at t+18 days divided by the IFR (~1%) as a simple fix, or only correct the case number upwards when that fix yields a larger number.
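The proposed deaths-based correction could be sketched roughly as below. This is a minimal illustration only: the pandas representation, the function name, and the 18-day lag / 1% IFR constants are assumptions taken from this proposal, not internals of the tool.

```python
import pandas as pd

IFR = 0.01       # assumed infection fatality ratio (~1%)
LAG_DAYS = 18    # assumed case-to-death delay

def corrected_cases(daily_cases: pd.Series, daily_deaths: pd.Series) -> pd.Series:
    """Estimate true daily cases as deaths at t+LAG_DAYS divided by IFR,
    keeping the observed count whenever it is already the larger number
    (or where no future deaths are available yet)."""
    implied = daily_deaths.shift(-LAG_DAYS) / IFR
    # max(skipna) falls back to the observed count where `implied` is NaN
    return pd.concat([daily_cases, implied], axis=1).max(axis=1)
```

For example, with 100 observed cases per day and 5 deaths recorded 18 days later, the corrected count on that day would be 5 / 0.01 = 500.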
ivan-aksamentov added the s:algo and s:ui labels and removed the needs triage label on Jun 1, 2020
@osnofas thanks for these notes. We generally don't fit case counts, for the exact same reasons you mention. Instead, we try to fit deaths. This usually results in case counts that are between 5- and 20-fold higher than confirmed cases, as in this example for Germany:
You can see that the model reproduces the observed deaths very well, but the number of cases over the last 3 days (yellow dots) is substantially lower than the model's equivalent (yellow line). Does that make sense? I am a little hesitant to plot "corrected case counts" as they seem very susceptible to interpretation issues.
I'll try focusing on fitting deaths, thanks for the suggestion (I've been sort of trying to fit both). Perhaps a cue here ("try fitting deaths") could be of help:
This way, however, you lose >2 weeks of data (e.g. if there is an effect of the current protests in the US, it'll show up much earlier in cases than in deaths). 🤷‍♂️
We should probably add a guide on how to adjust/fit manually. And you are right, you can't fit both. My strategy is to fit deaths for absolute numbers and fit cases for slopes...
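That strategy could be sketched as follows, assuming daily case counts and cumulative deaths are available as arrays. The function name, the log-linear regression for the slope, and the IFR constant are illustrative assumptions, not what the tool actually does.

```python
import numpy as np

def fit_level_and_slope(daily_cases, cum_deaths, ifr=0.01):
    """Estimate the epidemic's absolute level from deaths and its growth
    rate from cases: level = cumulative deaths / IFR, slope = log-linear
    regression coefficient of daily case counts over time."""
    t = np.arange(len(daily_cases))
    # slope of log-cases: robust to constant underascertainment,
    # since a constant reporting fraction cancels out of the slope
    slope, _ = np.polyfit(t, np.log(daily_cases), 1)
    # absolute level anchored to deaths, which are less underreported
    level = cum_deaths[-1] / ifr
    return level, slope
```

The point of the split is that a constant underascertainment factor shifts log-cases up or down but leaves the slope intact, while deaths pin down the overall scale.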
Another possibility is to correct observed case numbers by test positivity. This describes some alternatives: http://freerangestats.info/blog/2020/05/09/covid-population-incidence
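As a hedged illustration only (not the method from the linked post), one simple family of adjustments scales observed cases by a power of test positivity relative to some baseline positivity. The exponent and baseline below are made-up parameters:

```python
def positivity_adjusted(cases: float, tests: float,
                        k: float = 0.5, baseline: float = 0.02) -> float:
    """Scale observed cases upward when test positivity is high relative
    to a baseline: the higher the positivity, the more cases are assumed
    to be missed. k and baseline are illustrative, not fitted values."""
    positivity = cases / tests
    return cases * (positivity / baseline) ** k
```

E.g. 100 cases found from 1,000 tests (10% positivity) against a 2% baseline would be scaled by (0.10 / 0.02)^0.5 ≈ 2.24.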
Related

- IFR estimates: https://www.medrxiv.org/content/10.1101/2020.05.03.20089854v2
- Testing data: https://github.com/owid/covid-19-data/tree/master/public/data
- Possible way to correct observed case counts by test positivity: http://freerangestats.info/blog/2020/05/09/covid-population-incidence