Commit

amyheather committed Oct 18, 2024
2 parents 432e186 + d379097 commit 2d1e2a7
Showing 9 changed files with 87 additions and 5 deletions.
28 changes: 24 additions & 4 deletions logbook/posts/2024_10_15/index.qmd
@@ -8,7 +8,7 @@ bibliography: ../../../quarto_site/references.bib

::: {.callout-note}

- X. Total time used: Xh Xm (X%)
+ Review base case results with 100 million and run sensitivity analysis with 50 million. Total time used: 18h 56m (47.3%)

:::

@@ -21,8 +21,28 @@ X. Total time used: Xh Xm (X%)

As these were run at the same time in separate terminals, the total was 20 hours 40 minutes.

- **Model results:** Ran `Process_Model_Results.Rmd`. As expected, the result still seems quite different for Appendix 6. For Table 3 and Figure 3, the results generally seem closer to the paper, and I would argue that these have been reproduced. I think this is a tricky case, as the ICER and INMB are evidently incredibly sensitive, and so with very similar looking QALYs and costs, they vary hugely. However, I am satisfied that we have now got close enough results to consider these reproduced.
+ **Model results:** Ran `Process_Model_Results.Rmd`. As expected, the result still seems quite different for Appendix 6, due to the mismatch in SABA. For Table 3 and Figure 3, the results generally seem closer to the paper, and I would argue that these have been reproduced. I think this is a tricky case, as the ICER and INMB are evidently incredibly sensitive, and so with very similar looking QALYs and costs, they vary hugely. However, I am satisfied that we have now got close enough results to consider these reproduced.

- ## 09.54-X: Running sensitivity analysis with 100 million agents
+ ## 09.54-10.06: Running sensitivity analysis with 100 million agents

- Having found I needed to run this with 100 million agents, and given the timings were not exponentially larger (14 hours vs 20 hours), I set the sensitivity analyses to run with 100 million agents. I initially tried running all at once.
+ Having found I needed to run this with 100 million agents, and given the timings were not exponentially larger (14 hours vs 20 hours), I set the sensitivity analyses to run with 100 million agents. I initially tried running all at once, and it seemed to manage fine, so I anticipate I could have run everything in parallel (i.e. including the base case, which I had run separately as a "test run") (using 11667, 10117 available, and that's including any background processes).

**Note: as recorded in the next logbook entry, it turns out this was accidentally run with 50 million.**
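The parallel launch described above can be sketched in shell, with one background job per run. This is a hypothetical illustration: the scenario names and the `echo` placeholder are mine, not the repository's actual scripts.

```shell
# Hypothetical sketch of launching the base case and sensitivity
# analyses as parallel background jobs from a single terminal.
# Scenario names and the echo placeholder are illustrative only;
# real runs would invoke the repository's scripts (e.g. via Rscript).
for scenario in base_case sens_1 sens_2 sens_3; do
  echo "launching ${scenario}" &  # '&' sends the job to the background
done
wait  # block until every background job finishes
echo "all runs complete"
```

With jobs backgrounded this way, wall-clock time is roughly that of the slowest run rather than the sum of all runs, which matches the observation that two simultaneous terminal runs took 20 hours 40 minutes in total.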

## Timings

```{python}
import sys
sys.path.append('../')
from timings import calculate_times
# Minutes used prior to today
used_to_date = 1111
# Times from today
times = [
('09.40', '09.53'),
('09.54', '10.06')]
calculate_times(used_to_date, times)
```
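The `calculate_times` helper is imported from the local `timings` module, which is not shown in this diff. A minimal sketch of what it plausibly computes is below, assuming a 40-hour (2400-minute) budget inferred from the reported percentages (e.g. 1136 minutes reported as 47.3%); the real implementation may differ.

```python
from typing import List, Tuple

# Assumed total time budget: 40 hours (2400 minutes). This is inferred
# from the logbook's reported percentages, not taken from the actual
# timings module.
TOTAL_BUDGET_MIN = 2400


def calculate_times(used_to_date: int, times: List[Tuple[str, str]]) -> str:
    """Add today's 'HH.MM' intervals to the running total and report
    cumulative use as hours/minutes and a percentage of the budget."""
    def to_minutes(stamp: str) -> int:
        hours, minutes = stamp.split('.')
        return int(hours) * 60 + int(minutes)

    # Sum the duration of each (start, end) interval worked today
    today = sum(to_minutes(end) - to_minutes(start) for start, end in times)
    total = used_to_date + today
    pct = round(total / TOTAL_BUDGET_MIN * 100, 1)
    return f'Total time used: {total // 60}h {total % 60}m ({pct}%)'
```

Under these assumptions, `calculate_times(1111, [('09.40', '09.53'), ('09.54', '10.06')])` yields "Total time used: 18h 56m (47.3%)", matching the callout at the top of this entry.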
Binary file added logbook/posts/2024_10_16/apx7_100t.png
Binary file added logbook/posts/2024_10_16/apx7_50m.png
Binary file added logbook/posts/2024_10_16/fig4_100t.png
Binary file added logbook/posts/2024_10_16/fig4_50m.png
62 changes: 62 additions & 0 deletions logbook/posts/2024_10_16/index.qmd
@@ -0,0 +1,62 @@
---
title: "Day 15"
author: "Amy Heather"
date: "2024-10-16"
categories: [reproduction]
bibliography: ../../../quarto_site/references.bib
---

::: {.callout-note}

Review sensitivity analysis from 50 million and set to run with 100 million. Total time used: 19h 6m (47.8%)

:::

## 09.28-09.31, 09.37-09.44: Review results from 50 million

Realised a mistake: I had run the sensitivity analysis with 50 million agents rather than the intended 100 million.

::: {.callout-tip}
## Reflection

With high run times like this, a simple mistake means another day waiting for results. It is probably worth reflecting on the run time of the code itself in the article, and how, in cases like this, the reproduction spans days waiting on different runs.

:::

Alas, I will review the results from 50 million, and set it to run with 100 million.

Ran `Process_Sensitivity_Analysis.Rmd`. The results are all still quite far off.

**From 50 million:**

![](fig4_50m.png)

**For reference, from 100 thousand:**

![](fig4_100t.png)

**From 50 million:**

![](apx7_50m.png)

**For reference, from 100 thousand:**

![](apx7_100t.png)

## Timings

```{python}
import sys
sys.path.append('../')
from timings import calculate_times
# Minutes used prior to today
used_to_date = 1136
# Times from today
times = [
('09.28', '09.31'),
('09.37', '09.44')]
calculate_times(used_to_date, times)
```
Binary file modified reproduction/outputs/appendix7-1.png
Binary file modified reproduction/outputs/fig4-1.png
2 changes: 1 addition & 1 deletion reproduction/scripts/Process_Sensitivity_Analysis.md
@@ -1,6 +1,6 @@
Process Sensitivity Analysis Results
================
- 16 September, 2024
+ 16 October, 2024

This file processes results from the sensitivity analysis.

