diff --git a/_sources/posts/2023/2023_08_21_vara_week_12_13.rst.txt b/_sources/posts/2023/2023_08_21_vara_week_12_13.rst.txt
index ee8dee4a..b79f5798 100644
--- a/_sources/posts/2023/2023_08_21_vara_week_12_13.rst.txt
+++ b/_sources/posts/2023/2023_08_21_vara_week_12_13.rst.txt
@@ -15,7 +15,7 @@ Monai's VQVAE results on T1-weighted NFBS dataset, 125 samples, for batch size o
 2. dipy's ``resize`` & scipy's ``affine_transform`` scale the volume to (128,128,128,1) shape & (1,1,1) voxel size
 3. MinMax normalization to limit the range of intensities to (0,1)
 
-Using existing training parameters, carried out two experiments, one on CC359 alone & another on both datasets combined. Additionally, I made a slight modification in the loss definition by attributing different weights of 0.5 & 1 to background & foreground pixels compared to equal weights from previous experiments. This resulted in faster convergence as shown in the red, blue & purple lines in the combined plot shown below. (Naming convention for each each training curve is ``B<batch_size>-<dataset_used>``, where CC=CC359, NFBS=NFBS, both=[NFBS,CC359])
+Using existing training parameters, carried out two experiments, one on CC359 alone & another on both datasets combined. Additionally, I made a slight modification in the loss definition by attributing different weights of 0.5 & 1 to background & foreground pixels compared to equal weights from previous experiments. This resulted in faster convergence as shown in the red, blue & purple lines in the combined plot shown below. (Naming convention for each training curve is ``B<batch_size>-<dataset_used>``, where CC=CC359, NFBS=NFBS, both=[NFBS,CC359])
 
 .. image:: /_static/images/vqvae3d-monai-training-plots.png
    :alt: Combined trainings plots for all experiments
@@ -27,13 +27,13 @@ Inference results on the best performing model, B12-both, is shown below, where
    :alt: VQVAE-Monai-B12-both reconstructions & originals showing equally spaced 5 slices for 2 different test samples
    :width: 800
 
-Here's a similar visualization of the inference on the next best performing model, B12-CC. 
+Here's a similar visualization of the inference on the next best performing model, B12-CC.
 
 .. image:: /_static/images/vqvae-monai-B12-CC.png
    :alt: VQVAE-Monai-B12-CC reconstructions & originals showing equally spaced 5 slices for 2 different test samples
    :width: 800
 
-This shows that our training not only converged quickly but also improved visually. Here's a comparison of our current best performing model i.e., VQVAE-Monai-B12-both & the previous one on NFBS i.e., VQVAE-Monai-B5-NFBS. The test reconstruction loss is 0.0013 & 0.0015 respectively. 
+This shows that our training not only converged quickly but also improved visually. Here's a comparison of our current best performing model i.e., VQVAE-Monai-B12-both & the previous one on NFBS i.e., VQVAE-Monai-B5-NFBS. The test reconstruction loss is 0.0013 & 0.0015 respectively.
 
 .. image:: /_static/images/vqvae-reconstructions-comparison.png
    :alt: VQVAE reconstruction comparison for B12-both & B5-NFBS
diff --git a/posts/2023/2023_08_21_vara_week_12_13.html b/posts/2023/2023_08_21_vara_week_12_13.html
index 54095a68..6db42961 100644
--- a/posts/2023/2023_08_21_vara_week_12_13.html
+++ b/posts/2023/2023_08_21_vara_week_12_13.html
@@ -757,7 +757,7 @@

What I did this week

MinMax normalization to limit the range of intensities to (0,1)

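A minimal sketch of the preprocessing described above, assuming a NumPy volume and using ``scipy.ndimage.affine_transform`` as a stand-in for dipy's ``resize`` step; the exact resizing call, interpolation order and voxel-size handling in the original pipeline may differ::

    import numpy as np
    from scipy.ndimage import affine_transform

    def preprocess(volume, target_shape=(128, 128, 128)):
        # Rescale to the target shape; the matrix maps output coordinates
        # back to input coordinates (stand-in for dipy's resize).
        scale = np.array(volume.shape, dtype=float) / np.array(target_shape)
        resized = affine_transform(volume, np.diag(scale), output_shape=target_shape)

        # MinMax normalization to limit the range of intensities to (0, 1).
        resized = (resized - resized.min()) / (resized.max() - resized.min())

        # Add a trailing channel axis -> (128, 128, 128, 1).
        return resized[..., np.newaxis]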
-Using existing training parameters, carried out two experiments, one on CC359 alone & another on both datasets combined. Additionally, I made a slight modification in the loss definition by attributing different weights of 0.5 & 1 to background & foreground pixels compared to equal weights from previous experiments. This resulted in faster convergence as shown in the red, blue & purple lines in the combined plot shown below. (Naming convention for each each training curve is B<batch_size>-<dataset_used>, where CC=CC359, NFBS=NFBS, both=[NFBS,CC359])
+Using existing training parameters, carried out two experiments, one on CC359 alone & another on both datasets combined. Additionally, I made a slight modification in the loss definition by attributing different weights of 0.5 & 1 to background & foreground pixels compared to equal weights from previous experiments. This resulted in faster convergence as shown in the red, blue & purple lines in the combined plot shown below. (Naming convention for each training curve is B<batch_size>-<dataset_used>, where CC=CC359, NFBS=NFBS, both=[NFBS,CC359])

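A hypothetical sketch of the weighted reconstruction term described above, assuming a PyTorch training loop, an MSE-style loss, and zero-intensity voxels (after MinMax normalization) treated as background; the actual implementation may define the foreground mask and base loss differently::

    import torch

    def weighted_recon_loss(recon, target, bg_weight=0.5, fg_weight=1.0):
        # Background voxels get weight 0.5 and foreground voxels weight 1,
        # instead of the equal weights used in earlier experiments.
        weights = torch.full_like(target, bg_weight)
        weights[target > 0] = fg_weight
        return (weights * (recon - target) ** 2).mean()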
Combined trainings plots for all experiments

Inference results on the best performing model, B12-both, is shown below, where every two rows correspond to reconstructions & original volumes respectively, with equally spaced slices in each row. These slices visualised are anterior-posterior topdown & ventral-dorsal LR.

VQVAE-Monai-B12-both reconstructions & originals showing equally spaced 5 slices for 2 different test samples
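A small matplotlib sketch of how such a two-row figure can be assembled, assuming ``recon`` and ``original`` are (128, 128, 128) NumPy arrays; the slice axis and orientation handling are simplified compared to the actual plotting code::

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_recon_vs_original(recon, original, n_slices=5, axis=2):
        # Pick equally spaced slice indices along the chosen axis.
        idx = np.linspace(0, recon.shape[axis] - 1, n_slices, dtype=int)
        fig, axes = plt.subplots(2, n_slices, figsize=(3 * n_slices, 6))
        for col, i in enumerate(idx):
            # Top row: reconstruction, bottom row: original volume.
            axes[0, col].imshow(np.take(recon, i, axis=axis).T, cmap="gray")
            axes[1, col].imshow(np.take(original, i, axis=axis).T, cmap="gray")
            axes[0, col].set_axis_off()
            axes[1, col].set_axis_off()
        axes[0, 0].set_title("reconstruction")
        axes[1, 0].set_title("original")
        fig.tight_layout()
        return fig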