Commit from GitHub Actions (Build Notebooks)
Ben-Salmon committed Aug 19, 2024
1 parent 071043a commit dd20b66
Showing 6 changed files with 72 additions and 137 deletions.
77 changes: 49 additions & 28 deletions 01_CARE/care_exercise.ipynb
@@ -58,7 +58,6 @@
"%load_ext tensorboard\n",
"\n",
"\n",
"from careamics_portfolio import PortfolioManager\n",
"import tifffile\n",
"import numpy as np\n",
"from pathlib import Path\n",
@@ -93,31 +92,6 @@
"Since the image pairs were synthetically created in this example, they are already aligned perfectly. Note that when working with real paired acquisitions, the low and high SNR images are not pixel-perfect aligned so they would often need to be co-registered before training a CARE model."
]
},
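The notebooks do not include registration code, but as a rough illustration of the co-registration step mentioned above, here is a minimal sketch. It assumes the misalignment is a pure translation and that scikit-image and scipy are available; the helper name `coregister_pair` is an assumption, not part of the exercise.

```python
# Minimal co-registration sketch (illustrative only, not part of the notebook).
# Assumes a purely translational offset between the paired acquisitions.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def coregister_pair(low_snr: np.ndarray, high_snr: np.ndarray) -> np.ndarray:
    """Return the low SNR image shifted to align with the high SNR reference."""
    # Estimate the sub-pixel translation between the two acquisitions.
    estimated_shift, error, _ = phase_cross_correlation(
        high_snr, low_snr, upsample_factor=10
    )
    # Apply that translation to the low SNR image.
    return nd_shift(low_snr.astype(np.float32), estimated_shift)
```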
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the data\n",
"\n",
"To download the data, we use the careamics-portfolio package. The package provides a collection of microscopy datasets and convenience functions for downloading them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Download the data\n",
"portfolio = PortfolioManager()\n",
"print(portfolio.denoising)\n",
"\n",
"root_path = Path(\"./data\")\n",
"files = portfolio.denoising.CARE_U2OS.download(root_path)\n",
"print(f\"Number of files downloaded: {len(files)}\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -133,6 +107,7 @@
"outputs": [],
"source": [
"# Define the paths\n",
"root_path = Path(\"./../data\")\n",
"root_path = root_path / \"denoising-CARE_U2OS.unzip\" / \"data\" / \"U2OS\"\n",
"assert root_path.exists(), f\"Path {root_path} does not exist\"\n",
"\n",
@@ -455,9 +430,26 @@
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-success\"><h1>Checkpoint 1: Data</h1>\n",
"\n",
"In this section, we prepared paired training data. \n",
"The steps were:\n",
"1) Loading the images.\n",
"2) Cropping them into patches.\n",
"3) Checking the patches visually.\n",
"4) Creating an instance of a pytorch dataset and dataloader.\n",
"\n",
"You'll see a similar preparation procedure followed for most deep learning vision tasks.\n",
"\n",
"Next, we'll use this data to train a denoising model.\n",
"</div>\n",
"\n",
"<hr style=\"height:2px;\">\n",
"<hr style=\"height:2px;\">\n"
]
},
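As a rough illustration of step 4 in the checkpoint above, here is a minimal paired-patch dataset sketch in plain PyTorch. It is not the notebook's implementation; the names `PairedPatchDataset`, `noisy_patches` and `clean_patches` are assumptions.

```python
# Illustrative sketch only: a paired noisy/clean patch Dataset in plain PyTorch.
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class PairedPatchDataset(Dataset):
    def __init__(self, noisy_patches: np.ndarray, clean_patches: np.ndarray):
        # Both arrays are expected to be stacks of patches with shape (N, H, W).
        self.noisy = torch.from_numpy(noisy_patches).float().unsqueeze(1)
        self.clean = torch.from_numpy(clean_patches).float().unsqueeze(1)

    def __len__(self):
        return len(self.noisy)

    def __getitem__(self, idx):
        # Return one (noisy, clean) pair, each of shape (1, H, W).
        return self.noisy[idx], self.clean[idx]

# Example usage: wrap the dataset in a DataLoader for mini-batch training.
# train_loader = DataLoader(PairedPatchDataset(noisy, clean), batch_size=16, shuffle=True)
```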
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Part 2: Training the model\n",
"\n",
@@ -682,9 +674,25 @@
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-success\"><h1>Checkpoint 2: Training</h1>\n",
"\n",
"In this section, we created and trained a UNet for denoising.\n",
"We:\n",
"1) Instantiated the model with random weights.\n",
"2) Chose a loss function to compare the output image to the ground truth clean image.\n",
"3) Chose an optimizer to minimize that loss function.\n",
"4) Trained the model with this optimizer.\n",
"5) Examined the training and validation loss curves to see how well our model trained.\n",
"\n",
"Next, we'll load a test set of noisy images and see how well our model denoises them.\n",
"</div>\n",
"\n",
"<hr style=\"height:2px;\">\n",
"<hr style=\"height:2px;\">\n"
]
},
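As a rough illustration of steps 2-4 in the checkpoint above, the sketch below shows a generic supervised denoising loop with an MSE loss and an Adam optimizer. It is not the notebook's exact training code; `model`, `train_loader` and `n_epochs` are assumed to exist.

```python
# Illustrative training-loop sketch only; the notebook's actual code may differ.
import torch

def train_denoiser(model, train_loader, n_epochs=10, lr=1e-4, device="cuda"):
    model = model.to(device)
    loss_fn = torch.nn.MSELoss()                           # compare output to clean target
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(n_epochs):
        running = 0.0
        for noisy, clean in train_loader:
            noisy, clean = noisy.to(device), clean.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(noisy), clean)            # forward pass and loss
            loss.backward()                                # backpropagation
            optimizer.step()                               # weight update
            running += loss.item()
        print(f"epoch {epoch}: mean train loss {running / len(train_loader):.4f}")
```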
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Part 3: Predicting on the test dataset\n"
]
@@ -801,8 +809,21 @@
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-success\"><h1>Checkpoint 3: Predicting</h1>\n",
"\n",
"In this section, we evaluated the performance of our denoiser.\n",
"We:\n",
"1) Created a CAREDataset and Dataloader for a prediction loop.\n",
"2) Ran a prediction loop on the test data.\n",
"3) Examined the outputs.\n",
"\n",
"This notebook has shown how matched pairs of noisy and clean images can train a UNet to denoise, but what if we don't have any clean images? In the next notebook, we'll try Noise2Void, a method for training a UNet to denoise with only noisy images.\n",
"</div>"
]
},
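As a rough illustration of the prediction step summarised above, here is a generic no-gradient inference loop. It is not the notebook's exact code; `model` and `test_loader` are assumed to exist.

```python
# Illustrative prediction-loop sketch only; the notebook's actual code may differ.
import torch

@torch.no_grad()
def predict(model, test_loader, device="cuda"):
    model = model.to(device).eval()
    outputs = []
    for noisy in test_loader:
        noisy = noisy.to(device)
        outputs.append(model(noisy).cpu())  # collect denoised predictions on the CPU
    return torch.cat(outputs, dim=0)
```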
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
76 changes: 13 additions & 63 deletions 02_Noise2Void/n2v_exercise.ipynb
@@ -127,7 +127,6 @@
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import tifffile\n",
"from careamics_portfolio import PortfolioManager\n",
"\n",
"from careamics import CAREamist\n",
"from careamics.config import (\n",
@@ -265,35 +264,19 @@
"use a scanning electron microscopy image (SEM)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For this we first download the relevant dataset from the CAREamics portfolio library"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Explore portfolio\n",
"portfolio = PortfolioManager()\n",
"print(portfolio.denoising)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Download files # TODO File should be reused from previous exercise\n",
"root_path = Path(\"./data\")\n",
"files = portfolio.denoising.N2V_SEM.download(root_path)\n",
"print(f\"List of downloaded files: {files}\")"
"# Define the paths\n",
"root_path = Path(\"./../data\")\n",
"root_path = root_path / \"denoising-N2V_SEM.unzip\"\n",
"assert root_path.exists(), f\"Path {root_path} does not exist\"\n",
"\n",
"train_images_path = root_path / \"train.tif\"\n",
"validation_images_path = root_path / \"validation.tif\""
]
},
{
@@ -317,7 +300,7 @@
"outputs": [],
"source": [
"# Load images\n",
"train_image = tifffile.imread(files[0])\n",
"train_image = tifffile.imread(train_images_path)\n",
"print(f\"Train image shape: {train_image.shape}\")\n",
"plt.imshow(train_image, cmap=\"gray\")"
]
@@ -342,36 +325,11 @@
},
"outputs": [],
"source": [
"val_image = tifffile.imread(files[1])\n",
"val_image = tifffile.imread(validation_images_path)\n",
"print(f\"Validation image shape: {val_image.shape}\")\n",
"plt.imshow(val_image, cmap=\"gray\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"# Set paths\n",
"\n",
"data_path = Path(root_path / \"n2v_sem\")\n",
"train_path = data_path / \"train\"\n",
"val_path = data_path / \"val\"\n",
"\n",
"train_path.mkdir(parents=True, exist_ok=True)\n",
"val_path.mkdir(parents=True, exist_ok=True)\n",
"\n",
"shutil.copy(root_path / files[0], train_path / \"train_image.tif\")\n",
"shutil.copy(root_path / files[1], val_path / \"val_image.tif\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -473,7 +431,7 @@
"metadata": {},
"outputs": [],
"source": [
"careamist.train(train_source=train_path, val_source=val_path)"
"careamist.train(train_source=train_images_path, val_source=validation_images_path)"
]
},
{
@@ -534,7 +492,7 @@
},
"outputs": [],
"source": [
"preds = careamist.predict(source=train_path, tile_size=(256, 256))[0]"
"preds = careamist.predict(source=train_images_path, tile_size=(256, 256))[0]"
]
},
{
@@ -666,7 +624,7 @@
"other_careamist = CAREamist(source=\"checkpoints/last.ckpt\")\n",
"\n",
"# And predict\n",
"new_preds = other_careamist.predict(source=train_path, tile_size=(256, 256))[0]\n",
"new_preds = other_careamist.predict(source=train_images_path, tile_size=(256, 256))[0]\n",
"\n",
"# Show the full image\n",
"fig, ax = plt.subplots(1, 2, figsize=(10, 5))\n",
@@ -738,15 +696,7 @@
},
"outputs": [],
"source": [
"import os\n",
"import urllib\n",
"# Download the data\n",
"root_dir = \"./data\"\n",
"if not os.path.exists(root_dir):\n",
" os.mkdir(root_dir)\n",
"mito_path = \"./data/mito-confocal-lowsnr.tif\"\n",
"if not os.path.exists(mito_path):\n",
" urllib.request.urlretrieve(\"https://s3.ap-northeast-1.wasabisys.com/gigadb-datasets/live/pub/10.5524/100001_101000/100888/03-mito-confocal/mito-confocal-lowsnr.tif\", mito_path)\n",
"mito_path = \"./../data/mito-confocal-lowsnr.tif\"\n",
"mito_image = tifffile.imread(mito_path)"
]
},
6 changes: 3 additions & 3 deletions 03_COSDD/bonus-exercise.ipynb
@@ -43,7 +43,7 @@
"metadata": {},
"outputs": [],
"source": [
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")"
"assert torch.cuda.is_available()"
]
},
{
@@ -90,7 +90,7 @@
"outputs": [],
"source": [
"# load the data\n",
"lowsnr_path = \"./data/mito-confocal-lowsnr.tif\"\n",
"lowsnr_path = \"./../data/mito-confocal-lowsnr.tif\"\n",
"low_snr = tifffile.imread(lowsnr_path)\n",
"low_snr = low_snr[:, np.newaxis]\n",
"low_snr = torch.from_numpy(low_snr)\n",
@@ -121,7 +121,7 @@
"metadata": {},
"outputs": [],
"source": [
"inp_image = low_snr[:1, :, :512, :512].to(device)\n",
"inp_image = low_snr[:1, :, :512, :512].cuda()\n",
"reconstructions = hub.reconstruct(inp_image)\n",
"denoised = reconstructions[\"s_hat\"].cpu()\n",
"noisy = reconstructions[\"x_hat\"].cpu()"
18 changes: 2 additions & 16 deletions 03_COSDD/exercise.ipynb
@@ -86,7 +86,7 @@
"\n",
"### Task 1.1.\n",
"\n",
"In the next cell, the low signal-to-noise ratio data that we will be using in this exercise will be downloaded and storedis stored as a tiff file at `./data/mito-confocal-lowsnr.tif`. \n",
"The low signal-to-noise ratio data that we will be using in this exercise has been downloaded and stored as a tiff file at `./../data/mito-confocal-lowsnr.tif`. \n",
"\n",
"In the following cell, you'll load it and get it into a format suitable for training the denoiser.\n",
"\n",
@@ -97,20 +97,6 @@
"</div>"
]
},
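As a hint for Task 1.1, here is a minimal sketch of one possible way to load the data, mirroring the loading code used in the bonus notebook of this commit; it is illustrative only and not necessarily the intended solution.

```python
# One possible answer to Task 1.1 (illustrative only): read the tiff stack and
# add a channel axis so the tensor has shape (N, 1, H, W), ready for training.
import numpy as np
import tifffile
import torch

low_snr = tifffile.imread("./../data/mito-confocal-lowsnr.tif")
low_snr = low_snr[:, np.newaxis].astype(np.float32)   # (N, H, W) -> (N, 1, H, W)
low_snr = torch.from_numpy(low_snr)
print(f"Data shape: {low_snr.shape}")
```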
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"root_dir = \"./data\"\n",
"if not os.path.exists(root_dir):\n",
" os.mkdir(root_dir)\n",
"lowsnr_path = \"./data/mito-confocal-lowsnr.tif\"\n",
"if not os.path.exists(lowsnr_path):\n",
" urllib.request.urlretrieve(\"https://s3.ap-northeast-1.wasabisys.com/gigadb-datasets/live/pub/10.5524/100001_101000/100888/03-mito-confocal/mito-confocal-lowsnr.tif\", lowsnr_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -637,7 +623,7 @@
"metadata": {},
"outputs": [],
"source": [
"lowsnr_path = \"./data/mito-confocal-lowsnr.tif\"\n",
"lowsnr_path = \"./../data/mito-confocal-lowsnr.tif\"\n",
"n_test_images = 10\n",
"# load the data\n",
"test_set = tifffile.imread(lowsnr_path)\n",
16 changes: 3 additions & 13 deletions 04_DenoiSplit/exercise.ipynb
@@ -72,23 +72,11 @@
"source": [
"import os\n",
"\n",
"data_dir = \"./data\" # FILL IN THE PATH TO THE DATA DIRECTORY\n",
"work_dir = \".\"\n",
"tensorboard_log_dir = os.path.join(work_dir, \"tensorboard_logs\")\n",
"os.makedirs(tensorboard_log_dir, exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8d283b2",
"metadata": {},
"outputs": [],
"source": [
"# TODO set correctly for students\n",
"datapath = \"/home/igor.zubarev/data/biosr\""
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -191,6 +179,8 @@
},
"outputs": [],
"source": [
"datapath = \"./../data/\"\n",
"\n",
"# load the default config.\n",
"config = get_config()\n",
"\n",
@@ -828,7 +818,7 @@
},
{
"cell_type": "markdown",
"id": "17fd0444",
"id": "d3c449f5",
"metadata": {},
"source": [
"<hr style=\"height:2px;\"><div class=\"alert alert-block alert-success\"><h1>End of the exercise</h1>\n",