ZoomMIL

ZoomMIL is a multiple instance learning (MIL) method that learns to perform multi-level zooming for efficient Whole-Slide Image (WSI) classification. This repository contains the PyTorch code to reproduce the results of our paper, Differentiable Zooming for Multiple Instance Learning on Whole-Slide Images (ECCV 2022).

Overview

Installation

histocartography

This code relies on functionality from the histocartography library. To install it, clone the GitHub repository and add its path to your PYTHONPATH:

git clone https://github.com/histocartography/histocartography.git
export PYTHONPATH="<PATH>/histocartography:$PYTHONPATH"
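To verify that the library is importable, you can run a quick check (a minimal sketch; the printed path should point into your histocartography clone):

# Sanity check: histocartography should be importable via PYTHONPATH
import histocartography
print(histocartography.__file__)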

conda environment

Create a conda environment and install the required packages from the provided environment.yml:

git clone https://github.com/histocartography/zoommil.git && cd zoommil
conda env create -f environment.yml
conda activate zoommil

Install PyTorch:

conda install -n zoommil pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.1 -c pytorch -c conda-forge
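To verify the installation, you can check the pinned versions and CUDA availability (a minimal sketch, not part of the original instructions):

import torch
import torchvision

print(torch.__version__)          # expected: 1.10.1
print(torchvision.__version__)    # expected: 0.11.2
print(torch.cuda.is_available())  # True if the CUDA runtime is visible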

Preprocessing also requires the OpenSlide library. On Linux, you can install the Python bindings with:

conda install -n zoommil -c conda-forge conda-forge/linux-64::openslide-python

Please check the OpenSlide documentation for more information.
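To confirm that OpenSlide works, here is a minimal sketch that opens a slide and reads a patch (the file path is a placeholder for one of your WSIs):

import openslide

slide = openslide.OpenSlide("<PATH_TO_WSI>")        # e.g., an .svs or .tiff file
print(slide.level_count)                            # number of pyramid levels
print(slide.level_dimensions)                       # (width, height) per level
patch = slide.read_region((0, 0), 0, (256, 256))    # RGBA PIL image
slide.close()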

Getting started

After cloning the repository and creating the conda environment, you can follow the steps below to get started.

Datasets

We evaluated ZoomMIL on three publicly available datasets, including BRIGHT, which is used in the preprocessing examples below.

The train/val/test splits for all datasets can be found here.

Preprocessing

Whole-slide images (e.g., from the BRIGHT dataset) can be preprocessed (tissue masking + patch feature extraction) and stored as .h5 files:

python bin/preprocess.py --out_path <PATH_TO_PREPROCESSED_DATA> --in_path <PATH_TO_DOWNLOADED_DATA> --mode features --dataset BRIGHT

To extract patches only (without features), use the mode patches:

python bin/preprocess.py --out_path <PATH_TO_PREPROCESSED_DATA> --in_path <PATH_TO_DOWNLOADED_DATA> --mode patches --dataset BRIGHT
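To inspect a preprocessed file, you can list its contents with h5py (a minimal sketch; the exact dataset keys are defined by bin/preprocess.py, and any key names in the comments below are assumptions):

import h5py

with h5py.File("<PATH_TO_PREPROCESSED_DATA>/example.h5", "r") as f:
    f.visit(print)   # print every group/dataset name in the file
    # Per-magnification features might then be read as, e.g.:
    # feats = f["<FEATURE_KEY>"][:]   # hypothetical key, check the listing above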

Training & testing

Adapt the paths in your config file, then run train.py to train and test ZoomMIL. The script expects WSIs that have been preprocessed into patch features.

python bin/train.py --config_path zoommil/config/sample_config.json
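Before launching, you can load and inspect the sample config to see which paths and options need adapting (a minimal sketch; consult zoommil/config/sample_config.json for the actual schema):

import json

with open("zoommil/config/sample_config.json") as f:
    cfg = json.load(f)

print(json.dumps(cfg, indent=2))   # shows all configurable paths and options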

Citation

If you use this code, please consider citing our work:

@inproceedings{thandiackal2022zoommil,
  title={Differentiable Zooming for Multiple Instance Learning on Whole-Slide Images},
  author={Thandiackal, Kevin and Chen, Boqi and Pati, Pushpak and Jaume, Guillaume and Williamson, Drew FK and Gabrani, Maria and Goksel, Orcun},
  booktitle={The European Conference on Computer Vision (ECCV)},
  year={2022}
}
