
Releases: juglab/EmbedSeg

v0.2.5

14 Feb 09:32

What's Changed

New Contributors

Full Changelog: v0.2.0...v0.2.5

MIDL Notebooks

18 Apr 11:39
b43b8f6
Pre-release

This release was used to compute numbers for the MIDL publication and is stable.

  • Image intensities were normalized by dividing pixel intensities by 255 (for 8-bit images) or 65535 (for unsigned 16-bit images). While this normalization strategy led to faster training, it sometimes led to poorer out-of-distribution (OOD) performance. In future releases, the default will be min-max-percentile normalization (the model takes longer to reach the same validation IoU, but inference performance is better). A sketch of both strategies follows below.
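
A minimal sketch of the two normalization strategies (assuming numpy images; the function names and the 1.0/99.8 percentile choices are illustrative assumptions, not the EmbedSeg API):

```python
import numpy as np

def normalize_by_dtype_max(img):
    # MIDL-era default: divide by the dtype maximum.
    if img.dtype == np.uint8:
        return img.astype(np.float32) / 255.0
    if img.dtype == np.uint16:
        return img.astype(np.float32) / 65535.0
    raise ValueError(f"unexpected dtype: {img.dtype}")

def normalize_min_max_percentile(img, pmin=1.0, pmax=99.8, eps=1e-20):
    # Planned default: rescale between low/high intensity percentiles;
    # more robust to out-of-distribution intensity ranges than dividing
    # by the dtype maximum.
    lo, hi = np.percentile(img, pmin), np.percentile(img, pmax)
    return (img.astype(np.float32) - lo) / (hi - lo + eps)
```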

Minor bug-fixes

15 Jun 15:00
1f8fe3e
Pre-release

A minor update since release v0.2.2. This includes:

  • Added the display_zslice and save_checkpoint_frequency parameters to the configs dictionary (see the sketch after this list).
  1. Support for visualization in setups where virtual_batch_multiplier > 1 is still missing.
  2. Also pinned the install version of tifffile in setup.py, because the latest version (2021.6.14) currently generates a warning message from the imsave command while generating crops for the bbbc010-2012 dataset. This version pin will be relaxed in release v0.2.4.
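
A sketch of how these two keys might look inside the configs dictionary (the values shown and the surrounding structure are illustrative assumptions, not EmbedSeg defaults):

```python
# Illustrative only: the key names display_zslice and save_checkpoint_frequency
# come from this release; the values and other entries are assumptions.
configs = {
    "save_checkpoint_frequency": 10,  # save a model checkpoint every 10 epochs
    "display_zslice": 16,             # z-slice shown when visualizing 3d volumes
    # ... other training settings (learning rate, batch size, etc.) ...
}
```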

TODOs include:

  1. Update the pytorch version to 1.9.0 in release v0.2.4 (the pytorch version currently used is 1.1.0)
  2. Add a tile-and-stitch capability in release v0.2.4 for handling large 2d and 3d images during inference
  3. Add a max_crops_per_image parameter in release v0.2.4 to set an optional upper bound on the number of crops extracted from each image
  4. Save all instance crops and center crops as RLE files in release v0.2.4 (a minimal RLE sketch appears after this list)
  5. Add an optional mask parameter during training, in release v0.2.4, which excludes certain regions of the image from the loss computation
  6. Fix the bug in evaluating var_loss, and obtain crops of the desired size through additional padding
  7. Include support for more classes
  8. Normalize 3d images over axes (0, 1, 2)
  9. Make normalization the default option for better extensibility
  10. Parallelize operations such as cropping
  11. Eliminate the specification of grid size in the notebooks; set it to a sensible default value
  12. Simplify the notebooks further
  13. Make Colab versions of the notebooks
  14. Test the center=learn capability for learning the center freely
  15. Add the ILP formulation for stitching 2d instance predictions
  16. Add code for converting predictions from a 2d model on xy, yz and xz slices into a 3d instance segmentation
  17. Add more examples from medical image datasets
  18. Add threejs visualizations of the instance segmentations, and explain how to generate these meshes, smooth them and import them with a threejs script
  19. Pad with reflection instead of constant mode
  20. Include cluster_with_seeds in case nuclei or cell detections are additionally available
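
As a rough illustration of item 4, a minimal run-length encoding sketch for binary masks (the helper names are hypothetical, and EmbedSeg may adopt a different RLE convention, e.g. COCO-style):

```python
import numpy as np

def rle_encode(mask):
    # Store (start, length) runs of foreground pixels of a flattened binary mask.
    flat = np.asarray(mask, dtype=np.uint8).ravel()
    padded = np.concatenate([[0], flat, [0]])
    changes = np.flatnonzero(padded[1:] != padded[:-1])
    starts, ends = changes[::2], changes[1::2]
    return list(zip(starts.tolist(), (ends - starts).tolist()))

def rle_decode(runs, shape):
    # Rebuild the binary mask from its (start, length) runs.
    flat = np.zeros(int(np.prod(shape)), dtype=np.uint8)
    for start, length in runs:
        flat[start:start + length] = 1
    return flat.reshape(shape)
```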

3d example notebooks

05 May 23:46
Pre-release
  • Add all 3d example notebooks
  • Pad images with average background intensity instead of 0
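
A minimal sketch of the second point, assuming a numpy image and estimating the background level as the mean of the pixels at or below the median intensity (that estimate is an assumption for illustration, not necessarily the actual implementation):

```python
import numpy as np

def pad_with_background(img, pad_width):
    # Estimate the background intensity (assumption: the dimmer half of
    # the pixels is background) and pad with that constant instead of 0.
    bg = float(img[img <= np.median(img)].mean())
    return np.pad(img, pad_width, mode="constant", constant_values=bg)
```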

Functional 2d + 3d code

17 Apr 22:11
Pre-release

Major changes:

  • Add 3d example notebooks for two datasets
  • Correct min_object_size (now evaluated from the train and validation masks)
  • Save tif images with datatype np.uint16 (in the prediction notebooks)
  • Provide support for the case where GT evaluation images are not available (during prediction)

Some things which are still incorrect in v0.2.0:

  • n_y should be set to n_x for equal pixel/voxel sizes in the y and x dimensions. This is fixed in v0.2.1
  • anisotropy_factor is wrongly calculated in the 3d notebooks (it was computed as the reciprocal). This is fixed in v0.2.1
  • train_size was set to 600 for the bbbc010-2012 dataset. This is raised to 1200 in v0.2.1
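
For clarity, the anisotropy factor is meant to be the ratio of the voxel size along z to the voxel size in x/y (the variable names below are illustrative assumptions):

```python
# Example: voxels of 1.0 um along z and 0.25 um along x/y.
voxel_size_z, voxel_size_xy = 1.0, 0.25

anisotropy_factor = voxel_size_z / voxel_size_xy   # 4.0 -- correct (v0.2.1)
# The v0.2.0 3d notebooks computed the reciprocal by mistake:
wrong_anisotropy = voxel_size_xy / voxel_size_z    # 0.25 -- incorrect
```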

Functional 2d code, datasets, fully trained models and colormap

13 Jan 15:55
  • Initial functional 2d code (min_object_size was hard-coded to 36 and will be updated in later iterations)
  • Assets include:
    • 2d images and GT instance annotations
    • 3d images and GT instance annotations
    • fully trained models (*demo.tar) (models trained from scratch for up to 200 iterations)
    • glasbey-like colormap (cmap_60.npy)