Tutorial: Creating your own Deepgrow App
The simplest way to start creating your own app is to clone a copy of any one of the sample apps and then edit your way through. This article covers guidelines on what to modify first.
We will be using the Deepgrow app as an example. The `main.py` file is the easiest place to start.
The resources in `main.py` define the network structure and download a set of pre-trained weights that provide good initialization for training. Downloading can be disabled by commenting out the line shown below:
```python
resources = [(self.data[k]["path"][0], self.data[k]["url"]) for k in self.data.keys()]
```
However, we recommend not disabling it: plenty of prior literature [1, 2] has shown that pre-trained weights allow for faster training, quicker convergence, and better performance.
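For context, the sketch below shows how such a resources list is typically consumed. The `data` mapping, model path, and URL here are hypothetical stand-ins for what your cloned app actually defines; `download_url` is MONAI's download utility.

```python
import os

from monai.apps import download_url

# Hypothetical mapping in the style of the sample app's `self.data`;
# the real keys, paths, and URLs come from your cloned app's main.py.
data = {
    "deepgrow_2d": {
        "path": ["models/deepgrow_2d.pt"],
        "url": "https://example.com/deepgrow_2d.pt",  # placeholder URL
    },
}

resources = [(data[k]["path"][0], data[k]["url"]) for k in data]

for path, url in resources:
    if not os.path.exists(path):  # skip files that are already cached
        download_url(url=url, filepath=path)
```

Commenting out the `resources` line in your app skips this download, and the network then trains from a random initialization.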
Both the Deepgrow 2D and 3D models have a specific input model size. For CT, Deepgrow 2D is not known to be sensitive to the model input size, but users can modify it to suit their needs; the `model_size` parameter serves this purpose.
For Deepgrow 3D a similar modification can be made: `model_size=(128, 192, 192)`. The performance of Deepgrow 3D can moderately depend upon the input size: a smaller size is viable for smaller organs and a larger size for bigger organs such as the liver.
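To illustrate what `model_size` implies for the data pipeline, here is a minimal sketch that resizes a dummy volume to the 3D model size mentioned above; the exact transform chain in your app may differ.

```python
import torch

from monai.transforms import Resize

model_size_3d = (128, 192, 192)  # the value suggested above

# Inputs are typically resized to the model's expected spatial size
resize_3d = Resize(spatial_size=model_size_3d, mode="area")

img = torch.rand(1, 96, 160, 160)  # (channel, D, H, W) dummy CT volume
print(resize_3d(img).shape)        # -> (1, 128, 192, 192)
```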
Deepgrow-specific hyper-parameters are `max_train_interactions=15` and `max_val_interactions=20`.
The Deepgrow training process depends upon the number of clicks, i.e. positive and negative guidance points, that are provided for a training sample before the deep learning model takes a complete forward-backward pass. Sakinis et al. have shown performance to be optimal for 10-20 clicks. We do not recommend manual tuning of this parameter.
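These interaction counts are consumed by MONAI's Deepgrow `Interaction` handler, which repeatedly simulates clicks between forward passes. Below is a hedged sketch of how they could be wired up; the exact click-simulation transforms and dictionary keys in your app's train task may differ.

```python
from monai.apps.deepgrow.interaction import Interaction
from monai.apps.deepgrow.transforms import (
    AddGuidanceSignald,
    AddRandomGuidanced,
    FindDiscrepancyRegionsd,
)
from monai.transforms import Compose, ToTensord

# Transforms that simulate the next click from the current prediction
click_transforms = Compose([
    FindDiscrepancyRegionsd(label="label", pred="pred", discrepancy="discrepancy"),
    AddRandomGuidanced(guidance="guidance", discrepancy="discrepancy", probability="probability"),
    AddGuidanceSignald(image="image", guidance="guidance"),
    ToTensord(keys=("image", "label")),
])

# max_interactions maps to max_train_interactions / max_val_interactions
train_iteration = Interaction(transforms=click_transforms, max_interactions=15, train=True)
val_iteration = Interaction(transforms=click_transforms, max_interactions=20, train=False)
```

These `Interaction` objects are typically passed as the `iteration_update` argument of MONAI's `SupervisedTrainer` and `SupervisedEvaluator`.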
Deepgrow 3D is quite memory intensive, and we do not recommend increasing your batch size beyond 1 unless you have a GPU with more than 12 GB of memory.
For Deepgrow 2D the batch size is set to 4.
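As a quick illustration (with dummy tensors in place of real pre-processed samples), the batch size is set where the training data loader is built:

```python
import torch

from monai.data import DataLoader, Dataset

# Dummy image/label pairs standing in for pre-processed training samples
samples = [
    {"image": torch.rand(1, 128, 192, 192), "label": torch.randint(0, 2, (1, 128, 192, 192))}
    for _ in range(4)
]

# Deepgrow 3D: keep batch_size=1 on GPUs with <=12 GB of memory.
# Deepgrow 2D trains on 2D slices, so a larger batch size such as 4 is fine.
train_loader_3d = DataLoader(Dataset(samples), batch_size=1, shuffle=True)
```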
If you need a transform that does not exist in Project MONAI, additional custom transforms can be introduced in your app.
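Custom transforms follow MONAI's `MapTransform` pattern. The `ClipIntensityd` below is a hypothetical example, not part of MONAI; use it only as a template for your own transform.

```python
import numpy as np

from monai.config import KeysCollection
from monai.transforms import MapTransform

class ClipIntensityd(MapTransform):
    """Hypothetical dictionary transform that clips intensities to a fixed
    window (assumes numpy-array values); follow this pattern for your own."""

    def __init__(self, keys: KeysCollection, minv: float = -1000.0, maxv: float = 1000.0):
        super().__init__(keys)
        self.minv = minv
        self.maxv = maxv

    def __call__(self, data):
        d = dict(data)
        for key in self.key_iterator(d):
            d[key] = np.clip(d[key], self.minv, self.maxv)
        return d

# Usage on a toy sample
sample = {"image": np.array([[-2000.0, 0.0, 3000.0]])}
print(ClipIntensityd(keys="image")(sample)["image"])  # -> [[-1000. 0. 1000.]]
```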
References:
[1] Erhan, Dumitru, et al. "Why does unsupervised pre-training help deep learning?." Proceedings of the thirteenth international conference on artificial intelligence and statistics. JMLR Workshop and Conference Proceedings, 2010.
[2] He, Kaiming, Ross Girshick, and Piotr Dollár. "Rethinking imagenet pre-training." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.