Hi, first of all thank you for this amazing work. I am a PhD student trying to use RootPainter on large soil rhizotron images. I am also a model developer, so I don't mind going into details.

I am facing an issue on a machine with an NVIDIA Quadro P620, CUDA toolkit 12.3 installed, and PyTorch properly detecting the GPU on trainer launch. I am trying to run both the trainer and the painter locally on the same machine using a common sync folder. The trainer creates the expected file structure and seems to consume instructions properly. The problem is that when I start the painter (either the release or the main branch from your repo), I get a pending "Network not training", "Starting training..." or "Loading Segmentation..." message, preventing me from annotating more than the first image. As the segmentation doesn't appear and my GPU resources are not used, I think the training is not performed, but no particular error is logged by the trainer. For further detail, I get a batch size of 0 upon trainer startup.

Also, I didn't use your pip install, as that torch version was not detecting the GPU (AssertionError: Torch not compiled with CUDA enabled). So I tried either a manual installation from your requirements plus pytorch.org, or an install with pip followed by uninstalling torch and reinstalling torch, torchvision and torchaudio (for both CUDA 11.8 and 12.1). I also tried installing CUDA with conda instead of the toolkit, but it didn't change the issue.

Does this situation remind you of any common issue? Thank you very much in advance for your help. I am looking forward to discussing this with you and I remain available for further details. Sincerely
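(For reference, a generic PyTorch check, not specific to RootPainter, to confirm that an install was built with CUDA and can see the GPU:)

```python
# Generic PyTorch sanity check (not part of RootPainter): is this build
# CUDA-enabled, and can it see the GPU?
import torch

print(torch.__version__)
print(torch.version.cuda)          # None on a CPU-only build
print(torch.cuda.is_available())   # a CPU-only build reports False here
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Quadro P620"
```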
Dear Tristan,
There's not enough memory on your GPU to create even a batch of 1 (I recommend 4+). I noticed the NVIDIA Quadro P620 only has 2GB of memory, which is really small. You will need to reduce the default patch size to get RootPainter to work on a GPU this small; there is a command line argument for this when you run the trainer. But I recommend you use a more powerful GPU. RootPainter typically requires 12GB or more of GPU memory, and I recommend 16GB or more.

As your GPU has such restricted memory, you may have more success running RootPainter via the colab notebook, which will assign you a 'free' GPU from Google.

Kind regards,
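(A generic way to confirm how much memory the GPU reports to PyTorch, not specific to RootPainter:)

```python
# Generic PyTorch call (not RootPainter-specific) to report total GPU memory.
import torch

props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB total")
# A Quadro P620 reports roughly 2 GiB here, which is why even a batch of 1
# does not fit at the default patch size.
```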
In the colab notebook there is a cell which shows how to run the trainer with a reduced patch size. You can do something like this:
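(The sketch below is a reconstruction: the script path and the patch-size flag name are assumptions, so check the trainer's --help output or the colab notebook cell for the exact argument.)

```python
# Assumed sketch: launch the RootPainter trainer with a smaller patch size.
# The script path and the "--patchsize" flag name are assumptions here; the
# trainer's --help output or the colab cell shows the real argument.
import subprocess

subprocess.run(
    ["python", "trainer/main.py", "--patchsize", "256"],  # smaller than the default patch size
    check=True,
)
```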
That said, I do not have much experience with using these smaller patch sizes with RootPainter, so I am not really sure how it will affect your results, but it's something you could try if you want to get RootPainter running with your current GPU.
Dear Abraham,

Thank you for your answer. Those images are already JPEG compressed. In fact, they result from an assembly of more than 60 macro images intended to recompose a high-resolution image of the whole rhizotron (90 × 45 cm). This is why we need to keep such high resolution after stitching, in order to preserve root hair information and a clear contouring of the roots.

Just as supporting information on the minimum local specs: I finally managed to train a model on the individual images prior to stitching, using a laptop Quadro P600, restricting to batch size 1 and 3 workers due to the small 4 GB of RAM, and keeping the default patch size. That enabled convenient local testing to convince my supervisors of the interest of purchasing new hardware.

Then I was just wondering (I can start a new discussion if you want): how would you segment the larger 1 GB rhizotron images based on a model trained on the individual tiles prior to stitching (20 MB)? Is there a way to increase the limit on image size for inference?

Thank you in advance for your answer,

Kind regards,
Tristan
Dear Tristan,

I'm glad you were able to find an interim solution with the Quadro P600 and convince your supervisors to get better hardware. Nice work.

Just to clarify, working with smaller images in a training dataset and then segmenting larger images is part of the standard workflow for using RootPainter. I suggest reading into the 'create training dataset' functionality described in the 'Creating a dataset' part of the methods section of the paper: https://nph.onlinelibrary.wiley.com/doi/full/10.1111/nph.18387

The mini-guide also refers briefly to this process, where I discuss creating a training dataset to train the model and then processing the original images, i.e. full size/resolution: https://github.com/Abe404/root_painter/blob/master/docs/mini_guide.md

I'm not yet sure which limit you are referring to; perhaps my answer regarding creating a training dataset helps clarify. Training a RootPainter model using smaller images and then segmenting much bigger ones is standard practice. The large image will be automatically split up into patches (internally) to allow it to be segmented, and the segmentations for each of the smaller patches will then be joined together to give you a single segmentation file for the large image. All of this happens automatically when you process your folder of large images with the 'segment folder' command in RootPainter (available from the network menu).

I hope that helps.

Kind regards,
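(As a rough illustration of that split-and-stitch idea: a simplified sketch only, not RootPainter's actual implementation, which also handles patch overlap and GPU batching internally; `segment_patch` and `patch_size` are placeholders.)

```python
# Simplified illustration: split a large image into patches, segment each
# patch, and stitch the per-patch results back into one full-size mask.
# Not RootPainter's real code; segment_patch stands in for the trained network.
import numpy as np

def segment_large_image(image, segment_patch, patch_size=512):
    h, w = image.shape[:2]
    seg = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]  # edge patches are simply smaller
            seg[y:y + patch_size, x:x + patch_size] = segment_patch(patch)
    return seg
```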
Thanks Tristan,
I've made this change in the source code now too:
225eeb5
It's on the master branch (so you will only get it if you pull/clone the source again), but it should make it into the next release.
Please let me know if you have any comments or suggestions regarding my change.
I'm happy you are getting amazing results. I'm curious, do you have any images you could share? (with the segmentation shown overlaid perhaps).
I have built a version of RootPainter…