Training Capabilities? #9
Comments
Great question! This is high on our list of features, and we're hoping to have a release of something this week. Re: passing data back to the model, yes, it's possible. We haven't explored this a lot yet, but it's where we expect all of the interesting work to be. Re: checkpoints, the TensorFlow training example we'll be adding will include a 'checkpoints' folder. Re: data rate, from our tests it seems training in Docker is roughly on par with training outside Docker.
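For context, a minimal sketch of how a TensorFlow 1.x training script might write checkpoints into a local 'checkpoints' folder like the one mentioned above. The folder name, variable names, and paths are illustrative assumptions, not the template's actual code.

```python
# Sketch only: write and reload TF 1.x checkpoints in a 'checkpoints' folder.
# The variable and path names below are assumptions for illustration.
import os
import tensorflow as tf

checkpoint_dir = 'checkpoints'  # assumed folder name from the comment above
os.makedirs(checkpoint_dir, exist_ok=True)

# Toy graph: a single trainable weight standing in for a real model.
weight = tf.Variable(tf.random_normal([3, 3]), name='weight')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps would go here ...
    save_path = saver.save(sess, os.path.join(checkpoint_dir, 'model.ckpt'),
                           global_step=0)
    print('Checkpoint written to', save_path)

# Restoring later, e.g. for inference inside the Docker container:
with tf.Session() as sess:
    saver.restore(sess, save_path)
```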
Hi @samhodge, We have just added training capabilities through the trainingTemplateTF model. You can find instructions on how to train and infer models using this template here. It basically enables image-to-image training of an encoder-decoder model as follows:
[architecture diagram of the encoder-decoder training setup]
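As a rough illustration of what image-to-image encoder-decoder training looks like in TensorFlow/Keras: the layer sizes, depth, and loss below are assumptions for the sketch, not the actual multi-scale architecture used by trainingTemplateTF.

```python
# Sketch of an image-to-image encoder-decoder; sizes and loss are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder_decoder(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)
    # Encoder: progressively downsample the input image.
    x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)
    # Decoder: upsample back to the original resolution.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
    outputs = layers.Conv2D(3, 3, padding='same', activation='sigmoid')(x)
    return Model(inputs, outputs)

model = build_encoder_decoder()
# Image-to-image training: ground-truth images are the regression targets.
model.compile(optimizer='adam', loss='mean_absolute_error')
# model.fit(input_images, target_images, epochs=10)
```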
Is there scope to train something beyond a simple encoder-decoder?
At the moment, the multi-scale encoder-decoder is the only supported model. We are planning on adding new model building blocks to the template along the way. In the meantime, you are always free to modify or add your own models inside the model_builder.py file!
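One hypothetical way a custom architecture could sit alongside the built-in encoder-decoder: the function name `build_custom_model` and the name-to-builder mapping below are assumptions for illustration, so check the actual structure of model_builder.py in trainingTemplateTF before copying this.

```python
# Hypothetical sketch (TF 1.x style) of adding a custom model next to the
# built-in encoder-decoder; names and hooks are assumptions, not the real API.
import tensorflow as tf

def build_custom_model(inputs):
    """A plain convolutional stack as an alternative to the multi-scale model."""
    x = tf.layers.conv2d(inputs, 32, 3, padding='same', activation=tf.nn.relu)
    x = tf.layers.conv2d(x, 32, 3, padding='same', activation=tf.nn.relu)
    return tf.layers.conv2d(x, 3, 3, padding='same')  # 3-channel image output

# One possible way to expose several models from the same file.
MODEL_BUILDERS = {
    'custom_conv': build_custom_model,
}
```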
Is it possible to do training from the nuke-ML-server?
The examples are inference-only, in Caffe2 from Facebook.
Is it possible to pass labels or other ground-truth data to the ML model and have it learn from the toolset? It seems that it is inference-only.
Where would the model checkpoints be stored?
Would the data rate be adequate?