
Training Capabilities? #9

Open
samhodge opened this issue May 25, 2019 · 4 comments

@samhodge

Is it possible to do training from the nuke-ML-server?

The examples are inference in Caffe2 from Facebook.

Is it possible to pass labels or other ground-truth data to the ML model and have it learn from the toolset? It seems to be inference-only.

Where would the model checkpoints be stored?

Would the data rate be adequate?

@ringdk
Contributor

ringdk commented May 27, 2019

Great question!

This is high on our list of features, and we're hoping to have a release of something this week.

Re: passing data back to the model, yes, it's possible. We haven't explored this a lot yet but it's where we expect all of the interesting work to be.

Re: checkpoints, the TensorFlow training example we'll be adding will include a 'checkpoints' folder.

Re: data rate, from our tests it seems training in Docker is roughly on par with training outside Docker.

@johannabar
Collaborator

Hi @samhodge,

We have just added training capabilities through the trainingTemplateTF model. You can find instructions on how to train and infer models using this template here.

It basically enables image-to-image training of an encoder-decoder model as follows:

  1. Place your ground-truth and input images in the data folder,
  2. Launch the training inside your Docker container,
  3. Infer using your trained model in Nuke through the nuke-ML-server.
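Step 1 amounts to keeping input frames and their ground-truth counterparts paired up by filename. A minimal sketch of that pairing logic is below; the `input`/`groundtruth` directory names and the `pair_training_images` helper are illustrative assumptions, not the template's actual layout or API.

```python
import os

def pair_training_images(data_dir):
    """Pair input frames with ground-truth frames that share a filename.

    Assumes a hypothetical layout: data_dir/input and data_dir/groundtruth
    each hold matching image files (names are assumptions for illustration).
    """
    inputs = set(os.listdir(os.path.join(data_dir, "input")))
    targets = set(os.listdir(os.path.join(data_dir, "groundtruth")))
    common = sorted(inputs & targets)  # only frames present on both sides
    return [
        (os.path.join(data_dir, "input", name),
         os.path.join(data_dir, "groundtruth", name))
        for name in common
    ]
```

Frames missing from either side are simply skipped, which keeps a partially labelled dataset usable without extra bookkeeping.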

@samhodge
Author

samhodge commented Jun 7, 2019

Is there scope to train something beyond a simple encoder-decoder?

@johannabar
Collaborator

At the moment, the multi-scale encoder-decoder is the only supported model. We are planning on adding new model building blocks to the template along the way.

In the meantime, you are always free to modify or add your own models inside the model_builder.py file!
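To show the multi-scale encoder-decoder idea the template is built around, here is a toy pure-NumPy forward pass: pool down through several scales, then upsample back while blending in skip connections from each scale. This is a structural sketch only, not the actual model in model_builder.py (which is a trainable TensorFlow network).

```python
import numpy as np

def avg_pool2(img):
    """Encoder step: 2x2 average pooling, halving each spatial dimension."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Decoder step: nearest-neighbour upsampling by a factor of 2."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def multiscale_encoder_decoder(img, levels=3):
    """Toy multi-scale encoder-decoder forward pass with skip connections.

    Requires img dimensions divisible by 2**levels. In a real model each
    scale would apply learned convolutions; here we just pool and blend.
    """
    skips = []
    x = img
    for _ in range(levels):          # encoder: shrink, remember each scale
        skips.append(x)
        x = avg_pool2(x)
    for skip in reversed(skips):     # decoder: grow, blend with skip feature
        x = 0.5 * (upsample2(x) + skip)
    return x

out = multiscale_encoder_decoder(np.random.rand(32, 32))
```

The output has the same resolution as the input, which is what makes this shape of network a natural fit for image-to-image tasks in Nuke.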
