
Support multiple locations for models #18

Open
ldmoser opened this issue Dec 7, 2020 · 5 comments
ldmoser commented Dec 7, 2020

Hi there! Thank you so much for sharing this implementation.

The current code lists the models under the directory in which the server was launched, and that seems somewhat restrictive.

Would it be possible for you to add support for multiple model locations that the server can see?

Ideally I would like to launch the server and specify, perhaps in an environment variable, multiple directories where it could search for models, separated by ":" (following Linux conventions).
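To illustrate, the idea could be sketched roughly as below. The variable name `MLSERVER_MODELS_DIRS` is hypothetical, just for the sake of the example; the server could read any agreed-upon name:

```python
import os


def get_model_dirs(env_var="MLSERVER_MODELS_DIRS", default="models"):
    """Return the list of directories to search for models.

    MLSERVER_MODELS_DIRS is a hypothetical variable name. Entries are
    ':'-separated, following the PATH convention on Linux.
    """
    raw = os.environ.get(env_var, default)
    return [d for d in raw.split(":") if d]


def find_models(model_dirs):
    """Yield (model_name, model_path) for each subdirectory of each root."""
    for root in model_dirs:
        if not os.path.isdir(root):
            continue  # skip missing roots rather than failing
        for name in sorted(os.listdir(root)):
            path = os.path.join(root, name)
            if os.path.isdir(path):
                yield name, path
```

With that, `MLSERVER_MODELS_DIRS=/facility/models:/show/models` would let the server pick up models from both locations without any changes to its install directory.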

And perhaps "baseModel.py" should live in a separate directory, so that one could point to its location using PYTHONPATH when launching the server; that would make it possible for all custom nodes, wherever they are, to import the base class.

What do you think?


ringdk commented Dec 8, 2020 via email


ldmoser commented Dec 8, 2020

Hi, thanks!

Do you have an impression of whether the people using Docker containers are using them in the context of VFX pipelines, or are they single users? I have a VFX pipeline in mind, hence the need for more versatility in the configuration.

I think that the os.walk() call will not be affected by any changes to sys.path. So the only solution is to build the "models" dir on the fly, with symlinks to all the models this particular server is supposed to see. That may be adequate for individuals playing with some machine learning models, but for integration into a VFX pipeline it just doesn't scale well: it's hard to debug and maintain.
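For reference, the symlink workaround described above would look something like this (a minimal sketch, not taken from the project; the function name and staging strategy are illustrative only):

```python
import os
import tempfile


def build_models_dir(model_dirs):
    """Build a throwaway 'models' directory of symlinks, one per model.

    The server still walks a single directory, but each entry points at
    a model that may live anywhere on disk. When the same model name
    exists under several roots, the first root wins.
    """
    staging = tempfile.mkdtemp(prefix="mlserver-models-")
    for root in model_dirs:
        if not os.path.isdir(root):
            continue
        for name in os.listdir(root):
            src = os.path.join(root, name)
            dst = os.path.join(staging, name)
            if os.path.isdir(src) and not os.path.exists(dst):
                os.symlink(src, dst)
    return staging
```

Even in this short form, the drawbacks Lucio mentions are visible: the symlink farm must be rebuilt whenever a model is added or moved, and errors surface at walk time rather than at configuration time.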

About the baseModel.py location: I understand it makes it easier for a single user to just test and modify on top of it. But in order to rely on this software in a pipeline, we should be able to install it as is, without local changes.
In that spirit, it would greatly help to separate what is API from what are model examples that one may not need installed.

Does that help motivate implementing the changes?

And if not so much, would you then be open to considering merge requests to improve the package in those directions?

@dekekincaid

@ringdk unfortunately we can't use docker for this.

@ldmoser We may just need Dmytro to patch it in our branch.

@dekekincaid

but yes, following up on Lucio's comment, it would be nice to make this more flexible for both Docker and local installs. Docker is great for services, and we use it all the time for that, but for what is essentially going to be just another process on a render farm, using Docker adds complexity, which is not necessarily a great solution for us and, I suspect, other mid/large facilities. It also makes it very hard for us to burst to the cloud.


dkorolov commented Feb 3, 2021

From a practical point of view, for use not only as a playground but also in production, it would be good to separate the base Nuke MLserver code and API from the model code.
Then the base MLserver code would have a minimum of dependencies, and the base server code would not become outdated with every update of the machine learning packages.
