Support multiple locations for models #18
Hello! Glad you're enjoying the ML Server!
Regarding model locations, baseModel.py and other nodes: we've gone with the current structure (and not added custom locations) mainly because it's simpler to see where the models are, and because locations can become confusing once you use Docker containers (which seems to be how most people are using the ML Server).
Although we haven't tried it, it's probably not difficult to either wrap the server or manually add a "sys.path.append" to your custom code. I've used symlinks in the past when I've needed to drop in new models to test, being mindful to ensure the paths are available and correct once inside the Docker container.
Thanks again!
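For illustration, a minimal sketch of the wrapper/symlink approach described above. The directory layout, the "MLSERVER_SHARED_DIR" variable and the model name are assumptions made up for this example, not part of the ML Server itself.

```python
# Hypothetical launch wrapper: make baseModel.py importable for custom nodes
# and symlink an externally stored model into the server's working directory.
# All paths and the environment variable below are examples, not server API.
import os
import sys

shared_dir = os.environ.get("MLSERVER_SHARED_DIR", "/shared/mlserver")
sys.path.append(shared_dir)  # so custom nodes can "import baseModel" from here

external_model = "/shared/models/myDenoiser"               # model stored elsewhere
link = os.path.join(os.getcwd(), "models", "myDenoiser")   # where the server looks
if not os.path.lexists(link):
    os.symlink(external_model, link)

# ...then start the server as usual from this working directory.
```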
Hi, thanks! Do you have an impression of whether the people using Docker containers are using them in the context of VFX pipelines, or are they single users? I have a VFX pipeline in mind, hence the need for more versatility in the configuration.
I think the os.walk() call will not be affected by any changes made via sys.path.append, so the only solution is to build, on the fly, a "models" dir with symlinks to all the models this particular server is supposed to see. That may be adequate for individuals playing with some machine learning models, but for integration in a VFX pipeline it just doesn't seem to scale well; it's hard to debug and maintain.
About the baseModel.py location: I understand it makes it easier for a single user to just test and modify on top of it, but in order to rely on this software in a pipeline we should be able to install it as is, without local changes.
Does that help motivate implementing the changes? And if not, would you be open to considering merge requests that improve the package in those directions?
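For reference, a rough sketch of the on-the-fly symlink workaround described in the comment above: build a fresh staging directory of symlinks to every model a given server instance should see, then run the server from there. The "MLSERVER_MODELS_PATH" variable and the layout are hypothetical, used only to illustrate why this gets hard to maintain at pipeline scale.

```python
# Hypothetical workaround: assemble a temporary "models" directory of symlinks
# from a ":"-separated list of source directories, then run the server from it.
import os
import tempfile

search_paths = [p for p in os.environ.get("MLSERVER_MODELS_PATH", "").split(":") if p]

staging = tempfile.mkdtemp(prefix="mlserver_")
models_dir = os.path.join(staging, "models")
os.makedirs(models_dir)

for root in search_paths:
    for name in sorted(os.listdir(root)):
        src = os.path.join(root, name)
        dst = os.path.join(models_dir, name)
        if os.path.isdir(src) and not os.path.lexists(dst):
            os.symlink(src, dst)  # first directory wins on name collisions

# The server would then have to be started from `staging` so that its
# os.walk() over "models" picks up the symlinked models.
```

Every model change means regenerating the staging directory and restarting the server, which is the maintenance burden mentioned above.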
But yes, following up on Lucio's comment, it would be nice to make this more flexible for both Docker and local installs. Docker is great for services and we use it all the time for that, but for what is essentially going to be just another process on a render farm, using Docker adds complexity and is not necessarily a great solution for us and, I suspect, other mid/large facilities. It also makes it very hard for us to burst to the cloud.
From a practical point of view, for use not only as a playground but also in production, it would be good to separate the base Nuke ML Server code and API from the model code.
Hi there! Thank you so much for sharing this implementation.
The current code lists the models under the directory in which the server was launched, and that seems somewhat restrictive.
Would it be possible for you to add support for multiple locations for the models that the server can see?
Ideally I would like to launch the server and specify, perhaps in an environment variable, multiple directories where it could search for models, separated by ":" (to follow Linux conventions).
And perhaps "baseModel.py" should be in a separate dir, so that one could point to its location using PYTHONPATH when launching the server; that would make it possible for all custom nodes, wherever they are, to import the base class.
How do you like that?
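For clarity, a minimal sketch of what the proposed lookup could look like, assuming a hypothetical "MLSERVER_MODELS_PATH" variable and a hypothetical "discover_models" helper; the actual server may organize model discovery differently.

```python
# Hypothetical discovery helper: search several directories, given as a
# ":"-separated (PATH-style) environment variable, for model folders.
import os

def discover_models(env_var="MLSERVER_MODELS_PATH", default="models"):
    """Return {model_name: absolute_path}; earlier directories take priority."""
    models = {}
    for directory in os.environ.get(env_var, default).split(":"):
        if not os.path.isdir(directory):
            continue
        for entry in sorted(os.listdir(directory)):
            path = os.path.join(directory, entry)
            if os.path.isdir(path) and entry not in models:
                models[entry] = os.path.abspath(path)
    return models

if __name__ == "__main__":
    for name, path in discover_models().items():
        print(name, "->", path)
```

The baseModel.py half of the proposal needs no new code: if it lived in its own directory, launching the server with that directory on PYTHONPATH would let custom nodes import the base class from anywhere.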