
Multi-thread support for inference #203

Open
sunnyxiaohu opened this issue Mar 30, 2018 · 0 comments

Comments

@sunnyxiaohu

Hi, sorry to disturb you. There are two questions that keep confusing me.
Q1:
Does this version of Caffe support multi-threaded inference? In other words, can we load the model once and then use multiple threads to run predictions on inputs?
Q2:
I also found a modified version of Caffe at https://github.com/flx42/caffe.git and https://github.com/NVIDIA/gpu-rest-engine, which claims to handle multithreading properly. However, when we tested it, we found that it is not truly multi-threaded: inference requests are serialized. What is your opinion?

Sincerely.
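For reference, the load-once / serialized-inference behavior described in Q2 can be sketched as below. This is only an illustration of the pattern, not Caffe code: `DummyNet` is a hypothetical stand-in for a real network object that is not thread-safe, and the lock is what makes many threads share one model while forward passes run one at a time.

```python
import threading

class DummyNet:
    """Hypothetical stand-in for a non-thread-safe network (e.g. a Caffe Net).
    The real object would load weights from model_path exactly once."""
    def __init__(self, model_path):
        self.model_path = model_path  # weights loaded once here

    def forward(self, x):
        return x * 2  # placeholder for the real forward pass

class SerializedPredictor:
    """Shares one loaded model across threads; a mutex serializes
    inference, so throughput does not scale with thread count --
    this matches the behavior observed in Q2."""
    def __init__(self, model_path):
        self.net = DummyNet(model_path)  # load the model once
        self.lock = threading.Lock()     # guards the non-thread-safe net

    def predict(self, x):
        with self.lock:                  # only one forward pass at a time
            return self.net.forward(x)

predictor = SerializedPredictor("deploy.prototxt")
results = [None] * 4

def worker(i):
    results[i] = predictor.predict(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 2, 4, 6]
```

All four threads call `predict` concurrently, but the lock forces the forward passes to execute one after another; true parallel inference would instead need one net instance (or execution stream) per thread.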
