- Language-agnostic PyTorch model serving
- Serve JIT-compiled PyTorch models in a production environment
- docker == 18.09.1
- wget == 1.20.1
- Your JIT-traced PyTorch model (if you are not familiar with JIT tracing, refer to the JIT Tutorial)
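As a reference for the prerequisite above, a model can be JIT-traced and saved along these lines. This is a minimal sketch: `TinyModel` and the file name `model.pt` are illustrative stand-ins, not part of this project.

```python
import torch

# Hypothetical example model with a 3-dimensional input,
# matching the input dimension used in the request example below.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 1)

    def forward(self, x):
        return self.linear(x)

model = TinyModel().eval()

# Trace the model with an example input; tracing records the
# operations executed for this input into a serializable graph.
example_input = torch.rand(1, 3)
traced = torch.jit.trace(model, example_input)

# Save the traced model so the server can load it without Python code.
traced.save("model.pt")
```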
- Run the model server on Docker's `bridge` network
Send a request to the model server as follows (assuming your model's input dimension is 3):
curl -X POST -d '{"input":[1.0, 1.0, 1.0]}' localhost:8080/model/predict
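The same request can be issued from Python. This is a sketch assuming the server from the curl example above is listening on `localhost:8080`; the `predict` helper is our own naming, not part of the project's API.

```python
import json
from urllib import request

def predict(input_vec, url="http://localhost:8080/model/predict"):
    """POST an input vector to the /model/predict endpoint
    and return the decoded JSON response."""
    payload = json.dumps({"input": input_vec}).encode("utf-8")
    req = request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# The serialized body matches the curl example for a 3-dimensional input.
body = json.dumps({"input": [1.0, 1.0, 1.0]})
```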
- YongRae Jo ([email protected])
- YoonHo Jo ([email protected])
- GiChang Lee ([email protected])
- Seunghwan Hong
- SeungHyek Cho
- Alex Kim ([email protected])