This repository shows how to deploy and use OLive on Windows from the command line.
First, set up the environment:
- Install Docker.
- Run the build script: ..\utils\build.sh
- Install the Docker SDK for Python: pip install docker
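To confirm the prerequisites are in place, you can check the Docker engine and the Python Docker SDK from the same terminal; both commands only print version information:
docker --version
python -c "import docker; print(docker.__version__)"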
Then open cmd and run the pipeline as below:
python cmd_pipeline.py --model [model_path] --model_type [model_type] --result [result_directory_path] [--other_parameters] [other parameters' value]
IMPORTANT: Any path passed as a parameter must be under the current directory (/cmd-tool).
- --model_type: Required. Supported types are caffe, cntk, keras, scikit-learn, tensorflow, and pytorch.
- --model: Required. The local path of the model.
- --result: Optional. The directory path where results are written.
- --target_opset: Optional. Specify the opset for ONNX, for example 7 for ONNX 1.2 and 8 for ONNX 1.3. The latest opset is recommended; refer to ONNX Opset for the latest version.
- --gpu: Optional. Boolean flag to enable GPU if you have one.
- --model_inputs_names: Required for TensorFlow frozen models and checkpoints. The model's input names (see the TensorFlow example below).
- --model_outputs_names: Required for TensorFlow frozen models and checkpoints. The model's output names.
- --model_input_shapes: Required for PyTorch models. List of tuples; the input shape(s) of the model, with dimensions separated by ','.
- --initial_types: Required for scikit-learn models. List of tuples.
- --caffe_model_prototxt: Required for Caffe models. The prototxt file for the Caffe model.
- --input_json: A JSON file that contains all necessary run specs. For example:
{
  "model": "/mnist/",
  "model_type": "tensorflow",
  "output_onnx_path": "mnist.onnx"
}
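With a spec file like the one above, the run can presumably be started from just that file; input.json below is only a placeholder name for wherever you saved it:
python cmd_pipeline.py --input_json input.json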
Details of the other parameters can be found in the "Convert model to ONNX" section of onnx-pipeline.ipynb.
For example:
python cmd_pipeline.py --model pytorch/saved_model.pb --model_type pytorch --model_input_shapes (3,3,224,224) --result result/
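A TensorFlow frozen model would be converted in a similar way; the path and the tensor names below are placeholders that depend on your model:
python cmd_pipeline.py --model tensorflow/frozen_model.pb --model_type tensorflow --model_inputs_names input:0 --model_outputs_names output:0 --result result/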
All the result JSONs will then be produced under result/, and the logs for the process are printed in the terminal. Check them for errors.
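As a quick sanity check, a minimal Python sketch like the following can list and pretty-print whatever result JSONs the run produced; it assumes the result/ directory from the example above, and the individual file names depend on the run:

import glob
import json
import os

# Collect every result JSON the pipeline wrote under result/.
for path in sorted(glob.glob(os.path.join("result", "*.json"))):
    with open(path) as f:
        data = json.load(f)
    # Print the file name and its content so any reported errors are easy to spot.
    print(path)
    print(json.dumps(data, indent=2))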