docs: Minor documentation fixes (#445)
- Add doc link to MM_PAYLOAD_PROCESSORS variable in modelmesh repo
- Update defunct links for OpenVINO docs
- Update example inference responses
- CamelCase Modelmesh
- Add create namespace command to FVT setup instructions
- Add command to delete namespace in Quickstart cleanup instructions

---------

Signed-off-by: Christian Kadner <[email protected]>
ckadner authored Oct 6, 2023
1 parent 9c38020 commit 454d9ec
Showing 7 changed files with 37 additions and 28 deletions.
5 changes: 4 additions & 1 deletion .gitignore
@@ -1,5 +1,8 @@
# general
.env
.DS_Store
+.run/
+temp/
+
public/
target/
@@ -30,7 +33,7 @@ bin
.vscode
*~

-# Modelmesh development related artifacts
+# ModelMesh development related artifacts
devbuild
.develop_image_name
.dev/
9 changes: 5 additions & 4 deletions controllers/modelmesh/modelmesh.go
@@ -261,29 +261,30 @@ func (m *Deployment) addMMEnvVars(deployment *appsv1.Deployment) error {
	}

	if m.EnableAccessLogging {
-		// See https://github.com/kserve/modelmesh/blob/v0.11.0/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L55
+		// See https://github.com/kserve/modelmesh/blob/v0.11.1/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L55
		if err := setEnvironmentVar(ModelMeshContainerName, "MM_LOG_EACH_INVOKE", "true", deployment); err != nil {
			return err
		}
	}

	if m.GrpcMaxMessageSize > 0 {
-		// See https://github.com/kserve/modelmesh/blob/v0.11.0/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L38
+		// See https://github.com/kserve/modelmesh/blob/v0.11.1/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L38
		if err := setEnvironmentVar(ModelMeshContainerName, "MM_SVC_GRPC_MAX_MSG_SIZE", strconv.Itoa(m.GrpcMaxMessageSize), deployment); err != nil {
			return err
		}
	}

-	// See https://github.com/kserve/modelmesh/blob/v0.11.0/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L31
+	// See https://github.com/kserve/modelmesh/blob/v0.11.1/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L31
	if err := setEnvironmentVar(ModelMeshContainerName, "MM_KVSTORE_PREFIX", ModelMeshEtcdPrefix, deployment); err != nil {
		return err
	}
-	// See https://github.com/kserve/modelmesh/blob/v0.11.0/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L68
+	// See https://github.com/kserve/modelmesh/blob/v0.11.1/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L68
	if err := setEnvironmentVar(ModelMeshContainerName, "MM_DEFAULT_VMODEL_OWNER", m.DefaultVModelOwner, deployment); err != nil {
		return err
	}

	if len(m.PayloadProcessors) > 0 {
+		// See https://github.com/kserve/modelmesh/blob/v0.11.1/src/main/java/com/ibm/watson/modelmesh/ModelMeshEnvVars.java#L26
		if err := setEnvironmentVar(ModelMeshContainerName, "MM_PAYLOAD_PROCESSORS", m.PayloadProcessors, deployment); err != nil {
			return err
		}
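The `MM_PAYLOAD_PROCESSORS` value wired through above comes from the controller's user configuration. As a rough sketch of how the feature might be enabled — the `payloadProcessors` key and the consumer URL below are illustrative assumptions, not taken from this commit:

```shell
# Sketch: configure a payload processor endpoint via the controller's
# user ConfigMap (key name and URL are illustrative assumptions).
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: model-serving-config
  namespace: modelmesh-serving
data:
  config.yaml: |
    payloadProcessors: "http://payload-processor.default.svc.cluster.local:8080/consumer"
EOF
```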
31 changes: 16 additions & 15 deletions docs/model-formats/openvino-ir.md
@@ -2,13 +2,13 @@

## Format

-Full documentation on OpenVINO IR format can be found [here](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_IR_and_opsets.html#intermediate-representation-used-in-openvino).
+Full documentation on OpenVINO IR format can be found [here](https://docs.openvino.ai/2022.3/openvino_docs_MO_DG_IR_and_opsets.html#intermediate-representation-used-in-openvino).

OpenVINO™ toolkit introduces its own format of graph representation and its own operation set. A graph is represented with two files: an XML file and a binary file. This representation is commonly referred to as the Intermediate Representation or IR.

-An example of a small IR XML file can be found in the same [link above](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_IR_and_opsets.html#intermediate-representation-used-in-openvino). The XML file doesn’t have big constant values, like convolution weights. Instead, it refers to a part of the accompanying binary file that stores such values in a binary format.
+An example of a small IR XML file can be found in the same [link above](https://docs.openvino.ai/2022.3/openvino_docs_MO_DG_IR_and_opsets.html#intermediate-representation-used-in-openvino). The XML file doesn’t have big constant values, like convolution weights. Instead, it refers to a part of the accompanying binary file that stores such values in a binary format.

-Models trained in other formats (Caffe, TensorFlow, MXNet, PaddlePaddle and ONNX) can be converted to OpenVINO IR format. To do so, use OpenVINO’s [Model Optimizer](https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html).
+Models trained in other formats (Caffe, TensorFlow, MXNet, PaddlePaddle and ONNX) can be converted to OpenVINO IR format. To do so, use OpenVINO’s [Model Optimizer](https://docs.openvino.ai/2022.3/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html).
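For instance, converting an ONNX model with the Model Optimizer CLI might look like the sketch below (the input path, model name, and output directory are placeholders, not from this commit):

```shell
# Sketch: convert an ONNX model to OpenVINO IR (produces ir_model.xml
# and ir_model.bin); all paths here are illustrative placeholders.
mo --input_model model.onnx \
   --model_name ir_model \
   --output_dir models/model1/1/
```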

## Configuration

@@ -20,7 +20,7 @@ Here is an example of client code:
input_tensorname = 'input'
request.inputs[input_tensorname].CopyFrom(make_tensor_proto(img, shape=(1, 3, 224, 224)))

-.....
+...

output_tensorname = 'resnet_v1_50/predictions/Reshape_1'
predictions = make_ndarray(result.outputs[output_tensorname])
@@ -46,21 +46,22 @@ More details on model configuration can be found [here](https://docs.openvino.ai

The OpenVINO models need to be placed and mounted in a particular directory structure:

-```
+```shell
tree models/

models/
├── model1
│   ├── 1
│   │   ├── ir_model.bin
│   │   └── ir_model.xml
│   └── 2
│       ├── ir_model.bin
│       └── ir_model.xml
├── model2
│   └── 1
│       ├── ir_model.bin
│       ├── ir_model.xml
│       └── mapping_config.json
└── model3
    └── 1
        └── model.onnx
```
2 changes: 1 addition & 1 deletion docs/predictors/run-inference.md
@@ -190,7 +190,7 @@ This would give a response similar to the following:
{
"name": "predict",
"datatype": "FP32",
"shape": [1],
"shape": [1, 1],
"data": [8]
}
]
6 changes: 3 additions & 3 deletions docs/predictors/setup-storage.md
@@ -105,7 +105,7 @@ Models can be stored on [Kubernetes Persistent Volumes](https://kubernetes.io/do
There are two ways to enable PVC support in ModelMesh:

1. The Persistent Volume Claims can be added in the `storage-config` secret. This way all PVCs will be mounted to all serving runtime pods.
-2. The `allowAnyPVC` configuration flag can be set to `true`. This way the Modelmesh controller will dynamically mount the PVC to a runtime pod at the time a predictor or inference service requiring it is being deployed.
+2. The `allowAnyPVC` configuration flag can be set to `true`. This way the ModelMesh controller will dynamically mount the PVC to a runtime pod at the time a predictor or inference service requiring it is being deployed.

Follow the example instructions below to create a PVC, store a model on it, and configure ModelMesh to mount the PVC to the runtime serving pods so that the model can be loaded for inferencing.
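For the second option above, a minimal sketch of setting the flag, using the same `model-serving-config` ConfigMap pattern shown further below (the namespace is an assumption):

```shell
# Sketch: let the ModelMesh controller mount arbitrary PVCs on demand
# (namespace is an assumption; merge with any existing config overrides).
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: model-serving-config
  namespace: modelmesh-serving
data:
  config.yaml: |
    allowAnyPVC: true
EOF
```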

@@ -230,7 +230,7 @@ As an alternative to preconfiguring all _allowed_ PVCs in the `storage-config` s

Let's update (or create) the `model-serving-config` ConfigMap.

-Note, if you already have a `model-serving-config` ConfigMap, you might want to retain the existing config overrides. You can check your current configuration flags by running:
+**Note**, if you already have a `model-serving-config` ConfigMap, you might want to retain the existing config overrides. You can check your current configuration flags by running:

```shell
kubectl get cm "model-serving-config" -o jsonpath="{.data['config\.yaml']}"
```
@@ -319,7 +319,7 @@ The response should look like the following:
{
"model_name": "sklearn-pvc-example__isvc-3d2daa3370",
"outputs": [
{"name": "predict", "datatype": "INT64", "shape": [1], "data": [8]}
{"name": "predict", "datatype": "INT64", "shape": [1, 1], "data": [8]}
]
}
```
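A request producing a response of this shape might look like the sketch below, assuming a port-forward to the serving service; the predictor name matches the response above, while the 8x8 digit payload is an illustrative assumption:

```shell
# Sketch: REST inference against the predictor via a local port-forward
# (predictor name and pixel data are illustrative assumptions).
curl -s -X POST "http://localhost:8008/v2/models/sklearn-pvc-example/infer" \
  -d '{"inputs": [{"name": "predict", "shape": [1, 64], "datatype": "FP32",
       "data": [0, 0, 1, 11, 14, 15, 3, 0, 0, 1, 13, 16, 12, 16, 8, 0,
                0, 8, 16, 4, 6, 16, 5, 0, 0, 5, 15, 11, 13, 14, 0, 0,
                0, 0, 2, 12, 16, 13, 0, 0, 0, 0, 0, 13, 16, 16, 6, 0,
                0, 0, 0, 16, 16, 16, 7, 0, 0, 0, 0, 11, 13, 12, 1, 0]}]}'
```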
11 changes: 7 additions & 4 deletions docs/quickstart.md
@@ -87,6 +87,7 @@ Here, we deploy an SKLearn MNIST model which is served from the local MinIO cont

```shell
kubectl apply -f - <<EOF
+---
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
@@ -109,6 +110,7 @@ using the `storageUri` field in lieu of the storage spec:

```shell
kubectl apply -f - <<EOF
+---
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
@@ -127,7 +129,7 @@ EOF

After applying this `InferenceService`, you should see that it is likely not yet ready.

-```
+```shell
kubectl get isvc

NAME URL READY PREV LATEST PREVROLLEDOUTREVISION LATESTREADYREVISION AGE
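For reference, the manifest elided in the hunk above might look roughly like the sketch below; the name, annotation, and storageUri values are reconstructed assumptions, not part of the diff:

```shell
# Sketch: a complete InferenceService using storageUri (values are
# reconstructed assumptions based on the quickstart's visible fields).
kubectl apply -f - <<EOF
---
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-mnist-isvc
  annotations:
    serving.kserve.io/deploymentMode: ModelMesh
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: s3://modelmesh-example-models/sklearn/mnist-svm.joblib
EOF
```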
@@ -229,7 +231,7 @@ This should give you output like the following:
{
"name": "predict",
"datatype": "INT64",
"shape": ["1"],
"shape": ["1", "1"],
"contents": {
"int64Contents": ["8"]
}
@@ -264,8 +266,8 @@ This should give you a response like the following:
"outputs": [
{
"name": "predict",
"datatype": "FP32",
"shape": [1],
"datatype": "INT64",
"shape": [1, 1],
"data": [8]
}
]
@@ -281,4 +283,5 @@ command from the root of the project:

```shell
./scripts/delete.sh --namespace modelmesh-serving
+kubectl delete namespace modelmesh-serving
```
1 change: 1 addition & 0 deletions fvt/README.md
@@ -30,6 +30,7 @@ The FVTs rely on a set of models existing in a configured `localMinIO` storage.
If starting with a fresh namespace, install ModelMesh Serving configured for the FVTs with:

```Shell
+kubectl create namespace modelmesh-serving
./scripts/install.sh --namespace modelmesh-serving --fvt --dev-mode-logging
```
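Once installed, the suite can then be run from the project root; a minimal sketch, assuming the repository's Makefile exposes an `fvt` target:

```shell
# Sketch: run the FVT suite from the repository root
# (target name assumed from the project Makefile).
make fvt
```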

