This repository has been archived by the owner on Jul 10, 2023. It is now read-only.

Merge pull request #99 from intel/v0.7.0_PR
whbruce authored Dec 8, 2021
2 parents 9cbf744 + a855975 commit ead1dee
Showing 122 changed files with 2,594 additions and 8,221 deletions.
3 changes: 1 addition & 2 deletions .gitignore
@@ -1,11 +1,10 @@
__pycache__/
.cache/
.vscode/
.bash_history
*.generated
docker/openvino_base_environment.txt
docker/Dockerfile.env
docker/final.env
models
tests/results/**/*
samples/ava_ai_extension/tests/results/**/*
samples/edgex_bridge/edgex/**/*
92 changes: 62 additions & 30 deletions README.md
@@ -116,7 +116,9 @@ VA Serving includes a sample client [vaclient](./vaclient/README.md) that can co
Before running a pipeline, we need to know what pipelines are available. We do this using vaclient's `list-pipelines` command.
In a new shell, run the following command:
```bash
-$ ./vaclient/vaclient.sh list-pipelines
+./vaclient/vaclient.sh list-pipelines
```
```
- object_detection/person_vehicle_bike
- object_classification/vehicle_attributes
- audio_detection/environment
@@ -125,12 +127,15 @@ $ ./vaclient/vaclient.sh list-pipelines
> **Note:** The pipelines you will see may differ slightly.
Pipelines are displayed as a name/version tuple. The name reflects the action and the version supplies more details of that action. Let's go with `object_detection/person_vehicle_bike`. Now we need to choose a media source. We recommend the [IoT Devkit sample videos](https://github.com/intel-iot-devkit/sample-videos) to get started. As the pipeline version indicates support for detecting people, `person-bicycle-car-detection.mp4` would be a good choice.
> **Note:** Make sure to include `raw=true` parameter in the Github URL as shown in our examples. Failure to do so will result in a pipeline execution error.
vaclient offers a `run` command that takes two additional arguments: the `pipeline` and the `uri` of the media source. The `run` command displays inference results until either the media is exhausted or `CTRL+C` is pressed.

Inference result bounding boxes are displayed in the format `label (confidence) [top left width height] {meta-data}` provided applicable data is present. At the end of the pipeline run, the average fps is shown.
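As an illustrative aside (not part of vaclient), a result line in the format above can be parsed with a short script. The regular expression below is an assumption based solely on the example lines shown in this README; the exact output format may differ between versions.

```python
import re

# Hypothetical parser for vaclient result lines such as:
#   - vehicle (0.95) [0.00, 0.12, 0.15, 0.36]
#   - vehicle (1.00) [0.41, 0.00, 0.57, 0.33] {'color': 'red', 'type': 'car'}
PATTERN = re.compile(
    r"- (?P<label>\w+) \((?P<confidence>[\d.]+)\) "
    r"\[(?P<box>[\d., ]+)\]"
    r"(?: (?P<meta>\{.*\}))?"
)

def parse_result(line):
    """Return label, confidence, bounding box and raw meta-data, or None."""
    match = PATTERN.match(line.strip())
    if not match:
        return None
    return {
        "label": match.group("label"),
        "confidence": float(match.group("confidence")),
        "box": [float(v) for v in match.group("box").split(",")],
        "meta": match.group("meta"),  # None when no meta-data is printed
    }

result = parse_result("- vehicle (0.95) [0.00, 0.12, 0.15, 0.36]")
```

Filtering the stream this way (e.g. keeping only detections above a confidence threshold) is a common next step when scripting against vaclient output.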
```
-$ ./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true
+./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true
```
```
Timestamp 48583333333
- vehicle (0.95) [0.00, 0.12, 0.15, 0.36]
Timestamp 48666666666
@@ -159,55 +164,76 @@ All being well it will go into `QUEUED` then `RUNNING` state. We can interrogate
> **NOTE:** The pipeline instance value depends on the number of pipelines started while the server is running so may differ from the value shown in the following examples.
```
-$ ./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true
+./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true
```
```
<snip>
-Starting pipeline...
-Pipeline running: object_detection/person_vehicle_bike, instance = 2
+Starting pipeline object_detection/person_vehicle_bike, instance = 2
```
You will need both the pipeline tuple and `instance` id for the status command. This command will display pipeline state:
```
-$ ./vaclient/vaclient.sh status object_detection/person_vehicle_bike 2
+./vaclient/vaclient.sh status object_detection/person_vehicle_bike 2
```
```
<snip>
RUNNING
```
Then wait for a minute or so and try again. The pipeline will have completed.
```
-$ ./vaclient/vaclient.sh status object_detection/person_vehicle_bike 2
+./vaclient/vaclient.sh status object_detection/person_vehicle_bike 2
```
```
<snip>
COMPLETED
```
### Aborted
If a pipeline is stopped, rather than allowed to complete, it goes into the ABORTED state.
Start the pipeline again; this time we'll stop it.
```
-$ ./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true
+./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true
```
```
<snip>
-Starting pipeline...
-Pipeline running: object_detection/person_vehicle_bike, instance = 3
-$ ./vaclient/vaclient.sh status object_detection/person_vehicle_bike 3
+Starting pipeline object_detection/person_vehicle_bike, instance = 3
```
```
./vaclient/vaclient.sh status object_detection/person_vehicle_bike 3
```
```
<snip>
RUNNING
-$ ./vaclient/vaclient.sh stop object_detection/person_vehicle_bike 3
```
```
+./vaclient/vaclient.sh stop object_detection/person_vehicle_bike 3
```
```
<snip>
Stopping Pipeline...
Pipeline stopped
avg_fps: 24.33
-$ ./vaclient/vaclient.sh status object_detection/person_vehicle_bike 3
```
```
+./vaclient/vaclient.sh status object_detection/person_vehicle_bike 3
```
```
<snip>
ABORTED
```
### Error
The error state covers a number of outcomes, such as a request that could not be satisfied, a missing pipeline dependency, or an initialization problem. We can create an error condition by supplying a valid but unreachable uri.
```
-$ ./vaclient/vaclient.sh start object_detection/person_vehicle_bike http://bad-uri
+./vaclient/vaclient.sh start object_detection/person_vehicle_bike http://bad-uri
```
```
<snip>
-Starting pipeline...
-Pipeline running: object_detection/person_vehicle_bike, instance = 4
+Starting pipeline object_detection/person_vehicle_bike, instance = 4
```
Note that VA Serving does not report an error at this stage as it goes into `QUEUED` state before it realizes that the source is not providing media.
Checking on state a few seconds later will show the error.
```
-$ ./vaclient/vaclient.sh status object_detection/person_vehicle_bike 4
+./vaclient/vaclient.sh status object_detection/person_vehicle_bike 4
```
```
<snip>
ERROR
```
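The states observed in the sections above (`QUEUED`, `RUNNING`, `COMPLETED`, `ABORTED`, `ERROR`) form a small state machine. As a hedged sketch, the transition set below is inferred only from the README examples, not from the VA Serving source:

```python
# Hypothetical model of the pipeline lifecycle described above.
# QUEUED and RUNNING are transient; the other three states are terminal.
TRANSITIONS = {
    "QUEUED": {"RUNNING", "ABORTED", "ERROR"},
    "RUNNING": {"COMPLETED", "ABORTED", "ERROR"},
    "COMPLETED": set(),  # terminal
    "ABORTED": set(),    # terminal (pipeline was stopped)
    "ERROR": set(),      # terminal (request failed or source unreachable)
}

def is_terminal(state):
    """A state with no outgoing transitions ends the pipeline run."""
    return not TRANSITIONS[state]

def can_transition(current, new):
    return new in TRANSITIONS[current]
```

A status-polling loop would typically stop as soon as `is_terminal` returns true for the reported state.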
@@ -217,16 +243,15 @@ RTSP allows you to connect to a server and display a video stream. VA Serving in

First start VA Serving with RTSP enabled. By default, the RTSP stream will use port 8554.
```
-$ docker/run.sh --enable-rtsp -v /tmp:/tmp
+docker/run.sh --enable-rtsp -v /tmp:/tmp
```
Then start a pipeline specifying the RTSP server endpoint path `vaserving`. In this case the RTSP endpoint would be `rtsp://localhost:8554/vaserving`.
```
-$ ./vaclient/vaclient.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --rtsp-path vaserving
+./vaclient/vaclient.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --rtsp-path vaserving
```
If you see the error
```
-Starting pipeline...
-Pipeline running: object_detection/person_vehicle_bike, instance = 1
+Starting pipeline object_detection/person_vehicle_bike, instance = 1
Error in pipeline, please check vaserving log messages
```
You probably forgot to enable RTSP in the server.
@@ -240,9 +265,11 @@ Now start `vlc` and from the `Media` menu select `Open Network Stream`. For URL
## Change Pipeline and Source Media
With vaclient it is easy to customize service requests. Here we will use a vehicle classification pipeline `object_classification/vehicle_attributes` with the IoT Devkit video `car-detection.mp4`. Note how vaclient now displays classification metadata including the type and color of the vehicle.
```
-$ ./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true
-Starting pipeline...
-Pipeline running: object_classification/vehicle_attributes, instance = 1
+./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true
```
```
+Starting pipeline object_classification/vehicle_attributes, instance = 1
+Pipeline running
<snip>
Timestamp 18080000000
- vehicle (1.00) [0.41, 0.00, 0.57, 0.33] {'color': 'red', 'type': 'car'}
@@ -271,9 +298,10 @@ If you look at video you can see that there are some errors in classification -
Inference accelerator devices can be easily selected using the device parameter. Here we run the car classification pipeline again,
but this time use the integrated GPU for detection inference by setting the `detection-device` parameter.
```
-$ ./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --parameter detection-model-instance-id person_vehicle_bike_detection_gpu
-Starting pipeline...
-Pipeline running: object_classification/vehicle_attributes, instance = 2
+./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --parameter detection-model-instance-id person_vehicle_bike_detection_gpu
```
```
+Starting pipeline object_classification/vehicle_attributes, instance = 2
```
> **Note:** The GPU inference plug-in dynamically builds OpenCL kernels when it is first loaded resulting in a ~30s delay before inference results are produced.
@@ -286,7 +314,9 @@ As the previous example has shown, the vaclient application works by converting
The `--show-request` option displays the REST verb, uri and body in the request.
Let's repeat the previous GPU inference example, adding RTSP output and show the underlying request.
```
-$ ./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --rtsp-path vaserving --show-request
+./vaclient/vaclient.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true --parameter detection-device GPU --rtsp-path vaserving --show-request
```
```
<snip>
POST http://localhost:8080/pipelines/object_classification/vehicle_attributes
Body:{'source': {'uri': 'https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true', 'type': 'uri'}, 'destination': {'metadata': {'type': 'file', 'path': '/tmp/results.jsonl', 'format': 'json-lines'}, 'frame': {'type': 'rtsp', 'path': 'vaserving'}}, 'parameters': {'detection-device': 'GPU'}}
@@ -332,11 +362,11 @@ They are easier to understand when the json is pretty-printed

The `--show-request` output can be easily converted into a curl command.
```
-$ curl <URI> -X <VERB> -H "Content-Type: application/json' -d <BODY>
+curl <URI> -X <VERB> -H 'Content-Type: application/json' -d <BODY>
```
So the above request would be as below. Note the pipeline instance `1` returned by the request.
```bash
-$ curl localhost:8080/pipelines/object_classification/vehicle_attributes -X POST -H \
+curl localhost:8080/pipelines/object_classification/vehicle_attributes -X POST -H \
'Content-Type: application/json' -d \
'{
"source": {
@@ -358,6 +388,8 @@ $ curl localhost:8080/pipelines/object_classification/vehicle_attributes -X POST
"detection-device": "GPU"
}
}'
```
```
1
```
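For illustration only (not part of the repository), the same request body can be assembled programmatically before sending it with curl or any HTTP client. The `build_request_body` helper and its defaults below are hypothetical; the payload shape mirrors the `--show-request` output above.

```python
import json

# Sketch: build the request body that vaclient generates from its
# command-line arguments (mirrors the --show-request output above).
def build_request_body(uri, parameters=None, rtsp_path=None):
    body = {
        "source": {"uri": uri, "type": "uri"},
        "destination": {
            "metadata": {
                "type": "file",
                "path": "/tmp/results.jsonl",  # vaclient's default results file
                "format": "json-lines",
            }
        },
    }
    if rtsp_path:  # equivalent to vaclient's --rtsp-path option
        body["destination"]["frame"] = {"type": "rtsp", "path": rtsp_path}
    if parameters:  # equivalent to repeated --parameter options
        body["parameters"] = parameters
    return body

body = build_request_body(
    "https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true",
    parameters={"detection-device": "GPU"},
    rtsp_path="vaserving",
)
payload = json.dumps(body)  # the -d <BODY> argument for curl
```

Posting `payload` to `http://localhost:8080/pipelines/<name>/<version>` would then return the pipeline instance id, as in the curl example above.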
# Changing Pipeline Model
15 changes: 12 additions & 3 deletions docker/Dockerfile
@@ -45,6 +45,15 @@ RUN if [[ ${VA_SERVING_BASE} == *"openvino/ubuntu20_data_runtime:2021.2" ]]; the
rm -rf /var/lib/apt/lists/* ;\
fi

+# Install boost library required for HDDL plugin
+RUN if [[ ${VA_SERVING_BASE} == *"openvino/ubuntu20_data_runtime"* ]]; then \
+    DEBIAN_FRONTEND=noninteractive apt-get update && \
+    apt-get install -y -q --no-install-recommends \
+    libboost-program-options1.71.0 && \
+    apt-get clean && \
+    rm -rf /var/lib/apt/lists/* ;\
+    fi

RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
apt-get upgrade -y -q && \
apt-get dist-upgrade -y -q && \
@@ -70,6 +79,9 @@ COPY ./vaserving /home/video-analytics-serving/vaserving
COPY ./vaclient /home/video-analytics-serving/vaclient
COPY --chown=vaserving ./tools /home/video-analytics-serving/tools

+# Copy GVA Python extensions
+COPY ./extensions /home/video-analytics-serving/extensions

# Media Analytics Framework set via environment variable
ENV FRAMEWORK=${FRAMEWORK}
WORKDIR /home/video-analytics-serving
@@ -107,9 +119,6 @@ ONBUILD ARG PIPELINES_PATH
ONBUILD ENV PIPELINES_PATH=${PIPELINES_PATH}
ONBUILD COPY ${PIPELINES_PATH} /home/video-analytics-serving/pipelines

-# Copy GVA Python extensions
-ONBUILD COPY ./extensions /home/video-analytics-serving/extensions

# Stage that is used is controlled via PIPELINES_COMMAND build argument
FROM ${PIPELINES_COMMAND} as video-analytics-serving-with-models-and-pipelines
########################################################
8 changes: 4 additions & 4 deletions docker/build.sh
@@ -10,7 +10,7 @@ DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")")
SOURCE_DIR=$(dirname "$DOCKERFILE_DIR")

BASE_IMAGE_FFMPEG="openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg:20.10"
-BASE_IMAGE_GSTREAMER="openvino/ubuntu20_data_runtime:2021.4.1"
+BASE_IMAGE_GSTREAMER="openvino/ubuntu20_data_runtime:2021.4.2"

BASE_IMAGE=${BASE_IMAGE:-""}
BASE_BUILD_CONTEXT=
@@ -36,7 +36,7 @@ BASE_BUILD_OPTIONS="--network=host "

SUPPORTED_IMAGES=($BASE_IMAGE_GSTREAMER $BASE_IMAGE_FFMPEG)
OPEN_MODEL_ZOO_TOOLS_IMAGE=${OPEN_MODEL_ZOO_TOOLS_IMAGE:-"openvino/ubuntu20_data_dev"}
-OPEN_MODEL_ZOO_VERSION=${OPEN_MODEL_ZOO_VERSION:-"2021.4.1"}
+OPEN_MODEL_ZOO_VERSION=${OPEN_MODEL_ZOO_VERSION:-"2021.4.2"}
FORCE_MODEL_DOWNLOAD=

DEFAULT_GSTREAMER_BASE_BUILD_TAG="video-analytics-serving-gstreamer-base"
@@ -419,7 +419,7 @@ cp -f $DOCKERFILE_DIR/Dockerfile $DOCKERFILE_DIR/Dockerfile.env
ENVIRONMENT_FILE_LIST=

if [[ "$BASE_IMAGE" == *"openvino/"* ]]; then
-    $RUN_PREFIX docker run -t --rm $DOCKER_RUN_ENVIRONMENT --entrypoint /bin/bash -e HOSTNAME=BASE $BASE_IMAGE "-i" "-c" "env" > $DOCKERFILE_DIR/openvino_base_environment.txt
+    $RUN_PREFIX docker run -t --rm --entrypoint /bin/bash -e HOSTNAME=BASE $BASE_IMAGE "-i" "-c" "env" > $DOCKERFILE_DIR/openvino_base_environment.txt
ENVIRONMENT_FILE_LIST+="$DOCKERFILE_DIR/openvino_base_environment.txt "
fi

@@ -430,7 +430,7 @@ for ENVIRONMENT_FILE in ${ENVIRONMENT_FILES[@]}; do
done

if [ ! -z "$ENVIRONMENT_FILE_LIST" ]; then
-    cat $ENVIRONMENT_FILE_LIST | grep -E '=' | tr '\n' ' ' | tr '\r' ' ' > $DOCKERFILE_DIR/final.env
+    cat $ENVIRONMENT_FILE_LIST | grep -E '=' | sed -e 's/,\s\+/,/g' | tr '\n' ' ' | tr '\r' ' ' > $DOCKERFILE_DIR/final.env
echo " HOME=/home/video-analytics-serving " >> $DOCKERFILE_DIR/final.env
echo "ENV " | cat - $DOCKERFILE_DIR/final.env | tr -d '\n' >> $DOCKERFILE_DIR/Dockerfile.env
printf "\nENV PYTHONPATH=\$PYTHONPATH:/home/video-analytics-serving\nENV GST_PLUGIN_PATH=\$GST_PLUGIN_PATH:/usr/lib/x86_64-linux-gnu/gstreamer-1.0/" >> $DOCKERFILE_DIR/Dockerfile.env
30 changes: 24 additions & 6 deletions docker/run.sh
@@ -83,7 +83,11 @@ enable_hardware_access() {
if ls /dev/dri/render* 1> /dev/null 2>&1; then
echo "Found /dev/dri/render entry - enabling for GPU"
DEVICES+='--device /dev/dri '
-        USER_GROUPS+="--group-add $(stat -c '%g' /dev/dri/render*) "
+        RENDER_GROUPS=$(stat -c '%g' /dev/dri/render*)
+        for group in $RENDER_GROUPS
+        do
+            USER_GROUPS+="--group-add $group "
+        done
fi

# Intel(R) NCS2
@@ -94,10 +98,21 @@ enable_hardware_access() {
fi

# HDDL
-    if [ -e /dev/ion ]; then
-        echo "Found /dev/ion - enabling for HDDL-R"
-        DEVICES+="--device /dev/ion "
-        VOLUME_MOUNT+="-v /var/tmp:/var/tmp "
+    if compgen -G /dev/myriad* > /dev/null ; then
+        echo "Found /dev/myriad devices - enabling for HDDL-R"
+        VOLUME_MOUNT+="-v /var/tmp:/var/tmp -v /dev/shm:/dev/shm "
    fi

+    # Webcam
+    for device in $(ls /dev | grep video); do
+        echo "Found /dev/$device - enabling webcam"
+        DEVICES+="--device /dev/$device "
+    done

+    # Microphone
+    if [ -e /dev/snd ]; then
+        echo "Found /dev/snd - enabling microphone"
+        DEVICES+="--device /dev/snd "
+    fi
}

@@ -298,16 +313,19 @@ if [ "${MODE}" == "DEV" ]; then
PIPELINES=$SOURCE_DIR/pipelines/$FRAMEWORK
fi
PRIVILEGED="--privileged "
+elif [ ! -z "$ENTRYPOINT" ]; then
+    MODE=CUSTOM_ENTRYPOINT
elif [ "${MODE}" == "SERVICE" ]; then
if [ -z "$PORTS" ]; then
PORTS+="-p 8080:8080 "
fi
-    enable_hardware_access
else
echo "Invalid Mode"
show_help
fi

+enable_hardware_access

if [ ! -z "$ENABLE_RTSP" ]; then
ENVIRONMENT+="-e ENABLE_RTSP=true -e RTSP_PORT=$RTSP_PORT "
PORTS+="-p $RTSP_PORT:$RTSP_PORT "