PipelineAI Home
PipelineAI is fully compatible with AWS SageMaker.
Specifically, you can upload PipelineAI-optimized Docker images to your public or private Docker Repo for use with AWS SageMaker's Custom Docker image support.
Click HERE for more details.
PipelineAI Products
PipelineAI Features
Each model is built into a separate Docker image with the appropriate Python, C++, and Java/Scala Runtime Libraries for training or prediction.
Use the same Docker Image from Local Laptop to Production to avoid dependency surprises.
Click HERE to view model samples for the following:
- Scikit-Learn
- TensorFlow
- Keras
- Spark ML (formerly called Spark MLlib)
- Xgboost
- PMML/PFA
- Custom Java
- Custom Python
- Model Ensembles
Coming Soon: Support for Amazon MXNet, Microsoft CNTK, and ONNX
Supported model runtimes (CPU and GPU):
- Python
- Java
- Scala
- C++
- Caffe2
- Theano
- TensorFlow Serving (TensorFlow)
- Nvidia TensorRT (TensorFlow, Caffe2)
Coming Soon: Amazon MXNet, Microsoft CNTK, and ONNX
- Install Docker
- Install Miniconda with Python 2 or 3 (Preferred) Support
- (Windows Only) Install PowerShell
Notes:
- This command line interface requires Python 2 or 3 and Docker as detailed above in the Pre-Requisites section.
pip install cli-pipeline==1.5.3 --user --ignore-installed --no-cache -U
pipeline version
### EXPECTED OUTPUT ###
cli_version: 1.5.x <-- MAKE SURE THIS MATCHES THE VERSION YOU INSTALLED ABOVE
default train base image: docker.io/pipelineai/train-cpu:1.5.0
default predict base image: docker.io/pipelineai/predict-cpu:1.5.0
capabilities_enabled: ['train-server', 'train-kube', 'train-sage', 'predict-server', 'predict-kube', 'predict-sage', 'predict-kafka']
capabilities_available: ['optimize', 'jupyter', 'spark', 'airflow', 'kafka']
Email [email protected] to enable the advanced capabilities.
pipeline
### EXPECTED OUTPUT ###
...
Usage: pipeline <-- This List of CLI Commands
pipeline predict-http-test <-- Test Model Cluster (Http Endpoint)
pipeline predict-kafka-consume <-- Consume Kafka Predictions
pipeline predict-kafka-describe <-- Describe Kafka Prediction Cluster
pipeline predict-kafka-start <-- Start Kafka Prediction Cluster
pipeline predict-kafka-test <-- Test Model Cluster (Kafka Endpoint)
pipeline predict-kube-autoscale <-- Configure AutoScaling for Model Cluster
pipeline predict-kube-connect <-- Create Secure Tunnel to Model Cluster
pipeline predict-kube-describe <-- Describe Model Cluster
pipeline predict-kube-logs <-- View Model Cluster Logs
pipeline predict-kube-route <-- Route Live Traffic
pipeline predict-kube-scale <-- Scale Model Cluster
pipeline predict-kube-shell <-- Shell into Model Cluster
pipeline predict-kube-start <-- Start Model Cluster from Docker Registry
pipeline predict-kube-status <-- Status of Model Cluster
pipeline predict-kube-stop <-- Stop Model Cluster
pipeline predict-kube-test <-- Test Model Cluster
pipeline predict-sage-route <-- Route Live Traffic (SageMaker)
pipeline predict-sage-start <-- Start Model Cluster from Docker Registry (SageMaker)
pipeline predict-sage-test <-- Test Model Cluster (SageMaker)
pipeline predict-server-build <-- Build Model Server
pipeline predict-server-logs <-- View Model Server Logs
pipeline predict-server-pull <-- Pull Model Server from Docker Registry
pipeline predict-server-push <-- Push Model Server to Docker Registry
pipeline predict-server-shell <-- Shell into Model Server (Debugging)
pipeline predict-server-start <-- Start Model Server
pipeline predict-server-stop <-- Stop Model Server
pipeline predict-server-test <-- Test Model Server
pipeline train-kube-connect <-- Create Secure Tunnel to Training Cluster
pipeline train-kube-describe <-- Describe Training Cluster
pipeline train-kube-logs <-- View Training Cluster Logs
pipeline train-kube-scale <-- Scale Training Cluster
pipeline train-kube-shell <-- Shell into Training Cluster
pipeline train-kube-start <-- Start Training Cluster from Docker Registry
pipeline train-kube-status <-- Status of Training Cluster
pipeline train-kube-stop <-- Stop Training Cluster
pipeline train-server-build <-- Build Training Server
pipeline train-server-logs <-- View Training Server Logs
pipeline train-server-pull <-- Pull Training Server from Docker Registry
pipeline train-server-push <-- Push Training Server to Docker Registry
pipeline train-server-shell <-- Shell into Training Server (Debugging)
pipeline train-server-start <-- Start Training Server
pipeline train-server-stop <-- Stop Training Server
pipeline version <-- View This CLI Version
...
git clone https://github.com/PipelineAI/models
cd ./models
ls -l ./tensorflow/census/model
### EXPECTED OUTPUT ###
...
pipeline_conda_environment.yml <-- Required. Sets up the conda environment
pipeline_condarc <-- Required. Configure Conda proxy servers (.condarc)
pipeline_setup.sh <-- Required. Init script performed upon Docker build
pipeline_train.py <-- Required. `main()` is required. Pass args with `--train-args`
...
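For reference, here is a minimal, hypothetical sketch of a `pipeline_train.py`. The argument names mirror the `--train-args` used in the next step; the training logic itself is a placeholder, not the actual census example:

# Hypothetical sketch of a pipeline_train.py entry point -- not the actual census example.
import argparse

def main():
    # Arguments arrive as a single string via --train-args (see `pipeline train-server-start` below).
    parser = argparse.ArgumentParser()
    parser.add_argument('--train-files', type=str, required=True)
    parser.add_argument('--eval-files', type=str, required=True)
    parser.add_argument('--num-epochs', type=int, default=1)
    parser.add_argument('--learning-rate', type=float, default=0.01)
    args = parser.parse_args()

    # ... load data, build the model, and run the training loop here ...
    print('Training on %s for %d epoch(s)' % (args.train_files, args.num_epochs))

if __name__ == '__main__':
    main()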
pipeline train-server-build --model-type=tensorflow --model-name=census --model-tag=a --model-path=./tensorflow/census/model
Notes:
- `--model-path` must be relative. On Windows, be sure to use the forward slash `/` for `--model-path`.
- If you see `CondaHTTPError: HTTP 000 CONNECTION FAILED for url`, `[Errno 111] Connection refused`, or `ConnectionError(MaxRetryError("HTTPSConnectionPool`, you need to update `./tensorflow/census/model/pipeline_condarc` to include proxy servers per THIS document (a sketch follows below).
- For `pip` installs, you may also need to `export HTTP_PROXY` and `export HTTPS_PROXY` within `./tensorflow/census/model/pipeline_setup.sh`.
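For reference, the proxy entries in `pipeline_condarc` follow the standard Conda `.condarc` YAML format. A minimal sketch (the host, port, and credentials below are placeholders, not values from the census example):

# Hypothetical proxy settings in standard .condarc (YAML) format
proxy_servers:
  http: http://user:[email protected]:8080
  https: https://user:[email protected]:8080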
pipeline train-server-start --model-type=tensorflow --model-name=census --model-tag=a --input-path=./tensorflow/census/input --output-path=./tensorflow/census/output --train-args="--train-files=training/adult.training.csv\ --eval-files=validation/adult.validation.csv\ --num-epochs=2\ --learning-rate=0.025"
Notes:
- `--train-args` is a single argument passed into `pipeline_train.py`. Therefore, you must escape spaces (`\`) between arguments.
- `--input-path` and `--output-path` are relative to the current working directory (outside the Docker container) and will be mapped as directories inside the Docker container beneath `/root`.
- `--train-files` and `--eval-files` are relative to `--input-path` inside the Docker container.
- Models, logs, and events are written to `--output-path` (or a subdirectory within). These will be available outside of the Docker container.
- To prevent overwriting the output of a previous run, you should either 1) change the `--output-path` between calls or 2) create a new, unique subfolder within `--output-path` in your `pipeline_train.py` (i.e. a timestamp). See the sketch and examples below.
- On Windows, be sure to use the forward slash `/` for `--input-path` and `--output-path` (not the args inside of `--train-args`).
- If you see `port is already allocated` or `already in use by container`, you already have a container running. List and remove any conflicting containers. For example, `docker ps` and/or `docker rm -f train-census-a-tensorflow-tfserving-cpu`.

(We are working on making this more intuitive.)
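As a minimal sketch of option 2) above (hypothetical code, not the actual census example), `pipeline_train.py` can write each run into a unique, timestamped subdirectory of the output path:

# Hypothetical sketch: write each training run into a timestamped subdirectory
# so that earlier runs under --output-path are never overwritten.
import os
import time

output_path = './output'                                             # assumption: however your code resolves --output-path
run_output_path = os.path.join(output_path, str(int(time.time())))   # e.g. ./output/1511367633
os.makedirs(run_output_path, exist_ok=True)
# ... write models, logs, and events into run_output_path ...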
pipeline train-server-logs --model-type=tensorflow --model-name=census --model-tag=a
Press `Ctrl-C` to exit out of the logs.
Make sure you pressed `Ctrl-C` to exit out of the logs.
ls -l ./tensorflow/census/output/
### EXPECTED OUTPUT ###
...
drwxr-xr-x 11 cfregly staff 352 Nov 22 11:20 1511367633
drwxr-xr-x 11 cfregly staff 352 Nov 22 11:21 1511367665
drwxr-xr-x 11 cfregly staff 352 Nov 22 11:22 1511367765 <-- Sub-directories of training output
...
Multiple training runs will produce multiple subdirectories - each with a different timestamp.
http://localhost:6006
pipeline train-server-stop --model-type=tensorflow --model-name=census --model-tag=a
Note: This is relative to where you cloned the `models` repo above.
ls -l ./tensorflow/mnist/model
### EXPECTED OUTPUT ###
...
pipeline_conda_environment.yml <-- Required. Sets up the conda environment
pipeline_condarc <-- Required. Configure Conda proxy servers (.condarc)
pipeline_predict.py <-- Required. `predict(request: bytes) -> bytes` is required
pipeline_setup.sh <-- Required. Init script performed upon Docker build
pipeline_tfserving/ <-- Optional. Only TensorFlow Serving requires this directory
...
Inspect TensorFlow Serving Model
ls -l ./tensorflow/mnist/model/pipeline_tfserving/
### EXPECTED OUTPUT ###
...
pipeline_tfserving.config <-- Required by TensorFlow Serving. Custom request-batch sizes, etc.
1510612525/
1510612528/ <-- TensorFlow Serving finds the latest (highest) version
...
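For reference, a sketch of the standard TensorFlow Serving model-config text format is shown below. This is an illustrative guess, not the actual contents of `pipeline_tfserving.config`; the model name and `base_path` are placeholders:

# Illustrative sketch only -- standard TensorFlow Serving ModelServerConfig text format
model_config_list {
  config {
    name: 'mnist'
    base_path: '/path/to/pipeline_tfserving'
    model_platform: 'tensorflow'
  }
}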
This command bundles the TensorFlow runtime with the model.
pipeline predict-server-build --model-type=tensorflow --model-name=mnist --model-tag=a --model-path=./tensorflow/mnist/model
Notes:
- `--model-path` must be relative. On Windows, be sure to use the forward slash `/` for `--model-path`.
- If you see `CondaHTTPError: HTTP 000 CONNECTION FAILED for url`, `[Errno 111] Connection refused`, or `ConnectionError(MaxRetryError("HTTPSConnectionPool`, you need to update `./tensorflow/mnist/model/pipeline_condarc` to include proxy servers per THIS document.
- For `pip` installs, you may also need to `export HTTP_PROXY` and `export HTTPS_PROXY` within `./tensorflow/mnist/model/pipeline_setup.sh`.
pipeline predict-server-start --model-type=tensorflow --model-name=mnist --model-tag=a --memory-limit=2G
Notes:
- Ignore the following warning: `WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.`
- If you see `port is already allocated` or `already in use by container`, you already have a container running. List and remove any conflicting containers. For example, `docker ps` and/or `docker rm -f train-tfserving-tensorflow-mnist-a`.
- You can change the port(s) by specifying the following: `--predict-port=8081`, `--prometheus-port=9001`, `--grafana-port=3001`. (Be sure to change the ports in the examples below to match your new ports.)
Note: Only the `predict()` method is required. Everything else is optional.
cat ./tensorflow/mnist/model/pipeline_predict.py
### EXPECTED OUTPUT ###
import os
import logging

from pipeline_model import TensorFlowServingModel           # <-- Optional. Wraps TensorFlow Serving
from pipeline_monitor import prometheus_monitor as monitor  # <-- Optional. Monitor runtime metrics
from pipeline_logger import log                             # <-- Optional. Log to console, file, kafka

...

__all__ = ['predict']                                        # <-- Optional. Being a good Python citizen.

...

def _initialize_upon_import() -> TensorFlowServingModel:     # <-- Optional. Called once at server startup
    return TensorFlowServingModel(host='localhost',          # <-- Optional. Wraps TensorFlow Serving
                                  port=9000,
                                  model_name=os.environ['PIPELINE_MODEL_NAME'],
                                  inputs_name='inputs',      # <-- Optional. TensorFlow SignatureDef inputs
                                  outputs_name='outputs',    # <-- Optional. TensorFlow SignatureDef outputs
                                  timeout=100)               # <-- Optional. TensorFlow Serving timeout

_model = _initialize_upon_import()                           # <-- Optional. Called once upon server startup

_labels = {'model_runtime': os.environ['PIPELINE_MODEL_RUNTIME'],  # <-- Optional. Tag metrics
           'model_type': os.environ['PIPELINE_MODEL_TYPE'],
           'model_name': os.environ['PIPELINE_MODEL_NAME'],
           'model_tag': os.environ['PIPELINE_MODEL_TAG']}

_logger = logging.getLogger('predict-logger')                # <-- Optional. Standard Python logging

@log(labels=_labels, logger=_logger)                         # <-- Optional. Sample and compare predictions
def predict(request: bytes) -> bytes:                        # <-- Required. Called on every prediction
    with monitor(labels=_labels, name="transform_request"):  # <-- Optional. Expose fine-grained metrics
        transformed_request = _transform_request(request)    # <-- Optional. Transform input (json) into TensorFlow (tensor)

    with monitor(labels=_labels, name="predict"):
        predictions = _model.predict(transformed_request)    # <-- Optional. Calls _model.predict()

    with monitor(labels=_labels, name="transform_response"):
        transformed_response = _transform_response(predictions)  # <-- Optional. Transform TensorFlow (tensor) into output (json)

    return transformed_response                              # <-- Required. Returns the predicted value(s)

...
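The `_transform_request` and `_transform_response` helpers are elided above (`...`). As a purely hypothetical sketch (the actual mnist code may differ), they could convert the incoming JSON `{"image": [...]}` into the named inputs expected by TensorFlow Serving and convert the predictions back into JSON:

# Hypothetical sketch of the elided transform helpers -- not the actual mnist implementation.
import json
import numpy as np

def _transform_request(request: bytes) -> dict:
    # JSON bytes -> dict of named inputs (here, a float32 vector named 'image')
    request_json = json.loads(request.decode('utf-8'))
    return {'image': np.array(request_json['image'], dtype=np.float32)}

def _transform_response(predictions) -> bytes:
    # model outputs -> JSON bytes returned to the caller (numpy values serialized via tolist())
    return json.dumps({'outputs': predictions}, default=lambda o: o.tolist()).encode('utf-8')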
Wait for the model runtime to settle...
pipeline predict-server-logs --model-type=tensorflow --model-name=mnist --model-tag=a
### EXPECTED OUTPUT ###
...
2017-10-10 03:56:00.695 INFO 121 --- [ run-main-0] i.p.predict.jvm.PredictionServiceMain$ : Started PredictionServiceMain. in 7.566 seconds (JVM running for 20.739)
[debug] Thread run-main-0 exited.
[debug] Waiting for thread container-0 to terminate.
...
INFO[0050] Completed initial partial maintenance sweep through 4 in-memory fingerprints in 40.002264633s. source="storage.go:1398"
...
Notes:
- You need to `Ctrl-C` out of the log viewing before proceeding.
Before proceeding, make sure you hit `Ctrl-C` after viewing the logs in the previous step.
pipeline predict-server-test --model-endpoint-url=http://localhost:8080/invocations --test-request-path=./tensorflow/mnist/input/predict/test_request.json
### EXPECTED OUTPUT ###
...
('{"variant": "tfserving-cpu-tensorflow-mnist-a", "outputs":{"outputs": '
'[0.11128007620573044, 1.4478533557849005e-05, 0.43401220440864563, '
'0.06995827704668045, 0.0028081508353352547, 0.27867695689201355, '
'0.017851119861006737, 0.006651509087532759, 0.07679300010204315, '
'0.001954273320734501]}}')
...
### FORMATTED OUTPUT ###
Digit Confidence
===== ==========
0 0.0022526539396494627
1 2.63791100074684e-10
2 0.4638307988643646 <-- Prediction
3 0.21909376978874207
4 3.2985670372909226e-07
5 0.29357224702835083
6 0.00019597385835368186
7 5.230629176367074e-05
8 0.020996594801545143
9 5.426473762781825e-06
Notes:
- You may see `502 Bad Gateway` or `'{"results":["fallback"]}'` if you predict too quickly. Let the server settle a bit - and try again.
- Instead of `localhost`, you may need to use `192.168.99.100` or another IP/Host that maps to your local Docker host. This usually happens when using Docker Quick Terminal on Windows 7.
pipeline predict-server-test --model-endpoint-url=http://localhost:8080/invocations --test-request-path=./tensorflow/mnist/input/predict/test_request.json --test-request-concurrency=100
Notes:
- Instead of `localhost`, you may need to use `192.168.99.100` or another IP/Host that maps to your local Docker host. This usually happens when using Docker Quick Terminal on Windows 7.
Use the REST API to POST a JSON document representing the number 2.
curl -X POST -H "Content-Type: application/json" \
-d '{"image": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05098039656877518, 0.529411792755127, 0.3960784673690796, 0.572549045085907, 0.572549045085907, 0.847058892250061, 0.8156863451004028, 0.9960784912109375, 1.0, 1.0, 0.9960784912109375, 0.5960784554481506, 0.027450982481241226, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.32156863808631897, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.7882353663444519, 0.11764706671237946, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.32156863808631897, 0.9921569228172302, 0.988235354423523, 0.7921569347381592, 0.9450981020927429, 0.545098066329956, 0.21568629145622253, 0.3450980484485626, 0.45098042488098145, 0.125490203499794, 0.125490203499794, 0.03921568766236305, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.32156863808631897, 0.9921569228172302, 0.803921639919281, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6352941393852234, 0.9921569228172302, 0.803921639919281, 0.24705883860588074, 0.3490196168422699, 0.6509804129600525, 0.32156863808631897, 0.32156863808631897, 0.1098039299249649, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.007843137718737125, 0.7529412508010864, 0.9921569228172302, 0.9725490808486938, 0.9686275124549866, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.8274510502815247, 0.29019609093666077, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2549019753932953, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.847058892250061, 0.027450982481241226, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5921568870544434, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.7333333492279053, 0.44705885648727417, 0.23137256503105164, 0.23137256503105164, 0.4784314036369324, 0.9921569228172302, 0.9921569228172302, 0.03921568766236305, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5568627715110779, 0.9568628072738647, 0.7098039388656616, 0.08235294371843338, 0.019607843831181526, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.43137258291244507, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.15294118225574493, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1882353127002716, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6705882549285889, 0.9921569228172302, 0.9921569228172302, 0.12156863510608673, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2392157018184662, 0.9647059440612793, 0.9921569228172302, 0.6274510025978088, 0.003921568859368563, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08235294371843338, 0.44705885648727417, 0.16470588743686676, 0.0, 0.0, 0.2549019753932953, 0.9294118285179138, 0.9921569228172302, 0.9333333969116211, 0.27450981736183167, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4941176772117615, 0.9529412388801575, 0.0, 0.0, 0.5803921818733215, 0.9333333969116211, 0.9921569228172302, 0.9921569228172302, 0.4078431725502014, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7411764860153198, 0.9764706492424011, 0.5529412031173706, 0.8784314393997192, 0.9921569228172302, 0.9921569228172302, 0.9490196704864502, 0.43529415130615234, 0.007843137718737125, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6235294342041016, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9764706492424011, 0.6274510025978088, 0.1882353127002716, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.18431372940540314, 0.5882353186607361, 0.729411780834198, 0.5686274766921997, 0.3529411852359772, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]}' \
http://localhost:8080/invocations \
-w "\n\n"
### EXPECTED OUTPUT ###
{"variant": "tfserving-cpu-tensorflow-mnist-a", "outputs":{"outputs": [0.11128007620573044, 1.4478533557849005e-05, 0.43401220440864563, 0.06995827704668045, 0.0028081508353352547, 0.27867695689201355, 0.017851119861006737, 0.006651509087532759, 0.07679300010204315, 0.001954273320734501]}}
### FORMATTED OUTPUT ###
Digit Confidence
===== ==========
0 0.0022526539396494627
1 2.63791100074684e-10
2 0.4638307988643646 <-- Prediction
3 0.21909376978874207
4 3.2985670372909226e-07
5 0.29357224702835083
6 0.00019597385835368186
7 5.230629176367074e-05
8 0.020996594801545143
9 5.426473762781825e-06
Notes:
- Instead of `localhost`, you may need to use `192.168.99.100` or another IP/Host that maps to your local Docker host. This usually happens when using Docker Quick Terminal on Windows 7.
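Equivalently, here is a minimal Python sketch of the same request using the `requests` library, assuming `./tensorflow/mnist/input/predict/test_request.json` contains the JSON body shown in the curl example and the model server is listening on `localhost:8080`:

# Minimal sketch: POST the sample MNIST request with Python instead of curl.
import requests

with open('./tensorflow/mnist/input/predict/test_request.json') as f:
    payload = f.read()

response = requests.post('http://localhost:8080/invocations',
                         data=payload,
                         headers={'Content-Type': 'application/json'})
print(response.text)   # e.g. {"variant": "tfserving-cpu-tensorflow-mnist-a", "outputs": {...}}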
Re-run the Prediction REST API while watching the following dashboard URL:
http://localhost:8080/dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22%22%2C%22stream%22%3A%22http%3A%2F%2Flocalhost%3A8080%2Fdashboard.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D
Notes:
- Instead of `localhost`, you may need to use `192.168.99.100` or another IP/Host that maps to your local Docker host. This usually happens when using Docker Quick Terminal on Windows 7.
Re-run the Prediction REST API while watching the following detailed metrics dashboard URL.
http://localhost:3000/
Notes:
- Instead of `localhost`, you may need to use `192.168.99.100` or another IP/Host that maps to your local Docker host. This usually happens when using Docker Quick Terminal on Windows 7.

Username/Password: admin/admin

- Set `Type` to `Prometheus`.
- Instead of `localhost`, you may need to use `192.168.99.100` or another IP/Host that maps to your local Docker host. This usually happens when using Docker Quick Terminal on Windows 7.
- Set `Url` to `http://localhost:9090`.
- Set `Access` to `direct`.
- Click `Save & Test`.
- Click `Dashboards -> Import` in the upper-left menu drop-down.
- Copy and paste THIS raw JSON file into the `paste JSON` box.
- Select the Prometheus-based data source that you set up above and click `Import`.
- Change the Date Range in the upper right to `Last 5m` and the Refresh Every to `5s`.
Create additional PipelineAI Prediction widgets using THIS guide to the Prometheus Syntax.
pipeline predict-server-stop --model-type=tensorflow --model-name=mnist --model-tag=a
PipelineAI is fully compatible with AWS SageMaker.
Specifically, you can upload PipelineAI-optimized Docker images to your private AWS Elastic Container Registry (ECR) for use with AWS SageMaker's Custom Docker image support.
Follow THESE steps to upload the `predict-mnist` Docker image above to AWS SageMaker.
Follow the steps below to create an AWS SageMaker Model Endpoint with the Docker Image uploaded in the previous step.
- `aws-iam-arn`: arn:aws:iam::...:role/service-role/AmazonSageMaker-ExecutionRole-...
- `aws-instance-type`: Click HERE for instance types.
pipeline predict-sage-start --model-name=mnist --model-type=tensorflow --model-tag=a --aws-iam-arn=<full-aws-iam-arn-SageMaker-ExecutionRole> --aws-instance-type=<aws-instance-type>
Note: This step assumes you have setup your AWS credentials in your environment. Follow THESE steps to setup your AWS credentials for this PipelineAI CLI command.
pipeline predict-sage-test --model-name=mnist --test-request-path=./tensorflow/mnist/input/predict/test_request.json --test-request-concurrency=100
### EXPECTED OUTPUT ###
...
Variant: 'mnist-a-tensorflow-tfserving-cpu' <-- Variant name (ie. a)
('{"outputs":{"outputs": [0.11128007620573044, 1.4478533557849005e-05, '
'0.43401220440864563, 0.06995827704668045, 0.0028081508353352547, '
'0.27867695689201355, 0.017851119861006737, 0.006651509087532759, '
'0.07679300010204315, 0.001954273320734501]}}')
...
Additional PipelineAI Standalone and Enterprise Features
See below for feature details. Click HERE to compare PipelineAI Products.