Operationalizing-Machine-Learning-With-Azure

We use Azure to configure a cloud-based machine learning production model, deploy it to a REST endpoint (URI + authentication key) for real-time inference, and consume it. We also create a pipeline, publish it to a REST endpoint, and consume it.

[screenshot: project overview]

Architectural Diagram

[image: architecture diagram]

Key Steps

Deploy model in Azure ML Studio

We first register a dataset and create an AutoML run. Once the run is complete, we deploy the best model. After deployment, the model is registered and an endpoint is available for us to use. We then enable Application Insights for this endpoint with the help of the logs.py script. After that, we use the Swagger URI provided in the endpoint details and run both swagger.sh and serve.py so we can view the Swagger documentation (it shows which paths are available and the expected requests and responses). We run endpoint.py, which sends a request to the REST URI; a valid JSON response can be seen in the output. Additionally, we use Apache Benchmark to gather performance stats for the endpoint.

Register Dataset [screenshot: dataset registered]
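
For reference, a minimal SDK v1 sketch of this step; the CSV URL and dataset name are placeholders, not the project's actual values:

```python
# Minimal sketch of dataset registration (Azure ML SDK v1).
# The URL and names below are placeholders, not the project's actual values.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()  # reads config.json downloaded from the Studio

# Load a CSV from a URL into a TabularDataset
dataset = Dataset.Tabular.from_delimited_files(
    path="https://example.com/bankmarketing_train.csv"  # placeholder URL
)

# Register it so AutoML runs and pipelines can reference it by name
dataset = dataset.register(workspace=ws, name="bankmarketing")
```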

AutoML Run Complete [screenshot: AutoML run complete]
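
A hedged sketch of what submitting the AutoML run looks like in the SDK; the experiment name, label column, timeout, and compute target are illustrative assumptions:

```python
# Sketch of an AutoML classification run (SDK v1); ws and dataset come from
# the registration sketch above, other values are assumptions.
from azureml.core import Experiment
from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",
    training_data=dataset,
    label_column_name="y",            # assumed label column
    experiment_timeout_minutes=30,
    compute_target="cpu-cluster",     # assumed compute cluster name
)

experiment = Experiment(ws, "automl-bankmarketing")
run = experiment.submit(automl_config, show_output=True)
run.wait_for_completion()

# The best child run and its fitted model (e.g. a Voting Ensemble)
best_run, fitted_model = run.get_output()
```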

AutoML Best Model [screenshot: best model, a Voting Ensemble]

Deployed AutoML Best Model [screenshot: deployed model overview]
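
The deployment itself is done from the Studio UI; the following is only a rough SDK equivalent of that step, with the endpoint name, entry script, and ACI sizing assumed:

```python
# Rough SDK v1 equivalent of deploying the best model to ACI with auth enabled.
# best_run comes from the AutoML sketch above; names here are assumptions.
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

model = best_run.register_model(model_name="automl-best-model",
                                model_path="outputs/model.pkl")

inference_config = InferenceConfig(entry_script="score.py",  # assumed scoring script
                                   environment=best_run.get_environment())

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                                       auth_enabled=True)

service = Model.deploy(workspace=ws, name="automl-endpoint",  # assumed name
                       models=[model], inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)
```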

Enable Application Insights [screenshot: model deployed with Application Insights enabled]
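
A logs.py-style sketch, assuming the deployed service is named automl-endpoint:

```python
# Enable Application Insights on the deployed service and print its logs
# (SDK v1; the service name is an assumption).
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()
service = Webservice(workspace=ws, name="automl-endpoint")  # assumed name

service.update(enable_app_insights=True)  # turn on Application Insights

logs = service.get_logs()
for line in logs.split("\n"):
    print(line)
```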

Log Outputs [screenshot: logs.py output (full output)]

Swagger Documentation [screenshots: Swagger running on localhost; methods in the Swagger documentation; health check for the deployed Docker image]
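
swagger.sh starts the Swagger UI container, and serve.py makes the endpoint's swagger.json reachable from it. A minimal sketch of such a server, assuming swagger.json has been downloaded into the working directory:

```python
# Serve the current directory (including swagger.json) over HTTP with a CORS
# header so the Swagger UI container can fetch it from another port.
from http.server import HTTPServer, SimpleHTTPRequestHandler


class CORSHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()


if __name__ == "__main__":
    # swagger.json becomes available at http://localhost:8000/swagger.json
    HTTPServer(("0.0.0.0", 8000), CORSHandler).serve_forever()
```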

Consume Endpoint [screenshots: endpoint.py output; benchmark.sh output (full output); Apache Benchmark output and stats]
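
An endpoint.py-style sketch; the scoring URI, key, and feature payload are placeholders for the values shown in the endpoint's Consume tab:

```python
# POST a JSON payload to the scoring URI using the endpoint's auth key.
import json

import requests

scoring_uri = "http://<your-endpoint>.azurecontainer.io/score"  # placeholder
key = "<your-authentication-key>"                               # placeholder

# Example payload; real feature names come from the swagger documentation
data = {"data": [{"age": 40, "job": "admin.", "marital": "married"}]}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {key}",
}

response = requests.post(scoring_uri, data=json.dumps(data), headers=headers)
print(response.json())  # the JSON prediction returned by the model
```

benchmark.sh then points Apache Benchmark (ab) at the same scoring URI with the same Authorization header to collect throughput and latency statistics.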

Pipeline

We create and publish a pipeline containing an AutoML module and the dataset. The pipeline is invoked through its REST endpoint (the last run shows a Finished status in the screencast).

Create and Publish a Pipeline [screenshots: pipeline endpoint is active; pipeline steps; pipeline overview]

Configure a Pipeline with the Python SDK [screenshot: pipeline steps in the Jupyter notebook]
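
A sketch of the notebook's pipeline configuration: the AutoML config wrapped in a single AutoMLStep, with step and experiment names assumed:

```python
# Build and submit a pipeline with one AutoMLStep (SDK v1); ws and
# automl_config come from the sketches above.
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline, PipelineData, TrainingOutput
from azureml.pipeline.steps import AutoMLStep

datastore = ws.get_default_datastore()

# Optional outputs that capture the step's metrics and best model
metrics_data = PipelineData(name="metrics_data", datastore=datastore,
                            training_output=TrainingOutput(type="Metrics"))
model_data = PipelineData(name="model_data", datastore=datastore,
                          training_output=TrainingOutput(type="Model"))

automl_step = AutoMLStep(name="automl_module",
                         automl_config=automl_config,
                         outputs=[metrics_data, model_data],
                         allow_reuse=True)

pipeline = Pipeline(workspace=ws, steps=[automl_step])
pipeline_run = Experiment(ws, "automl-pipeline").submit(pipeline)
pipeline_run.wait_for_completion()
```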

Use a REST Endpoint to Interact with a Pipeline [screenshots: pipeline endpoint is active; pipeline REST endpoint is running]
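
A sketch of publishing the pipeline and triggering it through the REST endpoint; the pipeline and experiment names are assumptions:

```python
# Publish the pipeline, then trigger a new run via its REST endpoint (SDK v1);
# pipeline_run comes from the pipeline sketch above.
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

published_pipeline = pipeline_run.publish_pipeline(
    name="bankmarketing-pipeline",            # assumed name
    description="AutoML pipeline",
    version="1.0",
)

# Reuse the interactive login token for the REST call
auth_header = InteractiveLoginAuthentication().get_authentication_header()

response = requests.post(
    published_pipeline.endpoint,              # the pipeline REST endpoint URI
    headers=auth_header,
    json={"ExperimentName": "pipeline-rest-endpoint"},
)
print(response.json().get("Id"))  # id of the newly triggered pipeline run
```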

Future work

As of now, the pipeline only contains the AutoML run. We could add a step that deploys the best model (if the primary metric is within the required range) and a simple health check for the REST endpoint.

Use a service principal for authentication, as it is a better way to write secure scripts or programs, allowing you to apply both permission restrictions and locally stored static credential information.
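
A minimal sketch of service-principal authentication with the SDK, assuming placeholder credentials that would be stored securely (e.g., in environment variables):

```python
# Authenticate to the workspace with a service principal instead of an
# interactive login (SDK v1); all identifiers below are placeholders.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

sp_auth = ServicePrincipalAuthentication(
    tenant_id="<tenant-id>",
    service_principal_id="<client-id>",
    service_principal_password="<client-secret>",
)

ws = Workspace.get(
    name="<workspace-name>",
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
    auth=sp_auth,
)
```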

Model performance could be improved by providing a balanced dataset as input (via undersampling or oversampling) and by exploring the AutoML config (for example, adding a custom FeaturizationConfig).
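
A hedged sketch of what a custom FeaturizationConfig could look like; the column names and transformer parameters are illustrative:

```python
# Customize AutoML featurization instead of the default "auto" setting
# (SDK v1); the column names below are assumptions.
from azureml.automl.core.featurization import FeaturizationConfig
from azureml.train.automl import AutoMLConfig

featurization_config = FeaturizationConfig()
featurization_config.add_column_purpose("age", "Numeric")      # assumed column
featurization_config.add_transformer_params("Imputer", ["age"],
                                            {"strategy": "median"})

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",
    training_data=dataset,             # the registered dataset
    label_column_name="y",             # assumed label column
    compute_target="cpu-cluster",
    featurization=featurization_config,
)
```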

Screen Recording

YouTube video
