diff --git a/CHANGELOG.md b/CHANGELOG.md index 9d53236..6ac99fe 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file. - ##### The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - ##### This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [2.0.0] - XXXX-XX-XX + +### Changed +- Simplified examples to the minimum core functionality necessary and removed all dependencies on `infernet-ml`. +- Updated images used for deploying the Infernet Node. + ## [1.0.1] - 2024-07-31 ### Fixed diff --git a/projects/gpt4/container/README.md b/projects/gpt4/container/README.md index 5e3aa0a..5a6bd16 100644 --- a/projects/gpt4/container/README.md +++ b/projects/gpt4/container/README.md @@ -1,7 +1,9 @@ # GPT 4 -In this example, we run a minimalist container that makes use of the OpenAI [completions API](https://platform.openai.com/docs/api-reference/chat) to serve text generation requests. + +In this example, we will run a minimalist container that makes use of the OpenAI [completions API](https://platform.openai.com/docs/api-reference/chat) to serve text generation requests. ## Requirements + To use the model you'll need to have an OpenAI API key. Get one on [OpenAI](https://openai.com/)'s website. ## Run the Container @@ -24,7 +26,8 @@ curl -X POST localhost:3000/service_output -H "Content-Type: application/json" \ ## Next steps -This container is for demonstration purposes only, and is purposefully simplified for readability and ease of comprehension. For a production-ready version of this code, check out: +This container is for demonstration purposes only, and is purposefully simplified for +readability and ease of comprehension. For a production-ready version of this code, check out: -- The [CSS Inference Workflow](https://infernet-ml.docs.ritual.net/reference/infernet_ml/workflows/inference/css_inference_workflow/): A Python class that supports multiple API providers, including OpenAI, that can be used to build production-ready containers. +- The [CSS Inference Workflow](https://infernet-ml.docs.ritual.net/reference/infernet_ml/workflows/inference/css_inference_workflow/): A Python class that supports multiple API providers, including OpenAI, and can be used to build production-ready containers. - The [CSS Inference Service](https://infernet-services.docs.ritual.net/reference/css_inference_service/): A production-ready, [Infernet](https://docs.ritual.net/infernet/node/introduction)-compatible container that works out-of-the-box with minimal configuration, and serves inference using the `CSS Inference Workflow`. diff --git a/projects/hello-world/hello-world.md b/projects/hello-world/hello-world.md index 4562e59..d6490d6 100644 --- a/projects/hello-world/hello-world.md +++ b/projects/hello-world/hello-world.md @@ -3,12 +3,12 @@ Welcome to the first tutorial of Infernet! In this tutorial we will guide you through the process of setting up and running an Infernet Node, and then demonstrate how to create and monitor off-chain compute jobs and on-chain subscriptions. -To interact with infernet, one could either create a job by accessing an infernet node directly through it's API (we'll +To interact with Infernet, one could either create a job by accessing an Infernet Node directly through its API (we'll refer to this as an off-chain job), or by creating a subscription on-chain (we'll refer to this as an on-chain job). ## Requesting an off-chain job: Hello World!
-This project is a simple [flask-app](container/src/app.py) that is compatible with `infernet`, and simply +This project is a simple [flask-app](container/src/app.py) that is compatible with Infernet, and simply [echoes what you send to it](container/src/app.py#L16). ### Install Docker & Verify Installation @@ -42,11 +42,11 @@ make build-container project=hello-world Then, from the top-level project directory, Run the following make command: -``` +```bash make deploy-container project=hello-world ``` -This will deploy an infernet node along with the `hello-world` image. +This will deploy an Infernet Node along with the `hello-world` image. ### Creating an off-chain job through the API @@ -107,11 +107,11 @@ In another terminal, run `docker container ls`, you should see something like th ```bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -c2ca0ffe7817 ritualnetwork/infernet-anvil:0.0.0 "anvil --host 0.0.0.…" 9 seconds ago Up 8 seconds 0.0.0.0:8545->3000/tcp anvil-node +c2ca0ffe7817 ritualnetwork/infernet-anvil:1.0.0 "anvil --host 0.0.0.…" 9 seconds ago Up 8 seconds 0.0.0.0:8545->3000/tcp infernet-anvil 0b686a6a0e5f ritualnetwork/hello-world-infernet:0.0.2 "gunicorn app:create…" 9 seconds ago Up 8 seconds 0.0.0.0:3000->3000/tcp hello-world -28b2e5608655 ritualnetwork/infernet-node:0.1.1 "/app/entrypoint.sh" 10 seconds ago Up 10 seconds 0.0.0.0:4000->4000/tcp deploy-node-1 -03ba51ff48b8 fluent/fluent-bit:latest "/fluent-bit/bin/flu…" 10 seconds ago Up 10 seconds 2020/tcp, 0.0.0.0:24224->24224/tcp deploy-fluentbit-1 -a0d96f29a238 redis:latest "docker-entrypoint.s…" 10 seconds ago Up 10 seconds 0.0.0.0:6379->6379/tcp deploy-redis-1 +28b2e5608655 ritualnetwork/infernet-node:1.3.1 "/app/entrypoint.sh" 10 seconds ago Up 10 seconds 0.0.0.0:4000->4000/tcp deploy-node-1 +03ba51ff48b8 fluent/fluent-bit:latest "/fluent-bit/bin/flu…" 10 seconds ago Up 10 seconds 2020/tcp, 0.0.0.0:24224->24224/tcp infernet-fluentbit +a0d96f29a238 redis:latest "docker-entrypoint.s…" 10 seconds ago Up 10 seconds 0.0.0.0:6379->6379/tcp infernet-redis ``` You can see that the anvil node is running on port `8545`, and the infernet @@ -125,7 +125,7 @@ All this contract does is to request a job from the infernet node, and upon rece the result, it will use the `forge` console to print the result. **Anvil Logs**: First, it's useful to look at the logs of the anvil node to see what's going on. In -a new terminal, run `docker logs -f anvil-node`. +a new terminal, run `docker logs -f infernet-anvil`. **Deploying the contracts**: In another terminal, run the following command: diff --git a/projects/onnx-iris/container/README.md b/projects/onnx-iris/container/README.md index ab34725..f495432 100644 --- a/projects/onnx-iris/container/README.md +++ b/projects/onnx-iris/container/README.md @@ -1,23 +1,10 @@ -# Iris Classification via ONNX Runtime +# Running an ONNX Model -This example uses a pre-trained model to classify iris flowers. The code for the model -is located at -our [simple-ml-models](https://github.com/ritual-net/simple-ml-models/tree/main/iris_classification) -repository. +In this example, we will serve a pre-trained model to classify iris flowers via the ONNX runtime. The code for the model +is located at our [simple-ml-models](https://github.com/ritual-net/simple-ml-models/tree/main/iris_classification) repository. -## Overview - -We're making use of -the [ONNXInferenceWorkflow](https://github.com/ritual-net/infernet-ml/blob/main/src/ml/workflows/inference/onnx_inference_workflow.py) -class to run the model.
This is one of many workflows that we currently support in our -[infernet-ml](https://github.com/ritual-net/infernet-ml). Consult the library's -documentation for more info on workflows that -are supported. - -## Building & Running the Container in Isolation - -Note that this container is meant to be started by the infernet-node. For development & -Testing purposes, you can run the container in isolation using the following commands. +This container is meant to be started by the Infernet Node. For development and +testing purposes, you can run the container in isolation using the following commands. ### Building the Container @@ -44,7 +31,7 @@ Run the following command to run an inference: ```bash curl -X POST http://127.0.0.1:3000/service_output \ -H "Content-Type: application/json" \ - -d '{"source":1, "data": {"input": [[1.0380048, 0.5586108, 1.1037828, 1.712096]]}}' + -d '{"source": 1, "data": {"input": [[1.0380048, 0.5586108, 1.1037828, 1.712096]]}}' ``` #### Note Regarding the Input @@ -63,27 +50,23 @@ Putting this input into a vector and scaling it, we get the following scaled inp [1.0380048, 0.5586108, 1.1037828, 1.712096] ``` -Refer -to [this function in the model's repository](https://github.com/ritual-net/simple-ml-models/blob/03ebc6fb15d33efe20b7782505b1a65ce3975222/iris_classification/iris_inference_pytorch.py#L13) -for more information on how the input is scaled. +Refer to [this function in the model's repository](https://github.com/ritual-net/simple-ml-models/blob/03ebc6fb15d33efe20b7782505b1a65ce3975222/iris_classification/iris_inference_pytorch.py#L13) for more information on how the input +is scaled. -For more context on the Iris dataset, refer to -the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/iris). +For more context on the Iris dataset, refer to the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/iris). ### Output By running the above command, you should get a response similar to the following: ```json -[ - [ - [ - 0.0010151526657864451, - 0.014391022734344006, - 0.9845937490463257 - ] +{ + "output": [ + 0.0010151526657864451, + 0.014391022734344006, + 0.9845937490463257 ] -] +} ``` The response corresponds to the model's prediction for each of the classes: @@ -93,4 +76,12 @@ The response corresponds to the model's prediction for each of the classes: ``` In this case, the model predicts that the input corresponds to the class `virginica`with -a probability of `0.9845937490463257`(~98.5%). +a probability of `0.9845937490463257` (~98.5%). + +## Next steps + +This container is for demonstration purposes only, and is purposefully simplified for readability and ease of comprehension. For a production-ready version of this code, check out: + +- The [ONNX Inference Workflow](https://infernet-ml.docs.ritual.net/reference/infernet_ml/workflows/inference/onnx_inference_workflow): A Python class that can run any ONNX model from a variety of storage sources. +- The [ONNX Inference Service](https://infernet-services.docs.ritual.net/reference/onnx_inference_service): A production-ready, [Infernet](https://docs.ritual.net/infernet/node/introduction)-compatible container that works out-of-the-box +with minimal configuration, and serves ONNX inference using the `ONNX Inference Workflow`. 
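For readers who prefer a script over raw curl, here is a minimal Python sketch of the same off-chain request. It assumes the container is running locally on port 3000 as described above and that the `requests` package is installed; the endpoint, payload fields, and response shape are taken from this README, while the client code itself is illustrative.

```python
import requests

# Scaled iris measurements: sepal length/width, petal length/width (see note above)
payload = {
    "source": 1,  # 1 = off-chain request
    "data": {"input": [[1.0380048, 0.5586108, 1.1037828, 1.712096]]},
}

# The container listens on port 3000 when started as shown above
response = requests.post("http://127.0.0.1:3000/service_output", json=payload, timeout=30)
response.raise_for_status()

# Expected response shape: {"output": [p_setosa, p_versicolor, p_virginica]}
probabilities = response.json()["output"]
labels = ["setosa", "versicolor", "virginica"]
print(labels[probabilities.index(max(probabilities))])  # -> "virginica"
```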
diff --git a/projects/onnx-iris/container/src/app.py b/projects/onnx-iris/container/src/app.py index c1f9eb7..b2bcd17 100644 --- a/projects/onnx-iris/container/src/app.py +++ b/projects/onnx-iris/container/src/app.py @@ -1,19 +1,11 @@ import logging from typing import Any, cast, List -from infernet_ml.utils.common_types import TensorInput -import numpy as np from eth_abi import decode, encode # type: ignore -from infernet_ml.utils.model_loader import ( - HFLoadArgs, - ModelSource, -) -from infernet_ml.utils.service_models import InfernetInput, JobLocation -from infernet_ml.workflows.inference.onnx_inference_workflow import ( - ONNXInferenceWorkflow, - ONNXInferenceInput, - ONNXInferenceResult, -) +from huggingface_hub import hf_hub_download # type: ignore +import numpy as np +import onnx +from onnxruntime import InferenceSession # type: ignore from quart import Quart, request from quart.json.provider import DefaultJSONProvider @@ -33,14 +25,10 @@ def default(obj: Any) -> Any: def create_app() -> Quart: Quart.json_provider_class = NumpyJsonEncodingProvider app = Quart(__name__) - # we are downloading the model from the hub. - # model repo is located at: https://huggingface.co/Ritual-Net/iris-dataset - workflow = ONNXInferenceWorkflow( - model_source=ModelSource.HUGGINGFACE_HUB, - load_args=HFLoadArgs(repo_id="Ritual-Net/iris-dataset", filename="iris.onnx"), - ) - workflow.setup() + # Model repo is located at: https://huggingface.co/Ritual-Net/iris-dataset + REPO_ID = "Ritual-Net/iris-dataset" + FILENAME = "iris.onnx" @app.route("/") def index() -> str: @@ -51,43 +39,65 @@ def index() -> str: @app.route("/service_output", methods=["POST"]) async def inference() -> Any: - req_data = await request.get_json() """ - InfernetInput has the format: + Input data has the format: source: (0 on-chain, 1 off-chain) + destination: (0 on-chain, 1 off-chain) data: dict[str, Any] """ - infernet_input: InfernetInput = InfernetInput(**req_data) - - match infernet_input: - case InfernetInput(source=JobLocation.OFFCHAIN): - web2_input = cast(dict[str, Any], infernet_input.data) - values = cast(List[List[float]], web2_input["input"]) - case InfernetInput(source=JobLocation.ONCHAIN): - web3_input: List[int] = decode( - ["uint256[]"], bytes.fromhex(cast(str, infernet_input.data)) - )[0] - values = [[float(v) / 1e6 for v in web3_input]] + req_data: dict[str, Any] = await request.get_json() + onchain_source = True if req_data.get("source") == 0 else False + onchain_destination = True if req_data.get("destination") == 0 else False + data = req_data.get("data") - """ - The input to the onnx inference workflow needs to conform to ONNX runtime's - input_feed format. For more information refer to: - https://docs.ritual.net/ml-workflows/inference-workflows/onnx_inference_workflow - """ - _input = ONNXInferenceInput( - inputs={"input": TensorInput(shape=(1, 4), dtype="float", values=values)}, + if onchain_source: + """ + For on-chain requests, the prompt is sent as a generalized hex-string + which we will decode to the appropriate format. 
+ """ + web3_input: List[int] = decode( + ["uint256[]"], bytes.fromhex(cast(str, data)) + )[0] + values = [[float(v) / 1e6 for v in web3_input]] + else: + """For off-chain requests, the input is sent as is.""" + web2_input = cast(dict[str, Any], data) + values = cast(list[list[float]], web2_input["input"]) + + # Prepare the input data for the model + dtype = cast(np.dtype[np.float32], "float32") + shape = (len(values), len(values[0])) + + # Download the model from the hub + path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME, force_download=False) + model = onnx.load(path) + onnx.checker.check_model(model) + session = InferenceSession(path) + output_names = [output.name for output in model.graph.output] + + # Run the model + outputs = session.run( + output_names, + { + "input": np.array( + values, + dtype=dtype, + ).reshape(shape) + }, ) - result: ONNXInferenceResult = workflow.inference(_input) - - match infernet_input: - case InfernetInput(destination=JobLocation.OFFCHAIN): - """ - In case of an off-chain request, the result is returned as is. - """ - return result - case InfernetInput(destination=JobLocation.ONCHAIN): - """ - In case of an on-chain request, the result is returned in the format: + + # Get the predictions + output = outputs[0] + predictions = { + "values": output.flatten(), + "dtype": "float32", + "shape": output.shape, + } + + # Depending on the destination, the result is returned in a different format. + if onchain_destination: + """ + For on-chain requests, the result is returned in the format: { "raw_input": str, "processed_input": str, @@ -95,20 +105,22 @@ async def inference() -> Any: "processed_output": str, "proof": str, } - refer to: https://docs.ritual.net/infernet/node/advanced/containers for - more info. - """ - predictions = result[0] - predictions_normalized = [int(p * 1e6) for p in predictions.values] - return { - "raw_input": "", - "processed_input": "", - "raw_output": encode(["uint256[]"], [predictions_normalized]).hex(), - "processed_output": "", - "proof": "", - } - case _: - raise ValueError("Invalid destination") + refer to: https://docs.ritual.net/infernet/node/advanced/containers for more + info. + """ + predictions_normalized = [int(p * 1e6) for p in predictions["values"]] + return { + "raw_input": "", + "processed_input": "", + "raw_output": encode(["uint256[]"], [predictions_normalized]).hex(), + "processed_output": "", + "proof": "", + } + else: + """ + For off-chain request, the result is returned as is. + """ + return {"output": predictions["values"]} return app diff --git a/projects/onnx-iris/container/src/requirements.txt b/projects/onnx-iris/container/src/requirements.txt index 2fa2424..150c982 100644 --- a/projects/onnx-iris/container/src/requirements.txt +++ b/projects/onnx-iris/container/src/requirements.txt @@ -1,4 +1,6 @@ +huggingface-hub==0.17.3 +numpy==1.26.4 +onnx==1.16.1 +onnxruntime==1.18.0 quart==0.19.4 -infernet-ml==1.0.0 -infernet-ml[onnx_inference]==1.0.0 web3==6.15.0 diff --git a/projects/onnx-iris/onnx-iris.md b/projects/onnx-iris/onnx-iris.md deleted file mode 100644 index bc056c2..0000000 --- a/projects/onnx-iris/onnx-iris.md +++ /dev/null @@ -1,271 +0,0 @@ -# Running an ONNX Model on Infernet - -Welcome to this comprehensive guide where we'll explore how to run an ONNX model on Infernet, using our [infernet-container-starter](https://github.com/ritual-net/infernet-container-starter/) -examples repository. 
This tutorial is designed to give you and end-to-end understanding of how you can run your own -custom pre-trained models, and interact with them on-chain and off-chain. - -**Model:** This example uses a pre-trained model to classify iris flowers. The code for the model -can be found in our [simple-ml-models](https://github.com/ritual-net/simple-ml-models/tree/main/iris_classification) repository. - -## Pre-requisites - -For this tutorial you'll need to have the following installed. - -1. [Docker](https://docs.docker.com/engine/install/) -2. [Foundry](https://book.getfoundry.sh/getting-started/installation) - -### Ensure `docker` & `foundry` exist - -To check for `docker`, run the following command in your terminal: - -```bash copy -docker --version -# Docker version 25.0.2, build 29cf629 (example output) -``` - -You'll also need to ensure that docker-compose exists in your terminal: - -```bash copy -which docker-compose -# /usr/local/bin/docker-compose (example output) -``` - -To check for `foundry`, run the following command in your terminal: - -```bash copy -forge --version -# forge 0.2.0 (551bcb5 2024-02-28T07:40:42.782478000Z) (example output) -``` - -### Clone the starter repository - -If you haven't already, clone the infernet-container-starter repository. All of the code for this tutorial is located -under the `projects/onnx-iris` directory. - -```bash copy -# Clone locally -git clone --recurse-submodules https://github.com/ritual-net/infernet-container-starter -# Navigate to the repository -cd infernet-container-starter -``` - -## Making Inference Requests via Node API (a la Web2 request) - -### Build the `onnx-iris` container - -From the top-level directory of this repository, simply run the following command to build the `onnx-iris` container: - -```bash copy -make build-container project=onnx-iris -``` - -After the container is built, you can deploy an infernet-node that utilizes that -container by running the following command: - -```bash copy -make deploy-container project=onnx-iris -``` - -Now, you can make inference requests to the infernet-node. In a new tab, run: - -```bash copy -curl -X POST "http://127.0.0.1:4000/api/jobs" \ - -H "Content-Type: application/json" \ - -d '{"containers":["onnx-iris"], "data": {"input": [[1.0380048, 0.5586108, 1.1037828, 1.712096]]}}' -``` - -You should get an output similar to the following: - -```json -{ - "id": "074b9e98-f1f6-463c-b185-651878f3b4f6" -} -``` - -Now, you can check the status of the job by running (Make sure job id matches the one -you got from the previous request): - -```bash -curl -X GET "http://127.0.0.1:4000/api/jobs?id=074b9e98-f1f6-463c-b185-651878f3b4f6" -``` - -Should return: - -```json -[ - { - "id": "074b9e98-f1f6-463c-b185-651878f3b4f6", - "result": { - "container": "onnx-iris", - "output": [ - [ - [ - 0.0010151526657864451, - 0.014391022734344006, - 0.9845937490463257 - ] - ] - ] - }, - "status": "success" - } -] -``` - -The `output` corresponds to the model's prediction for each of the classes: - -```python -['setosa', 'versicolor', 'virginica'] -``` - -In this case, the model predicts that the input corresponds to the class `virginica`with -a probability of `0.9845937490463257`(~98.5%). - -#### Note Regarding the Input - -The inputs provided above correspond to an iris flower with the following -characteristics. Refer to the - -1. Sepal Length: `5.5cm` -2. Sepal Width: `2.4cm` -3. Petal Length: `3.8cm` -4. 
Petal Width: `1.1cm` - -Putting this input into a vector and scaling it, we get the following scaled input: - -```python -[1.0380048, 0.5586108, 1.1037828, 1.712096] -``` - -Refer -to [this function in the model's repository](https://github.com/ritual-net/simple-ml-models/blob/03ebc6fb15d33efe20b7782505b1a65ce3975222/iris_classification/iris_inference_pytorch.py#L13) -for more information on how the input is scaled. - -For more context on the Iris dataset, refer to -the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/iris). - -## Making Inference Requests via Contracts (a la Web3 request) - -The [contracts](contracts) directory contains a simple forge -project that can be used to interact with the Infernet Node. - -Here, we have a very simple -contract, [IrisClassifier](contracts/src/IrisClassifier.sol), -that requests a compute job from the Infernet Node and then retrieves the result. -We are going to make the same request as above, but this time using a smart contract. -Since floats are not supported in Solidity, we convert all floats to `uint256` by -multiplying the input vector entries by `1e6`: - -```Solidity - uint256[] memory iris_data = new uint256[](4); -iris_data[0] = 1_038_004; -iris_data[1] = 558_610; -iris_data[2] = 1_103_782; -iris_data[3] = 1_712_096; -``` - -We have multiplied the input by 1e6 to have enough accuracy. This can be seen -[here](contracts/src/IrisClassifier.sol#19) in the contract's -code. - -### Monitoring the EVM Logs - -The infernet node configuration for this project includes -an [infernet anvil node](projects/hello-world/README.mdllo-world/README.md#77) with pre-deployed contracts. You can view the -logs of the anvil node to see what's going on. In a new terminal, run: - -```bash -docker logs -f infernet-anvil -``` - -As you deploy the contract and make requests, you should see logs indicating the -requests and responses. - -### Deploying the Contract - -Simply run the following command to deploy the contract: - -```bash -project=onnx-iris make deploy-contracts -``` - -In your anvil logs you should see the following: - -```bash -eth_getTransactionReceipt - - Transaction: 0xeed605eacdace39a48635f6d14215b386523766f80a113b4484f542d862889a4 - Contract created: 0x13D69Cf7d6CE4218F646B759Dcf334D82c023d8e - Gas used: 714269 - - Block Number: 1 - Block Hash: 0x4e6333f91e86a0a0be357b63fba9eb5f5ba287805ac35aaa7698fd05445730f5 - Block Time: "Mon, 19 Feb 2024 20:31:17 +0000" - -eth_blockNumber -``` - -beautiful, we can see that a new contract has been created -at `0x663F3ad617193148711d28f5334eE4Ed07016602`. That's the address of -the `IrisClassifier` contract. We are now going to call this contract. To do so, -we are using -the [CallContract.s.sol](contracts/script/CallContract.s.sol) -script. Note that the address of the -contract [is hardcoded in the script](contracts/script/CallContract.s.sol#L13), -and should match the address we see above. Since this is a test environment and we're -using a test deployer address, this address is quite deterministic and shouldn't change. -Otherwise, change the address in the script to match the address of the contract you -just deployed. 
- -### Calling the Contract - -To call the contract, run the following command: - -```bash -project=onnx-iris make call-contract -``` - -In the anvil logs, you should see the following: - -```bash -eth_sendRawTransaction - - -_____ _____ _______ _ _ _ -| __ \|_ _|__ __| | | | /\ | | -| |__) | | | | | | | | | / \ | | -| _ / | | | | | | | |/ /\ \ | | -| | \ \ _| |_ | | | |__| / ____ \| |____ -|_| \_\_____| |_| \____/_/ \_\______| - - -predictions: (adjusted by 6 decimals, 1_000_000 = 100%, 1_000 = 0.1%) -Setosa: 1015 -Versicolor: 14391 -Virginica: 984593 - - Transaction: 0x77c7ff26ed20ffb1a32baf467a3cead6ed81fe5ae7d2e419491ca92b4ac826f0 - Gas used: 111091 - - Block Number: 3 - Block Hash: 0x78f98f4d54ebdca2a8aa46c3b9b7e7ae36348373dbeb83c91a4600dd6aba2c55 - Block Time: "Mon, 19 Feb 2024 20:33:00 +0000" - -eth_blockNumber -eth_newFilter -eth_getFilterLogs -``` - -Beautiful! We can see that the same result has been posted to the contract. - -### Next Steps - -From here, you can bring your own pre-trained ONNX model, and with minimal changes, you can make it both work with an -infernet-node as well as a smart contract. - -### More Information - -1. Check out our [other examples](../../readme.md) if you haven't already -2. [Infernet Callback Consumer Tutorial](https://docs.ritual.net/infernet/sdk/consumers/Callback) -3. [Infernet Nodes Docoumentation](https://docs.ritual.net/infernet/node/introduction) -4. [Infernet-Compatible Containers](https://docs.ritual.net/infernet/node/advanced/containers) diff --git a/projects/prompt-to-nft/prompt-to-nft.md b/projects/prompt-to-nft/prompt-to-nft.md deleted file mode 100644 index fb186c1..0000000 --- a/projects/prompt-to-nft/prompt-to-nft.md +++ /dev/null @@ -1,416 +0,0 @@ -# Prompt to NFT - -In this tutorial we are going to create a dapp where we can generate NFT's by a single prompt from the user. This -project has many components: - -1. A service that runs Stable Diffusion. -2. A NextJS frontend that connects to the local Anvil node -3. An NFT smart contract which is also a [Infernet Consumer](https://docs.ritual.net/infernet/sdk/consumers/Callback). -4. An Infernet container which collects the prompt, calls the Stable Diffusion service, retrieves the NFT and uploads it - to Arweave. -5. An anvil node to which we will deploy the NFT smart contract. - -## Install Pre-requisites - -For this tutorial you'll need to have the following installed. - -1. [Docker](https://docs.docker.com/engine/install/) -2. [Foundry](https://book.getfoundry.sh/getting-started/installation) - -## Setting up a stable diffusion service - -Included with this tutorial, is a [containerized stable-diffusion service](./stablediffusion). - -### Rent a GPU machine -To run this service, you will need to have access to a machine with a powerful GPU. In the video above, we use an -A100 instance on [Paperspace](https://www.paperspace.com/). - -### Install docker -You will have to install docker. - -For Ubuntu, you can run the following commands: - -```bash copy -# install docker -sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -``` -As docker installation may vary depending on your operating system, consult the -[official documentation](https://docs.docker.com/engine/install/ubuntu/) for more information. - -After installation, you can verify that docker is installed by running: - -```bash -# sudo docker run hello-world -Hello from Docker! 
-``` - -### Ensure CUDA is installed -Depending on where you rent your GPU machine, CUDA is typically pre-installed. For Ubuntu, you can follow the -instructions [here](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#prepare-ubuntu). - -You can verify that CUDA is installed by running: - -```bash copy -# verify Installation -python -c ' -import torch -print("torch.cuda.is_available()", torch.cuda.is_available()) -print("torch.cuda.device_count()", torch.cuda.device_count()) -print("torch.cuda.current_device()", torch.cuda.current_device()) -print("torch.cuda.get_device_name(0)", torch.cuda.get_device_name(0)) -' -``` - -If CUDA is installed and available, your output will look similar to the following: - -```bash -torch.cuda.is_available() True -torch.cuda.device_count() 1 -torch.cuda.current_device() 0 -torch.cuda.get_device_name(0) Tesla V100-SXM2-16GB -``` - -### Ensure `nvidia-container-runtime` is installed -For your container to be able to access the GPU, you will need to install the `nvidia-container-runtime`. -On Ubuntu, you can run the following commands: - -```bash copy -# Docker GPU support -# nvidia container-runtime repos -# https://nvidia.github.io/nvidia-container-runtime/ -curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \ -sudo apt-key add - distribution=$(. /etc/os-release;echo $ID$VERSION_ID) -curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \ -sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list -sudo apt-get update - -# install nvidia-container-runtime -# https://docs.docker.com/config/containers/resource_constraints/#gpu -sudo apt-get install -y nvidia-container-runtime -``` -As always, consult the [official documentation](https://nvidia.github.io/nvidia-container-runtime/) for more -information. - -You can verify that `nvidia-container-runtime` is installed by running: - -```bash copy -which nvidia-container-runtime-hook -# this should return a path to the nvidia-container-runtime-hook -``` - -Now, with the pre-requisites installed, we can move on to setting up the stable diffusion service. - -### Clone this repository - -```bash copy -# Clone locally -git clone --recurse-submodules https://github.com/ritual-net/infernet-container-starter -# Navigate to the repository -cd infernet-container-starter -``` - -### Build the Stable Diffusion service - -This will build the `stablediffusion` service container. -```bash copy -make build-service project=prompt-to-nft service=stablediffusion -``` - -### Run the Stable Diffusion service -```bash copy -make run-service project=prompt-to-nft service=stablediffusion -``` - -This will start the `stablediffusion` service. Note that this service will have to download a large model file, -so it may take a few minutes to be fully ready. Downloaded model will get cached, so subsequent runs will be faster. - - -## Setting up the Infernet Node along with the `prompt-to-nft` container - -You can follow the following steps on your local machine to setup the Infernet Node and the `prompt-to-nft` container. 
- -### Ensure `docker` & `foundry` exist -To check for `docker`, run the following command in your terminal: -```bash copy -docker --version -# Docker version 25.0.2, build 29cf629 (example output) -``` - -You'll also need to ensure that docker-compose exists in your terminal: -```bash copy -which docker-compose -# /usr/local/bin/docker-compose (example output) -``` - -To check for `foundry`, run the following command in your terminal: -```bash copy -forge --version -# forge 0.2.0 (551bcb5 2024-02-28T07:40:42.782478000Z) (example output) -``` - -### Clone the starter repository -Just like our other examples, we're going to clone this repository. -All of the code and instructions for this tutorial can be found in the -[`projects/prompt-to-nft`](./prompt-to-nft) -directory of the repository. - -```bash copy -# Clone locally -git clone --recurse-submodules https://github.com/ritual-net/infernet-container-starter -# Navigate to the repository -cd infernet-container-starter -``` - -### Configure the `prompt-to-nft` container - -#### Configure the URL for the Stable Diffusion service -The `prompt-to-nft` container needs to know where to find the stable diffusion service. To do this, we need to -modify the configuration file for the `prompt-to-nft` container. We have a sample [config.sample.json](./container/config.sample.json) file. -Simply navigate to the [`projects/prompt-to-nft/container`](./container) directory and set up the config file: - -```bash -cd projects/prompt-to-nft/container -cp config.sample.json config.json -``` - -In the `containers` field, you will see the following: - -```json -"containers": [ - { - // etc. etc. - "env": { - "ARWEAVE_WALLET_FILE_PATH": "/app/wallet/keyfile-arweave.json", - "IMAGE_GEN_SERVICE_URL": "http://your.services.ip:port" // <- replace with your service's IP and port - } - } -}, -``` - -#### Configure the path to your Arweave wallet - -Create a directory named `wallet` in the `container` directory and place your Arweave wallet file in it. - -```bash -mkdir wallet -cp /path/to/your/arweave-wallet.json wallet/keyfile-arweave.json -``` - -By default the `prompt-to-nft` container will look for a wallet file at `/app/wallet/keyfile-arweave.json`. The `wallet` -directory you have created, will get copied into your docker file at the build step below. If your wallet filename is -different, you can change the `ARWEAVE_WALLET_FILE_PATH` environment variable in the `config.json` file. - -```json -"containers": [ - { - // etc. etc. - "env": { - "ARWEAVE_WALLET_FILE_PATH": "/app/wallet/keyfile-arweave.json", // <- replace with your wallet file name - "IMAGE_GEN_SERVICE_URL": "http://your.services.ip:port" - } - } -}, -``` - -### Build the `prompt-to-nft` container - -First, navigate back to the root of the repository. Then simply run the following command to build the `prompt-to-nft` -container: - -```bash copy -cd ../../.. -make build-container project=prompt-to-nft -``` - -### Deploy the `prompt-to-nft` container with Infernet - -You can run a simple command to deploy the `prompt-to-nft` container along with bootstrapping the rest of the -Infernet node stack in one go: - -```bash copy -make deploy-container project=prompt-to-nft -``` - -### Check the running containers - -At this point it makes sense to check the running containers to ensure everything is running as expected. 
- -```bash -# > docker container ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -0dbc30f67e1e ritualnetwork/example-prompt-to-nft-infernet:latest "hypercorn app:creat…" 8 seconds ago Up 7 seconds -0.0.0.0:3000->3000/tcp prompt-to-nft -0c5140e0f41b ritualnetwork/infernet-anvil:0.0.0 "anvil --host 0.0.0.…" 23 hours ago Up 23 hours -0.0.0.0:8545->3000/tcp anvil-node -f5682ec2ad31 ritualnetwork/infernet-node:latest "/app/entrypoint.sh" 23 hours ago Up 9 seconds -0.0.0.0:4000->4000/tcp deploy-node-1 -c1ece27ba112 fluent/fluent-bit:latest "/fluent-bit/bin/flu…" 23 hours ago Up 10 seconds 2020/tcp, -0.0.0.0:24224->24224/tcp, :::24224->24224/tcp deploy-fluentbit-1 -3cccea24a303 redis:latest "docker-entrypoint.s…" 23 hours ago Up 10 seconds 0.0.0.0:6379->6379/tcp, -:::6379->6379/tcp deploy-redis-1 -``` - -You should see five different images running, including the Infernet node and the prompt-to-nft container. - -## Minting an NFT by directly calling the consumer contract - -In the following steps, we will deploy our NFT consumer contract and call it using a forge script to mint an NFT. - -### Setup - -Notice that in [one of the steps above](#check-the-running-containers) we have an Anvil node running on port `8545`. - -By default, the [`anvil-node`](https://hub.docker.com/r/ritualnetwork/infernet-anvil) image used deploys the -[Infernet SDK](https://docs.ritual.net/infernet/sdk/introduction) and other relevant contracts for you: -- Registry: `0x663F3ad617193148711d28f5334eE4Ed07016602` -- Primary node: `0x70997970C51812dc3A010C7d01b50e0d17dc79C8` - -### Deploy our NFT Consumer contract - -In this step, we will deploy our NFT consumer contract to the Anvil node. Our [`DiffusionNFT.sol`](./contracts/src/DiffusionNFT.sol) -contract is a simple ERC721 contract which implements our consumer interface. - - -#### Anvil logs - -During this process, it is useful to look at the logs of the Anvil node to see what's going on. To follow the logs, -in a new terminal, run: - -```bash copy -docker logs -f anvil-node -``` - -#### Deploying the contract - -Once ready, to deploy the [`DiffusionNFT`](./contracts/src/DiffusionNFT.sol) consumer contract, in another terminal, run: - -```bash copy -make deploy-contracts project=prompt-to-nft -``` - -You should expect to see similar Anvil logs: - -```bash -# > make deploy-contracts project=prompt-to-nft - -eth_getTransactionReceipt - -Transaction: 0x0577dc98192d971bafb30d53cb217c9a9c16f92ab435d20a697024a4f122c048 -Contract created: 0x13D69Cf7d6CE4218F646B759Dcf334D82c023d8e -Gas used: 1582129 - -Block Number: 1 -Block Hash: 0x1113522c8422bde163f21461c7c66496e08d4bb44f56e4131c2af57f8457f5a5 -Block Time: "Wed, 6 Mar 2024 05:03:45 +0000" - -eth_getTransactionByHash -``` - -From our logs, we can see that the `DiffusionNFT` contract has been deployed to address -`0x13D69Cf7d6CE4218F646B759Dcf334D82c023d8e`. - -### Call the contract - -Now, let's call the contract to mint an NFT! In the same terminal, run: - -```bash copy -make call-contract project=prompt-to-nft prompt="A golden retriever skiing." 
-``` - -You should first expect to see an initiation transaction sent to the `DiffusionNFT` contract: - -```bash - -eth_getTransactionReceipt - -Transaction: 0x571022944a1aca5647e10a58b2242a83d88f2e54dca0c7b4afe3c4b61fa3faf6 -Gas used: 214390 - -Block Number: 2 -Block Hash: 0x167a45bb2d30ab3732553aafb1755a3e126b2e1ae7ef52ca96bd75acb0eeb5eb -Block Time: "Wed, 6 Mar 2024 05:06:09 +0000" - -``` -Shortly after that you should see another transaction submitted from the Infernet Node which is the -result of your on-chain subscription and its associated job request: - -```bash -eth_sendRawTransaction -_____ _____ _______ _ _ _ -| __ \|_ _|__ __| | | | /\ | | -| |__) | | | | | | | | | / \ | | -| _ / | | | | | | | |/ /\ \ | | -| | \ \ _| |_ | | | |__| / ____ \| |____ -|_| \_\_____| |_| \____/_/ \_\______| - - -nft minted! https://arweave.net/ -nft id 1 -nft owner 0x1804c8AB1F12E6bbf3894d4083f33e07309d1f38 - -Transaction: 0xcaf67e3f627c57652fa563a9b6f0f7fd27911409b3a7317165a6f5dfb5aff9fd -Gas used: 250851 - -Block Number: 3 -Block Hash: 0xfad6f6743bd2d2751723be4c5be6251130b0f06a46ca61c8d77077169214f6a6 -Block Time: "Wed, 6 Mar 2024 05:06:18 +0000" - -eth_blockNumber -``` - -We can now confirm that the address of the Infernet Node (see the logged `node` parameter in the Anvil logs above) -matches the address of the node we setup by default for our Infernet Node. - -We can also see that the owner of the NFT is `0x1804c8AB1F12E6bbf3894d4083f33e07309d1f38` and the NFT has been minted -and uploaded to Arweave. - -Congratulations! 🎉 You have successfully minted an NFT! - -## Minting an NFT from the UI - -This project also includes a simple NextJS frontend that connects to the local Anvil node. This frontend allows you to -connect your wallet and mint an NFT by providing a prompt. - -### Pre-requisites -Ensure that you have the following installed: -1. [NodeJS](https://nodejs.org/en) -2. A node package manager. This can be either `npm`, `yarn`, `pnpm` or `bun`. Of course, we recommend `bun`. - -### Run the UI - -From the top-level directory of the repository, simply run the following command: - -```bash copy -make run-service project=prompt-to-nft service=ui -``` - -This will start the UI service. You can now navigate to `http://localhost:3001` in your browser to see the UI. -![ui image](./img/ui.png)j - -### Connect your wallet -By clicking "Connect Wallet", your wallet will also ask you to switch to our anvil testnet. By accepting, you will be -connected. -![metamask prompt](./img/metamask-anvil.png) - -Here, you should also see the NFT you minted earlier through the direct foundry script. - -![ui just after connecting](./img/just-connected.png) - -### Get Some ETH - -To be able to mint the NFT, you will need some ETH. You can get some testnet ETH the "Request 1 ETH" button at -the top of the page. If your balance does not update, you can refresh the page. - -### Enter a prompt & mint a new NFT -You can now enter a prompt and hit the "Generate NFT" button. A look at your anvil-node & infernet-node logs will -show you the transactions being sent and the NFT being minted. The newly-minted NFT will also appear in the UI. - -![mint screen](./img/mint-screen.png) - -Once your NFT's been generated, the UI will attempt to fetch it from arweave and display it. This usually takes less -than a minute. - -![fetching from arweave](./img/fetching-from-arweave.png) - -And there you have it! You've minted an NFT from a prompt using the UI! 
-![minted nft](./img/minted-nft.png) diff --git a/projects/tgi-llm/container/README.md b/projects/tgi-llm/container/README.md index bde320c..973c5dc 100644 --- a/projects/tgi-llm/container/README.md +++ b/projects/tgi-llm/container/README.md @@ -1,27 +1,21 @@ # TGI LLM -In this example, we're running an infernet node along with a TGI service. +In this example, we're running an Infernet Node along with a TGI service. ## Deploying TGI Service -If you have your own TGI service running, feel free to skip this part. Otherwise, -you can deploy the TGI service using the following command. +If your TGI service is already running, feel free to skip this part. Otherwise, +you will need to deploy a TGI service first. -Make sure you have a machine with proper GPU support. Clone this repository & -run the following command: - -```bash -make run-service project=tgi-llm service=tgi -``` +For deployment instructions, check out our [Setting up a TGI LLM Service](https://learn.ritual.net/examples/tgi_inference_with_mistral_7b#setting-up-a-tgi-llm-service) tutorial! ## Deploying Infernet Node Locally -Running an infernet node involves a simple configuration step & running step. +Running an Infernet Node involves a simple configuration step & running step. ### Configuration -Copy our [sample config file](./config.sample.json) into a new file -called `config.json`. +Copy our [sample config file](./config.sample.json) into a new file called `config.json`. ```bash cp config.sample.json config.json @@ -32,7 +26,7 @@ TGI Service you just deployed. ```json { - // etc. + // ... "containers": [ { "id": "tgi-llm", @@ -44,7 +38,8 @@ TGI Service you just deployed. "allowed_ips": [], "command": "--bind=0.0.0.0:3000 --workers=2", "env": { - "TGI_SERVICE_URL": "http://{your-service-ip}:{your-service-port}" // <- Change this to the TGI service you deployed + // TODO: replace with your service ip & port + "TGI_SERVICE_URL": "http://{your-service-ip}:{your-service-port}" } } ] @@ -53,7 +48,7 @@ TGI Service you just deployed. ### Running the Infernet Node Locally -With that out of the way, you can now run the infernet node using the following command +With that out of the way, you can now run the Infernet Node using the following command at the top-level directory of this repo: ``` make deploy-container project=tgi-llm ``` ## Testing the Infernet Node -You can test the infernet node by posting a job in the node's REST api. +You can test the Infernet Node by posting a job to the node's REST API. ```bash copy curl -X POST "http://127.0.0.1:4000/api/jobs" \ @@ -92,7 +87,7 @@ You can expect a response similar to the following: # "id":"f026c7c2-7027-4c2d-b662-2b48c9433a12", # "result": { # "container": "tgi-llm", -# "output": +# "output": # { # "output": "\n\nI\u2019m not sure if this is a real question or not, but I\u2019m" # } @@ -102,4 +97,11 @@ You can expect a response similar to the following: # ] ``` -Congratulations! You've successfully ran an infernet node with a TGI service. +Congratulations! You've successfully run an Infernet Node with a TGI service. + +## Next steps + +This container is for demonstration purposes only, and is purposefully simplified for readability and ease of comprehension.
For a production-ready version of this code, check out: + +- The [TGI Client Inference Workflow](https://infernet-ml.docs.ritual.net/reference/infernet_ml/workflows/inference/tgi_client_inference_workflow): A Python class that implements a TGI service client similar to this example, and can be used to build production-ready containers. +- The [TGI Client Inference Service](https://infernet-services.docs.ritual.net/reference/tgi_client_inference_service): A production-ready, [Infernet](https://docs.ritual.net/infernet/node/introduction)-compatible container that works out-of-the-box with minimal configuration, and serves inference using the `TGI Client Inference Workflow`. diff --git a/projects/tgi-llm/container/src/app.py b/projects/tgi-llm/container/src/app.py index dbe378f..6823173 100644 --- a/projects/tgi-llm/container/src/app.py +++ b/projects/tgi-llm/container/src/app.py @@ -42,7 +42,6 @@ async def inference() -> dict[str, Any]: """For off-chain requests, the prompt is sent as is.""" prompt = cast(dict[str, Any], data).get("prompt") - service_url = os.environ["TGI_SERVICE_URL"] client = Client(service_url, timeout=30) reponse = client.generate(cast(str, prompt)) @@ -75,7 +74,6 @@ async def inference() -> dict[str, Any]: """ return {"data": content} - return app diff --git a/projects/torch-iris/torch-iris.md b/projects/torch-iris/torch-iris.md deleted file mode 100644 index bd024c5..0000000 --- a/projects/torch-iris/torch-iris.md +++ /dev/null @@ -1,292 +0,0 @@ -# Running a Torch Model on Infernet - -Welcome to this comprehensive guide where we'll explore how to run a `pytorch` model on Infernet. If you've followed -our ONNX example, you'll find this guide to be quite similar. - -**Model:** This example uses a pre-trained model to classify iris flowers. The code for the model -is located at the [simple-ml-models](https://github.com/ritual-net/simple-ml-models/tree/main/iris_classification) -repository. - -## Pre-requisites - -For this tutorial you'll need to have the following installed. - -1. [Docker](https://docs.docker.com/engine/install/) -2. [Foundry](https://book.getfoundry.sh/getting-started/installation) - -### Ensure `docker` & `foundry` exist - -To check for `docker`, run the following command in your terminal: - -```bash copy -docker --version -# Docker version 25.0.2, build 29cf629 (example output) -``` - -You'll also need to ensure that docker-compose exists in your terminal: - -```bash copy -which docker-compose -# /usr/local/bin/docker-compose (example output) -``` - -To check for `foundry`, run the following command in your terminal: - -```bash copy -forge --version -# forge 0.2.0 (551bcb5 2024-02-28T07:40:42.782478000Z) (example output) -``` - -### Clone the starter repository - -If you haven't already, clone the infernet-container-starter repository. All of the code for this tutorial is located -under the `projects/torch-iris` directory. 
- -```bash copy -# Clone locally -git clone --recurse-submodules https://github.com/ritual-net/infernet-container-starter -# Navigate to the repository -cd infernet-container-starter -``` - -### Build the `torch-iris` container - -From the top-level directory of this repository, simply run the following command to build the `torch-iris` container: - -```bash copy -make build-container project=torch-iris -``` - -After the container is built, you can deploy an infernet-node that utilizes that -container by running the following command: - -```bash -make deploy-container project=torch-iris -``` - -## Making Inference Requests via Node API (a la Web2 request) - -Now, you can make inference requests to the infernet-node. In a new tab, run: - -```bash -curl -X POST "http://127.0.0.1:4000/api/jobs" \ - -H "Content-Type: application/json" \ - -d '{"containers":["torch-iris"], "data": {"input": [[1.0380048, 0.5586108, 1.1037828, 1.712096]]}}' -``` - -You should get an output similar to the following: - -```json -{ - "id": "6d5e47f0-5907-4ab2-9523-862dccb80d67" -} -``` - -Now, you can check the status of the job by running (make sure job id matches the one -you got from the previous request): - -```bash -curl "http://127.0.0.1:4000/api/jobs?id=6d5e47f0-5907-4ab2-9523-862dccb80d67" -``` - -Should return: - -```json -[ - { - "id": "6d5e47f0-5907-4ab2-9523-862dccb80d67", - "result": { - "container": "torch-iris", - "output": { - "input_data": [ - [ - 1.038004755973816, - 0.5586107969284058, - 1.1037827730178833, - 1.7120959758758545 - ] - ], - "input_shapes": [ - [ - 4 - ] - ], - "output_data": [ - [ - 0.0016699483385309577, - 0.021144982427358627, - 0.977185070514679 - ] - ] - } - }, - "status": "success" - } -] -``` - -#### Note Regarding the Input - -The inputs provided above correspond to an iris flower with the following -characteristics. Refer to the - -1. Sepal Length: `5.5cm` -2. Sepal Width: `2.4cm` -3. Petal Length: `3.8cm` -4. Petal Width: `1.1cm` - -Putting this input into a vector and scaling it, we get the following scaled input: - -```python -[1.0380048, 0.5586108, 1.1037828, 1.712096] -``` - -Refer -to [this function in the model's repository](https://github.com/ritual-net/simple-ml-models/blob/03ebc6fb15d33efe20b7782505b1a65ce3975222/iris_classification/iris_inference_pytorch.py#L13) -for more information on how the input is scaled. - -For more context on the Iris dataset, refer to -the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/iris). - -## Making Inference Requests via Contracts (a la Web3 request) - -The [contracts](contracts) directory contains a simple forge -project that can be used to interact with the Infernet Node. - -Here, we have a very simple -contract, [IrisClassifier](contracts/src/IrisClassifier.sol), -that requests a compute job from the Infernet Node and then retrieves the result. -We are going to make the same request as above, but this time using a smart contract. -Since floats are not supported in Solidity, we convert all floats to `uint256` by -multiplying the input vector entries by `1e6`: - -```solidity - uint256[] memory iris_data = new uint256[](4); -iris_data[0] = 1_038_004; -iris_data[1] = 558_610; -iris_data[2] = 1_103_782; -iris_data[3] = 1_712_096; -``` - -We have multiplied the input by 1e6 to have enough decimals accuracy. This can be seen -[here](contracts/src/IrisClassifier.sol#19) in the contract's -code. 
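To make the fixed-point convention concrete, here is a small Python sketch of the round trip, using the `eth_abi` helpers that the example containers already import; the hard-coded integers mirror the Solidity snippet above, and the rest is illustrative.

```python
from eth_abi import decode, encode

# The scaled iris input, already multiplied by 1e6 as in IrisClassifier.sol
iris_data = [1_038_004, 558_610, 1_103_782, 1_712_096]

# Contract side: the values are ABI-encoded as a uint256[] before being sent
encoded = encode(["uint256[]"], [iris_data])

# Container side: decode the uint256[] and divide by 1e6 to recover the floats
decoded = decode(["uint256[]"], encoded)[0]
values = [[v / 1e6 for v in decoded]]
print(values)  # [[1.038004, 0.55861, 1.103782, 1.712096]]
```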
- -### Infernet's Anvil Testnet - -To request an on-chain job, you'll need to deploy contracts using the infernet sdk. -We already have a public [anvil node](https://hub.docker.com/r/ritualnetwork/infernet-anvil) docker image which has the -corresponding infernet sdk contracts deployed, along with a node that has -registered itself to listen to on-chain subscription events. - -* Registry Address: `0x663F3ad617193148711d28f5334eE4Ed07016602` -* Node Address: `0x70997970C51812dc3A010C7d01b50e0d17dc79C8` (This is the second account in the anvil's accounts.) - -### Monitoring the EVM Logs - -The infernet node configuration for this project includes our anvil node. You can monitor the logs of the anvil node to -see what's going on. In a new terminal, run: - -```bash -docker logs -f anvil-node -``` - -As you deploy the contract and make requests, you should see logs indicating the -requests and responses. - -### Deploying the Contract - -Simply run the following command to deploy the contract: - -```bash -project=torch-iris make deploy-contracts -``` - -In your anvil logs you should see the following: - -```bash -eth_feeHistory -eth_sendRawTransaction -eth_getTransactionReceipt - - Transaction: 0x8e7e96d0a062285ee6fea864c43c29af65b962d260955e6284ab79dae145b32c - Contract created: 0x13D69Cf7d6CE4218F646B759Dcf334D82c023d8e - Gas used: 725947 - - Block Number: 1 - Block Hash: 0x88c1a1af024cca6f921284bd61663b1d500aa6d22d06571f0a085c2d8e1ffe92 - Block Time: "Mon, 19 Feb 2024 16:44:00 +0000" - -eth_blockNumber -eth_newFilter -eth_getFilterLogs -eth_blockNumber -``` - -beautiful, we can see that a new contract has been created -at `0x13D69Cf7d6CE4218F646B759Dcf334D82c023d8e`. That's the address of -the `IrisClassifier` contract. We are now going to call this contract. To do so, -we are using -the [CallContract.s.sol](contracts/script/CallContract.s.sol) -script. Note that the address of the -contract [is hardcoded in the script](contracts/script/CallContract.s.sol#L13), -and should match the address we see above. Since this is a test environment and we're -using a test deployer address, this address is quite deterministic and shouldn't change. -Otherwise, change the address in the script to match the address of the contract you -just deployed. - -### Calling the Contract - -To call the contract, run the following command: - -```bash -project=torch-iris make call-contract -``` - -In the anvil logs, you should see the following: - -```bash -eth_sendRawTransaction - - -_____ _____ _______ _ _ _ -| __ \|_ _|__ __| | | | /\ | | -| |__) | | | | | | | | | / \ | | -| _ / | | | | | | | |/ /\ \ | | -| | \ \ _| |_ | | | |__| / ____ \| |____ -|_| \_\_____| |_| \____/_/ \_\______| - - -about to decode babyyy -predictions: (adjusted by 6 decimals, 1_000_000 = 100%, 1_000 = 0.1%) -Setosa: 1669 -Versicolor: 21144 -Virginica: 977185 - - Transaction: 0x252158ab9dd2178b6a11e417090988782861d208d8e9bb01c4e0635316fd95c9 - Gas used: 111762 - - Block Number: 3 - Block Hash: 0xfba07bd65da8dde644ba07ff67f0d79ed36f388760f27dcf02d96f7912d34c4c - Block Time: "Mon, 19 Feb 2024 16:54:07 +0000" - -eth_blockNumbereth_blockNumber -eth_blockNumber -``` - -Beautiful! We can see that the same result has been posted to the contract. - -For more information about the container, consult -the [container's readme.](container/README.md) - -### Next Steps - -From here, you can bring your own trained pytorch model, and with minimal changes, you can make it both work with an -infernet-node as well as a smart contract. 
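As a purely illustrative sketch of that last step, the snippet below defines a tiny stand-in classifier with the same 4-feature input and 3-class output as the iris example and exports it with TorchScript. The model architecture, file name, and choice of TorchScript are assumptions made for illustration, not requirements imposed by this container.

```python
import torch
import torch.nn as nn


class IrisNet(nn.Module):
    """Tiny stand-in for 'your own trained pytorch model'."""

    def __init__(self) -> None:
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.layers(x), dim=-1)


model = IrisNet().eval()

# Trace and save the model so a serving container can load it without the class definition
example = torch.tensor([[1.0380048, 0.5586108, 1.1037828, 1.712096]])
scripted = torch.jit.trace(model, example)
scripted.save("iris_torch.pt")  # hypothetical file name; publish it wherever your container loads from
```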
- -### More Information - -1. Check out our [ONNX example](../onnx-iris/onnx-iris.md) if you haven't already. -2. [Infernet Callback Consumer Tutorial](https://docs.ritual.net/infernet/sdk/consumers/Callback) -3. [Infernet Nodes Docoumentation](https://docs.ritual.net/infernet/node/introduction) -4. [Infernet-Compatible Containers](https://docs.ritual.net/infernet/node/advanced/containers)