
Merge pull request #21 from Lumerin-protocol/main
images and documents and swagger oh my
rcondron authored Aug 21, 2024
2 parents e3aef8a + 250b20f commit b867209
Showing 71 changed files with 4,309 additions and 1,075 deletions.
18 changes: 13 additions & 5 deletions .github/workflows/build.yml
@@ -61,6 +61,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -83,7 +84,8 @@ jobs:
unzip -o -j $LLAMACPP build/bin/llama-server
echo '{"run":["./llama-server -m ./'$MODEL'","./proxy-router","./ui-desktop-1.0.0.AppImage"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch llama-server ./proxy-router/bin/proxy-router .env $MODEL mor-launch.json ./ui-desktop/dist/ui-desktop-1.0.0.AppImage
cp ./proxy-router/models-config.json.example models-config.json
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch llama-server ./proxy-router/bin/proxy-router .env $MODEL mor-launch.json ./ui-desktop/dist/ui-desktop-1.0.0.AppImage models-config.json
- name: Upload artifacts
uses: actions/upload-artifact@v4
@@ -128,6 +130,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -150,8 +153,9 @@ jobs:
unzip -o -j $LLAMACPP build/bin/llama-server
echo '{"run":["./llama-server -m ./'$MODEL'","./proxy-router","./ui-desktop.app/Contents/MacOS/ui-desktop"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
cp ./proxy-router/models-config.json.example models-config.json
unzip ./ui-desktop/dist/ui-desktop-1.0.0-mac.zip
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json models-config.json
zip -r $ARTIFACT ui-desktop.app
- name: Upload artifacts
@@ -197,6 +201,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -219,8 +224,9 @@ jobs:
unzip -o -j $LLAMACPP build/bin/llama-server
echo '{"run":["./llama-server -m ./'$MODEL'","./proxy-router","./ui-desktop.app/Contents/MacOS/ui-desktop"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
cp ./proxy-router/models-config.json.example models-config.json
unzip ./ui-desktop/dist/ui-desktop-1.0.0-arm64-mac.zip
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json models-config.json
zip -r $ARTIFACT ui-desktop.app
- name: Upload artifacts
@@ -270,6 +276,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -291,11 +298,12 @@ jobs:
wget -nv https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/$MODEL
unzip -o -j $LLAMACPP llama-server.exe llama.dll ggml.dll
echo '{"run":["./llama-server.exe -m ./'$MODEL'","./proxy-router.exe","./ui-desktop-1.0.0.exe"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
cp ./proxy-router/.env.example.win .env
cp ./proxy-router/models-config.json.example models-config.json
mv ./proxy-router/bin/proxy-router proxy-router.exe
mv ./launcher/mor-launch mor-launch.exe
mv "./ui-desktop/dist/ui-desktop 1.0.0.exe" ui-desktop-1.0.0.exe
7z a $ARTIFACT LICENSE mor-launch.exe proxy-router.exe .env llama-server.exe llama.dll ggml.dll $MODEL mor-launch.json ui-desktop-1.0.0.exe
7z a $ARTIFACT LICENSE mor-launch.exe proxy-router.exe .env llama-server.exe llama.dll ggml.dll $MODEL mor-launch.json ui-desktop-1.0.0.exe models-config.json
- name: Upload artifacts
uses: actions/upload-artifact@v4
6 changes: 5 additions & 1 deletion .gitignore
@@ -1,2 +1,6 @@
*/**/node_modules
***/.DS_Store
***/.DS_Store
***/.scratch
***/.env
***/.$*drawio.bkp
***/.$*drawio.dtmp
62 changes: 62 additions & 0 deletions docs/00-overview.md
@@ -0,0 +1,62 @@
# Overview of the Morpheus-Lumerin Environment

![Architecture-Overview](images/overview.png)

This document is intended to provide a high level overview of the major architectural components between model compute-providers and consumers in the Morpheus-Lumerin environment.

The ultimate goal is to show how configuration of the compute-provider environment and the consumer nodes can enable prompts and inference from the consumer to the models hosted by the provider. The key enablers are the Arbitrum blockchain, the Morpheus token (MOR) for staking and bidding (transactions to pay for use), and the Lumerin proxy-router, which anonymously routes traffic based on smart-contract governance.

In other words, referring to the overview diagram: how do we get to conversation **6**, where prompts and inference are happening?

Numbers below reference the circled elements in the diagram above.

## 0. Existing Foundation Elements
- [Readme](../README.md) - for more details
- Arbitrum Ethereum Layer 2 blockchain
- Morpheus Token (MOR) for staking and bidding
- Lumerin Smart Contract for governance and routing

## 1. Provider AI Model
- [01-model-setup.md](01-model-setup.md) - for more details
- Existing, Hosted AI model that is available for inference
- In the real world, this is assumed to be a high-horsepower server or server farm tuned for large language models and available via standard OpenAI API interface on a privately accessed endpoint (IP address:port or DNS name:port) eg: `http://mycoolaimodel.serverfarm.io:8080`
- In the packaged software releases, a llama.cpp (llama-server) example is included that runs on the same machine as the other components to show how they work together. It is not a real-world model and is not tuned for performance.

## 2. Provider Proxy-Router
- [02-provider-setup.md](02-provider-setup.md) - for more details
- The proxy-router is the core "router" that talks to and listens to the blockchain, routing prompts and inference between the provider's hosted models (offered via bids) and the consumers that purchase and use them
- In a real-world scenario, this proxy-router would be a separate, small server or even docker container that is not part of the AI Model Server Instance (it can be, but it's nice to separate the architecture either for anonymity or performance)
- Installation on the provider side is as simple as setting up the environment variables and running the proxy-router software.
- There is a sample `.env.example` file located within the ./proxy-router folder that should be copied to `.env` and edited with the appropriate values.
- Please see [proxy-router .ENV Variables](#proxy-router-env-variables) below for more information on the key values needed in the .env file
- The proxy-router needs to be on both the provider and consumer environment and have access to an Arbitrum Ethereum node via web sockets (WSS) for listening to and posting elements on the blockchain

## 3. Provider - setup Provider, Model and Bid on the blockchain
- [03-provider-offer.md](03-provider-offer.md) - for more details
- Once the proxy-router is set up and the provider's wallet has the proper amount of ETH and MOR, use the Swagger API Interface (e.g. http://yourlocalproxy:8082/swagger/index.html) to do the following:
1. Authorize the diamond contract to spend on your wallet's behalf
1. Register your provider (the proxy-router) on the blockchain (http://mycoolproxy.serverfarm.io:3333)
1. Register your model on the blockchain
1. Create a bid for your model on the blockchain
- Further details on how to do this are in the [Provider Offer Guide](03-provider-offer.md)
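The four numbered steps above map to calls against your local proxy-router API. A dry-run sketch follows; the route paths are assumptions for illustration only:

```shell
# Dry-run sketch of the provider onboarding flow. The route paths below are
# NOT confirmed -- look up the real ones in your proxy-router's Swagger UI
# (http://yourlocalproxy:8082/swagger/index.html), then drop the leading
# 'echo' to actually execute each call.
PROXY_API="http://localhost:8082"
echo curl -X POST "$PROXY_API/blockchain/approve"    # 1. authorize the diamond contract to spend MOR
echo curl -X POST "$PROXY_API/blockchain/providers"  # 2. register your provider endpoint (e.g. mycoolproxy.serverfarm.io:3333)
echo curl -X POST "$PROXY_API/blockchain/models"     # 3. register your model
echo curl -X POST "$PROXY_API/blockchain/bids"       # 4. create a bid for the model
```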

## 4. Consumer Node Setup
- [04-consumer-setup.md](04-consumer-setup.md) - for more details
- [04a-consumer-setup-source.md](04a-consumer-setup-source.md) - for more details on setting up from GitHub source
- The consumer node is the "client" that purchases bids from the blockchain, sends prompts via the proxy-router and receives inference back from the provider's model
- The components are very similar to the Provider side of things with the exception that the consumer node will typically not be hosting a model, but will be sending prompts to the proxy-router and receiving inference back
- In this case, the easiest way to install is to use the packaged releases for your platform on Github and follow the instructions in the README.md file
- These packages include 3 different pieces of software
- llama.cpp (llama-server) - a simple example model that can be run on the same machine as the proxy-router and ui-desktop to show how the components work together and run local (free) inference
- proxy-router - the same software as the provider side, but with different environment variables and a different role
- ui-desktop - Electron GUI that enables the user to interact with the models (via the API) to browse offered bids, purchase and send prompts
- The consumer node will need to have the proxy-router running and the UI-Desktop running to interact with the models and bids on the blockchain
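The packaged releases wire these three pieces together through a small launcher config, `mor-launch.json`. Pretty-printed, the Linux variant generated by the build workflow above looks like this (the GGUF filename is a placeholder for the model file shipped in the release):

```json
{
  "run": [
    "./llama-server -m ./<model>.gguf",
    "./proxy-router",
    "./ui-desktop-1.0.0.AppImage"
  ]
}
```

The launcher simply starts each listed command, so the local model, the proxy-router and the UI come up together.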

## 5. Purchase Bid
- [05-purchase-bid.md](05-purchase-bid.md) - for more details
- Once the UI-Desktop is up and running, the consumer can browse the available bids on the blockchain
- Select a bid and stake the intended MOR amount (minimum should be shown)

## 6. Prompt & Inference
- [06-model-interaction.md](06-model-interaction.md) - for more details
- Once the bid is purchased, the consumer can send prompts to the proxy-router via the UI-Desktop
5 changes: 5 additions & 0 deletions docs/01-model-setup.md
@@ -0,0 +1,5 @@
## TODO

- The intent of this document is to outline how to run llama.cpp on your local machine and could include details on AWS / EC2 Build recommendations for compute-providers
- The end state should be a private endpoint, accessible to the proxy-router, that serves the model via an OpenAI-compatible API
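Until this guide is fleshed out, a minimal local sketch (assumptions: the llama-server binary from the packaged release and a GGUF model file in the current directory):

```shell
# Dry-run sketch: start llama.cpp's llama-server on the port that
# proxy-router's default OPENAI_BASE_URL (http://localhost:8080/v1) expects.
# MODEL is a placeholder -- substitute the GGUF file you actually downloaded.
MODEL="<your-model>.gguf"
CMD="./llama-server -m ./$MODEL --host 127.0.0.1 --port 8080"
echo "$CMD"   # drop the echo to actually launch the server
```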
------------
97 changes: 97 additions & 0 deletions docs/02-provider-setup.md
@@ -0,0 +1,97 @@

# Provider Hosting (Local LLM to offer, Proxy-Router running as background/service):

## Assumptions:
* Your AI model has been configured, started and made available to the proxy-router server via a private endpoint (IP:PORT or DNS:PORT) eg: `http://mycoolaimodel.domain.com:8080`
* Optional
* You can use the provided `llama.cpp` and `tinyllama` model to test locally
* If your local model is listening on a different port locally, you will need to modify the `OPENAI_BASE_URL` in the .env file to match the correct port
* You have an existing funded wallet with saMOR and saETH and also have the `private key` for the wallet (this will be needed for the .env file configuration)
* You have created an Alchemy or Infura free account and have a private API key for the Arbitrum Sepolia testnet (wss://arb-sepolia.g.alchemy.com/v2/<your_private_alchemy_api_key>)
* Your proxy-router must have a publicly accessible endpoint for the provider (ip:port or fqdn:port no protocol) eg: `mycoolmornode.domain.com:3333` - this will be used when creating the provider on the blockchain

## Installation & Configuration Steps:
1. Download latest release for your operating system: https://github.com/Lumerin-protocol/Morpheus-Lumerin-Node/releases

1. Extract the zip to a local folder (examples)
* Windows: `%USERPROFILE%/Downloads/morpheus`
* Linux & MacOS: `~/Downloads/morpheus`
* On MacOS you may need to execute `xattr -c proxy-router` in a terminal window to remove the quarantine flag

1. Edit the `.env` file following the guide below [proxy-router .ENV Variables](#proxy-router-env-variables)

1. **(OPTIONAL)** - External Provider or Pass through
* In some cases you will want to leverage external or existing AI Providers in the network via their own, private API
* Dependencies:
* `models-config.json` file in the proxy-router directory
* The proxy-router `.env` file must also be updated to include `MODELS_CONFIG_PATH=<path_to_proxy-router>/models-config.json`
* Once your provider is up and running, deploy a new model and model bid via the diamond contract (you will need the `model_ID` for the configuration)
* Edit the models-config.json file following the JSON format below, where:
* The top-level JSON key is the model_ID you created above; modelName, apiType, apiUrl and apiKey come from the external provider and are specific to their offered models
* Once the models-config.json file is updated, the morpheus node will need to be restarted to pick up the new configuration (not all models, eg image generation, can be utilized via the UI-Desktop, but API integration is possible)
* Example models-config.json file for external providers:
```json
{
"0x4b5d6c2d3e4f5a6b7c8de7f89a0b19e07f4a6e1f2c3a3c28d9d5e6": {
"modelName": "v1-5-specialmodel.modelversion [externalmodel]",
"apiType": "provider_api_type",
"apiUrl": "https://api.externalmodel.com/v1/xyz/generate",
"apiKey": "api-key-from-external-provider"
},
"0xb2c8a6b2c1d9ed7f0e9a3b4c2d6e5f14f9b8c3a7e5d6a1a0b9c7d8e4f30f4a7b": {
"modelName": "v1-7-specialmodel2.modelversion [externalmodel]",
"apiType": "provider_api_type",
"apiUrl": "https://api.externalmodel.com/v1/abc/generate",
"apiKey": "api-key-from-external-provider"
}
}
```
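Since the router only picks up this file on restart, it is worth validating the JSON first. A quick sketch (the model-ID key is a placeholder; assumes python3 is on PATH):

```shell
# Write a minimal models-config.json and confirm it parses as valid JSON
# before restarting the proxy-router. Substitute the model_ID from your
# on-chain model registration for the placeholder key.
cat > models-config.json <<'EOF'
{
  "0x<your_model_id>": {
    "modelName": "v1-5-specialmodel.modelversion [externalmodel]",
    "apiType": "provider_api_type",
    "apiUrl": "https://api.externalmodel.com/v1/xyz/generate",
    "apiKey": "api-key-from-external-provider"
  }
}
EOF
python3 -m json.tool models-config.json > /dev/null && echo "models-config.json: valid JSON"
```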

## Start the Proxy Router
1. On your server, launch the proxy-router with the modified .env file shown above
* Windows: Double click the `proxy-router.exe` (You will need to tell Windows Defender this is ok to run)
* Linux & MacOS: Open a terminal, navigate to the morpheus/proxy-router folder and run `./proxy-router`
1. This will start the proxy-router and begin monitoring the blockchain for events

## Validating Steps:
1. Once the proxy-router is running, you can navigate to the Swagger API Interface (e.g. http://localhost:8082/swagger/index.html) to validate that the proxy-router is running and listening for blockchain events
1. You can also check the logs in the `./data` directory for any errors or issues that may have occurred during startup
1. Once validated, you can move on and create your provider, model and bid on the blockchain [03-provider-offer.md](03-provider-offer.md)


----------------
### proxy-router .ENV Variables
Key values in the .env file are listed below (there are others, but these are primarily responsible for connecting to the blockchain, reaching the provider AI model and listening for incoming traffic):
- `WALLET_PRIVATE_KEY=`
- Private Key from your wallet needed for the proxy-router to sign transactions and respond to provided prompts (this is why the proxy router must be secured and the API endpoint protected)
- `ETH_NODE_ADDRESS=wss://arb-sepolia.g.alchemy.com/v2/<your_private_alchemy_api_key>`
- Ethereum Node Address for the Arbitrum blockchain (via Alchemy or Infura)
- This websocket (wss) address is key for the proxy-router to listen and post to the blockchain
- We recommend using your own private ETH Node Address for better performance (free account setup via Alchemy or Infura)
- `DIAMOND_CONTRACT_ADDRESS=0x8e19288d908b2d9F8D7C539c74C899808AC3dE45`
- This is the key Lumerin Smart Contract (currently Sepolia Arbitrum testnet)
- This is the address of the smart contract that the proxy-router will interact with to post providers, models & bids
- This address will change as the smart-contract is updated and for mainnet contract interaction
- `MOR_TOKEN_ADDRESS=0xc1664f994fd3991f98ae944bc16b9aed673ef5fd`
- This is the Morpheus Token (saMOR) address for Sepolia Arbitrum testnet
- This address will be different for mainnet token
- `WEB_ADDRESS=0.0.0.0:8082`
- This is the local listening port for your proxy-router API (Swagger) interface
- Based on your local needs, this may need to change (8082 is default)
- `WEB_PUBLIC_URL=localhost:8082`
- If you have or will be exposing your API interface to a local, PRIVATE (or VPN) network, you can change this to the DNS name or IP and port where the API will be available. The default is just on the local machine (localhost)
- The PORT must be the same as in the `WEB_ADDRESS` setting
- `OPENAI_BASE_URL=http://localhost:8080/v1`
- This is where the proxy-router should send OpenAI compatible requests to the provider model.
- By default (and included in the Morpheus-Lumerin software releases) this is set to `http://localhost:8080/v1` for the included llama.cpp model
- In a real-world scenario, this would be the IP address and port of the provider model server or server farm that is hosting the AI model separately from the proxy-router
- `PROXY_STORAGE_PATH=./data/`
- This is the path where the proxy-router will store logs and other data
- This path should be writable by the user running the proxy-router software
- `MODELS_CONFIG_PATH=`
- Location of the models-config.json file that contains the models that the proxy-router will be providing.
- Via this file the proxy-router can also call external providers' models (like Prodia) through their private APIs
- `PROXY_ADDRESS=0.0.0.0:3333`
- This is the local listening port for the proxy-router to receive prompts and inference requests from the consumer nodes
- This is the port that the consumer nodes will send prompts to and should be available publicly and via the provider definition setup on the blockchain
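Putting the variables above together, a minimal provider-side `.env` sketch (placeholder values shown in angle brackets; substitute your own private key and Alchemy/Infura URL):

```shell
# .env -- provider proxy-router (Sepolia Arbitrum testnet values from this guide)
WALLET_PRIVATE_KEY=<your_wallet_private_key>
ETH_NODE_ADDRESS=wss://arb-sepolia.g.alchemy.com/v2/<your_private_alchemy_api_key>
DIAMOND_CONTRACT_ADDRESS=0x8e19288d908b2d9F8D7C539c74C899808AC3dE45
MOR_TOKEN_ADDRESS=0xc1664f994fd3991f98ae944bc16b9aed673ef5fd
WEB_ADDRESS=0.0.0.0:8082
WEB_PUBLIC_URL=localhost:8082
OPENAI_BASE_URL=http://localhost:8080/v1
PROXY_STORAGE_PATH=./data/
MODELS_CONFIG_PATH=./models-config.json
PROXY_ADDRESS=0.0.0.0:3333
```

Remember that these are the testnet contract and token addresses; both will change for mainnet.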
