images and documents and swagger oh my #21

Merged 53 commits · Aug 21, 2024
Commits
36228ce
add model-config.json instructions and consumer reference
Aug 5, 2024
8285329
Merge pull request #149 from Lumerin-protocol/consumer-readme
LumerinIO Aug 5, 2024
d087b50
fix: more type definitions
shev-titan Aug 7, 2024
f1930ee
fix: update readme, replace louper with swagger equivalent
shev-titan Aug 7, 2024
79561aa
update mac detail instructions and direct proxy-router interaction vi…
Aug 7, 2024
72beab7
clean up doc
Aug 7, 2024
a569dd5
udpate
Aug 7, 2024
cb72ab8
cleanup llama nohup
Aug 7, 2024
6afbdeb
fix: update swagger tags
shev-titan Aug 7, 2024
0ff6a8a
update swagger url
Aug 7, 2024
3ecd2db
final update on swagger link
Aug 7, 2024
7252a0c
Merge pull request #150 from Lumerin-protocol/fix/api-doc
abs2023 Aug 7, 2024
2b69fb9
resolved conflicts
Aug 7, 2024
7a409d7
Merge pull request #151 from Lumerin-protocol/consumer-readme
LumerinIO Aug 7, 2024
b47f440
Fixed issue with missed model
bohdan-titan Aug 12, 2024
ea069a9
Merge pull request #152 from Lumerin-protocol/feature/image-gen
alex-sandrk Aug 12, 2024
01c255e
add model-config env var
Aug 12, 2024
042b7a5
update order of provider setup
Aug 12, 2024
1762c62
Merge pull request #154 from Lumerin-protocol/consumer-readme
abs2023 Aug 12, 2024
12f6715
feat: show local models from config file
alex-sandrk Aug 14, 2024
61ba084
feat: update swagger
alex-sandrk Aug 14, 2024
1ee8657
fix: test
alex-sandrk Aug 14, 2024
d55bfbe
fix: models select
alex-sandrk Aug 14, 2024
943bf70
Merge branch 'dev' of github.com:Lumerin-protocol/Morpheus-Lumerin-No…
alex-sandrk Aug 14, 2024
69ec448
feat: update ci
alex-sandrk Aug 14, 2024
2514e0f
refactor: fix typo
alex-sandrk Aug 14, 2024
4597766
Merge pull request #155 from Lumerin-protocol/feat/local-models-v2
alex-sandrk Aug 14, 2024
9d622dd
Merge branch 'dev' of github.com:Lumerin-protocol/Morpheus-Lumerin-No…
alex-sandrk Aug 14, 2024
7f2e6b9
Merge pull request #156 from Lumerin-protocol/fix/models-select
bohdan-titan Aug 14, 2024
77e6fc7
fix: correct report abi
alex-sandrk Aug 14, 2024
72520ad
Merge pull request #157 from Lumerin-protocol/fix/report-abi
alex-sandrk Aug 14, 2024
1642892
Fixed issues
bohdan-titan Aug 15, 2024
8150d37
feat: select different local models
alex-sandrk Aug 15, 2024
d2f642e
fix
bohdan-titan Aug 15, 2024
e971497
Merge pull request #159 from Lumerin-protocol/feat/chat-w-local-models
alex-sandrk Aug 15, 2024
3950229
Merge branch 'dev' into feature/image-improvement
alex-sandrk Aug 15, 2024
5193714
Merge pull request #158 from Lumerin-protocol/feature/image-improvement
alex-sandrk Aug 15, 2024
b50a69f
fix: create storage folder
alex-sandrk Aug 19, 2024
5072ba7
fix: win specific path
alex-sandrk Aug 19, 2024
41124a4
fix: win specific path
alex-sandrk Aug 19, 2024
c4482dc
fix: win specific path
alex-sandrk Aug 19, 2024
b8f6f9f
fix: win specific path
alex-sandrk Aug 19, 2024
fdbdcf2
Merge pull request #160 from Lumerin-protocol/fix/win-build
alex-sandrk Aug 19, 2024
df7badd
Fixed demo issues
bohdan-titan Aug 20, 2024
7a92e8d
changed var to cosnt
bohdan-titan Aug 20, 2024
173063b
Merge pull request #161 from Lumerin-protocol/feature/ui-bugs
bohdan-titan Aug 20, 2024
ac74271
Refactor docs - added todo and 6 step framework
Aug 20, 2024
83434a1
updated images to .png
Aug 20, 2024
6e30174
update images
Aug 20, 2024
3ca3a81
update naming
Aug 20, 2024
d0f68d7
Merge pull request #162 from Lumerin-protocol/consumer-readme
abs2023 Aug 20, 2024
56b5da4
Merge pull request #163 from Lumerin-protocol/dev
abs2023 Aug 20, 2024
250b20f
Merge pull request #164 from Lumerin-protocol/stg
abs2023 Aug 21, 2024
18 changes: 13 additions & 5 deletions .github/workflows/build.yml
@@ -61,6 +61,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -83,7 +84,8 @@ jobs:
unzip -o -j $LLAMACPP build/bin/llama-server
echo '{"run":["./llama-server -m ./'$MODEL'","./proxy-router","./ui-desktop-1.0.0.AppImage"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch llama-server ./proxy-router/bin/proxy-router .env $MODEL mor-launch.json ./ui-desktop/dist/ui-desktop-1.0.0.AppImage
cp ./proxy-router/models-config.json.example models-config.json
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch llama-server ./proxy-router/bin/proxy-router .env $MODEL mor-launch.json ./ui-desktop/dist/ui-desktop-1.0.0.AppImage models-config.json

- name: Upload artifacts
uses: actions/upload-artifact@v4
@@ -128,6 +130,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -150,8 +153,9 @@ jobs:
unzip -o -j $LLAMACPP build/bin/llama-server
echo '{"run":["./llama-server -m ./'$MODEL'","./proxy-router","./ui-desktop.app/Contents/MacOS/ui-desktop"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
cp ./proxy-router/models-config.json.example models-config.json
unzip ./ui-desktop/dist/ui-desktop-1.0.0-mac.zip
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json models-config.json
zip -r $ARTIFACT ui-desktop.app

- name: Upload artifacts
@@ -197,6 +201,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -219,8 +224,9 @@ jobs:
unzip -o -j $LLAMACPP build/bin/llama-server
echo '{"run":["./llama-server -m ./'$MODEL'","./proxy-router","./ui-desktop.app/Contents/MacOS/ui-desktop"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
cp ./proxy-router/models-config.json.example models-config.json
unzip ./ui-desktop/dist/ui-desktop-1.0.0-arm64-mac.zip
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json
zip -j $ARTIFACT ./LICENSE ./launcher/mor-launch ./proxy-router/bin/proxy-router .env llama-server $MODEL mor-launch.json models-config.json
zip -r $ARTIFACT ui-desktop.app

- name: Upload artifacts
@@ -270,6 +276,7 @@ jobs:
cd launcher
make
cd ../proxy-router
cp ./models-config.json.example ../ui-desktop/models-config.json
make build
cd ../ui-desktop
cp ./.env.example .env
@@ -291,11 +298,12 @@ jobs:
wget -nv https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/$MODEL
unzip -o -j $LLAMACPP llama-server.exe llama.dll ggml.dll
echo '{"run":["./llama-server.exe -m ./'$MODEL'","./proxy-router.exe","./ui-desktop-1.0.0.exe"]}' > mor-launch.json
cp ./proxy-router/.env.example .env
cp ./proxy-router/.env.example.win .env
cp ./proxy-router/models-config.json.example models-config.json
mv ./proxy-router/bin/proxy-router proxy-router.exe
mv ./launcher/mor-launch mor-launch.exe
mv "./ui-desktop/dist/ui-desktop 1.0.0.exe" ui-desktop-1.0.0.exe
7z a $ARTIFACT LICENSE mor-launch.exe proxy-router.exe .env llama-server.exe llama.dll ggml.dll $MODEL mor-launch.json ui-desktop-1.0.0.exe
7z a $ARTIFACT LICENSE mor-launch.exe proxy-router.exe .env llama-server.exe llama.dll ggml.dll $MODEL mor-launch.json ui-desktop-1.0.0.exe models-config.json

- name: Upload artifacts
uses: actions/upload-artifact@v4
6 changes: 5 additions & 1 deletion .gitignore
@@ -1,2 +1,6 @@
*/**/node_modules
***/.DS_Store
***/.DS_Store
***/.scratch
***/.env
***/.$*drawio.bkp
***/.$*drawio.dtmp
62 changes: 62 additions & 0 deletions docs/00-overview.md
@@ -0,0 +1,62 @@
# Overview of the Morpheus-Lumerin Environment

![Architecture-Overview](images/overview.png)

This document provides a high-level overview of the major architectural components between model compute-providers and consumers in the Morpheus-Lumerin environment.

The goal is to show how configuring the compute-provider environment and the consumer nodes enables prompts and inference to flow from the consumer to the models hosted by the provider. The key enablers are the Arbitrum blockchain, the Morpheus token for staking and bidding (transactions to pay for use), and the Lumerin proxy-router, which anonymously routes traffic based on smart-contract governance.

In other words, referring to the overview diagram: how do we get to step **6**, where prompts and inference are happening?

Numbers below reference the circled elements in the diagram above.

## 0. Existing Foundation Elements
- [Readme](../README.md) - for more details
- Arbitrum Ethereum Layer 2 blockchain
- Morpheus Token (MOR) for staking and bidding
- Lumerin Smart Contract for governance and routing

## 1. Provider AI Model
- [01-model-setup.md](01-model-setup.md) - for more details
- Existing, Hosted AI model that is available for inference
- In the real world, this is assumed to be a high-horsepower server or server farm tuned for large language models, available via the standard OpenAI API interface on a privately accessed endpoint (IP address:port or DNS name:port), e.g. `http://mycoolaimodel.serverfarm.io:8080`
- In the packaged software releases, a llama.cpp (llama-server) example is included that runs on the same machine as the other components to show how they work together; it is not a real-world model and is not tuned for performance
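Whichever you use, a quick sanity check is to ask the endpoint to list its models over the standard OpenAI API (the URL below is an assumption based on the default local llama-server port in this guide; substitute your own private endpoint):

```shell
# Hypothetical endpoint: replace with your private model endpoint if it differs.
curl -s http://localhost:8080/v1/models
```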

## 2. Provider Proxy-Router
- [02-provider-setup.md](02-provider-setup.md) - for more details
- The proxy-router is the core "router" that talks to and listens to the blockchain, and routes prompts and inference between the provider's hosted models (offered via bids) and the consumers that purchase and use them
- In a real-world scenario, this proxy-router would be a separate, small server or even docker container that is not part of the AI Model Server Instance (it can be, but it's nice to separate the architecture either for anonymity or performance)
- Installation on the provider side is as simple as setting up the environment variables and running the proxy-router software.
- There is a sample `.env.example` file located in the ./proxy-router folder that should be copied to `.env` and edited with the appropriate values.
- Please see [proxy-router .ENV Variables](#proxy-router-env-variables) below for more information on the key values needed in the .env file
- The proxy-router needs to be on both the provider and consumer environment and have access to an Arbitrum Ethereum node via web sockets (WSS) for listening to and posting elements on the blockchain

## 3. Provider - setup Provider, Model and Bid on the blockchain
- [03-provider-offer.md](03-provider-offer.md) - for more details
- Once the proxy-router is set up and the provider's wallet has the proper amount of ETH and MOR, use the Swagger API Interface (e.g. http://yourlocalproxy:8082/swagger/index.html) to do the following:
1. Authorize the diamond contract to spend on your wallet's behalf
1. Register your provider (the proxy-router) on the blockchain (http://mycoolproxy.serverfarm.io:3333)
1. Register your model on the blockchain
1. Create a bid for your model on the blockchain
- Further details on how to do this are in the [Provider Offer Guide](03-provider-offer.md)

## 4. Consumer Node Setup
- [04-consumer-setup.md](04-consumer-setup.md) - for more details
- [04a-consumer-setup-source.md](04a-consumer-setup-source.md) - for more details on setting up from GitHub source
- The consumer node is the "client" that will purchase bids from the blockchain, send prompts via the proxy-router and receive inference back from the provider's model
- The components are very similar to the Provider side of things with the exception that the consumer node will typically not be hosting a model, but will be sending prompts to the proxy-router and receiving inference back
- In this case, the easiest way to install is to use the packaged releases for your platform on Github and follow the instructions in the README.md file
- These packages include 3 different pieces of software
- llama.cpp (llama-server) - a simple example model that can be run on the same machine as the proxy-router and ui-desktop to show how the components work together and run local (free) inference
- proxy-router - the same software as the provider side, but with different environment variables and a different role
- ui-desktop - Electron GUI that enables the user to interact with the models (via the API) to browse offered bids, purchase and send prompts
- The consumer node will need to have the proxy-router running and the UI-Desktop running to interact with the models and bids on the blockchain
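The three pieces are tied together by the launcher via a `mor-launch.json` file; the Linux build workflow in this PR generates one shaped like this (the model filename is a placeholder here):

```
{"run":["./llama-server -m ./<MODEL>","./proxy-router","./ui-desktop-1.0.0.AppImage"]}
```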

## 5. Purchase Bid
- [05-purchase-bid.md](05-purchase-bid.md) - for more details
- Once the UI-Desktop is up and running, the consumer can browse the available bids on the blockchain
- Select a bid and stake the intended MOR amount (minimum should be shown)

## 6. Prompt & Inference
- [06-model-interaction.md](06-model-interaction.md) - for more details
- Once the bid is purchased, the consumer can send prompts to the proxy-router via the UI-Desktop
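Under the hood, step 6 traffic is ordinary OpenAI-style chat. As a rough sketch of the request shape, shown here against the bundled local llama-server endpoint (the port and model name are assumptions taken from the default configuration, not values this repo guarantees):

```shell
# Port and model name are assumptions from the default local configuration.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "tinyllama", "messages": [{"role": "user", "content": "Hello!"}]}'
```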
5 changes: 5 additions & 0 deletions docs/01-model-setup.md
@@ -0,0 +1,5 @@
## TODO

- The intent of this document is to outline how to run llama.cpp on your local machine; it could also include AWS / EC2 build recommendations for compute-providers
- The end state should be a private endpoint that the proxy-router can talk to in order to serve its models
------------
97 changes: 97 additions & 0 deletions docs/02-provider-setup.md
@@ -0,0 +1,97 @@

# Provider Hosting (Local LLM to offer, Proxy-Router running as background/service):

## Assumptions:
* Your AI model has been configured, started and made available to the proxy-router server via a private endpoint (IP:PORT or DNS:PORT) eg: `http://mycoolaimodel.domain.com:8080`
* Optional
* You can use the provided `llama.cpp` and `tinyllama` model to test locally
* If your local model is listening on a different port locally, you will need to modify the `OPENAI_BASE_URL` in the .env file to match the correct port
* You have an existing funded wallet with saMOR and saETH and also have the `private key` for the wallet (this will be needed for the .env file configuration)
* You have created an Alchemy or Infura free account and have a private API key for the Arbitrum Sepolia testnet (wss://arb-sepolia.g.alchemy.com/v2/<your_private_alchemy_api_key>)
* Your proxy-router must have a publicly accessible endpoint for the provider (ip:port or fqdn:port no protocol) eg: `mycoolmornode.domain.com:3333` - this will be used when creating the provider on the blockchain

## Installation & Configuration Steps:
1. Download latest release for your operating system: https://github.com/Lumerin-protocol/Morpheus-Lumerin-Node/releases

1. Extract the zip to a local folder (examples)
* Windows: `%USERPROFILE%\Downloads\morpheus`
* Linux & MacOS: `~/Downloads/morpheus`
* On MacOS you may need to execute `xattr -c proxy-router` in a command window to remove the quarantine flag

1. Edit the `.env` file following the guide below [proxy-router .ENV Variables](#proxy-router-env-variables)

1. **(OPTIONAL)** - External Provider or Pass through
* In some cases you will want to leverage external or existing AI Providers in the network via their own, private API
* Dependencies:
* `models-config.json` file in the proxy-router directory
* The proxy-router `.env` file must also be updated to include `MODELS_CONFIG_PATH=<path_to_proxy-router>/models-config.json`
* Once your provider is up and running, deploy a new model and model bid via the diamond contract (you will need the `model_ID` for the configuration)
* Edit the `models-config.json` file in the following JSON format:
* The JSON key is the `model_ID` you created above; `modelName`, `apiType`, `apiUrl` and `apiKey` come from the external provider and are specific to their offered models
* Once the `models-config.json` file is updated, restart the morpheus node to pick up the new configuration (not all models, e.g. image generation, can be used via the UI-Desktop, but API integration is possible)
* Example `models-config.json` file for external providers
```
{
"0x4b5d6c2d3e4f5a6b7c8de7f89a0b19e07f4a6e1f2c3a3c28d9d5e6": {
"modelName": "v1-5-specialmodel.modelversion [externalmodel]",
"apiType": "provider_api_type",
"apiUrl": "https://api.externalmodel.com/v1/xyz/generate",
"apiKey": "api-key-from-external-provider"
},
"0xb2c8a6b2c1d9ed7f0e9a3b4c2d6e5f14f9b8c3a7e5d6a1a0b9c7d8e4f30f4a7b": {
"modelName": "v1-7-specialmodel2.modelversion [externalmodel]",
"apiType": "provider_api_type",
"apiUrl": "https://api.externalmodel.com/v1/abc/generate",
"apiKey": "api-key-from-external-provider"
}
}
```
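Since a malformed file will keep the node from picking up the models, it can help to confirm the edited file still parses before restarting. A minimal sketch using Python's standard library, assuming the file sits in the current directory:

```shell
# Exits non-zero and prints a line/column message if the JSON is malformed.
python3 -m json.tool models-config.json > /dev/null \
  && echo "models-config.json: valid JSON" \
  || echo "models-config.json: syntax error"
```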

## Start the Proxy Router
1. On your server, launch the proxy-router with the modified .env file shown above
* Windows: Double click the `proxy-router.exe` (You will need to tell Windows Defender this is ok to run)
* Linux & MacOS: Open a terminal, navigate to the morpheus/proxy-router folder and run `./proxy-router`
1. This will start the proxy-router and begin monitoring the blockchain for events
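For the background/service mode mentioned in the section title, one minimal sketch for Linux/MacOS, assuming you are in the extracted proxy-router folder (a proper systemd or launchd service is the more robust option):

```shell
# Start detached from the terminal and log to a file.
nohup ./proxy-router > proxy-router.log 2>&1 &
echo $! > proxy-router.pid   # remember the PID
# Stop it later with: kill "$(cat proxy-router.pid)"
```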

## Validating Steps:
1. Once the proxy-router is running, you can navigate to the Swagger API Interface (e.g. http://localhost:8082/swagger/index.html) to confirm that it is up and listening for blockchain events
1. You can also check the logs in the `./data` directory for any errors or issues that may have occurred during startup
1. Once validated, you can move on and create your provider, model and bid on the blockchain [03-provider-offer.md](03-provider-offer.md)
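A scriptable version of the first check: request the Swagger page and look only at the HTTP status (the URL assumes the default `WEB_ADDRESS` port from this guide):

```shell
# Prints "200" once the router is up; "000" means nothing is listening.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8082/swagger/index.html
```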


----------------
### proxy-router .ENV Variables
Key values in the `.env` file (there are others, but these are primarily responsible for connecting to the blockchain, reaching the provider AI model and listening for incoming traffic):
- `WALLET_PRIVATE_KEY=`
- Private Key from your wallet needed for the proxy-router to sign transactions and respond to provided prompts (this is why the proxy router must be secured and the API endpoint protected)
- `ETH_NODE_ADDRESS=wss://arb-sepolia.g.alchemy.com/v2/<your_private_alchemy_api_key>`
- Ethereum Node Address for the Arbitrum blockchain (via Alchemy or Infura)
- This websocket (wss) address is key for the proxy-router to listen and post to the blockchain
- We recommend using your own private ETH Node Address for better performance (free account setup via Alchemy or Infura)
- `DIAMOND_CONTRACT_ADDRESS=0x8e19288d908b2d9F8D7C539c74C899808AC3dE45`
- This is the key Lumerin Smart Contract (currently Sepolia Arbitrum testnet)
- This is the address of the smart contract that the proxy-router will interact with to post providers, models & bids
- This address will change as the smart-contract is updated and for mainnet contract interaction
- `MOR_TOKEN_ADDRESS=0xc1664f994fd3991f98ae944bc16b9aed673ef5fd`
- This is the Morpheus Token (saMOR) address for Sepolia Arbitrum testnet
- This address will be different for mainnet token
- `WEB_ADDRESS=0.0.0.0:8082`
- This is the local listening port for your proxy-router API (Swagger) interface
- Based on your local needs, this may need to change (8082 is default)
- `WEB_PUBLIC_URL=localhost:8082`
- If you have or will be exposing your API interface to a local, PRIVATE (or VPN) network, you can change this to the DNS name or IP and port where the API will be available. The default is just on the local machine (localhost)
- The PORT must be the same as in the `WEB_ADDRESS` setting
- `OPENAI_BASE_URL=http://localhost:8080/v1`
- This is where the proxy-router should send OpenAI compatible requests to the provider model.
- By default (and included in the Morpheus-Lumerin software releases) this is set to `http://localhost:8080/v1` for the included llama.cpp model
- In a real-world scenario, this would be the IP address and port of the provider model server or server farm that is hosting the AI model separately from the proxy-router
- `PROXY_STORAGE_PATH=./data/`
- This is the path where the proxy-router will store logs and other data
- This path should be writable by the user running the proxy-router software
- `MODELS_CONFIG_PATH=`
- Location of the `models-config.json` file that lists the models the proxy-router will provide
- It can also call external providers' models (such as Prodia) via their private APIs
- `PROXY_ADDRESS=0.0.0.0:3333`
- This is the local listening port for the proxy-router to receive prompts and inference requests from the consumer nodes
- This is the port that the consumer nodes will send prompts to and should be available publicly and via the provider definition setup on the blockchain
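Pulling the variables above together, a minimal provider-side `.env` might look like the following (all values are the examples from this guide; substitute your own private key, node URL and paths):

```
WALLET_PRIVATE_KEY=<your_wallet_private_key>
ETH_NODE_ADDRESS=wss://arb-sepolia.g.alchemy.com/v2/<your_private_alchemy_api_key>
DIAMOND_CONTRACT_ADDRESS=0x8e19288d908b2d9F8D7C539c74C899808AC3dE45
MOR_TOKEN_ADDRESS=0xc1664f994fd3991f98ae944bc16b9aed673ef5fd
WEB_ADDRESS=0.0.0.0:8082
WEB_PUBLIC_URL=localhost:8082
OPENAI_BASE_URL=http://localhost:8080/v1
PROXY_STORAGE_PATH=./data/
MODELS_CONFIG_PATH=./models-config.json
PROXY_ADDRESS=0.0.0.0:3333
```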