This workspace contains the following crates:
- `cli`: contains the code to run the intuition TUI client.
- `consumer`: contains the code for the RAW, DECODED and RESOLVER consumers.
- `hasura`: contains the migrations and the Hasura config.
- `histoflux`: streams historical data from our contracts to a queue. Currently supports SQS queues.
- `image-guard`: contains the code to guard the images.
- `models`: contains the domain models for the intuition data, as well as basic traits for the data.
- `substreams-sink`: contains the code to consume the Substreams events.
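As a convenience, individual crates can be checked or built with cargo's `-p` flag (assuming the package names match the crate names listed above), for example:

```
cargo check -p consumer
cargo build -p models
```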
Besides that, we have a `docker-compose.yml` file to run the full pipeline locally, a `Makefile` to run some commands using `cargo make`, and the `LICENSE` file.
Note that all of the crates are under active development, so the code is subject to change.
In order to be able to use the convenience commands in the `Makefile`, you need to install `cargo make`:

- Install cargo make (`cargo install --force cargo-make`)
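Once it is installed, you can list every task defined for `cargo make` (the task names used later in this README, such as `start-docker-and-migrate`, should show up there):

```
cargo make --list-all-steps
```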
For Hasura, you need to:
- Install hasura-cli
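The Hasura CLI can be installed in several ways; on macOS, one option (an assumption here, matching the Homebrew usage in the Kubernetes section below) is:

```
brew install hasura-cli
```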
And for SQS queues, you need to have AWS configured on your system, so you need a file at `~/.aws/config` with the following content:

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```
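With the credentials in place, you can optionally sanity-check that the AWS CLI picks them up (this is a standard AWS CLI call, not something specific to this repo):

```
aws sts get-caller-identity
```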
There is a `.env.sample` file that you need to use as a template to create the `.env` file. First, you need to set the values for the following variables:

- `PINATA_GATEWAY_TOKEN`: You can get the token from Pinata.
- `PINATA_API_JWT`: You can get the token from Pinata.
- `RPC_URL_MAINNET`: We are currently using Alchemy. You can create new ones using the Alchemy dashboard.
- `RPC_URL_BASE_MAINNET`: We are currently using Alchemy. You can create new ones using the Alchemy dashboard.
- `AWS_ACCESS_KEY_ID`: You can get the value from your AWS account.
- `AWS_SECRET_ACCESS_KEY`: You can get the value from your AWS account.
- `HF_TOKEN`: You can get the token from Hugging Face.
- `SUBSTREAMS_API_TOKEN`: You can get the token from Substreams.
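For orientation, a filled-in `.env` might look roughly like this; the values and the Alchemy URL shapes are placeholders, and `.env.sample` remains the authoritative list of variables:

```
PINATA_GATEWAY_TOKEN=<your-pinata-gateway-token>
PINATA_API_JWT=<your-pinata-jwt>
RPC_URL_MAINNET=https://eth-mainnet.g.alchemy.com/v2/<your-alchemy-key>
RPC_URL_BASE_MAINNET=https://base-mainnet.g.alchemy.com/v2/<your-alchemy-key>
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
HF_TOKEN=<your-hugging-face-token>
SUBSTREAMS_API_TOKEN=<your-substreams-token>
```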
After filling in all of the variables, you can run the following commands:

```
./start.sh
./cli.sh
```

Later, you can use `./stop.sh` to stop all services, or `./restart.sh` to restart all services and clear attached volumes.
```
# copy the sample env file, source it, then start the containers and run the migrations
cp .env.sample .env
source .env
cargo make start-docker-and-migrate

# tear the containers down (clearing attached volumes), recreate them, and re-run the migrations
docker compose down -v
docker compose up -d --force-recreate
cargo make migrate-database

# run the test suite
cargo nextest run
```
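If `cargo nextest` isn't available on your machine yet, it can be installed with (assuming it isn't already handled by a Makefile task):

```
cargo install cargo-nextest
```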
None so far.
First you need to copy the `.env.sample` file to `.env` and source it. Make sure you set the correct values for the environment variables.

```
cp .env.sample .env
source .env
```
If you want to run the local raw consumer connected to the real raw SQS queue you can run

```
RUST_LOG=info cargo run --bin consumer -- --mode raw
```

(or simply `cargo make raw-consumer`)

If you want to run the local decoded consumer connected to the real decoded SQS queue you can run

```
RUST_LOG=info cargo run --bin consumer -- --mode decoded
```

(or simply `cargo make decoded-consumer`)

If you want to run the local raw consumer connected to the local SQS queue you can run

```
RUST_LOG=info cargo run --bin consumer --features local -- --mode raw --local
```

(or `cargo make raw-consumer-local`)

If you want to run the local decoded consumer connected to the local SQS queue you can run

```
RUST_LOG=info cargo run --bin consumer --features local -- --mode decoded --local
```

(or `cargo make decoded-consumer-local`)
We use feature flags to differentiate between the local and the remote execution environment.
Also note that you need to set the right environment variables for the queues (`RAW_CONSUMER_QUEUE_URL` and `DECODED_CONSUMER_QUEUE_URL`) in order to switch between the local and the remote execution environment.
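As an illustration only (the exact URLs depend on your AWS account, the queue names, and how the local queue is exposed, e.g. via LocalStack or ElasticMQ), the two setups might look like:

```
# remote SQS queues (example account ID and queue names)
RAW_CONSUMER_QUEUE_URL=https://sqs.us-east-1.amazonaws.com/123456789012/raw
DECODED_CONSUMER_QUEUE_URL=https://sqs.us-east-1.amazonaws.com/123456789012/decoded

# local queues (LocalStack-style endpoint, shown purely as an example)
RAW_CONSUMER_QUEUE_URL=http://localhost:4566/000000000000/raw
DECODED_CONSUMER_QUEUE_URL=http://localhost:4566/000000000000/decoded
```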
Other convenience commands:

- `cargo make start-docker-and-migrate` to start the docker compose services and run the migrations.
- `cargo make clippy` to run clippy.
- `cargo make fmt` to run rustfmt.
First you need to install minikube:

```
brew install minikube
```

Then install k9s:

```
brew install k9s
```
Then we need to create the secrets. At this step it's expected that you have a `.env` file with the correct values set. The only thing you need to keep in mind is that we need to remove the `"` characters from the values, e.g., `DATABASE_URL="postgres://testuser:test@database:5435/storage"` should be `DATABASE_URL=postgres://testuser:test@database:5435/storage`.
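One quick way to strip the quotes is with sed; this edits `.env` in place, so keep a copy if you're unsure (on macOS, `sed -i` needs an empty suffix argument, i.e. `sed -i ''`):

```
sed -i 's/"//g' .env
```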
Then create the secret from the env file:

```
kubectl create secret generic secrets --from-env-file=.env
```
Then you can start the minikube cluster:

```
minikube start
```
Then you can apply the kubernetes manifests:

```
kubectl apply -k kube_files/
```
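You can then watch the pods come up with `k9s`, or with plain kubectl:

```
kubectl get pods -w
```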
To restart the services you can run:

```
kubectl rollout restart deployment
```

or

```
kubectl delete deployment --all
```
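When you are done, the local cluster can be stopped or removed with the standard minikube commands:

```
# pause the cluster
minikube stop
# or remove it entirely
minikube delete
```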