This repository contains all Keepers used in the Spark ecosystem to execute automated transactions. Powered by Gelato.
- Make sure to have `yarn` and `node` installed
- Run `yarn install` to install dependencies
- Run `yarn test` to run tests
- Set all necessary environment variables
- Run `yarn gelato:deploy` to deploy all of the keepers
In order to install the project and run scripts you need to have `node` and `yarn` installed on your machine. There are no specific versioning requirements, but `node` 20.12.2 and `yarn` 1.22.19 are known to work.
```
yarn install
```
Create your own `.env` file and complete it with these settings:
```
ALCHEMY_ID= # API key used to create forked test environments
GELATO_KEYSTORE_PATH= # Absolute path to a keystore wallet that will manage your deployments
GELATO_PASSWORD_PATH= # Absolute path to a keystore wallet password
GELATO_KEEPERS_SLACK_WEBHOOK_URL= # Slack webhook URL that will be used by the keepers to send notifications
GELATO_KEEPERS_ETHERSCAN_API_KEY= # Paid Etherscan API key that will be used by the keepers to query historical gas prices
```
Additionally, if you don't have these variables set in your environment, add:
```
MAINNET_RPC_URL= # Complete Ethereum Mainnet RPC URL that will be used for keeper deployments
GNOSIS_CHAIN_RPC_URL= # Complete Gnosis Chain RPC URL that will be used for keeper deployments
```
Alternatively, if there is no keystore on the machine the scripts are running on, you can use a plain private key:
```
GELATO_PRIVATE_KEY= # A private key (its usage is prioritized over accessing the keystore)
```
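For illustration, the key-resolution precedence described above could look roughly like the sketch below, written with ethers.js. The helper itself is hypothetical and not part of this repository; only the environment variable names come from the settings listed here.

```typescript
import { readFileSync } from "fs";
import { Wallet, HDNodeWallet } from "ethers";

// Hypothetical helper (not part of this repo): prefer GELATO_PRIVATE_KEY,
// otherwise decrypt the keystore that GELATO_KEYSTORE_PATH points to using
// the password stored at GELATO_PASSWORD_PATH.
async function loadDeployerWallet(): Promise<Wallet | HDNodeWallet> {
  const privateKey = process.env.GELATO_PRIVATE_KEY;
  if (privateKey) return new Wallet(privateKey);

  const keystoreJson = readFileSync(process.env.GELATO_KEYSTORE_PATH!, "utf8");
  const password = readFileSync(process.env.GELATO_PASSWORD_PATH!, "utf8").trim();
  return Wallet.fromEncryptedJson(keystoreJson, password);
}
```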
Additionally, make sure to define all of the environment variables used in keeper secrets in the `scripts/configs` files.
In order to run all the tests, run:
```
yarn test
```
Alternatively, in order to run only part of the tests, run one of the following:
```
yarn test:mainnet # Tests on a mainnet fork for keepers operating on mainnet
yarn test:gnosis  # Tests on a Gnosis Chain fork for keepers operating on Gnosis Chain
yarn test:base    # Tests on a Base fork for keepers operating on Base
yarn test:utils   # Tests of utils, not executed on a forked environment
```
Deployment of a keeper consists of two consecutive steps:
- The code of the keeper has to be uploaded to IPFS.
- An on-chain instance of the keeper, running the code deployed at a specific IPFS address with the proper arguments, needs to be created, and the secrets needed to operate the keeper need to be set.
In order to spin up operating instances of keepers, run the following command (make sure to have all of the environment variables properly set):
```
yarn gelato:deploy
```
First, the script will upload all code to IPFS and determine for which keeper types to run the actual deployment. The deployment will not be triggered if a given IPFS deployment combined with a configuration is already deployed; otherwise, the deployment will be performed.
The exact number of instances, their names, arguments, and configuration are defined in config files. The arguments used to spin up the keeper are passed along with the deployment transaction and can only be changed by redeploying the keeper instance. Secrets are set in separate transactions after the initial deployment, and their values can be changed at any time via the UI or a script.
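As a rough illustration of the split between arguments and secrets, a keeper instance definition could look something like the sketch below. The field and argument names are hypothetical; the actual shape of the config files lives in `scripts/configs` and may differ.

```typescript
// Hypothetical shape of a keeper instance definition; the real field names
// are defined in the scripts/configs files.
interface KeeperConfig {
  name: string;                 // instance name, e.g. "Kill Switch [stETH-ETH]"
  cron: string;                 // trigger interval for the cron job
  args: Record<string, string>; // passed with the deployment transaction; changing them requires a redeploy
  secrets: string[];            // env variables set in separate transactions; changeable at any time
}

const exampleKeeper: KeeperConfig = {
  name: "Kill Switch [stETH-ETH]",
  cron: "*/5 * * * *",
  args: {
    aggregator: "0x...",        // placeholder addresses, not real deployment arguments
    killSwitchOracle: "0x...",
  },
  secrets: ["GELATO_KEEPERS_SLACK_WEBHOOK_URL", "GELATO_KEEPERS_ETHERSCAN_API_KEY"],
};
```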
Any time the script is run, all previously running keepers are retired, except keepers that have the same code logic (IPFS hash) and configuration.
In order to force the redeployment of the keeper, use the `-f` or `--forceRedeploy` flag.
In order to perform a script dry run, without actually performing any actions, use the `-d` or `--dryRun` flag.
The script will ask for confirmation before every deployment. If a fully automated run is needed, run it with the `-n` or `--noConfirm` flag.
You can check the description of all of the options by running the script with the `-h` or `--help` flag.
Keeper | Trigger(s) and Config | Logic |
---|---|---|
Cap Automator | Cron job of 1 hour. | Performs a local static execution of `exec` in the `capAutomator` to see if there is a resulting change in `supplyCap` or `borrowCap` for that market. Compiles all calls to be made to the `capAutomator` and a summary of messages to send to #spark-info in Slack outlining which reserves were updated and for what amounts. |
D3M Ticker | Cron job of 1 hour. | Performs a local static execution on the `d3mHub` to see if there is a resulting change in `art` for the SparkLend ilk in the `vat`. If there is a change, performs a call to the `d3mHub` to update `art` and sends a message in #spark-info with relevant info. |
Governance Executor | Cron job of 1 hour. Currently deployed on Gnosis. | Calls `getActionsSetCount` on the executor, iterates through all actions, determines if there are any pending actions that are past the timelock, and returns governance actions that are able to be executed. Executes governance actions and sends a message to #spark-info with the spell address that was executed, as well as the domain. |
Kill Switch | Cron job of 5 minutes. Also runs on the `AnswerUpdated` event in oracle aggregators. Currently deployed for the STETH/ETH aggregator. | Calls `latestAnswer` on the given oracle and gets the `oracleThreshold` from the `killSwitchOracle`. If the kill switch hasn't been triggered and the answer is below the threshold, it calls `trigger` on the `killSwitchOracle`. Sends Slack messages to #spark-info with info about which markets were triggered, along with the answer and threshold. |
Meta Morpho | Cron job of 1 hour. | Calls `pendingCap` for all market IDs and determines if there is a `pendingCap` that is past the timelock and can be accepted. Calls `acceptCap` for all applicable markets and sends Slack messages to #spark-info with info about the markets that changed and the resulting cap. |
XChain DSR Oracle | Cron job of 5 minutes. Also runs on the `LogNote` event in the `PauseProxy`. Currently calling forwarders on Arbitrum, Base, and Optimism. `rho` is considered stale after 1 week. | Calls all forwarder addresses on mainnet with `getLastSeenPotData`. Checks to see if `dsr` has changed, or if `rho` is stale. If either of these conditions is true, it calls `refresh` on the respective forwarder and sends Slack messages to #spark-info outlining which forwarder was refreshed and for what reason. |
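To make the "local static execution" pattern in the table concrete, here is a minimal sketch in the style of the Cap Automator check, written with ethers.js. The address is a placeholder, the ABI fragment is an assumption, and the comparison is simplified; this is not the repository's actual keeper code.

```typescript
import { Contract, JsonRpcProvider } from "ethers";

// Placeholder address and assumed minimal ABI fragment; the real keeper reads
// these from its configuration, and the actual interface may differ.
const CAP_AUTOMATOR = "0x...";
const abi = ["function exec(address asset) returns (uint256 newSupplyCap, uint256 newBorrowCap)"];

// Simplified check: simulate exec() locally and treat the market as actionable
// only if the simulation succeeds and returns nonzero caps. The real keeper
// compares the simulated caps against the current supplyCap and borrowCap.
async function shouldExec(provider: JsonRpcProvider, asset: string): Promise<boolean> {
  const capAutomator = new Contract(CAP_AUTOMATOR, abi, provider);
  try {
    const [newSupplyCap, newBorrowCap] = await capAutomator.exec.staticCall(asset);
    return newSupplyCap > 0n || newBorrowCap > 0n;
  } catch {
    return false; // simulation reverted: nothing to update for this asset
  }
}
```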
In order to make sure that your keystore paths are configured correctly, run:
```
yarn gelato:checkKey
```
This script will try to access the key that the `GELATO_KEYSTORE_PATH` and `GELATO_PASSWORD_PATH` variables point to and will print the address that was decrypted.
In order to list all currently active Keepers, run:
```
yarn gelato:list
```
This command will list all of the currently running Keepers' names and IDs. An example output:
```
== All Mainnet active tasks ==
* 0x86a3b5ace771d3f72c3d503f404feda8498e619e779b9ff8cf704c1bb29f3c72 (Cap Automator <2024-07-05 12:39:41>)
* 0xcf37bd51007aac73ec064210be4c36350c432041a47832d228d68f0607c5ea43 (D3M Ticker <2024-07-05 13:20:54>)
* 0x588bb33b6febb264874f28910b393c7721058bf57e2facc20172e4c69545a9b5 (Kill Switch [WBTC-BTC] <2024-07-05 13:20:54>)
* 0x28327d6869f95c33a60039adc1bdc2fe8b3d9a99233783d9c2be1aa91cd0705e (Kill Switch [stETH-ETH] <2024-07-05 13:20:54>)
* 0xc5f5efc13488dba28a7b52825ac999e61ef1394364753c5edd52538455057466 (Kill Switch [Time Based] <2024-07-05 13:20:54>)
* 0x2fa983a886a1be37c1c3cab81f0a7a8b53554f86969f39f877d5e02874514fd4 (Meta Morpho Cap Updater <2024-07-05 13:20:54>)
* 0xb64de6a9f653235f3dd88792534fbb57a15929049468c0b376f6f0cd5426b8b2 (XChain Oracle Ticker [Event Based] <2024-07-05 13:20:54>)
* 0x2cf5644493187b2790093ff7c881d73087fb876630193c78c23ac55daee405ab (XChain Oracle Ticker [Time Based] <2024-07-05 13:20:54>)
== All Gnosis active tasks ==
* 0x95664d04fbc188fe4e322c40adeea7e8b58703360553db7676e4a0ec799dcd6c (Governance Executor [Gnosis] <2024-07-05 13:20:54>)
```