Add Inscriptions Service #2279
Draft · Ayiga wants to merge 20 commits into `release-rogue` from `ts/enh/inscriptions-service`
Conversation
The initial implementation of the Inscriptions service allows for the configuration and handling of inscriptions. It supports publishing new inscriptions with a rate limiter and a capacity in order to avoid overburdening the sequencer nodes, and it streams new inscriptions to users that connect to the service. Incoming submitted inscriptions are signed by the user and verified by the service before being submitted to the mempool for sequencing. Sequenced inscriptions are signed by the service and verified when processing incoming blocks.
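A minimal sketch of the publish path described above, assuming tokio; the bounded channel supplies the capacity and an interval supplies the rate limit (all names are illustrative, not the actual implementation):

```rust
use std::time::Duration;
use tokio::{sync::mpsc, time};

/// Illustrative inscription payload; the real type lives in the service.
struct Inscription(Vec<u8>);

async fn publisher(mut rx: mpsc::Receiver<Inscription>) {
    // The interval enforces the submission rate; the bounded channel
    // (created with mpsc::channel(capacity)) enforces the capacity,
    // applying back-pressure to submitters when the buffer is full.
    let mut tick = time::interval(Duration::from_millis(250));
    while let Some(inscription) = rx.recv().await {
        tick.tick().await; // wait for the next permitted submission slot
        submit_to_mempool(inscription).await;
    }
}

async fn submit_to_mempool(_inscription: Inscription) {
    // hypothetical: sign, then POST to the sequencer submit endpoint
}
```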
The inscriptions service needs to be able to recover its state when restarting. As a result, there needs to be a persistence layer that tracks where the service currently is in terms of processing requests. This adds a postgres-backed persistence layer to the service, with a wrapping local cache for the block height.
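A rough sketch of that shape using sqlx, with a hypothetical `block_state` table; the real schema and method names may differ:

```rust
use sqlx::PgPool;
use tokio::sync::RwLock;

/// Postgres-backed persistence with a local cache for the block height.
struct Persistence {
    pool: PgPool,
    cached_height: RwLock<Option<i64>>,
}

impl Persistence {
    /// Record the latest processed block height, keeping the cache in sync.
    async fn save_block_height(&self, height: i64) -> Result<(), sqlx::Error> {
        sqlx::query("UPDATE block_state SET height = $1 WHERE id = 1")
            .bind(height)
            .execute(&self.pool)
            .await?;
        *self.cached_height.write().await = Some(height);
        Ok(())
    }

    /// Serve the height from the cache when possible, falling back to postgres.
    async fn block_height(&self) -> Result<i64, sqlx::Error> {
        if let Some(h) = *self.cached_height.read().await {
            return Ok(h);
        }
        let (height,): (i64,) =
            sqlx::query_as("SELECT height FROM block_state WHERE id = 1")
                .fetch_one(&self.pool)
                .await?;
        *self.cached_height.write().await = Some(height);
        Ok(height)
    }
}
```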
The block stream has been exiting prematurely for reasons that are not easily trackable. When this state is reached, recovery does not occur and the service must be restarted. In order to better understand the nature of these errors, and to improve the exit conditions of this service, the logic for retrieving and consuming the block stream has been adjusted. If a block arrives that precedes the expected block, an info log is emitted. If the sender is determined to be closed, a panic ensues.
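The adjusted consumption loop might look roughly like this sketch (stream, block, and sender types are stand-ins, not the service's actual types):

```rust
use futures::StreamExt;
use tokio::sync::mpsc;

trait HasHeight {
    fn height(&self) -> u64;
}

async fn consume_blocks<S, B>(mut stream: S, sender: mpsc::Sender<B>, mut expected_height: u64)
where
    S: futures::Stream<Item = B> + Unpin,
    B: HasHeight,
{
    while let Some(block) = stream.next().await {
        if block.height() < expected_height {
            // A preceding block is logged rather than silently skipped,
            // making the causes of premature exits visible.
            tracing::info!(
                height = block.height(),
                expected_height,
                "received a block preceding the expected block"
            );
            continue;
        }
        expected_height = block.height() + 1;
        if sender.send(block).await.is_err() {
            // The receiving half is gone; per the change above this is
            // treated as unrecoverable rather than a silent exit.
            panic!("block sender closed unexpectedly");
        }
    }
}
```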
This is failing fairly consistently, as the inscription in question hasn't even been written by the time this call gets invoked.
The pending-inscription population on start could get stuck in a deadlock by attempting to write into a channel that is not actively being consumed. This change relocates where this load occurs so that it is processed passively and the service doesn't just stall on launch.
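A sketch of the relocation, assuming tokio: the load is spawned as its own task so the channel fills while a consumer is already running, instead of blocking startup (names are illustrative):

```rust
use tokio::sync::mpsc;

struct PendingInscription;

async fn start_service(tx: mpsc::Sender<PendingInscription>) {
    // Before: awaiting load_pending_inscriptions(tx) here would fill the
    // bounded channel and deadlock, since nothing was consuming it yet.
    // After: spawn it so startup proceeds and the consumer drains passively.
    let loader_tx = tx.clone();
    tokio::spawn(async move {
        load_pending_inscriptions(loader_tx).await;
    });
    // ... continue launching the consumers of the channel's receiver ...
}

async fn load_pending_inscriptions(_tx: mpsc::Sender<PendingInscription>) {
    // hypothetical: read pending inscriptions from persistence and send them
}
```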
The inscription submission is currently linear and as a result cannot churn through enough inscriptions at once. This change batches the inscriptions, as many as possible up to a threshold (currently 10), and then submits them in parallel using `join_all`.
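Since the commit names `join_all`, the batching presumably resembles this sketch (batch size from the description; other names illustrative):

```rust
use futures::future::join_all;

const BATCH_SIZE: usize = 10;

struct Inscription;

async fn submit_all(inscriptions: Vec<Inscription>) {
    // Submit up to BATCH_SIZE inscriptions concurrently instead of one at a
    // time; join_all drives the whole batch of futures to completion.
    for batch in inscriptions.chunks(BATCH_SIZE) {
        let results = join_all(batch.iter().map(submit_one)).await;
        for result in results {
            if let Err(err) = result {
                tracing::warn!("inscription submission failed: {err}");
            }
        }
    }
}

async fn submit_one(_inscription: &Inscription) -> Result<(), std::io::Error> {
    // hypothetical: POST the signed inscription to the submit endpoint
    Ok(())
}
```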
The only reason this errors is an early client disconnect. When that happens, it kills the rest of the client message processing for all other and newly connecting clients.
For each of the retrieved inscriptions, we attempt to record it within the persistence object. Retrieving the persistence object requires acquiring a read lock on the `data_state` object. With the current implementation, this read lock is acquired once per inscription, which is unnecessary. This has been rewritten to acquire the persistence object once and then persist each inscription.
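A sketch of the narrowed lock scope, assuming `data_state` sits behind an async RwLock and the persistence handle is cheaply cloneable (e.g. an `Arc`):

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

struct Inscription;
struct Persistence;

impl Persistence {
    async fn record(&self, _inscription: &Inscription) {
        // hypothetical: write the inscription through the persistence layer
    }
}

struct DataState {
    persistence: Arc<Persistence>,
}

async fn persist_batch(data_state: &RwLock<DataState>, inscriptions: &[Inscription]) {
    // Before: data_state.read().await inside the loop, once per inscription.
    // After: take the read lock once, clone the persistence handle, let the
    // guard drop, and then persist every inscription without re-locking.
    let persistence = data_state.read().await.persistence.clone();
    for inscription in inscriptions {
        persistence.record(inscription).await;
    }
}
```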
In order to close out the inscriptions demo, the ending page requires being able to retrieve the list of inscriptions for users to see their contribution to the blockchain. This change adds an endpoint that retrieves a list of the most recent inscriptions that a given wallet address made to the mainnet blockchain.
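The endpoint's query might reduce to something like this sketch (table and column names are hypothetical):

```rust
use sqlx::PgPool;

/// Fetch the most recent inscriptions submitted by a wallet address,
/// returned as (block_height, payload) pairs, newest first.
async fn recent_inscriptions_for(
    pool: &PgPool,
    address: &str,
    limit: i64,
) -> Result<Vec<(i64, String)>, sqlx::Error> {
    sqlx::query_as(
        "SELECT block_height, payload FROM inscriptions \
         WHERE address = $1 ORDER BY block_height DESC LIMIT $2",
    )
    .bind(address)
    .bind(limit)
    .fetch_all(pool)
    .await
}
```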
This PR:
Tracks the overall progress of the implementation of the Inscriptions service. It is specifically targeting a merge into `release-rogue`, as that is what it was branched from. The `release-rogue` branch was targeted explicitly to ensure compatibility. This is a DRAFT PR to ensure that it doesn't get directly merged.
This adds the `inscriptions` sub-crate into the sequencer repo for convenience of writing. It is largely copied from the `node-metrics` implementation, with state and message adjustments to work specifically for the inscriptions API itself. This service is meant to be used with the inscriptions front-end UI demo. Much of the implementation was written quickly, without a lot of time for testing or documentation, due to time constraints.
How to Test:
This service has a few environment variables that must be specified in order for things to run without issue. Some of these environment variables need special care when specifying them so that the behavior can be run and verified in a reasonable time frame.
Environment Variables
The environment variables for this service configure specific behavior, generally around targeting specific entries and data sources.
Persistence
This service utilizes `postgres` and `sqlx` to store all of its state. It is important to have a postgres instance running and configured properly for this service to run.
Before configuring the environment variables for the service, the database needs to be set up. With a postgres instance running, initialize the database using the `sqlx-cli`: https://github.com/launchbadge/sqlx/blob/main/sqlx-cli/README.md
The `DATABASE_URL` environment variable NEEDS to be set for the `sqlx-cli` to be able to run. After that, ensure that you run the database setup command with that environment variable in place. Then the remaining persistence environment variables can be set in order to run migrations and make things work.
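For orientation, a connection made with the same `DATABASE_URL` that the sqlx-cli consumes might look like this sketch (the URL in the comment is a hypothetical local example, and the `./migrations` directory is assumed to exist in the crate):

```rust
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // e.g. DATABASE_URL=postgres://postgres:password@localhost:5432/inscriptions
    // (hypothetical credentials and database name for a local docker postgres)
    let url = std::env::var("DATABASE_URL")?;
    let pool = PgPoolOptions::new().max_connections(5).connect(&url).await?;

    // Run the migrations embedded from the crate's ./migrations directory,
    // equivalent to running `sqlx migrate run` from the sqlx-cli.
    sqlx::migrate!("./migrations").run(&pool).await?;
    Ok(())
}
```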
These environment variables have default values that assume the postgres server is accessible via `localhost`, as with a `docker` container running locally.
Sourcing Data
This service sources all of its data from the availability API's block stream endpoint, which needs to be specified via its environment variable. The expected value is the versioned base endpoint.
Example:
https://query.decaf.testnet.espresso.network/v0/
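For illustration, a sketch of consuming that stream over a WebSocket; the `availability/stream/blocks/{height}` path and the `wss` scheme are assumptions based on the description, not confirmed by this PR:

```rust
use futures::StreamExt;
use tokio_tungstenite::connect_async;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Versioned base endpoint, mirroring the example above.
    let base = "wss://query.decaf.testnet.espresso.network/v0/";
    let from_height = 453728; // start at the configured minimum block height
    let url = format!("{base}availability/stream/blocks/{from_height}");

    let (mut socket, _response) = connect_async(url).await?;
    while let Some(message) = socket.next().await {
        let message = message?;
        // hypothetical: decode the block, filter for the service's
        // namespace, and verify the service's signature on each match
        println!("received {} bytes", message.into_data().len());
    }
    Ok(())
}
```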
Additionally, the variable `ESPRESSO_INSCRIPTIONS_MINIMUM_BLOCK_HEIGHT` SHOULD be specified to ensure that we don't start consuming all blocks from block height 0 (which is the default). The first block on decaf that has an inscription with the default namespace id is 453728.
The data coming in is scanned for transactions that match the namespace and appear to be signed by the private key of the service; anything else is ignored.
The key for the service can be set up with a BIP39 mnemonic via its environment variable.
The namespace ID can likewise be changed with its own environment variable.
Submitting Data
The data will be submitted to a mempool of some sort, so it is necessary to specify a URL endpoint for submitting data. This endpoint matches the configuration of the sequencer service; you can target either a public mempool submit endpoint or a private (the builder's) endpoint.
Examples:
https://query.decaf.testnet.espresso.network/v0/submit
https://builder.decaf.testnet.espresso.network/v0/txn_submit
Submissions get rate limited and buffered, with the rate and buffer size configurable through dedicated environment variables.
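A sketch of how such a rate-limited, buffered submitter could be wired up, using tokio and reqwest with illustrative values standing in for those environment variables:

```rust
use std::time::Duration;
use tokio::{sync::mpsc, time};

#[tokio::main]
async fn main() {
    // Illustrative stand-ins for the configured rate and buffer size.
    let rate = Duration::from_millis(200);
    let buffer_size = 100;

    let (tx, mut rx) = mpsc::channel::<Vec<u8>>(buffer_size);
    let client = reqwest::Client::new();
    let submit_url = "https://query.decaf.testnet.espresso.network/v0/submit";

    tokio::spawn(async move {
        let mut tick = time::interval(rate);
        while let Some(payload) = rx.recv().await {
            tick.tick().await; // enforce the configured submission rate
            if let Err(err) = client.post(submit_url).body(payload).send().await {
                eprintln!("submit failed: {err}");
            }
        }
    });

    // Submitters wait here when the buffer is full (back-pressure).
    let _ = tx.send(b"example inscription payload".to_vec()).await;
}
```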