
Add Inscriptions Service #2279

Draft: wants to merge 20 commits into base: release-rogue

Conversation

@Ayiga (Member) commented Nov 12, 2024

This PR:

Tracks the overall progress of the Inscriptions service implementation. It specifically targets a merge into release-rogue, as that is the branch it was created from; release-rogue was targeted explicitly to ensure compatibility. This is a DRAFT PR to ensure that it doesn't get merged directly.

This adds the inscriptions sub-crate to the sequencer repo for development convenience. It is largely copied from the node-metrics implementation, with state and message adjustments specific to the inscriptions API.

This service is meant to be used with the inscriptions front-end UI demo. Due to time constraints, much of the implementation was written quickly without extensive testing or documentation.

How to Test:

This service has a few environment variables that must be specified for it to run without issue. Some of them require special care so that the behavior can be run and verified in a reasonable time frame.

Environment Variables

The environment variables for this service configure specific behavior, generally around targeting specific entries and data sources.

Persistence

This service uses postgres and sqlx to store all of its state. A properly configured, running postgres instance is required for this service to run.

Before configuring the environment variables for the service, the database needs to be set up. With a properly running postgres instance, initialize the database using the sqlx-cli:

https://github.com/launchbadge/sqlx/blob/main/sqlx-cli/README.md

The DATABASE_URL environment variable NEEDS to be set for the sqlx-cli to run. With that variable set correctly, run this command:

sqlx database create
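The steps above can be sketched as follows. The connection URL and database name below are illustrative assumptions, not the service's actual defaults; adjust them to your local setup:

```shell
# Illustrative only: substitute your own user, password, and database name.
export DATABASE_URL="postgres://postgres:password@localhost:5432/inscriptions"

# Install the sqlx CLI if it is not already available.
cargo install sqlx-cli

# Create the database that the service will use.
sqlx database create
```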

Then these environment variables can be set in order to run migrations and get the service working:

  • POSTGRES_HOST
  • POSTGRES_PORT
  • POSTGRES_DATABASE
  • POSTGRES_USER
  • POSTGRES_PASSWORD

These environment variables have default values that assume the postgres server is accessible via localhost, as when running in a local docker container.
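A minimal sketch of these variables for a locally running postgres container. The values shown are illustrative assumptions, not the service's documented defaults:

```shell
export POSTGRES_HOST="localhost"
export POSTGRES_PORT="5432"
export POSTGRES_DATABASE="inscriptions"   # hypothetical database name
export POSTGRES_USER="postgres"
export POSTGRES_PASSWORD="password"       # use a real secret in practice
```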

Sourcing Data

This service sources all of its data from the availability API's block stream endpoint. This endpoint needs to be specified:

  • ESPRESSO_INSCRIPTIONS_BLOCK_STREAM_SOURCE_BASE_URL

The expected value is the versioned base endpoint.

Example:
https://query.decaf.testnet.espresso.network/v0/

Additionally, the variable ESPRESSO_INSCRIPTIONS_MINIMUM_BLOCK_HEIGHT SHOULD be specified so that we don't start consuming all blocks from block height 0 (which is the default).

  • ESPRESSO_INSCRIPTIONS_MINIMUM_BLOCK_HEIGHT

The first block on decaf that has an inscription with the default namespace id is 453728.
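Putting the sourcing variables together, a sketch for running against decaf, using the example base URL and the first known inscription block height from above:

```shell
export ESPRESSO_INSCRIPTIONS_BLOCK_STREAM_SOURCE_BASE_URL="https://query.decaf.testnet.espresso.network/v0/"
export ESPRESSO_INSCRIPTIONS_MINIMUM_BLOCK_HEIGHT="453728"
```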

Incoming data is scanned for transactions that match the namespace and appear to be signed by the service's private key; all other transactions are ignored.

The key for the service can be set up with a BIP39 mnemonic:

  • ESPRESSO_INSCRIPTIONS_SIGNER_MNEMONIC

The namespace ID can be changed with:

  • ESPRESSO_INSCRIPTIONS_NAMESPACE_ID

Submitting Data

The data is submitted to a mempool. A URL endpoint must be specified for submitting data; this endpoint matches the configuration of the sequencer service. You can target either a public mempool submit endpoint or a private (the builder's) endpoint.

  • ESPRESSO_INSCRIPTIONS_SUBMIT_BASE_URL

Examples:
https://query.decaf.testnet.espresso.network/v0/submit
https://builder.decaf.testnet.espresso.network/v0/txn_submit

Submissions are rate limited and buffered; the rate and buffer size are configurable via these environment variables:

  • ESPRESSO_INSCRIPTIONS_PUT_INSCRIPTION_BUFFER_SIZE
  • ESPRESSO_INSCRIPTIONS_PUT_INSCRIPTIONS_PER_SECOND
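A sketch of the submission variables, combining the public mempool example above with illustrative buffer and rate values. The numbers are assumptions for demonstration, not documented defaults:

```shell
export ESPRESSO_INSCRIPTIONS_SUBMIT_BASE_URL="https://query.decaf.testnet.espresso.network/v0/submit"
export ESPRESSO_INSCRIPTIONS_PUT_INSCRIPTION_BUFFER_SIZE="100"    # illustrative
export ESPRESSO_INSCRIPTIONS_PUT_INSCRIPTIONS_PER_SECOND="10"     # illustrative
```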

The initial implementation of the Inscriptions service allows for the configuration and handling of the various inscriptions service features.

This implementation allows new inscriptions to be published through a rate limiter with a bounded capacity, in order to avoid overburdening the sequencer nodes.

This implementation allows new inscriptions to be streamed to users that connect to the service.

Incoming inscriptions are signed by the user and verified by the service before being submitted to the mempool for sequencing.

Sequenced inscriptions are signed by the service and verified when processing incoming blocks.

The inscriptions service needs to be able to recover its state when restarting. As a result, there needs to be a persistence layer that can track where the service currently is in processing requests.

This adds a postgres-backed persistence layer to the service, with a wrapping local cache for the block height.

The block stream has been exiting prematurely for reasons that are not easily traceable. When this state is reached, recovery does not occur, and the service must be restarted.

In order to better understand the nature of these errors, and to improve the
exit conditions of this service, the logic for retrieving and consuming the
block stream has been adjusted.

If a block comes in that precedes the expected block, an info log is emitted.

If the sender is determined to be closed, a panic will ensue.

This is failing fairly consistently, as the inscription in question hasn't even been written by the time this call gets invoked.

The pending inscription population on start could get stuck in a deadlock due to attempting to write into a channel that is not actively being consumed.

This change relocates where this load occurs so that it is processed passively and the service doesn't stall on launch.

The inscription submission is currently linear, and as a result cannot churn through enough inscriptions at once.

This change batches the inscriptions, as many as it can up to the threshold (currently 10), and then submits them in parallel using join_all.

The only reason this errors is an early client disconnect. When that happens, it kills the rest of the client message processing for any other or newly connecting clients.
For each retrieved inscription, we attempt to record it within the persistence object. Retrieving the persistence object requires acquiring a read lock on the data_state object. In the current implementation, this read lock is acquired for each inscription, which is unnecessary.

This has been rewritten to acquire the persistence object once, and then persist each inscription.

In order to close out the inscriptions demo, the ending page requires being able to retrieve the list of inscriptions a user made, so they can see their contribution to the blockchain.

This change adds an endpoint that retrieves a list of the most recent inscriptions that the given wallet address made to the mainnet blockchain.