
Partial fanout #1468

Open
5 tasks
ch1bo opened this issue Jun 10, 2024 · 6 comments
Labels
amber ⚠️ Medium complexity or partly unclear feature L1 Affects the on-chain protocol of Hydra 💬 feature A feature on our roadmap
Milestone

Comments

@ch1bo
Collaborator

ch1bo commented Jun 10, 2024

Why

The current naive, but correct, implementation (see #145) of representing the L2 UTxO set in the Head validators limits the number of UTxOs supported in a Hydra Head (currently about 80 ada-only outputs, see benchmarks).

This is an all-or-nothing approach: the fanout can only succeed if all UTxOs are produced in a single transaction. This means there is no way to finalize a head which exceeds these limits!

Within this feature, we want to overcome this limitation by enabling individual (subsets of) outputs to be realized onto the main chain after the contestation deadline has passed.

This is similar to #190, but allows individual UTxOs to be realized and hence can even work around situations where some parts of the state are not compatible with the L1 (e.g. min UTxO value not met).

What

  • The hydra-node, when asked to Fanout a head, can fan out a UTxO set of any size using one or more (partial) fanout transactions.

  • Out of scope: asking the hydra-node to deliberately pick a subset of the UTxO set to fan out.

  • Head protocol fanout transactions allow an arbitrary subset of the closed UTxO set to be realized onto the main chain.

    • As before, this is only possible after the contestation deadline has passed.
    • There is a maximum number of outputs a single fanout transaction can produce.
    • Any number of fanout transactions can be sequenced in the protocol (as long as there are outputs left to fan out).
  • Maintain off-chain performance within the same order of magnitude.
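To make the first two points concrete, here is a minimal sketch (in Python, purely illustrative; the names are hypothetical and not the hydra-node API) of splitting a closed UTxO set into batches so that each partial fanout transaction stays under a per-transaction output limit:

```python
# Hypothetical sketch: plan a sequence of partial fanout transactions.
# `max_outputs_per_tx` stands in for whatever limit the L1 transaction
# size/budget imposes (the issue mentions ~80 ada-only outputs today).

def plan_fanout(utxos, max_outputs_per_tx):
    """Return an ordered list of output batches, one per fanout tx."""
    if max_outputs_per_tx < 1:
        raise ValueError("a fanout tx must produce at least one output")
    items = list(utxos)
    return [items[i:i + max_outputs_per_tx]
            for i in range(0, len(items), max_outputs_per_tx)]

# 85 outputs with a limit of 80 need two sequenced fanout transactions.
batches = plan_fanout(range(85), max_outputs_per_tx=80)
assert len(batches) == 2
assert sum(len(b) for b in batches) == 85
```

Note the node chunks the whole set mechanically; per the "out of scope" bullet, users do not get to hand-pick which subset goes first.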

How

Idea: We update the digest mechanism used in collect (increment/decrement) and close transactions in such a way as to allow for exclusion proofs in fanout transactions.

Using BLS accumulators as researched and experimented with by @perturbing in https://github.com/perturbing/plutus-accumulator and https://hackmd.io/@CjIlIbTxRqWOCpWzxuWmkQ/BybaUlSN0
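For intuition, the accumulator idea can be illustrated with a toy polynomial commitment over a prime field. This is NOT the pairing-based BLS construction from plutus-accumulator and is not cryptographically secure (the trapdoor is in the clear here); it only shows the algebra: the set is encoded as f(x) = ∏(x − e) evaluated at a secret point, and one short proof can cover a whole batch of elements.

```python
# Toy accumulator sketch (insecure, illustrative only).
# Real BLS accumulators hide `TAU` behind a trusted setup and check the
# product relation with a pairing; here we compute everything directly.

P = 2**61 - 1          # toy prime field
TAU = 123456789        # "trusted setup" trapdoor, public only in this toy

def accumulate(elements):
    """Evaluate f(x) = prod (x - e) at x = TAU, i.e. the set digest."""
    acc = 1
    for e in elements:
        acc = acc * (TAU - e) % P
    return acc

def membership_proof(elements, subset):
    """One proof for a whole batch: the cofactor f / prod (TAU - e)."""
    rest = [e for e in elements if e not in subset]
    return accumulate(rest)

def verify(acc, subset, proof):
    """Check proof * prod_{e in subset} (TAU - e) == digest."""
    return proof * accumulate(subset) % P == acc

S = [11, 22, 33, 44]
acc = accumulate(S)
prf = membership_proof(S, [22, 44])   # batched: one proof, two elements
assert verify(acc, [22, 44], prf)
assert not verify(acc, [22, 55], prf)
```

The appeal for partial fanout is that a single constant-size proof can attest to an arbitrary subset of the closed UTxO set, instead of walking the whole set on-chain.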

Possible tasks:

  • Write a (pending) test that covers finalizing "big heads"
    • Ideally from "very outside", e.g. ModelSpec where the test just says Close and Fanout
  • Change the fanout off-chain code to fan out partially

TBD: Off-chain performance is still to be benchmarked and might impact snapshot signing performance on the L2.

@ch1bo ch1bo added the 💭 idea An idea or feature request label Jun 10, 2024
@noonio noonio added this to the 1.0.0 milestone Jul 15, 2024
@ch1bo ch1bo added the L1 Affects the on-chain protocol of Hydra label Jul 17, 2024
@ch1bo
Collaborator Author

ch1bo commented Jul 17, 2024

@colll78 is this related to how you aim to do things in Midgard?

@colll78
Contributor

colll78 commented Jul 26, 2024

> @colll78 is this related to how you aim to do things in Midgard?

I reached out to Thomas a few weeks ago to discuss the potential of using the BLS accumulator for the Midgard inbox / outbox contracts.

To have an optimistic rollup truly inherit L1 security, you need a mechanism through which users can transact on the L2 directly via L1 transactions (as opposed to submitting the tx to the L2 validator network).

To achieve this, you have an inbox contract that users can send transactions to on the L1, and any state commitment (containing batches of incoming blocks from the L2) must include the transactions in the inbox in its commitment (emptying the inbox in the process). The state commitment is invalidated if it does not contain all the pending transactions from the inbox.

Currently, the inbox stores these pending txs in a sparse Merkle tree, and guaranteeing inclusion involves providing a one-by-one proof of inclusion of each inbox transaction in the incoming state commitment (which updates each inbox entry to prevent re-use of the same membership proof). The BLS accumulator can likely be used here to vastly speed up this process with batched membership proofs. Current focus is on implementing fraud proof validators, so this is on the back burner for now.
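The inbox invariant described above can be sketched as follows (names are illustrative, not the Midgard contracts): a state commitment is only accepted if it contains every pending inbox transaction, and the inbox is emptied in the same step so inclusion proofs cannot be replayed.

```python
# Hypothetical sketch of the L1 inbox invariant for an optimistic rollup.

def apply_commitment(inbox, commitment_txs):
    """Accept a state commitment iff it includes all pending inbox txs.

    Returns the new (emptied) inbox; raises if the commitment omits any
    pending transaction, which models invalidating the commitment.
    """
    pending = set(inbox)
    missing = pending - set(commitment_txs)
    if missing:
        raise ValueError(f"commitment omits inbox txs: {sorted(missing)}")
    return []  # inbox emptied, preventing re-use of membership proofs

inbox = ["tx1", "tx2"]
assert apply_commitment(inbox, ["tx0", "tx1", "tx2"]) == []
```

In the sketch the inclusion check is a plain set comparison; the point of the BLS accumulator is to replace the per-transaction Merkle proofs with one batched membership proof for the whole pending set.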

@ch1bo ch1bo mentioned this issue Aug 7, 2024
12 tasks
@ch1bo ch1bo added 💬 feature A feature on our roadmap amber ⚠️ Medium complexity or partly unclear feature and removed 💭 idea An idea or feature request labels Aug 28, 2024
@ch1bo
Collaborator Author

ch1bo commented Sep 20, 2024

@perturbing created the optimized Rust accumulator and already demonstrated using it from Haskell here.

@perturbing
Contributor

perturbing commented Oct 18, 2024

A small update on prover times (on my beefy machine): for sets of the following sizes, proof creation over G2 takes roughly

set size      1_000: 11.04 ms +- 202.9 μs
set size     10_000: 77.61 ms +- 1.463 ms
set size    100_000: 683.5 ms +- 6.646 ms
set size  1_000_000: 7.288 s  +- 56.36 ms
set size 10_000_000: 94.38 s  +- 1.420 s

To run such a test yourself for different sizes, change the 1_000 to the set size you want here and use nix run .#bindings-test to compile :)

PS: there is a known issue in linking the Rust -> C -> Haskell code on Mac (I do not own a Mac, so could not reproduce it yet).

@noonio
Contributor

noonio commented Oct 24, 2024

Note: hydra-doom requires around 2-3 ms creation time for snapshots (below 10 UTxOs); what do the benchmarks look like in that domain?

@perturbing
Contributor

perturbing commented Oct 24, 2024

@noonio

It depends on the machine, of course, but on my beefy laptop:

benchmarking proof calculations/snapshot time
time                 457.0 μs   (454.3 μs .. 460.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 457.0 μs   (455.5 μs .. 458.6 μs)
std dev              5.233 μs   (4.304 μs .. 6.610 μs)

So, that sounds doable, though this is an isolated measurement of only the core functionality.

Projects
Status: Later
Development

No branches or pull requests

4 participants