Maurer et al.'s sub-transaction mappings model quantifies the relatedness of a transaction's inputs and outputs to each other, based on the assumption that there are no net transfers within a transaction.
LaurentMT's Boltzmann metric is closely related prior art, but omits the distinction between derived and non-derived sub-transaction mappings, a correction which makes the resulting quantities more conservative. Maurer et al.'s definition ignores mining fees; for both practical and adversarial-modeling reasons it is appropriate to generalize from a search perspective to an approximation or optimization one, and to take into account the fee amount paid by each proposed sub-transaction.
Payjoin transactions directly undermine the no-net-transfers assumption, but in a multiparty / cut-through setting a weakened version of the assumption can still be useful to the adversary, especially in conjunction with other attacks: a cluster of two users among many, with a hidden payment amount, is still quite revealing, all the more so when the two users are likely to interact repeatedly.
Additionally, there is a common misconception (specifically with regard to Wasabi 2's and CashFusion's transaction structures) that generating sub-transaction mappings is computationally intractable for large transactions. This is not true in general, for the same reasons that the knapsack cryptosystem is considered insecure: while subset sum is NP-complete in general, many specific instances are tractable.
For improving privacy we therefore wish to know whether fine-grained sub-transaction mappings are underdetermined, i.e. whether there are many possible sub-transaction mappings in the space of partitions.
Exhaustive Approach
A brute-force exhaustive approach requires enumerating all partitions of a set, which are counted by the Bell numbers. The 42nd Bell number is about 3.6 × 10^37 (close to 2^125), so even with the most efficient implementation this would be intractable for larger transactions.
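To get a feel for how fast this blows up, here is a minimal sketch that computes Bell numbers with the Bell triangle recurrence (each row starts with the previous row's last entry, and each subsequent entry adds the entry above-left); `u128` is just wide enough to hold B(42):

```rust
/// Compute the n-th Bell number via the Bell triangle.
/// Row k's last entry is B(k+1), so B(n) is the last entry of row n-1.
fn bell(n: usize) -> u128 {
    if n == 0 {
        return 1;
    }
    let mut row: Vec<u128> = vec![1]; // triangle row 0
    for _ in 1..n {
        // next row starts with the previous row's last entry
        let mut next = vec![*row.last().unwrap()];
        for &x in &row {
            let s = *next.last().unwrap() + x;
            next.push(s);
        }
        row = next;
    }
    *row.last().unwrap()
}
```

For example, `bell(5)` is 52 and `bell(10)` is 115975, while `bell(42)` already lands between 2^124 and 2^125, matching the intractability estimate above.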
cja repository
I have reviewed this implementation, and feel that it would be unwise to depend on it except for benchmark comparisons and reuse of its tests.
This is because the code is quite inflexible, complicated (I found it hard to read), and inefficient (c.f. payjoin/cja#2), having been designed for offline computation on simulated data. It seems like it would take a substantial effort to refactor it into production-quality code.
However, it is able to enumerate partitions and supports arbitrary filter conditions, so technically it is more general than what is described in the paper, and it can suffice for sufficiently small transactions.
set-partitions crate
A much simpler / more readable approach would be to simply combine the txins and txouts, foregoing some of the tactics of the cja code for simplicity.
I have reviewed the set-partitions crate and found the code to be of high quality, but there haven't been changes since the initial release. The algorithm for enumerating partitions is described in Knuth's The Art of Computer Programming Volume 4A, and is well understood and efficient. This algorithm allows the sums to be updated with O(1) work instead of being recomputed repeatedly, and the library supports implementing this behavior through a trait.
Using this library, all partitions of the combined set of inputs and outputs can simply be filtered. Adding the constraint that each part must contain at least one of each kind of set member should reproduce the behavior of the cja code, with asymptotically worse complexity, but given some of the unexpected computational complexity in the cja implementation I would not be surprised if this was actually more scalable.
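The combined-set-then-filter idea can be sketched without any external crates. This is not the set-partitions crate's API, just a hypothetical minimal version: enumerate partitions as restricted growth strings (item i goes to block a[i], with a[i] ≤ 1 + max of the earlier entries), then keep only partitions where every block has at least one input, at least one output, and balanced sums (fees ignored, as in the basic model):

```rust
/// Enumerate all partitions of n items as restricted growth strings.
fn partitions(n: usize) -> Vec<Vec<usize>> {
    fn rec(a: &mut Vec<usize>, i: usize, max: usize, out: &mut Vec<Vec<usize>>) {
        if i == a.len() {
            out.push(a.clone());
            return;
        }
        // item i may join any existing block, or open a new one
        for b in 0..=max + 1 {
            a[i] = b;
            rec(a, i + 1, max.max(b), out);
        }
    }
    if n == 0 {
        return vec![vec![]];
    }
    let mut out = Vec::new();
    let mut a = vec![0usize; n]; // first item is always in block 0
    rec(&mut a, 1, 0, &mut out);
    out
}

/// Count sub-transaction mappings: partitions of the combined
/// input/output set where every block contains at least one input and
/// one output, and input sum equals output sum (no fees).
fn count_mappings(inputs: &[u64], outputs: &[u64]) -> usize {
    let n = inputs.len() + outputs.len();
    partitions(n)
        .iter()
        .filter(|a| {
            let blocks = a.iter().max().map_or(0, |m| *m + 1);
            (0..blocks).all(|b| {
                let (mut in_sum, mut out_sum) = (0u64, 0u64);
                let (mut has_in, mut has_out) = (false, false);
                for (i, &blk) in a.iter().enumerate() {
                    if blk != b {
                        continue;
                    }
                    if i < inputs.len() {
                        in_sum += inputs[i];
                        has_in = true;
                    } else {
                        out_sum += outputs[i - inputs.len()];
                        has_out = true;
                    }
                }
                has_in && has_out && in_sum == out_sum
            })
        })
        .count()
}
```

For inputs [3, 5] and outputs [3, 5] this finds two mappings: the whole transaction as one part, and the fully refined {3→3, 5→5} split (3→5, 5→3 is unbalanced and rejected).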
set-partitions with better pruning
TAoCP also describes a related algorithm which enumerates partitions with a given number of parts, rather than all partitions.
This would allow a more logical order of enumeration (most refined to least refined), and would also allow partitioning inputs and outputs separately, which would recover the theoretical/asymptotic improvements of cja's nested-iterators approach. Since parts need to be balanced, this also suggests a much more efficient approach of enumerating pairings of parts whose sums are within some neighborhood of each other, as opposed to recursively solving subset sum.
subset_sum / dpss project
cja solves subset sum by brute force, but Bellman's algorithm can do it in O(nm) time with dynamic programming, where n is the number of values and m is the target sum.
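As a reference point, here is a minimal sketch of that dynamic program: a boolean table over sums 0..=target, where iterating the sums downward ensures each value is used at most once per subset:

```rust
/// Bellman's dynamic program for subset sum: the returned vector has
/// reachable[s] == true iff some subset of `values` sums to s.
/// O(n * target) time, O(target) space.
fn reachable_sums(values: &[u64], target: u64) -> Vec<bool> {
    let t = target as usize;
    let mut reachable = vec![false; t + 1];
    reachable[0] = true; // the empty subset
    for &v in values {
        let v = v as usize;
        // iterate downward so each value contributes at most once
        for s in (v..=t).rev() {
            if reachable[s - v] {
                reachable[s] = true;
            }
        }
    }
    reachable
}
```

For values [3, 5, 7] the reachable sums up to 15 are {0, 3, 5, 7, 8, 10, 12, 15}, computed in 3 × 15 table updates rather than 2^3 subset enumerations; the gap widens quickly for realistic input counts.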
This crate also seems to support matching of partitions directly, but I did not yet understand that variation of the algorithm and how it works, nor have time to review the code.
Lower bound
An exhaustive enumeration that proceeds from the most refined to the least refined partitions and stops after some pre-determined number of iterations can obtain a (possibly very loose) lower bound on the number of non-derived sub-transaction mappings.
Approximate approach with sumset size as a proxy
The literature on subset sum is vast. Some of the latest papers include an efficient decision algorithm that can determine the sumset size, and/or approximate or compute it explicitly when it is sparse, using a convolution-based approach. There are a number of Rust crates for sparse polynomial multiplication using FFTs, which also seem promising.
A simpler approach would be to randomly partition the set and use Bellman's DP algorithm. If the sumset size grows rapidly, and the sumsets of disjoint subsets have non-trivial intersections, sub-transactions should be easy to find.
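A sketch of that overlap heuristic, using a fixed split instead of a random one to keep the example deterministic and dependency-free (a real implementation would randomize the partition):

```rust
/// Full sumset of `values` as a boolean table over 0..=sum, computed
/// with the subset-sum DP.
fn sumset(values: &[u64]) -> Vec<bool> {
    let total: u64 = values.iter().sum();
    let mut dp = vec![false; total as usize + 1];
    dp[0] = true;
    for &v in values {
        for s in (v as usize..dp.len()).rev() {
            if dp[s - v as usize] {
                dp[s] = true;
            }
        }
    }
    dp
}

/// Number of sums achievable by subsets of both `a` and `b`. A large
/// count for disjoint subsets of a transaction's values suggests many
/// balanced sub-transactions exist.
fn shared_sums(a: &[u64], b: &[u64]) -> usize {
    let (sa, sb) = (sumset(a), sumset(b));
    // zip truncates to the smaller total, beyond which no shared sum
    // is possible anyway
    sa.iter().zip(sb.iter()).filter(|(x, y)| **x && **y).count()
}
```

For example, [1, 2, 4] reaches every sum in 0..=7, while [3, 4] reaches only {0, 3, 4, 7}, so the two sumsets share 4 values.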
Efficient lookup in cost function
For most of the above algorithms and the existing implementations the main consideration is offline computation, but the last approximation has applications for optimization in the online setting, i.e. when deciding efficiently among many potential possibilities, or when incorporating new information during transaction construction.
First, calculate the k-th sumset (not the standard term; I mean the set of sums of subsets of size up to k) of the (rounded) input values of other users' coins, and store that in a (sparse, possibly approximate) lookup table.
Then, for each set of candidate outputs (e.g. when considering low-Hamming-weight values, or output splitting as in the knapsack mixing strategies), look up all entries in the sumset of the outputs. Since this output set is small, its entire sumset can be enumerated. Since the input sumset approximation is given explicitly, a windowing function can be implemented for roughly the same cost as an exact lookup. If all or most values, or at least combinations of values, can be represented by other users' coins, the sub-transaction mapping of the final transaction will be underdetermined with respect to the optimizing user's coins.
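A minimal sketch of the two steps above, with an exact (non-sparse, non-approximate) table and no rounding or windowing, both of which would be layered on top in practice:

```rust
use std::collections::HashSet;

/// Sums of all subsets of `values` with at most `k` elements.
/// Rounding of values is assumed to happen before this is called.
fn bounded_sumset(values: &[u64], k: usize) -> HashSet<u64> {
    // layers[j] holds sums achievable with exactly j elements
    let mut layers: Vec<HashSet<u64>> = vec![HashSet::new(); k + 1];
    layers[0].insert(0);
    for &v in values {
        // iterate j downward so each value is used at most once
        for j in (1..=k).rev() {
            let prev: Vec<u64> = layers[j - 1].iter().copied().collect();
            for s in prev {
                layers[j].insert(s + v);
            }
        }
    }
    layers.into_iter().flatten().collect()
}

/// How many candidate output sums can be matched by combinations of
/// other users' inputs. Higher coverage means the final mapping is
/// more underdetermined with respect to the optimizing user's coins.
fn coverage(output_sums: &[u64], input_table: &HashSet<u64>) -> usize {
    output_sums.iter().filter(|s| input_table.contains(s)).count()
}
```

For inputs [1, 2, 4] with k = 2, the table is {0, 1, 2, 3, 4, 5, 6}: all pairs and singletons, but not the full-set sum 7, which is why candidate output sums of [3, 5, 7] score a coverage of 2 out of 3.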
that goes into a bit more detail on the sub-transaction model and its symmetries, as well as other potential sumset approximation approaches (generating functions and linear sketches)
In another channel, @nothingmuch suggested "using the set_partitions crate to do a KISS implementation of cja partition enumeration, and validate it against its test suite"
I believe that means implementing the partitions module, including validating against the same .json test vectors.