Improving scalability, performance & privacy #44
This is a super cool idea, and I want to plant the seed of another optional feature which we probably can't use until the network becomes stronger: Stateless SPV Proofs. https://medium.com/summa-technology/cross-chain-auction-technical-f16710bfe69f
The concept is that the server sends the client the block headers themselves, and the security assumption is that it is cheaper to be honest and use your hash power to mine HNS than to try to attack a light client with a fake Urkel proof and SSL cert. That would mean the client wouldn't have to connect to the HNS p2p network at all, but could still verify the data it is being served. The payload from the server would include the recent block headers along with the Urkel proof and certificate data.
What's interesting about this is we can imagine a client that has never used HNS before, or ever seen any HNS blockchain data at all, requesting a website and validating everything without syncing anything. The trick is: it will have to verify "orphan" block headers. Say, block headers at heights 100000-100012 but nothing before that.
Unfortunately, the network hashrate is not strong enough yet for this to be secure. I did the math below. An attacker with a $5,000 ASIC can produce one "fake" block header for only about $180 in electricity, in about 30 days. For sure, scaling that up to twelve fake headers in a practical amount of time makes such an attack much more expensive and less reasonable (see the extrapolation after the math below), but it probably means that, for now, light clients should continue verifying all the block headers directly from the p2p network ;-)
Consider an attacker with a single Goldshell HS-5, offering 2.7 TH/s at 2.65 kW. Block 100000 has PoW bits 0x1902d05d:
# the exponent is the first byte of the PoW bits (0x19), minus three
exp = 0x19 - 3
# base-256 exponent term
shift = 256 ** exp
# the mantissa is the remaining three bytes
mantissa = 0x02d05d
# the target is the mantissa scaled up by the exponent term
target = mantissa * shift
# expected number of random 256-bit hashes we need to test
# to find one lower than the target (~6.6e18 hashes)
hashes = (2 ** 256) / target
# time based on the ASIC's hashrate, in hours (~675 hours, about 28 days)
hashrate = 2.7e12
hours = hashes / hashrate / 60 / 60
# energy consumption in kWh (~1,787 kWh)
kwh = 2.65 * hours
# cost assuming $0.10 per kWh (~$179)
price = kwh * 0.10
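For scale, here's a rough extrapolation to the twelve orphan headers in the example above. This is a naive sketch: it just multiplies the single-header numbers and ignores any difficulty retarget while the attacker mines.

# naive extrapolation: twelve fake headers at the same difficulty
headers = 12
total_hours = hours * headers   # ~8,093 hours, i.e. nearly a year
total_price = price * headers   # ~$2,145 in electricity

Even scaled up, that cost is within reach of a motivated attacker targeting high-value names, which is another reason syncing real headers from the p2p network still seems safer for now.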
It would be great if Handshake needed fewer public full nodes. It's not clear how many full nodes we need to serve 1 million, 10 million, or 100 million users. Also, there's no incentive for full nodes to serve light clients ...
Requesting proofs
Light clients need to request proofs from full nodes. It would be great if we could avoid this because it leaks the names you're resolving to whichever full node serves you, it adds latency to every lookup, and it adds load to public full nodes.
Speaking of latency, we don't just need the urkel proof to verify an HNS site. We have to request the entire DNSSEC chain and the TLSA record, which adds latency as well. This brings us to recursion ...
Recursion
It's fine if you want to run a resolver at home using a Raspberry Pi. But recursive resolvers are designed to be run as servers, not embedded in apps and browsers.
Improving scalability, performance & privacy
I talked about this with @pinheadmz a bit. Instead of requesting proofs from full nodes and the DNSSEC chain from DNS, what if the website we're visiting already provided this data? At least an RFC already exists for delivering the DNSSEC chain in the TLS handshake (RFC 9102)!
When opening https://3b, during the TLS handshake the server would send us the Urkel proof for the name, along with the DNSSEC chain and the TLSA record.
Servers could cache and update these proofs every 6 hours.
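To make the shape of this concrete, here's a minimal sketch of what such a handshake payload might carry. The structure and field names are hypothetical, not a proposed wire format:

from dataclasses import dataclass

# hypothetical payload a server could attach during the TLS handshake
@dataclass
class HandshakeProofPayload:
    urkel_proof: bytes   # Urkel proof for the name against a tree root
    tree_root: bytes     # tree root committed to in a block header
    dnssec_chain: bytes  # serialized DNSSEC chain, as in RFC 9102
    tlsa_record: bytes   # DANE TLSA record matching the site's certificate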
What would clients need to do?
Browsers would only use a light client to sync block headers and nothing more! The initial sync takes a few seconds, and then they would have everything they need to verify any HNS site.
So they can verify the proof and the DNSSEC chain/TLSA record supplied by "3b" during the TLS handshake without making any network requests!
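As a sketch of that client-side flow (the verifier callables are placeholders standing in for real Urkel/DNSSEC/DANE implementations, not an actual API):

def verify_site(payload, synced_headers, server_cert,
                urkel_verify, dnssec_verify, dane_match):
    # 1. the tree root must be committed to by a header we synced ourselves
    if not any(h.tree_root == payload.tree_root for h in synced_headers):
        return False
    # 2. the Urkel proof must resolve the name's resource against that root
    resource = urkel_verify(payload.urkel_proof, payload.tree_root)
    if resource is None:
        return False
    # 3. the DNSSEC chain must link the zone's DS record to the TLSA record
    if not dnssec_verify(payload.dnssec_chain, resource, payload.tlsa_record):
        return False
    # 4. DANE: the TLSA record must match the certificate from the handshake
    return dane_match(payload.tlsa_record, server_cert)

Note that nothing in this flow touches the network; everything is checked against headers the client already has.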
Benefits with this model:
- Scalability: no proof requests hitting public full nodes.
- Performance: no extra round trips for proofs or the DNSSEC chain during page load.
- Privacy: nobody learns which names you are resolving.
Cons:
- This approach requires client- and server-side support.
- This model doesn't care about your choice of resolver for A/AAAA lookups. You could use DoH, ODoH, DoH over Tor, DoH over VPN, dnscrypt & anonymized DNS relays, or you could even still run a recursive resolver, but browsers & apps DON'T CARE. So you can think of this as a pro or a con.
I'm sharing this here so we can think about it and discuss any issues/flaws with such a model. It's still a bit light on technical details. For example, an attacker could easily strip these TLS extensions, so they must be required by clients, or there must be a fallback mechanism to request the proofs & DNSSEC chain when they are not available.
ICANN names could use this, but we can't require it as it would break 99.99% of ICANN domains, so they'd need a fallback if we plan to support DANE for such names.
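A sketch of the fallback policy those two paragraphs imply (function and flag names here are hypothetical):

def get_validation_data(name, tls_payload, is_hns_name, fetch_proofs):
    if tls_payload is not None:
        # server supplied proofs in the handshake: no extra requests needed
        return tls_payload
    if is_hns_name:
        # clients require the extension for HNS names, so a stripped
        # extension is a hard failure rather than a silent downgrade
        raise ConnectionError("required proof extension missing from handshake")
    # ICANN names can't require the extension without breaking nearly all
    # existing domains, so fall back to fetching proofs & the DNSSEC chain
    return fetch_proofs(name)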