ECIP1011 for Replay Attack Fix Using Change in the Signing Process #10
base: master
Conversation
Future forks might not even need to worry about replay attacks, right? If we leave difficulty bombs behind...
And fork 1920k is already behind us, and it's a well-known block. It's too late to introduce any fix for that particular block; it already has several fixes like the SafeSplit contract and similar.
Yes I agree. This is why we could even utilise the approach suggested in the ECIP, but just use an arbitrary constant hash for the change. Maybe something that can be easily adjusted without hardcoding, so if a change is required in future it can be executed by users directly - even in the case of light clients.
I had a detailed chat with @splix on this, around using a dynamic value for the appended value in the signing and validation process. The outcomes of this made sense, so I have updated the ECIP based on these discussions.
The pros of this algorithm are pretty straightforward:
However, it does not provide replay prevention within the same chain, or on compatible chains (e.g., a private chain based off of ETC). Another issue is that the client will have to keep track of all previous and current block hashes in order to sync the blockchain from the beginning. This is especially true for light clients: how do you communicate when the hash change occurs, and how do light clients verify transactions at any given point in time? My proposal suffers from the same issue, a dependency on a stable block number that can change.

Once you have to maintain more than one block hash, you begin increasing the cost of verifying transactions for the receiver, since the receiver has to iterate over the current and previous hashes prior to recovery. An alternative is to use a pool of block numbers that are frozen at an interval (128 or 256 blocks). The pool could exist inside of a contract. That solves the issue of having a constant, verifiable source of hashes to test against. That still doesn't solve the problem of the receiver having to compute the recovery against each candidate hash.

BTW, I don't agree that we are up against the clock such that we need to avoid future-proofing the algorithm against additional forks or cousin chains. We have the opportunity to solve this issue for the foreseeable future. We are close to having an elegant solution, let's spend a bit more time iterating on it.
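The receiver-side cost being discussed can be sketched in a few lines. This is a toy model, not the ECIP's actual scheme: `sha256` stands in for Keccak-256, a keyed hash stands in for ECDSA signing and public-key recovery, and the names `toy_sign` and `verify_with_pool` are hypothetical.

```python
import hashlib

def toy_sign(tx_bytes, appended_hash, key):
    # Stand-in for ECDSA over Keccak-256(RLP(tx) || block_hash):
    # here we just hash the transaction, the appended hash, and a key.
    return hashlib.sha256(tx_bytes + appended_hash + key).digest()

def verify_with_pool(tx_bytes, sig, key, hash_pool):
    # The receiver must try every candidate hash in the pool, so the
    # verification cost grows linearly with the number of frozen hashes.
    return any(toy_sign(tx_bytes, h, key) == sig for h in hash_pool)

# Pool of hashes frozen at some interval (e.g. every 128 or 256 blocks).
pool = [bytes([i]) * 32 for i in range(4)]
sig = toy_sign(b"raw-tx", pool[2], b"secret")
assert verify_with_pool(b"raw-tx", sig, b"secret", pool)
assert not verify_with_pool(b"raw-tx", sig, b"secret", pool[:2])
```

The linear scan over the pool is exactly the extra receiver cost the comment above is pointing at.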
@igetgames with the new modification the following is not the case:
In this update we included an approach where validation of a signed transaction can be done with a pool of 7 block header hashes from the last 7 days. I am not a big fan of relying on a contract for referencing protocol-level parameters. I don't think I can give you a good answer to that, except that if I had to write this up in a yellow-paper-style spec (which I am planning to do as we go forward), it would just look very inelegant. But that is just me, and may not be everyone's pragmatic PoV. Yes, I agree with you: I don't think we should be driven to compromise on a good solution; as you say, we are close to finding something that would be really cool!
@avtarsehra and I are discussing this on Slack. We'll update this thread in a bit.
ECIPs/ECIP1011.md
Outdated
This approach would ensure that for any future fork the replay attack is resolved after a time lag of ```6 epochs``` (or 24 hours, ```~ 6,144 blocks```).
In this dynamic approach, as an option, T_a could also take an arbitrary value of ```0```, which can be used by light clients or for constructing off-line transactions.
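As a rough sketch of that option (the helper and its name are hypothetical, not from the ECIP): a signer that cannot observe the chain, such as an offline or light-client wallet, falls back to the all-zero constant.

```python
ZERO_HASH = b"\x00" * 32  # T_a = 0: the arbitrary constant for offline signing

def appended_value(latest_epoch_hash=None):
    # Hypothetical helper: online signers append a recent epoch block-header
    # hash; offline and light-client signers fall back to the zero constant.
    return latest_epoch_hash if latest_epoch_hash is not None else ZERO_HASH
```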
Does this make off-line tx's later susceptible to replay attacks if there is another fork?
Yes, offline transactions would be susceptible to replay attacks. But the arbitrary constant can be easy to modify and can be chosen at the point of a fork. However, I can't see how offline wallets would automatically manage forks without some manual change, no matter what approach we go with.
ECIPs/ECIP1011.md
Outdated
4. Validation can then take place with up to 1 week of values: ```t0, t0-24, t0-48 ... t0-168```, which implies at most 7 checks would need to be performed for transactions delayed by up to 7 days.
This approach would ensure that for any future fork the replay attack is resolved after a time lag of ```6 epochs``` (or 24 hours, ```~ 6,144 blocks```).
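The daily checkpoint schedule quoted above can be sketched as follows, assuming the ECIP's figure of ~6,144 blocks per 24 hours; the function name is hypothetical.

```python
BLOCKS_PER_DAY = 6144  # ~24 hours, i.e. 6 epochs of 1,024 blocks, per the ECIP

def daily_checkpoints(current_block, days_back=7):
    # Block numbers whose header hashes form the validation pool:
    # t0 (the most recent daily boundary) back through t0 - 168 hours.
    t0 = (current_block // BLOCKS_PER_DAY) * BLOCKS_PER_DAY
    return [max(t0 - d * BLOCKS_PER_DAY, 0) for d in range(days_back + 1)]
```

A validator would fetch the header hash of each listed block and accept a transaction if its signature verifies against any of them.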
I don't understand this. Is this intended to prevent delayed transactions that were waiting in the txpool?
Yes, this approach would make delayed transactions invalid. We can increase the period of validity by increasing how far back a check would take place.
I'm happy with your suggestion. A dynamic signing variable is a nice-to-have from my view, but I agree it needs more analysis and simulations to find the optimal parameters. I will continue working on it, as I think it can be used in some way...
Would you be OK with going forward with client implementation using the most recent block hash? That value is trivially accessible. If you come up with another solution for the dynamic variable then it will be easy to plug in at that point.
Yes definitely, that would be OK. If it is a fixed value it doesn't matter which is used, so the most recent block hash would be good. The other can be null (as you mentioned) for offline client support.
I agree with @igetgames on the dynamic signing var -- this is a separate spec that changes the current functionality, and I don't see the rationale for it. The general procedure of changing the signing process looks like a good solution to me.
I'm good with that. I will remove the last section in the ECIP connected to the dynamic signing variable.
@splix @elaineo @igetgames I have made the update as discussed above. Just for reference I have left the below part in, but in the main body I have included some of the points from above. On a related note, we will need to get back to the devs for Parity etc. and discuss timelines regarding the Parity changes. If everyone is OK with what we've got, we can share the ECIP for the diff bomb delay, and one for the replay fix. We will need to include our timelines, i.e. when we're planning to go live.
This is to be stacked with the difficulty bomb, so it will not cause unnecessary interruptions with older clients?
Light clients and hardware wallets shouldn't be affected by the change, as the block header hash can be 0 if the dynamic variable is used. Wallet addresses would be unaffected, but old clients will be unable to send any transactions after the switch. Enough time needs to be given to exchanges and miners to upgrade their systems for a smooth transition. The clients can be released with the fix ASAP, and then the trigger block can engage both changes at once globally.
All clients are affected. Hashing an additional 32 bytes of '0' will make the transaction hash differ on clients that only hash the existing transaction structure.
I would agree, except that replay attacks are an ongoing and critical issue. The implementation can be written to trigger on a specific block sometime in November (giving enough time for Parity and other implementations to apply and test, and for announcements to be made), and future-proofed because we know the difficulty bomb delay will occur at block 3,000,000. The code will simply check the block height to retrieve the most recent fork hash. An advantage of fixing replay attacks sooner rather than later is that it will demonstrate that the Classic developers can coordinate critical fixes and roll them out in a timely fashion.
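The height-based lookup described here might look something like the sketch below. The schedule contents are placeholders for illustration: only the 3,000,000 difficulty-bomb delay block comes from the discussion above; the other activation number and all hashes are invented.

```python
# Hypothetical fork schedule: (activation_block, fork_block_hash).
FORK_SCHEDULE = [
    (0,         b"\x00" * 32),  # before the replay fix: nothing appended
    (2_500_000, b"\xaa" * 32),  # replay-fix fork (placeholder block/hash)
    (3_000_000, b"\xbb" * 32),  # difficulty-bomb delay fork (placeholder hash)
]

def fork_hash_at(height):
    # Return the hash of the most recent fork at or below `height`.
    chosen = FORK_SCHEDULE[0][1]
    for activation, fork_hash in FORK_SCHEDULE:
        if height >= activation:
            chosen = fork_hash
    return chosen
```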
This is very true. Is the plan to issue this as an EIP as well? Losing Ledger, Trezor, and whatever else will be unfortunate. Also CEW and CEW-CX will need a patch.
@igetgames I don't think a November trigger is realistic. I would love to have the replay fixed before ETH, but would prefer to spend more time having our clients fully tested and getting it propagated to everyone with plenty of advance notice.
@realcodywburns I definitely think we should issue this as an EIP, but I would want @avtarsehra to sign off on it first. @elaineo Understood.
Is this PR ready to be merged?
Looks like ETH will have a hard fork in about 2 weeks (block 2,642,462) to implement this: ethereum/EIPs#155
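For reference, EIP-155 takes a different route from this ECIP: instead of appending a block hash to the signed payload, it folds a CHAIN_ID into the signature's `v` value. The helper names below are mine, but the arithmetic is the one specified in EIP-155.

```python
def eip155_v(chain_id, recovery_id):
    # EIP-155: v = CHAIN_ID * 2 + 35 + {0, 1}
    assert recovery_id in (0, 1)
    return chain_id * 2 + 35 + recovery_id

def chain_id_from_v(v):
    # v of 27/28 marks a legacy, pre-EIP-155 (replayable) transaction.
    if v in (27, 28):
        return None
    return (v - 35) // 2
```

With CHAIN_ID 1 this yields v of 37 or 38; ETC later adopted CHAIN_ID 61, so transactions on the two chains fail each other's signature checks by construction.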
@elaineo yes, it's great that they are doing it. But it would also be good for ETC to implement our own version, so if ETC forks we can prevent replay issues between ETC and other chains that may emerge. We are using this as a test for new devs. @splix and I were supposed to have a call with them today, but unfortunately it had to be cancelled at the last minute. I will follow up on that tomorrow and provide an update.
Do we want our own version? If the changes are compatible except for the CHAIN_ID, then it will be easier to pull upstream changes.
Yes, that is a good point. We were mainly using it as a dev test, as it utilises some key skills we wanted to identify.
@avtarsehra maybe we should merge it with status Rejected, for history and reference?
See the draft version of the ECIP1011 here:
https://github.com/avtarsehra/ECIPs/blob/master/ECIPs/ECIP1011.md
I believe changing the signing process would be the simplest and cleanest method for fixing this. While future-proofing is useful, I personally feel that a robust, secure, clean and quick fix right now that can be upgraded later would be the best option. This approach allows you to use a constant value, which can be made into a dynamic value at a later stage. But I think trying to implement a solution that can cover all future forks would be overkill, and would require reference to the blockchain, thus minimising the possibilities for creating offline transactions.