| S No | Agenda | Summary |
|---|---|---|
| 188.1 | Returndata in Transaction Receipts | Charles proposed adding a feature to the Execution API that allows users to access the "returndata" from transactions. The discussion highlighted the need for standardized ways to retrieve returndata across different clients. No consensus was reached during the meeting, and further discussions are encouraged on GitHub. |
| 188.2 | Minimum Miner Tip Requirement | Geth developer Péter Szilágyi raised concerns about the default minimum priority tip requirement enforced by the Geth client. Users reported that blocks built with Geth exclude transactions with low or no priority tips, impacting block dynamics. Suggestions included lowering the default minimum priority tip and considering it as a percentage of the base fee. Tim Beiko emphasized that the priority tip should prioritize transaction inclusion, not serve as a fee. |
| 188.3 | Pectra Devnet and EIPs | EIP-3074 was removed from Pectra Devnet 0, and EIP-7702 will be included in Pectra Devnet 1 (Update EIP-7600: Add 7702, remove 3074; ethereum/EIPs#8591). The Pectra fork scope was discussed, along with Portal Network integration for history expiry. |
Tim Beiko 5:18: Welcome everyone, we're live for ACDE number 188. There is a ton of stuff on the agenda today, we'll do our best to get through all of it. I wanted to make sure there are a couple small technical things we cover at the very start, so that if we get into the Pectra weeds for most of the call we've at least gone through them. So Charles had an Execution API PR that he wanted to get attention on, and then Péter had a concern around the miner tip requirement, so we can do those two first. And then last week all the L1 client teams got together to work on interop for Pectra, PeerDAS and Verkle; I've linked the update that we published on the agenda and I can give a couple minutes' overview of what went down there. And then for most of the call we have a bunch of Pectra related things to discuss. It probably makes sense to start with where we're at on Devnet 0. There were some spec changes done to 2935; I think everything's been merged now, but I just want to make sure that we're all on the same page for this. And then a lot of the CFI'd and proposed EIPs for the fork had some updates, so we can cover all of those, specifically the account abstraction stuff around 7702. I know there's an in-person event happening now that is discussing this too, so there's a short window on the call where we can discuss this. And then the Reth and devops teams both shared what their preferences for the fork scope should be, so hopefully we can wrap up on that and see if we want to make any changes to the Pectra scope. And then lastly, Portal Network: there were some discussions at interop about history expiry and if and how that would relate to Portal, so Piper is on to give a quick overview of where things are at there. So to kick us off, Charles, are you here, and if so can you give a bit of context on the returndata PR that you have?
Charles C 7:36: Hey, for context my name is Charles. I'm the lead maintainer of the Vyper language and I also maintain titanoboa, which is an interpreter for Vyper and also a framework for deploying contracts and interacting with the network. So this is, I think, a feature that every framework designer runs into, which is that when you ask for eth_getTransactionReceipt you don't get the return data. And so I made a pull request to the, what is it called, Execution API specs, to request that return data be optionally included in the receipt in the RPC call. I discussed this also with some people from Reth, and they said that this can increase the base cost of the endpoint, and an alternative is to have a separate endpoint like eth_getReturnData, a new endpoint which gives you the return data for a transaction. And that also works. I'm not as big of a fan of it because there are some race conditions: you can get the transaction receipt, and then there's a reorg, so when you then call eth_getReturnData you get different return data. Or there are other, more practical issues: usually you're calling eth_getTransactionReceipt in a retry loop, and then you finally get the receipt and then the server goes down, which is actually really common among RPC providers. So you get the receipt but then you have no return data, and then you have to do another retry loop, and it's just a lot easier if you get everything back in one call. And yeah, I was wondering if I could get some feedback from client teams and figure out what is reasonable to implement.
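As a rough sketch of the workflow Charles describes, this is what fetching a receipt and reading an optional, best-effort return data field could look like; the `returnData` field name and the endpoint behavior are illustrative of the proposal, not something current clients implement:

```python
import requests

RPC_URL = "http://localhost:8545"  # hypothetical local node endpoint

def get_receipt(tx_hash: str):
    """Fetch a transaction receipt via JSON-RPC. The optional 'returnData'
    field is the behavior proposed in the PR, not existing client behavior."""
    resp = requests.post(RPC_URL, json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getTransactionReceipt",
        "params": [tx_hash],
    }).json()
    return resp.get("result")

receipt = get_receipt("0x" + "ab" * 32)  # placeholder transaction hash
if receipt is not None:
    # Under the proposal, clients may return the call's return data on a
    # best-effort basis, or null / omit it if they did not retain it.
    return_data = receipt.get("returnData")
    if return_data is None:
        print("client did not provide return data; fall back to tracing")
    else:
        print("return data:", return_data)
```

The appeal of the single-call workflow is that the receipt and the return data arrive together, avoiding the reorg and retry-loop races Charles mentions.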
Tim Beiko 9:57: Thanks. Anyone have some quick feedback on this? Yeah, Danno?
Danno Ferrin 10:04: Besu optionally stores the revert reason, which is basically the same data as return data, just on a failed call. So it does have the wiring to support it, but as was mentioned, it does take more storage because you have to store all the return data for every single call. And so it is doable, but I'm wondering if the revert reason is a standard API or if that's just something Besu did at a customer request years ago.
Charles C 10:35: So also, the way I spec'd it out is that the client is allowed to return nil or null or whatever if it doesn't have the return data or it doesn't feel like recomputing it. So it's kind of a request to return it on a best-efforts basis, and my hope is that eventually everybody would just start returning it, but there's a graceful path.
Tim Beiko 11:06: Any other comments now, or otherwise do people want to take this to the PR?
Charles C 11:18: Parity traces, these are non-standard. I mean, right now what everybody does is they either ask for the Parity trace or debug_traceTransaction, or re-simulate with eth_call. eth_call re-simulation doesn't work because the state could be different at the end of the block, so eth_call is actually not correct. But yeah, the tracers don't work, or they do work but every client supports a different version of it; there's a lot of variation between clients in exactly how they return it. And it again suffers from this race condition thing where you ask for the transaction receipt and then you ask for the trace, and then honestly God knows what you get back. And it's just not universally supported. Got it. Marek?
Marek 12:22: Yes, so the problem is that clients do not store this data. Clients have this data, but only after processing a block, and there is no networking for it, so we cannot simply add it like other fields. It would require consensus changes, or clients could store it after execution, but then only your local client will have it for the blocks it processed. So I'm thinking about a more effective way than debug_traceTransaction, and what comes to my mind is, as was mentioned, eth_call or Parity traces.
Charles C 13:10: So yeah, I understand that it's not in the receipt database, so I guess clients would need to store it in an auxiliary database or something. And right, it's not part of the consensus, but as I already mentioned, there are issues with the Parity trace. Sina says he supports adding tx-level state for eth_call; that also works. But my question is, why can eth_call return the return data but not eth_getTransactionReceipt?
Tim Beiko 13:54: Okay, I guess, yeah, we do have a lot of things to cover today. Does it make sense to continue this conversation on the PR? Because it doesn't seem like there's a clear consensus on whether we want to implement this and how. Lukasz, you want to have a final word?
Lukasz 14:11: So on why eth_call can return it while eth_getTransactionReceipt might have a problem: we are obliged to serve eth_getTransactionReceipt, but we are not obliged to serve eth_call if we don't have the state, right? So if we have pruned the state already, we won't be able to serve eth_call. So that's the reason, right.
Charles C 14:39: Right, but that's why it's optional in the PR. The client can just return nil if it doesn't happen to have it. We can take it offline; my question is just what can we do to fix this, because it's a major thorn for people who are using RPCs.
Tim Beiko 15:00: So there's a good comment in the chat by Lightclient around trying to frame the motivation better in the PR, because reading it, it didn't seem as big of an issue as it did on the call. So yeah, I think now that people are aware, if we can rework the PR a bit and discuss this async there over the next couple weeks. And if we need to bring it up on the call again we can, but hopefully we can resolve this async.
Lukasz 15:29: If I can add one last thing: maybe we want something like eth_call but for traces, that would only return this without doing the big tracing and returning a bunch of data, just returning the result.
Charles C 15:45: Yeah, that was the alternative proposed by the Reth team, which is eth_getReturnData, and that also works, although I prefer the single-call workflow, the single RPC call workflow.
Tim Beiko 15:59: Okay, yeah, thanks Charles for bringing this up. Next up is Péter. You had a thread around the miner tip behavior, and I know there was a lot of back and forth in the agenda issue about this. You want to give a bit of context on your original thread and then the conversation that happened? Hopefully Péter's... yeah, Péter's here. We cannot hear you if you are speaking, though. Oh, I think now we can.
Péter Szilágyi 16:45: Can you hear me now? Okay, it's really weird, the microphone was using some other device. Anyway, I will be kind of short because I don't really think this deserves too much time. Basically, the issue that was brought up is that Geth has, pretty much since forever, had this notion of controlling the minimum gas price that miners require transactions to have. In the proof-of-work, pre-1559 world, all the transaction fees went to the miners, and miners just chose what they wanted as a minimum fee. Later, after 1559 was introduced, most of the fees were burned, but 1559 still has this notion of a miner tip, or priority fee, which goes to the miner, and way back then the suggestion was that we should reduce the tip to something like 1 Gwei because that seemed fair. And basically everybody ran with that, and as time passed the merge came. At some point, I'm not entirely sure when, maybe it was the merge fork, sooner or later, there was some regression in Geth where this minimum tip enforcement was not passed from whatever subsystem to the miner. So basically this led to Geth pretty much accepting arbitrary tips for transactions, as long as they met the base fee of course. And of course the tips were all ordered by their magnitude, so whenever there was some big congestion the higher tips were always preferred, and probably because of that we never really saw this bug that smaller tips are also accepted. A couple months back, when I was refactoring the transaction pool, I noticed that this tip wasn't really passed to the miner. So I fixed it, and we cut the release, which all of a sudden again started enforcing the minimum 1 Gwei priority fee. Now the concern was raised that this 1 Gwei is inappropriate and is causing problems for the network, and my suggestion was that we should bring it up at the All Core Devs call, because for me it was kind of a surprise. The big question is that this is a client-specific configuration, so any validator can control this via a CLI flag, and being just a client configuration there's zero effect on consensus. However, given that there seems to be an effect on the network itself, I wanted to bring it up: if the consensus is that this value is inappropriate for whatever reason, then we're more than happy to change it. Before I give the word to anybody else, just one more thing I wanted to bring up: we can easily agree that enforcing 1 Gwei might be too big in the current climate, where the base fee is around 5 Gwei, so it's not proportional. So I don't really have any issues with lowering this fee. However, I feel that having local miners accept 0 tips has another thorny issue, where basically MEV miners can take a 0-tip transaction and still have a chance to actually get some money out of it, but naive local validators or local miners do not have MEV extraction, so they can only really rely on the tips to have a meaningful reward for including a transaction. So while I agree that we don't want to mess with market conditions, I think we should also take into account that local validators don't necessarily behave the same way as MEV extractors, and we should somehow pick something that makes sense both from the network perspective and from a local block producer's perspective. Anyway, that's the short version.
Tim Beiko 21:33: Thank you. I think Tomasz has his hand up first.
Tomasz 21:39: Hi, a few things. So I think it's totally fine for Geth to set it to 1 Gwei, it's not unreasonable. I agree that it's clear it was there and then it was removed, I remember that from the old times, and it's also reasonable for local validators to have this set to 1 Gwei and not accept 0 Gwei transactions, because then, sure, we avoid spam because the base fee is there anyway, but you don't justify the inclusion at all from the perspective of whoever builds the block, and they put some effort into building the block. Well, when they are validators they are paid the reward for the block construction, so maybe they're justified in trying to include as many transactions as possible and clean the mempool. So those are different views, but I think Geth can still set the default, especially as this is just a default and you can change it to something else. I don't necessarily agree with the negative sentiment toward the entire MEV construction within the context of this conversation, but this is a broader topic and it might be that we just have slightly different definitions here. But yeah, I'm definitely on Péter's side, though I wouldn't necessarily suggest other clients apply exactly the same defaults. It may make sense for Nethermind, but we shouldn't have this in the spec or protocol design. Thanks.
Tim Beiko 23:15: Thank you. Ansgar? Yeah Peter first.
Péter Szilágyi 23:19: Just one very quick reaction. Basically, I'm not really looking to have a consensus on how all clients should behave; rather, I would like Geth not to do anything that's obviously bad for the network. So the consensus should only be on behavior that's considered healthy for the network, not further specifications.
Tim Beiko 23:45: Got it. Ansgar?
Ansgar 23:50: Yeah, specifically on that point, I want to say that I don't think the arguments I saw in the discussion in the issue for today, the concerns around network health, are valid. I think Geth having a 1 Gwei or whatever default is fine for network health. I just wanted to point out, because Barnabas couldn't be on the call today, that he looked into this a while ago from an economic point of view, and I'll post his write-up in the chat, but he was basically arguing that the rational long-term outcome would be even for solo builders to set the minimum as low as possible, so more like 1 wei than 1 Gwei. But again, I think defaults are not hurtful in this case, so if Geth wants to choose a higher default I think that is reasonable. The only thing I would say is that in the past, with proof of work, the tip was also used to account for the timing risk: releasing the block later meant it had more chance of being uncled. Now, with the slots, if you solo-build a block at home there aren't really these timing concerns anymore; they really only exist for MEV-extracting builders. So the costs for the builder are lower now, and basically you'd only have to account for the little bit of extra compute that you're spending on building the block. So there might be an argument to at least lower the default below 1 Gwei, but I don't think there are any network health concerns.
Tim Beiko 25:22: Thanks Lukasz?
Lukasz 25:25: So from the network perspective, if let's say 80% of the network is building with MEV, I don't see how setting this in clients would help the network. It will then cause even more of these small blocks followed by full-gas blocks from the MEV builders. So unless we get MEV builders on board or we do something in protocol, I don't see this as being helpful to the network in any way.
Tim Beiko 26:08: Got it Fabio?
Fabio 26:13: Okay. So my point is that I would like to see some research on the effect of changing this, either in a coordinated way or just by the stakers deciding which value to set, because I see some possible side effects of this. For example, the impact on the burning of the base fee: there would be less burned fee as a result of users shifting some of the value from the base fee to the priority fee, so there would be more inflation in this case. And also, in any case, if transactions have a 0 priority fee, that changes the amount of the currency removed. So I would like to see if there are any studies about what the projection for that would be, thanks.
Tim Beiko 27:37: So I don't think addressing if and how the base fee gets burnt is really related to this, because if we send part of the base fee to the miner, it's just like saying the base fee was that amount lower. And from a conceptual perspective, thinking of the tip as "this is what the miner gets" is probably the best way to do it. So yeah, it does feel pretty unrelated to how much we burn.
Fabio 28:14: Yeah, if I can quickly explain better. I mean, if someone has a target of spending 3 Gwei and we set the priority fee to be 1 Gwei, then instead of using a base fee of 3 Gwei they will use a base fee of 2 Gwei and a priority fee of 1 Gwei, and so there will be less burnt fee. This is one of the possible outcomes; I'm not sure of it. I would like to see some research about that.
Tim Beiko 28:51: I think there's already a fair bit of research, I can maybe pull it up after the call, but when we were working on 1559 we did have a lot of research around whether we should give the base fee to the miners, and if so how much, and whatnot. So at this point I'm pretty convinced that whatever we give to the miners, you can effectively think of it as just having the base fee be that much lower, and it just makes the base fee a less reliable oracle for supply and demand. So yeah, I think it is valuable to agree on what the standard tip that we send is, and whether we want to standardize that, regardless of whether that tip comes from the base fee or from the priority fee. Yeah, Ben, I think you have your hand up, then Péter.
Ben Adams 29:47: The suggestion on the base fee is because then that's sort of enshrined: that's already the oracle price, it's still going to be whatever it's going to be, and then you can have the priority fee be on top of that, i.e. a priority. Yeah, sorry, that's all. I mean, I don't mean giving all of the base fee, but just like 1% or something, which would encourage the miner to include transactions in the block, because otherwise it's going to be hard.
Tim Beiko 30:31: That's the whole reason we effectively have the priority fee, to separate those two concepts. The way it's expressed in the transactions is we say: what's your priority fee, what's your max fee. The max fee is effectively how much a user is willing to pay for their transaction to be included, and the priority fee is how much they're willing to send to the miner, and they get back the difference if the base fee is not that high. So the max fee per gas effectively captures this value already.
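For reference, a minimal sketch of the EIP-1559 fee mechanics being discussed, plus a node-local minimum-tip check of the kind Geth applies by default; the 1 Gwei floor here mirrors the default mentioned on the call and is a client setting, not a protocol rule:

```python
GWEI = 10**9

def effective_tip(max_fee_per_gas: int, max_priority_fee_per_gas: int, base_fee: int) -> int:
    """Per EIP-1559, the block producer receives the priority fee, capped by
    whatever is left of the max fee once the base fee is paid (and burned)."""
    if max_fee_per_gas < base_fee:
        raise ValueError("transaction cannot cover the base fee")
    return min(max_priority_fee_per_gas, max_fee_per_gas - base_fee)

def would_include(max_fee: int, max_priority_fee: int, base_fee: int,
                  min_tip: int = 1 * GWEI) -> bool:
    """Inclusion check with a local minimum-tip floor, as discussed on the call.
    min_tip is a node-local default (1 Gwei here), not a consensus value."""
    try:
        return effective_tip(max_fee, max_priority_fee, base_fee) >= min_tip
    except ValueError:
        return False

# Example: base fee 5 Gwei, user willing to pay up to 7 Gwei with a 0.5 Gwei tip.
print(would_include(7 * GWEI, GWEI // 2, 5 * GWEI))              # False under a 1 Gwei floor
print(would_include(7 * GWEI, GWEI // 2, 5 * GWEI, min_tip=0))   # True with no floor
```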
Ben Adams 31:03: Yeah, but the miner is getting what, 1/64th of the block reward for building?
Tim Beiko 31:13: Yeah that's a different thing.
Ben Adams 31:15: So you get more for attesting than block building, but you do get the priority fees on top, and if it's not enshrined that it's higher than zero, then since most of the blocks are MEV, and MEV does capture value, most of the transactions will become zero-fee transactions.
Tim Beiko 31:36: No, because different builders compete through the relay to send the block. And I think the question is, given that external MEV builders can effectively create these zero-tip blocks that pay the validator in some other way, do we want to enforce a minimum on the network even if it breaks with this mechanism? I think that's kind of the point Péter was trying to figure out: even if it is possible to still pay the miner a different way, should the clients have a minimum, and if so, what should that minimum be? I don't know, Péter, does this represent your concern correctly?
Péter Szilágyi 32:20: Yeah, so basically if you want to push this issue to the very limit, then the worst case scenario that can happen is that, let's say, the base fee goes up and there aren't enough transactions to fill a full block, but there are multiple 0-tip transactions, so transactions that pay the base fee but give absolutely 0 to the miner. Now, is the miner expected to include those even though it gets absolutely nothing for it, or is the miner not expected to include those? This is kind of a philosophical decision, because from the miner's perspective, why would it include anything that pays exactly 0? And from an Ethereum network perspective, what's the harm in cramming them in if they fit? That's why it's not really a very clear-cut scenario, and that's why personally I'm leaning towards having a default that's permissive enough to not cause any issues, but not so permissive that we really go to this extreme.
Tomasz 33:39: So I have a few comments, maybe for you directly, Péter. I see some potential negative effects of this if you take some assumptions. Imagine that everybody runs Geth with this setting at the default 1 Gwei, they didn't change it, and there's no MEV. Then it means that practically we moved 1 Gwei of base fee to the priority tip, which simply redirects the burn to the validators. It's not necessarily problematic, but it also doesn't feel like it makes sense. Now assume that 80% are running Geth nodes with this setting and 20% are running other clients as solo validators with zero set: then the Geth nodes are actually decreasing the propagation of the transactions that the other validators consider valid, because the Geth nodes will be discarding those below-1-Gwei transactions from the network, and other validators will have less chance of getting the valid transactions propagated.
Péter Szilágyi 34:43: That is not the case. Way back, that was the case, that Geth's transaction pool was linked to the fee that the miner was enforcing, but currently it is not linked. Currently Geth's transaction pool is basically only enforcing 1 wei to avoid spam; other than that, everything gets propagated independent of what you set your miner to.
Tomasz 35:08: But the problem is, I think, that even if this is not happening, imagine that all the Geth nodes are running with the 1 Gwei default and 100% of the network is Geth. Then there is potentially an attack on the network where you fill the mempool with zero-priority-fee transactions that are not discarded by Geth but are never included by the validator in the block, and you can set them in a way that they seem like the best-paying transactions in the sense of max fee.
Péter Szilágyi 35:49: I guess not, because the best-paying transaction is determined by the priority fee. Geth's transaction pool always orders everything by the priority fee that it actually pays.
Tomasz 36:04: So everything's addressed. Thanks.
Tim Beiko 36:11: Okay, so I guess it seems like people are generally fine with Geth setting the amount that it wants, even if it's not 0. And yeah, I think at this point it probably makes sense to move on to other topics, but are there any last comments on this?
Péter Szilágyi 36:36: I guess my takeaway is that if nobody sees a really hard reason for Geth to set it to 0, then we will probably think about this a bit and either set it to 0.01 or 0.001, basically either a 100th or a 1,000th of a Gwei. I'm not entirely sure which one; the idea is basically to want a good enough default, and then node operators can go wild on top.
Tim Beiko 37:12: Yeah that makes sense.
Péter Szilágyi 37:13: Thank you for your time.
Tim Beiko 37:14: And I think, yeah, maybe one last thing on that: there were these analyses done during 1559 about what the amount should be based on things like the uncle rate. Obviously it's probably much, much lower post-merge, but I assume you can do a similar analysis where you look at how much the incremental gas usage creates a risk of the block basically being uncled, and that might be a way to set the value. Yeah.
Péter Szilágyi 37:44: Well, personally I would pick a much simpler heuristic, basically saying that below some number it just doesn't make sense to have a transaction. I mean, if adding a transaction is going to pay you 0.000000001 cents, then it just doesn't make sense. So I think we'll try to pick a number where economically it's not a rounding error, it's something that's meaningful, and then people can go from there.
Tim Beiko 38:20: Got it. Perfect. Yeah, thanks for bringing this up. Moving on. Because we're already kind of late in the call I won't spend too much time on this, but we did have an interop event with client teams last week. I posted a recap of it on the Ethereum Foundation blog and linked it on the All Core Devs agenda. There was a lot of progress made on Devnet 0, Verkle, PeerDAS and a bunch of other things. We'll discuss those topics specifically throughout this call and others. I think it probably makes more sense to just dive into Pectra directly than discuss interop, unless there's anything else people want to bring up about it. Okay. So next up, on the Pectra front: on Devnet 0 we did launch a first version of it during interop. Yeah, is someone on the devops team here to give a quick update on this?
Paritosh 39:34: Yeah, so we launched Devnet 0 during interop, and I'll link the spec sheet for Devnet 0 in the chat in a minute. We have all the clients joined in, but there have been some changes already proposed for Devnet 1. So we haven't done intensive testing on, for example, 3074, and we should discuss that. Probably we will be doing that async, along with all the changes that will be active from Devnet 0 to Devnet 1, and then do a relaunch once clients are ready. There's one active bug that still needs looking into and fixing, but otherwise the network has in general been stable.
CFI'd & proposed EIPs updates](https://ethereum-magicians.org/t/pectra-network-upgrade-meta-thread/16809)
Tim Beiko 40:17: Awesome, any other comments or thoughts? Okay, so yeah, talking about Devnet 1: it probably makes sense to discuss what we want the scope of Pectra to be and the updates on the different EIPs, and then that will kind of lay the groundwork for Devnet 1. One of the things that did get a bunch of changes during interop was EIP-2935, especially with regards to the BLOCKHASH opcode. It seems like everything has been merged, but were there any outstanding questions or concerns around 2935? Okay, great. So I think that was the only one with significant spec changes that happened during interop; 3074 and 7702 are kind of a whole different topic, so we can discuss that separately. Okay, so on the topic of stuff that's been CFI'd or proposed for the fork, one of the big ones is EOF. There's been an update shared on that as well, basically by the Solidity team, that they would support it, and Ipsilon shared a proof of concept for Solidity to support it. Does anyone on the call have any questions, comments, concerns about EOF, or any specific updates?
Georgios 42:02: Do we have anyone from Solidity here? They said that they had it ready, or rather that they would be supportive of EOF?
Tim Beiko 42:09: Daniel, the Solidity lead, posted in the R&D Discord earlier today that their team was very supportive, but I don't think that he can make it on the call. Yeah, I pasted the message in the agenda, but let me just paste it in the Zoom chat here.
Danno Ferrin 42:28: Yeah, this is Danno Ferrin. Sorry, my internet literally just went down so I missed some of the EOF conversation. But the numbers that the Ipsilon team put up are better than the numbers I saw with what Solidity had for big EOF. I was seeing 3 to 5%; they're seeing 6% code size reduction and call gas usage down by 9%, which is bigger than I was hoping for. So I guess, you know, once again we're asking for EOF to be included in Prague, Bernie Sanders style. Thanks.
Georgios 43:02: Same from the Reth side.
Marius 43:10: I can speak on our progress from the Geth side. We had an issue: basically I implemented a non-optimal algorithm for the stack validation, and I started implementing the new algorithm, but that also means that most of my progress is moot. So I started anew; it's almost implemented, but it fails basically half of the tests. The spec still isn't in a place where I would say it would be easy to implement for someone starting from nothing. I think at this point no other client starts from nothing, so basically everyone has at least something implemented. For us it's still a pretty long way to go to be feature complete. But even then, I'm not super convinced that our implementation is actually correct, or will be correct, because it's such a huge change and has a lot of edge cases. And as I said during interop, I only implemented it; I just took the spec and tried to implement it, I have not really thought about what I'm actually implementing. From the breakout rooms I was in, also the one yesterday I believe, it feels like the Ipsilon team and Charles and I guess Danno have really thought about the implications of changing the opcodes in the way they are changed. But at this point I cannot say whether they missed something or not; I kind of have to believe that they made the right decisions for all of the changes that they are proposing. It kind of depends on when we are trying to schedule Pectra, but from my point of view adding EOF would definitely delay the Pectra fork, which I don't like, but that's just my personal opinion. That's it.
Tim Beiko 46:21: Thanks. Charles? Yeah, I assume this is related to EOF?
Charles 46:27: Yeah, I posted my current position on EOF in the Ethereum Magicians thread. Basically, I've been helping out and contributing to the spec for the last year or two, and I'm supportive of anything that improves EVM execution. There are a couple spots in the spec which I think could still be improved if we had more time. I think the format would be improved if we used variable-length quantities, which I've been going on about for the last few months, because it's more future-proof. I guess the consensus currently is that we don't have enough time to change the format at all, and I just want to make it known that I raised this as a concern, because it will be hard to upgrade EOF to allow larger code in the future, so it could be an issue. The other thing, which is a bit deeper, is that EOF by design doesn't have the same global code section design as EIP-2315, so it necessarily forces the compiler to make some trade-offs; with some recent improvements to the validation and the spec they're not that large, but they are trade-offs, and ideally this could use some more thinking. I also think that EOF has a lot of really nice quality-of-life things. I think Solidity is seeing really good movement partially because EOF has things that are specifically targeted towards Solidity's conventions, actually. And my position is that it would be nice to have a dedicated EOF fork or something, so that there's a little bit more time to finish some of these things. I see Danno's hand up, and I can already hear Danno saying we can't get everything perfect. But I think user safety matters, and I've been pushing a lot on recent ACD calls for either a non-reentrant opcode or improvements to transient storage pricing. These are user safety things that I think should also be in Pectra, and if there's a conflict between these, I would prefer to see EOF go in its own fork so we're able to get transient storage pricing improvements sooner.
EIP 3074/7702: Update EIP-7702: refine based on discussions (EIPs#8561)
Tim Beiko 49:36: Thanks. Yeah, Danno, I know you have your hand up, but I do want to make sure we get through the rest, and assuming we move forward with EOF we'll discuss the specifics later. The other big one that's been heavily iterated on in the past couple weeks is EIP-7702, which is meant as a replacement for 3074. So 3074 is still sort of what's officially included in the fork, but it does seem like there's been a growing consensus around 7702, from both the people who supported 3074 and the people who had issues with it. Yeah, Ansgar, Vitalik, Lightclient, the three of you have been working on this; I don't know if any of you can give an update on where things are at, and we can take it from there.
Ansgar 50:30: Yeah, I can give a quick update. Basically there have been some discussions over the last while, at interop and in different other places on the internet as well, and there's been some iteration on the EIP. I think right now Lightclient has a PR pending with updates to the EIP; I can link that in chat in a second, here we go. So I can briefly go over the current variant of the EIP. You would specify the address of the implementation you want to use for your account in the transaction body, not the code itself, and you would also sign over that address. It would not run any init code at the beginning of the transaction, it would just copy the code at that address over. Chain ID would be possible to specify but optional; nonce would be possible to specify but optional. There would be no opcode restrictions, including no restrictions on SELFDESTRUCT and no restrictions on storage writes, meaning that storage would also be persistent after the end of the transaction. And for now there would be no permanent upgrade capability, so no flag or anything from the start, at least, to have the code persist in the EOA after the end of the transaction, although of course the idea would be to add that in a later fork. Some of these are pretty uncontroversial; on some others there's still pretty active conversation, and I'm not sure to what extent it makes sense to have that conversation live here on All Core Devs. But, for example, from the 4337 team I've seen people be quite vocal about saying that we should have chain ID commitments be mandatory instead of optional. And I also remember having some conversations with the Erigon team specifically around them preferring a more explicit split between versions that are revocable, basically committing to nonce and chain ID, versus the ones that are not. So we could have two completely separate signature types: one for a version where the nonce and chain ID are mandatory, and one version that is completely without nonce and chain ID. That way we could have a cleaner split. So these are some of the questions that are still a little bit in flux, but overall we are converging more and more on a specific version of the EIP. Yeah, that's kind of my take.
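To make the shape of what Ansgar describes concrete, here is an illustrative sketch of such an authorization with optional chain ID and nonce commitments; the field names and structure are placeholders for the draft under discussion, not the finalized EIP-7702 encoding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Authorization:
    """Illustrative shape of a 7702-style authorization as discussed:
    the signer designates an implementation address whose code is used
    for their EOA, rather than supplying init code."""
    address: str                 # implementation whose code is copied into the EOA
    chain_id: Optional[int]      # optional: None (or 0) = valid on any chain
    nonce: Optional[int]         # optional: None = not bound to the account nonce
    signature: bytes             # signature over the fields above

def is_valid_for(auth: Authorization, current_chain_id: int, account_nonce: int) -> bool:
    """Check only the optional commitments; signature recovery is omitted here."""
    if auth.chain_id is not None and auth.chain_id != current_chain_id:
        return False
    if auth.nonce is not None and auth.nonce != account_nonce:
        return False
    return True

auth = Authorization(address="0x" + "11" * 20, chain_id=None, nonce=None,
                     signature=b"\x00" * 65)
print(is_valid_for(auth, current_chain_id=1, account_nonce=7))  # True: no chain/nonce binding
```

An authorization that commits to both chain ID and nonce is invalidated when the account's nonce moves, which is what makes it revocable; one that commits to neither stays valid indefinitely, which is the revocability question debated below.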
Tim Beiko 53:13: Thank you. Anyone else have thoughts? Oh, Andrew?
Andrew 53:26: Yeah, that's what we discussed at the interop. There are probably several scenarios for how this EIP can be used. If it's a milestone towards permanently migrating to a smart contract, then it makes sense to have non-revocable signatures. But this EIP can also be used for more ephemeral things like batch transactions or sponsored transactions, and in that scenario you want to play it safe and have revocability. So yes, as we discussed with Ansgar, it would be good to have two types of signatures: one very generic, without nonce and chain ID, and the other with a chain ID and nonce, to separate those two use cases. And I think I would replace 3074 with 7702 in Pectra, but I would only CFI it until we agree on the revocability design, because to my mind that's very important, and I don't want to commit to this EIP 100% until the revocability question is resolved.
Tim Beiko 54:58: Thanks. Ahmad?
Ahmad 55:01: Yeah, just to clarify, when he said two versions of signatures, because Marius asked in chat: this means that there will be two magics for this type of message and message signature. One magic will mean that it has a nonce, because the nonce cannot start with zero, whereas the other magic will have no nonce. So one magic with nonce, one magic without nonce. If someone wants to sign something without a nonce they can do that, and if they want to specify the nonce they can do that as well. With chain ID, we decided to go with zero for not specifying a chain ID and a number for specifying a chain ID. This gives maximum flexibility to developers to implement multiple patterns at the same time, allowing users to have all the security guarantees that they would like to have on their accounts. Yeah.
Tim Beiko 56:11: Thanks. Daniel?
Daniel 56:15: Yeah, I just wanted to add something to what Andrew said regarding revocability. I initially was also very much against not having revocability in the EIP, but after some feedback from the MetaMask team, it seems they feel confident that they can still manage the security of user funds without it. They, for example, are talking with other wallet providers to implement some sort of permissioning system; I think they also want to do something like adding modular permissions, I think similar to 4337. So before we add maybe more complexity to the EIP by adding more functionality, we should also seek feedback from the wallet teams on whether they think that's a good idea or not, or if this is something that they will use. Because the worst case would be that we add complexity to the protocol and afterwards nobody uses it anyway. So I think we should do this first before we make a decision on our own.
Tim Beiko 57:28: Got it, thanks. Let's do Vitalik and then Andrew again.
Vitalik 57:33: Yeah, I mean, I think one important consideration to keep in mind is that a lot of the revocability and chain ID considerations regarding 7702 are extremely similar to the ones for 3074, which makes sense, because the whole design rationale for 7702 is to basically be something that provides functionality parallel to 3074, but in a more smart-contract-forward-compatible way. And so, given that, I think it's just valuable to remember that there has been existing discussion about these trade-offs with very similar reasoning, and the switch of EIP that we're pushing is not a reason to restart that discussion from zero; there's value in respecting the things that have already been discussed over the last couple of years.
Tim Beiko 58:34: Yes, Andrew, do you have another comment, or is your hand still up from before?
Andrew 58:39: Yes, I do. So I totally agree that we should check with wallet providers that it's feasible for them and that they're happy with it. But I am a bit worried about deciding to rely exclusively on wallet providers for revocability or whitelisting or whatnot. My preference is to have something built into the protocol instead of relying only on wallet providers.
Tim Beiko 59:24: And can someone please remind me what we did for 3074? Like, did we not have this trust assumption? Oh right, okay, the nonce increment. Got it, thanks. Derek?
Derek Chiang 59:47: Yeah, so I think in 3074 the last finalized version says that every time you increment a nonce you'll be revoking all outstanding auth messages, right. But I do want to remind everyone that in our last call, and also in all the conversations that have happened since, I think all the stakeholders, all the relevant parties, whether it's the core devs or the 4337 authors or the wallet teams such as MetaMask, as well as the 4337 account builders such as myself, are aligned that it's very important to support the longstanding authorization use case. It's very important to make sure that it's not the case that every time the user makes a transaction, all outstanding authorizations are revoked. I think everyone has already aligned on that, whether in the last call or in the subsequent online conversations, because some of the most powerful use cases of account abstraction, whether it's adding new signers to your accounts, or things like transaction delegation, permissions, session keys, all depend on having a way to keep authorizations alive despite the user sending transactions. So I think the only thing we are not aligned on right now is the mechanism that enables that. There are proposals for things like introducing a max-nonce mechanism, or there's the proposal that I think Matt made in the latest update about basically just making the nonce optional. Personally I'm fine with whatever, but I just want to point out again that all the relevant stakeholders have already expressed that it's very important to enable the longstanding authorization use cases.
Tim Beiko 1:01:53: Yeah thank you. Let's do Vitalik and then Eric.
Vitalik 1:01:58: Yeah, I mean, I think this is starting to get a bit zoomed in, but one thing that's important to also think about is that there is a particular way of using 7702 that some people have in mind for some of these longstanding things, which is basically that wallets only ever sign one message, where that one signed message is permitting a standardized proxy, and then the intention is for any use case that's not literally ephemeral to just be built on top of that, and the proxy would have its own upgrading mechanism. So I think one of the side questions worth discussing is to what extent there is alignment on that approach, right, because if there is, then making it completely unrevocable becomes something that's much safer, but if not, then there's more discussion. But I think in general this is definitely starting to get into some of the rabbit hole areas, and so, as a meta point, realistically this stuff is going to get resolved through one or two breakouts rather than by getting everyone to agree on things in the next five minutes of this ACDE.
Tim Beiko 1:03:30: Yeah, I think I would agree with that. I guess one thing is we should schedule a breakout to get this resolved; it is worth highlighting that if we can't get to a solution, then the status quo outcome is that we don't ship anything. So I guess one thing to figure out is just when we want to have the breakout. I know there are some in-person meetings happening in the next week or so on this stuff; if someone who has context on the whole 7702 discussion can propose some appropriate times for a breakout on Discord, that would be great. We should ideally have a final spec before the next All Core Devs; I think if we don't, then it becomes very hard to actually include this in the fork. And there are two comments in the chat now about this: it seems like everyone would be in favor of swapping 3074 for 7702 assuming we have a final spec. I don't know if we're finalized enough to include 7702 in implementations now, but does anyone oppose removing 3074, CFI'ing 7702, and then on the next call we can potentially include 7702 if we have a spec? Yeah, I guess that's Ansgar's point. Okay, so people would rather not remove it until we switch over. And I think it was Erigon who raised the concerns with the revocation; Erigon, on your end, would you prefer to include this and remove it at a future date if we don't find a final spec, so that we can actually get started on the implementations?
Andrew 1:05:34: Yeah, from my point of view it's fine to CFI it, start working on it, and include it in the next devnet, but yeah, if we dramatically fail to find a solution, then later we should exclude it.
Tim Beiko 1:05:55: Okay, so I think then what this would look like is we CFI 7702, include it in Devnet 1, remove 3074, and assume for now that we can figure out the issues with 7702. Okay, and there's actually a final proposal by Justin, which is to keep both right now. Is it simpler for client teams to keep both in the next devnet, or is it simpler to just do 7702?
Georgios 1:06:36: Can we discuss this async?
Tim Beiko 1:06:38: Yeah okay. Well I guess yes let's move forward with 7702.
Marius 1:06:43: So yeah, just one small thing: we didn't actually test 3074 on Devnet 0 because Pari forbade me to do it, because we were so afraid that the devnet would completely break. So I would propose removing 3074, so we can just go wild with the fuzzer and test.
Tim Beiko 1:07:08: Okay, so let's do that, let's remove 3074. Oh sorry, were you going to say something else? Okay: remove 3074, add 7702, have that be part of the next devnet, and basically have a breakout in the next two weeks to iron out the spec issues. And I think, Ansgar, to your last comment, and based on the Erigon points, it's probably simpler to CFI 7702 and then include it once the spec issues are finished, but we also have it as part of Devnet 1. That feels like the best compromise to move forward and actually get the work done, and also make sure we have a final spec that people are happy with before we commit to it further. Anything else on this before we move forward? Okay, thanks everyone for all the thoughts and comments here. So please, Ansgar, Lightclient, or someone else, figure out the time to schedule the 7702 breakout based on the in-person meetings, but we should aim to have all this wrapped up before the next All Core Devs. Okay, and then there are three other EIPs that had updates that are a bit shorter, so if we can get through those in like five minutes, hopefully we can then spend some time discussing the fork scoping. So, 7623: Tony, are you on the call?
Tony 1:08:58: Yes I am. I will be very quick. So for 7623, the latest update included reducing the calldata cost floor from 12/48 to 10/40; this would mean that even fewer users are impacted. Talking with people, it feels like there is some demand to ship that in Pectra. Maybe we can do that already in the next devnet because it's a rather small change. There were some discussions on Eth Magicians; Martin from Geth also voiced a concern about this weird market that it might create, but we could resolve those concerns and Martin is now also in favor of the EIP.
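As a sketch of the EIP-7623 mechanism Tony refers to, assuming the 10/40 per zero/nonzero calldata byte floor from the update (the exact constants and formula are whatever the EIP text specifies, so treat these numbers as placeholders from the discussion):

```python
# Hypothetical constants reflecting the values discussed on the call;
# the EIP text is authoritative and these may change.
TX_BASE_COST = 21_000
STANDARD_ZERO_BYTE_COST = 4
STANDARD_NONZERO_BYTE_COST = 16
FLOOR_ZERO_BYTE_COST = 10
FLOOR_NONZERO_BYTE_COST = 40

def tx_gas_used(calldata: bytes, execution_gas: int) -> int:
    """Sketch of an EIP-7623-style calldata floor: data-heavy transactions pay
    at least the floor price for their calldata, while transactions that spend
    comparable gas on execution are unaffected."""
    zeros = calldata.count(0)
    nonzeros = len(calldata) - zeros
    standard = zeros * STANDARD_ZERO_BYTE_COST + nonzeros * STANDARD_NONZERO_BYTE_COST
    floor = zeros * FLOOR_ZERO_BYTE_COST + nonzeros * FLOOR_NONZERO_BYTE_COST
    return TX_BASE_COST + max(standard + execution_gas, floor)

# A pure data-carrying transaction hits the floor; a compute-heavy one does not.
print(tx_gas_used(b"\x01" * 1000, execution_gas=0))        # floor path applies
print(tx_gas_used(b"\x01" * 1000, execution_gas=500_000))  # standard path applies
```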
Tim Beiko 1:09:40: Awesome, thanks. Any questions? Otherwise we move on to the next one. Okay, the other one is 7212, the RIP/EIP with the R1 precompile. I forget who was supposed to give an update on this?
Ulas 1:10:01: Hello, I will give the update and I will be very quick. I wanted to remind everyone of the current situation with the RIP. As you remember, the EIP first moved to RIP status and was implemented by some of the rollups, and it has been CFI'd at ACD before. I will talk about the rollups that have implemented it and also about the current client implementations; I'm also posting links in the chat. Recently Polygon has gone to mainnet with this precompiled contract; it is on testnet and will soon be on mainnet for Base; and in Optimism and the OP Stack chains, including Base, and in Kakarot zkEVM, it has been merged into their repos and is going through governance to implement the proposal in the next hard fork. On the client implementation side, the Geth, Erigon, Besu and Reth implementations are ready: Polygon and the other optimistic rollups are using the reference implementation, Erigon is following its own implementation which is also used on Polygon, I have just seen that Besu has a draft PR about it and I think it's also ready, and the Reth implementation has been worked on through the Ethereum fellowship and I guess it's also ready. I have also just seen that the Nethermind team is currently working on this EIP, and I'm not aware of any implementation in EthereumJS. That's all I want to mention for now, and if there are any questions about the current status or CFI status I would love to answer them.
Tim Beiko 1:12:03: Okay, thank you. And I guess the last one was SSZ. I don't think Etan is on the call, but he posted an update on the agenda; I don't know if anyone has any questions or comments about it. Yeah, Cayman?
Fork scope & timing Reth PoV
Cayman 1:12:36: I can, yeah. I think this makes sense to discuss more in the context of increasing or not increasing the scope of the fork, so I think we should delay the discussion until then.
Tim Beiko 1:12:50: Yeah, that is the next thing on the agenda. To start this: I know Reth put together a document with a bunch of different options, and the devops team did as well. So maybe Reth, you want to start, and then we'll hear from devops and go from there.
Georgios Konstantopoulos 1:13:12: Yep, so I'll share the doc in the Zoom chat in case people haven't read it; I would appreciate it if people had a chance to read it (https://docs.google.com/document/d/1IfOnozIhp93qkqZ7Jt-jVUvcdRDAv7HOdP4lnsOpu74/edit) before the next time we have such a conversation. I'm trying to answer three simple questions: (a) when do we ship Prague; (b) if we ship Prague later, should we extend the scope; and (c) if we choose to extend the scope, what should we include? I think the first topic I would like to discuss is: should we ship it in 2024 or should we ship it in Q1 2025? In my experience, people haven't shipped hard forks in November and December in particular, because in December people go on vacation, and this time because people will be at Devcon, and my understanding was that people don't want to be doing work while they're at Devcon. I don't know, I'm just raising that. I know that we ship when ready, that's definitely how we ship, so if people are down for that, sure, but just bear that in mind; my understanding is that indeed people don't want to be fixing a consensus issue at Devcon. Beyond that, my thinking is that if we were not to ship it in 2024, basically if there is any kind of scope change allowed, there's room for one very, very important thing, and that is PeerDAS. I'm not a CL developer and we don't run a CL team, but we do know that demand is arriving and we do know that the EL and CL updates are coupled as EIP-4844 is defined right now. What does this mean? It means that there's no room for a CL-only hard fork from the EL perspective, because the EL has a constant that sets the blob count. So if you want to do a blob count increase, you need to do it on the CL and the EL, and that's a problem. The final thing on why I think PeerDAS is important here, versus just increasing the blob count, and somebody from the CL should confirm this, is that due to how 4844 works and how blobs and the block proposal and attestation are coupled, there are already high bandwidth requirements, so there's some blocking that happens during the CL process, and even if you increased blobs you wouldn't be able to have a solo staker or a low-resource node operator not miss slots. So it seems like a problem in any case to increase blobs, and it seems worth including PeerDAS to get around that. And to the extent that we do that for the CL, it gives us plenty of time to also do more on the EL, and in that case there's the question of what should go in then, and we would pitch the scope to include EOF, which we think deserves a whole conversation of its own, and we wish we had time for it. So the TLDR is: let's ship Prague in Q1; on the CL side let's do PeerDAS; on the EL side let's do EOF, plus whatever else we've talked about like 7702. And I would love to also hear the devops take on this. From our point of view, "small fork" is kind of a meme; I don't know that this has ever happened properly, so it seems worth being present to that and not saying it is something we could do; it seems worth being very explicit about what our expectations are here.
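For context on the EL/CL coupling Georgios mentions, the blob limit is derived from execution-layer constants defined in EIP-4844, which is why a blob count increase cannot be a CL-only change; a small illustration:

```python
# Constants as defined in EIP-4844 (execution-layer parameters at the
# time of the call); names follow the EIP, values shown for illustration.
GAS_PER_BLOB = 2**17                  # 131072
TARGET_BLOB_GAS_PER_BLOCK = 393_216   # 3 blobs
MAX_BLOB_GAS_PER_BLOCK = 786_432      # 6 blobs

def max_blobs_per_block() -> int:
    # Because this limit is an EL constant used in block validation,
    # raising the blob count cannot be done with a CL-only fork.
    return MAX_BLOB_GAS_PER_BLOCK // GAS_PER_BLOB

print(max_blobs_per_block())  # 6
```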
Tim Beiko 1:17:09: Thank you. And before we hear from the devops team, I will plus-one the small fork comment. There are a couple chat comments saying we can do just a quick fork with PeerDAS and EOF before Verkle or Osaka; then we should just call that a fork and assume that we've agreed to do more work and separate it into a separate fork. And if it ships quicker, great, but it's probably a bad process to assume we would only do this work if it's a quote-unquote quick fork. Yeah, Barnabas or Pari from the devops side?
Paritosh 1:17:47: Yeah, I can go. So we've also posted our thoughts on what the fork could look like, and I think one of the differences is that we just wanted to define all of the options that we thought were viable and then, towards the end, specify which option we prefer. But I also want to point out that we as a team could not even come to consensus on which option we prefer, so take that as you will. I'll just start with a bit of the thoughts and concerns that we had, as well as counterpoints to them. The first one is that right now we've been looking at most EIPs as just testing the single EIP; we haven't necessarily looked into testing the edge cases of one EIP in interaction with the edge cases of another EIP. A simple example: we're changing how deposits work, but we're also changing what a validator is with MaxEB, so we're going to have to stress test both of those edge cases in combination and not just in isolation. That, I imagine, would increase the scope of testing, but one of the counterpoints that Pava made is that irrespective of whether we ship it together or ship it separately, we're still going to be doing edge case testing, so it doesn't make a difference if it's in one fork or over two forks. Me personally, I would prefer it in two forks, purely to reduce the risk, but I think that comes back to the mega fork versus smaller fork discussion again. Another point we wanted to bring up is that in all of the discussion so far we haven't included EIP-4444. I don't know what the status of that is, whether it's happening fully out of protocol or whether some parts are in protocol and some out of protocol. And I think we also should be aware of how many research efforts we have that we want to ship: if there's a subset of the team working on 4444, a subset working on Verkle, a subset working on PeerDAS, that's a lot of subsets for small teams. Yeah, and like Ansgar said, we just wanted to put all the options out there and see what people thought, and then use it as a starting point for a discussion.
Tim Beiko 1:19:57: Yeah, thank you, this is really helpful. Do any other teams, aside from devops and Reth, have thoughts on how we should approach this?
Georgios 1:20:10: From the Reth side we generally defer to the devops team as the testers here. Yeah, I think we've shared our view in writing.
Draganrakita 1:20:23: Yeah, in general I would say EOF as it stands now could be shipped in 2024, because it took just two months to implement. I know there is a lot of testing that needs to be done, and some tests are already written, but the EVM is a component that's well defined; we already have testing suites and frameworks that we can reuse. So I think the framing that EOF is a big change and will take like a year to implement is not correct, because I heard some discussions along those lines.
Tim Beiko 1:21:00: So I don't know if it's like a year, but there were some comments guessing it would not ship in 2024. And I think, yeah, obviously we have many clients and we need all of them to ship the fork. So we should assume that if we do EOF then we are probably delaying past 2024. It's great if we're quicker than expected and we ship in 2024, but usually we're actually slower than we expect with these things. Andrew?
Andrew 1:21:34: Yeah, so I think if we include EOF in Pectra it will realistically delay it by maybe three months. But I'd like to say that it's better if we decide either on this call or the next one, because we need to plan our Pectra work. So we shouldn't prolong the decision indefinitely.
Tim Beiko 1:22:11: Yeah thanks.
Draganrakita 1:22:14: Just to add for the discussion: both Geth and Erigon don't have implementations, so it's just an estimation that EOF is going to push Prague to 2025. It does increase the scope, but in my opinion it's not that big.
Tim Beiko 1:22:39: Got it. Let's do Jacek and then Guillaume.
Arnetheduck 1:22:50: Yeah, I just wanted to mention two things. One is about the stable container: basically, the way we were thinking about it in Kenya was that we could introduce it for the things that are already changing. This would be good for users because then they only need to perform one smart contract upgrade, and in the future their proofs and everything are forwards compatible. So that's kind of a stable-container-light way of including it in Pectra. For 4444s, the way I think about it at least is that a lot of it we can ship without a hard fork, including experimental testnets and things like this. So I think it's worthwhile to start working on it now, ship the parts that are shippable without a hard fork, and then when it's stable we can formally include it, basically with user documentation saying that nodes on the network will no longer be holding block history.
Tim Beiko 1:24:07: Got it. Yeah Guillaume?
Guillaume 1:24:13: Yeah, that's actually not a bad idea. On EOF, I do think, like the rest of the Geth team, that it is going to push stuff to 2025. Traditionally we have done the testing, but I would say that's not a reason per se to stop EOF. The question I had, because of something you said, Tim: did you just say that you were already considering moving Osaka, sorry, Verkle, to yet another fork? That's something that wasn't clear to me.
Tim Beiko 1:24:45: What I said is that if the decision is that there should be a fork after Pectra with PeerDAS and EOF, we shouldn't call it Pectra 2 and lie to ourselves that it's a quick fork we're going to be able to ship in two months. We should assume that it's going to be another full fork, and the consequence will be delaying Verkle, yes. I think this is the failure mode we get into when there's more stuff we want to do than we can actually do: we try to squeeze things in between two forks, but effectively they are two real forks, and the naming should reflect that rather than making us feel better that it's "only Pectra 2" and not actually Osaka; in practice it's another fork.
Guillaume 1:25:31: But from what I understand from Pari's proposition, proposal sorry, he's talking about a CL-only fork, which could perfectly well cohabit with Verkle, and so whatever doesn't make the cut in Prague should make the cut in Amsterdam instead.
Tim Beiko 1:25:51: Sure, yeah. I guess my point is much more that if we ship something on the EL between now and the next fork, we should just call that the next fork. It doesn't really matter what it is; we just shouldn't assume that we can sneak in mini forks between forks, because that's realistically not going to happen. Yeah, Andrew and then Alex.
Andrew 1:26:17: Yeah, I think between Verkle and EOF, Verkle is more important, and we already committed to shipping Verkle in Osaka. So if we don't include EOF in Pectra, I don't think we should have a kind of Pectra 2 fork; we should instead postpone EOF to Amsterdam.
Tim Beiko 1:26:40: Yeah, thanks. Alex?
Alex Beregsz 1:26:45: Yeah, sorry if the mic is a bit noisy. It's kind of connected to what everybody said. I kind of feel like this group on All Core Devs has maybe a tiny bit less visibility on the application layer and what kind of benefits people using the EVM could feel with something like EOF. Obviously all the discussions here are extremely important and tackle different parts, but I kind of feel like the developer experience side is less understood by this group. And I do have a question, because I kind of feel like what Andrew said: if EOF weren't included in Pectra now, I don't really see that it would ever be included, because then another fork with EOF wouldn't be just a one, two, or three month delay to Pectra, it would be a full fork with however many months that takes. In that case it would be extremely delayed, and because of that I kind of feel like this is the only opportunity to really introduce EOF, maybe at the cost of one, two, or three months of delay. I don't think it would be introduced otherwise. And I do feel that we have really good momentum right now with all the implementations, so it feels like this is the correct time to do it. I think that's my summary.
Tim Beiko 1:28:20: Thanks. I do want to make sure we keep at least a couple minutes for Piper, so we can do Guillaume and then Marius, then wrap this up and leave a few minutes to at least talk about history expiry.
Guillaume 1:28:37: Yeah, so I'll be quick. I think it doesn't matter when you do EOF. Either EOF is important and adds value, which I think it does; the Reth team, for example, made a fairly compelling point in Kenya about what they want to achieve with it. So I don't think that not scheduling EOF in Prague kills EOF, because if that were the case then EOF is pointless, which I'm convinced it's not.
Tim Beiko 1:29:03: Thanks. Marius?
Marius 1:29:05: I wanted to say exactly the same thing: if we think this is the only time EOF can ever happen, then it's not a good feature and we should just drop it. There's no point in shipping something if we're not convinced that we could also ship it six months later and that it would still be important. So do with that whatever you want, but I also think EOF is a good feature and it should be shipped at some point. I do like the thesis of doing it on L2s first; they can do it right now because they have the code, because we are writing the code for them. So I don't really see the argument that L2s cannot build this, because we basically built it for them already; they could roll it out if they wanted to.
Tim Beiko 1:30:05: Okay, yeah, it doesn't seem like we're going to resolve all of this in the next two minutes. As I said in the chat, PeerDAS feels like the most significant bit here, and it'd be valuable to discuss on the CL call next week how we want to prioritize PeerDAS and/or just a normal blob increase, and then structure everything else based on that. Assuming we do that, it would be really valuable for teams to think through the different options the devops team put out, and potentially other options, and what they want to prioritize, so that on next week's CL call we can make a decision about PeerDAS, blob counts, and whatnot, and then use that to inform what we do on the EL side. It does feel like there isn't the same level of urgency on anything EL-related as there is on, potentially, the blob count. And yeah, Dragan, if you can put your comments in the chat, just because I want to make sure we leave at least a couple minutes for the history expiry stuff. TLDR: teams think about this over the next week, we discuss it on the CL call next week, specifically related to PeerDAS and what that implies, and then two weeks from now on ACDE we make a final call about the scope of Pectra on the EL side. In the meantime teams can obviously work on Devnet 1, which would include the same EIPs as we had, modulo the 3074/7702 switch. Yeah, Piper?
Piper Merriam 1:31:55: Everybody, how's my mic? I've been playing catch-up this last week since I wasn't there for the breakout on 4444s and history expiry. I've been looking over all the notes and documents, trying to find out what happened there and what EL teams are interested in; we're excited to collaborate with you guys on this. In the notes and things that I've seen there's a lot of discussion of a "minimal portal spec", and I'm curious if anybody has anything specific that they've identified in our spec that they want to cut out. I haven't gotten anything specific yet, and if there isn't anything specific, I'm wondering if we can stop using that terminology; I'm pretty sure that our history specification does exactly what you guys want it to do. My questions, I guess, are what EL teams need from us to move forward with this. We're definitely here and ready to answer questions, ready to show EL teams around, that sort of thing. There are documents posted in the history expiry channel on the R&D Discord; I encourage you to take a look at those if you're on an EL team and you're going to be heading up implementation of this. I encourage you to come say hello to our client teams, introduce yourself, and let us know what team you're on or what client you work on. We're also having an in-person event next week in Prague, May 28th through May 30th, and if you are going to be diving into some of this stuff and you're able to make it, it would be a great way to onboard fast; if you want to come, please DM me on Telegram or wherever and I can get you added. And I know we're mostly out of time here, but if there are any questions or anything around this that anybody wants to raise, I'm here for that.
Tim Beiko 1:34:12: Thank you. Yeah, we are already a minute past, but Jacek and then Andrew, we can do your comments and then wrap up.
Arnetheduck 1:34:24: Yeah, so when we talked about minimal, we talked about the block history and not so much the other components, like state. So that's what minimal refers to, block history?
Piper Merriam 1:34:40: So we already do this in our specs, and there's nothing that actually needs to be cut or changed for clients to be able to implement only history. That is supported in the existing network and client designs.
Arnetheduck 1:35:01: Yeah, and the discussion was mainly around the light client protocol as well, whether that is a dependency or not. It's not really, from an execution point of view; it matters more if you're running a full Portal node that doesn't have access to any other consensus mechanism.
Tim Beiko 1:35:22: Thanks. Andrew?
Andrew 1:35:26: Yeah, I just had the same comment: we want some kind of mini Portal spec for block history only, because as I understood Vitalik's worry, we don't want to rely exclusively on archive nodes for block history. So, as far as I understood, the idea is that non-archive nodes commit to storing a certain chunk of block history via the mini Portal spec.
Piper 1:36:00: Yeah, our spec is history data; the history network is history data only. There is no minification that needs to happen here. We don't do transaction hash lookups in that network, it is strictly block history, and to the best of my knowledge it does exactly what you guys want it to do. So we can keep talking about it, but on the concept of a minified version of the spec, I'm wondering if we can drop that language unless somebody has identified something specific in our specs that's extra. Looking forward to answering more questions; we're here and ready for it. We've got a network ready for you guys.
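For context on what Piper means by the history network being strictly block history: in the public portal-network-specs, headers, bodies, and receipts are addressed by content keys built from the block hash, and located in the DHT by hashing that key. The sketch below is a simplified, assumption-level illustration of that addressing scheme, not a verbatim copy of the spec; the selector values and SSZ framing are simplified here and should be checked against the spec itself.

```python
import hashlib

# Rough sketch of Portal history-network content addressing, based on the
# public portal-network-specs. Selector values and encoding details are
# simplified for illustration; the spec is authoritative.

HEADER_BY_HASH = 0x00   # block header, keyed by block hash
BLOCK_BODY = 0x01       # block body (transactions, uncles)
RECEIPTS = 0x02         # transaction receipts

def content_key(selector: int, block_hash: bytes) -> bytes:
    # A content key is a one-byte selector followed by the 32-byte block
    # hash the content belongs to. Note there is no "transaction hash ->
    # block" lookup content type: the network serves block history only.
    assert len(block_hash) == 32
    return bytes([selector]) + block_hash

def content_id(key: bytes) -> int:
    # Content ids place data in the DHT keyspace; nodes store content whose
    # id falls within the radius they advertise around their own node id.
    return int.from_bytes(hashlib.sha256(key).digest(), "big")

# Example: locate the receipts for a (hypothetical) block hash.
example_hash = bytes(32)
key = content_key(RECEIPTS, example_hash)
print(hex(content_id(key)))
```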
Marius 1:36:42: So we already kind of have a networking layer, and as far as I understand, Portal is built on a different networking stack. Do we need this different networking stack to be implemented in our nodes in order to join, or would it make sense for Portal to adopt the current networking stack that we have?
Piper 1:37:09: Portal does not work on devp2p; you need a DHT for this, and that is why we've built on the networking stack that we have. Building it on a fundamentally different network stack is going to require somebody to come in and solve a whole bunch of problems. So if somebody else really wants to do that work they can, but that isn't Portal.
Tim Beiko 1:37:34: Thanks. Okay, I think this is a good spot to wrap up; we're already a few minutes over time. Thank you, Piper, for coming on, and thanks everybody for all the conversation. Again, I'll emphasize: please try to think through the scoping for the fork in the next week so that we can have a good discussion on PeerDAS and everything else on the CL call, and then finalize the scope on the EL side on the next ACDE. Yeah, that's it for today. Thanks everybody.
- Tim Beiko
- Pooja Ranjan
- Kevaundray
- Mathew Smith
- Anders Holmbjerg
- Ignacio
- Ahmad Mazem Bitar
- Piper Merriam
- Trent
- Terence
- Justin Florentine
- Lightclient
- Enrico Del Fante
- Mikhail Kalinin
- Saulius Grigaitis
- Matt Nelson
- Cayman
- Guillaume
- Draganrakita
- Dankrad Feist
- Ben Edgington
- Justin Traglia
- Francesco
- Peter
- Dan Cline
- Damian
- Mehdi Aouadi
- Matthias
- Nflaig
- Ansgar Dietrichs
- Mario Vega
- Julian Rachman
- Paritosh
- Stokes
- Roman
- Hadrien Croubois
- James He
- Marek
- Spencer Tb
- Sina Mahmoodi
- Gajinder
- Marius
- Lukasz Rozmej
- Georgios Konstantopoulos
- Andrew Ashikhmin
- Ben Adams
- Fabio Di Fabio
- Charles C
- Antony Denyer
- Radoslaw Zagorowicz
- Gary Schulte
- Danno Ferrin
- Carl Beekhuizen
- Peter Szilagyi
- Daniel Lehmer
- Tomasz Stanczak
- Vitalik