diff --git a/README.md b/README.md index ad112fcdd4..b21cf36e17 100644 --- a/README.md +++ b/README.md @@ -1,318 +1,35 @@ -## BNB Smart Chain +[bsc readme](README.original.md) -The goal of BNB Smart Chain is to bring programmability and interoperability to BNB Beacon Chain. In order to embrace the existing popular community and advanced technology, it will bring huge benefits by staying compatible with all the existing smart contracts on Ethereum and Ethereum tooling. And to achieve that, the easiest solution is to develop based on go-ethereum fork, as we respect the great work of Ethereum very much. +# BSC Builder -BNB Smart Chain starts its development based on go-ethereum fork. So you may see many toolings, binaries and also docs are based on Ethereum ones, such as the name “geth”. +This project implements the BEP-322: Builder API Specification for BNB Smart Chain. -[![API Reference]( -https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667 -)](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc) -[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/z2VpC455eU) +This project represents a minimal implementation of the protocol and is provided as is. We make no guarantees regarding its functionality or security. -But from that baseline of EVM compatible, BNB Smart Chain introduces a system of 21 validators with Proof of Staked Authority (PoSA) consensus that can support short block time and lower fees. The most bonded validator candidates of staking will become validators and produce blocks. The double-sign detection and other slashing logic guarantee security, stability, and chain finality. +See also: https://github.com/bnb-chain/BEPs/pull/322 -Cross-chain transfer and other communication are possible due to native support of interoperability. Relayers and on-chain contracts are developed to support that. BNB Beacon Chain DEX remains a liquid venue of the exchange of assets on both chains. This dual-chain architecture will be ideal for users to take advantage of the fast trading on one side and build their decentralized apps on the other side. **The BNB Smart Chain** will be: +# Usage -- **A self-sovereign blockchain**: Provides security and safety with elected validators. -- **EVM-compatible**: Supports all the existing Ethereum tooling along with faster finality and cheaper transaction fees. -- **Interoperable**: Comes with efficient native dual chain communication; Optimized for scaling high-performance dApps that require fast and smooth user experience. -- **Distributed with on-chain governance**: Proof of Staked Authority brings in decentralization and community participants. As the native token, BNB will serve as both the gas of smart contract execution and tokens for staking. +Builder-related settings are configured in the `config.toml` file. The following is an example of a `config.toml` file: -More details in [White Paper](https://www.bnbchain.org/en#smartChain). - -## Key features - -### Proof of Staked Authority -Although Proof-of-Work (PoW) has been approved as a practical mechanism to implement a decentralized network, it is not friendly to the environment and also requires a large size of participants to maintain the security. - -Proof-of-Authority(PoA) provides some defense to 51% attack, with improved efficiency and tolerance to certain levels of Byzantine players (malicious or hacked). 
-Meanwhile, the PoA protocol is most criticized for being not as decentralized as PoW, as the validators, i.e. the nodes that take turns to produce blocks, have all the authorities and are prone to corruption and security attacks. - -Other blockchains, such as EOS and Cosmos both, introduce different types of Deputy Proof of Stake (DPoS) to allow the token holders to vote and elect the validator set. It increases the decentralization and favors community governance. - -To combine DPoS and PoA for consensus, BNB Smart Chain implement a novel consensus engine called Parlia that: - -1. Blocks are produced by a limited set of validators. -2. Validators take turns to produce blocks in a PoA manner, similar to Ethereum's Clique consensus engine. -3. Validator set are elected in and out based on a staking based governance on BNB Beacon Chain. -4. The validator set change is relayed via a cross-chain communication mechanism. -5. Parlia consensus engine will interact with a set of [system contracts](https://docs.bnbchain.org/docs/learn/system-contract) to achieve liveness slash, revenue distributing and validator set renewing func. - - -### Light Client of BNB Beacon Chain - -To achieve the cross-chain communication from BNB Beacon Chain to BNB Smart Chain, need introduce a on-chain light client verification algorithm. -It contains two parts: - -1. [Stateless Precompiled contracts](https://github.com/bnb-chain/bsc/blob/master/core/vm/contracts_lightclient.go) to do tendermint header verification and Merkle Proof verification. -2. [Stateful solidity contracts](https://github.com/bnb-chain/bsc-genesis-contract/blob/master/contracts/TendermintLightClient.sol) to store validator set and trusted appHash. - -## Native Token - -BNB will run on BNB Smart Chain in the same way as ETH runs on Ethereum so that it remains as `native token` for BSC. This means, -BNB will be used to: - -1. pay `gas` to deploy or invoke Smart Contract on BSC -2. perform cross-chain operations, such as transfer token assets across BNB Smart Chain and BNB Beacon Chain. - -## Building the source - -Many of the below are the same as or similar to go-ethereum. - -For prerequisites and detailed build instructions please read the [Installation Instructions](https://geth.ethereum.org/docs/getting-started/installing-geth). - -Building `geth` requires both a Go (version 1.20 or later) and a C compiler (GCC 5 or higher). You can install -them using your favourite package manager. Once the dependencies are installed, run - -```shell -make geth -``` - -or, to build the full suite of utilities: - -```shell -make all -``` - -If you get such error when running the node with self built binary: -```shell -Caught SIGILL in blst_cgo_init, consult /bindinds/go/README.md. ``` -please try to add the following environment variables and build again: -```shell -export CGO_CFLAGS="-O -D__BLST_PORTABLE__" -export CGO_CFLAGS_ALLOW="-O -D__BLST_PORTABLE__" -``` - -## Executables - -The bsc project comes with several wrappers/executables found in the `cmd` -directory. 
- -| Command | Description | -| :--------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| **`geth`** | Main BNB Smart Chain client binary. It is the entry point into the BSC network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It has the same and more RPC and other interface as go-ethereum and can be used by other processes as a gateway into the BSC network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI page](https://geth.ethereum.org/docs/interface/command-line-options) for command line options. | -| `clef` | Stand-alone signing tool, which can be used as a backend signer for `geth`. | -| `devp2p` | Utilities to interact with nodes on the networking layer, without running a full blockchain. | -| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://docs.soliditylang.org/en/develop/abi-spec.html) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://geth.ethereum.org/docs/dapp/native-bindings) page for details. | -| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. | -| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). | -| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). | - -## Running `geth` - -Going through all the possible command line flags is out of scope here (please consult our -[CLI Wiki page](https://geth.ethereum.org/docs/fundamentals/command-line-options)), -but we've enumerated a few common parameter combos to get you up to speed quickly -on how you can run your own `geth` instance. - -### Hardware Requirements - -The hardware must meet certain requirements to run a full node on mainnet: -- VPS running recent versions of Mac OS X, Linux, or Windows. -- IMPORTANT 3 TB(Dec 2023) of free disk space, solid-state drive(SSD), gp3, 8k IOPS, 500 MB/S throughput, read latency <1ms. 
(if node is started with snap sync, it will need NVMe SSD) -- 16 cores of CPU and 64 GB of memory (RAM) -- Suggest m5zn.6xlarge or r7iz.4xlarge instance type on AWS, c2-standard-16 on Google cloud. -- A broadband Internet connection with upload/download speeds of 5 MB/S - -The requirement for testnet: -- VPS running recent versions of Mac OS X, Linux, or Windows. -- 500G of storage for testnet. -- 4 cores of CPU and 16 gigabytes of memory (RAM). - -### Steps to Run a Fullnode - -#### 1. Download the pre-build binaries -```shell -# Linux -wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep geth_linux |cut -d\" -f4) -mv geth_linux geth -chmod -v u+x geth - -# MacOS -wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep geth_mac |cut -d\" -f4) -mv geth_macos geth -chmod -v u+x geth -``` - -#### 2. Download the config files -```shell -//== mainnet -wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep mainnet |cut -d\" -f4) -unzip mainnet.zip - -//== testnet -wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep testnet |cut -d\" -f4) -unzip testnet.zip -``` - -#### 3. Download snapshot -Download latest chaindata snapshot from [here](https://github.com/bnb-chain/bsc-snapshots). Follow the guide to structure your files. - -Note: if you can not download the chaindata snapshot and want to sync from genesis, you have to generate the genesis block first, you have already get the genesis.json in Step 2. -So just run: -``` shell -## It will init genesis with Hash-Base Storage Scheme by default. -geth --datadir init ./genesis.json - -## It will init genesis with Path-Base Storage Scheme. -geth --datadir --state.scheme path init ./genesis.json -``` -#### 4. Start a full node -```shell -./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0 - -## It is recommand to run fullnode with `--tries-verify-mode none` if you want high performance and care little about state consistency -## It will run with Hash-Base Storage Scheme by default -./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0 --tries-verify-mode none +[Eth.Miner.Bidder] +Enable = true +Account = {{BUILDER_ADDRESS}} +DelayLeftOver = {{DELAY_LEFT_OVER}} -## It runs fullnode with Path-Base Storage Scheme. -## It will enable inline state prune, keeping the latest 90000 blocks' history state by default. -./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0 --tries-verify-mode none --state.scheme path +[[Eth.Miner.Bidder.Validators]] +Address = {{VALIDATOR_ADDRESS}} +URL = {{VALIDATOR_URL}} +... ``` -#### 5. Monitor node status - -Monitor the log from **./node/bsc.log** by default. 
When the node has started syncing, should be able to see the following output: -```shell -t=2022-09-08T13:00:27+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=177 mgas=17.317 elapsed=31.131ms mgasps=556.259 number=21,153,429 hash=0x42e6b54ba7106387f0650defc62c9ace3160b427702dab7bd1c5abb83a32d8db dirty="0.00 B" -t=2022-09-08T13:00:29+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=251 mgas=39.638 elapsed=68.827ms mgasps=575.900 number=21,153,430 hash=0xa3397b273b31b013e43487689782f20c03f47525b4cd4107c1715af45a88796e dirty="0.00 B" -t=2022-09-08T13:00:33+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=197 mgas=19.364 elapsed=34.663ms mgasps=558.632 number=21,153,431 hash=0x0c7872b698f28cb5c36a8a3e1e315b1d31bda6109b15467a9735a12380e2ad14 dirty="0.00 B" -``` - -#### 6. Interact with fullnode -Start up `geth`'s built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console), -(via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://web3js.readthedocs.io/en/) -(note: the `web3` version bundled within `geth` is very old, and not up to date with official docs), -as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/rpc/server). -This tool is optional and if you leave it out you can always attach to an already running -`geth` instance with `geth attach`. - -#### 7. More - -More details about [running a node](https://docs.bnbchain.org/docs/validator/fullnode) and [becoming a validator](https://docs.bnbchain.org/docs/validator/create-val) - -*Note: Although some internal protective measures prevent transactions from -crossing over between the main network and test network, you should always -use separate accounts for play and real money. Unless you manually move -accounts, `geth` will by default correctly separate the two networks and will not make any -accounts available between them.* - -### Configuration - -As an alternative to passing the numerous flags to the `geth` binary, you can also pass a -configuration file via: - -```shell -$ geth --config /path/to/your_config.toml -``` - -To get an idea of how the file should look like you can use the `dumpconfig` subcommand to -export your existing configuration: - -```shell -$ geth --your-favourite-flags dumpconfig -``` - -### Programmatically interfacing `geth` nodes - -As a developer, sooner rather than later you'll want to start interacting with `geth` and the -BSC network via your own programs and not manually through the console. To aid -this, `geth` has built-in support for a JSON-RPC based APIs ([standard APIs](https://ethereum.github.io/execution-apis/api-documentation/) -and [`geth` specific APIs](https://geth.ethereum.org/docs/interacting-with-geth/rpc)). -These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based -platforms, and named pipes on Windows). - -The IPC interface is enabled by default and exposes all the APIs supported by `geth`, -whereas the HTTP and WS interfaces need to manually be enabled and only expose a -subset of APIs due to security reasons. These can be turned on/off and configured as -you'd expect. 
- -HTTP based JSON-RPC API options: - -* `--http` Enable the HTTP-RPC server -* `--http.addr` HTTP-RPC server listening interface (default: `localhost`) -* `--http.port` HTTP-RPC server listening port (default: `8545`) -* `--http.api` API's offered over the HTTP-RPC interface (default: `eth,net,web3`) -* `--http.corsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced) -* `--ws` Enable the WS-RPC server -* `--ws.addr` WS-RPC server listening interface (default: `localhost`) -* `--ws.port` WS-RPC server listening port (default: `8546`) -* `--ws.api` API's offered over the WS-RPC interface (default: `eth,net,web3`) -* `--ws.origins` Origins from which to accept WebSocket requests -* `--ipcdisable` Disable the IPC-RPC server -* `--ipcapi` API's offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,txpool,web3`) -* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it) - -You'll need to use your own programming environments' capabilities (libraries, tools, etc) to -connect via HTTP, WS or IPC to a `geth` node configured with the above flags and you'll -need to speak [JSON-RPC](https://www.jsonrpc.org/specification) on all transports. You -can reuse the same connection for multiple requests! - -**Note: Please understand the security implications of opening up an HTTP/WS based -transport before doing so! Hackers on the internet are actively trying to subvert -BSC nodes with exposed APIs! Further, all browser tabs can access locally -running web servers, so malicious web pages could try to subvert locally available -APIs!** - -### Operating a private network -- [BSC-Deploy](https://github.com/bnb-chain/node-deploy/): deploy tool for setting up both BNB Beacon Chain, BNB Smart Chain and the cross chain infrastructure between them. -- [BSC-Docker](https://github.com/bnb-chain/bsc-docker): deploy tool for setting up local BSC cluster in container. - - -## Running a bootnode - -Bootnodes are super-lightweight nodes that are not behind a NAT and are running just discovery protocol. When you start up a node it should log your enode, which is a public identifier that others can use to connect to your node. - -First the bootnode requires a key, which can be created with the following command, which will save a key to boot.key: - -``` -bootnode -genkey boot.key -``` - -This key can then be used to generate a bootnode as follows: - -``` -bootnode -nodekey boot.key -addr :30311 -network bsc -``` - -The choice of port passed to -addr is arbitrary. -The bootnode command returns the following logs to the terminal, confirming that it is running: - -``` -enode://3063d1c9e1b824cfbb7c7b6abafa34faec6bb4e7e06941d218d760acdd7963b274278c5c3e63914bd6d1b58504c59ec5522c56f883baceb8538674b92da48a96@127.0.0.1:0?discport=30311 -Note: you're using cmd/bootnode, a developer tool. -We recommend using a regular node as bootstrap node for production deployments. -INFO [08-21|11:11:30.687] New local node record seq=1,692,616,290,684 id=2c9af1742f8f85ce ip= udp=0 tcp=0 -INFO [08-21|12:11:30.753] New local node record seq=1,692,616,290,685 id=2c9af1742f8f85ce ip=54.217.128.118 udp=30311 tcp=0 -INFO [09-01|02:46:26.234] New local node record seq=1,692,616,290,686 id=2c9af1742f8f85ce ip=34.250.32.100 udp=30311 tcp=0 -``` - -## Contribution - -Thank you for considering helping out with the source code! We welcome contributions -from anyone on the internet, and are grateful for even the smallest of fixes! 
- -If you'd like to contribute to bsc, please fork, fix, commit and send a pull request -for the maintainers to review and merge into the main code base. If you wish to submit -more complex changes though, please check up with the core devs first on [our discord channel](https://discord.gg/bnbchain) -to ensure those changes are in line with the general philosophy of the project and/or get -some early feedback which can make both your efforts much lighter as well as our review -and merge procedures quick and simple. - -Please make sure your contributions adhere to our coding guidelines: - - * Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) - guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)). - * Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) - guidelines. - * Pull requests need to be based on and opened against the `master` branch. - * Commit messages should be prefixed with the package(s) they modify. - * E.g. "eth, rpc: make trace configs optional" - -Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/geth-developer/dev-guide) -for more details on configuring your environment, managing project dependencies, and -testing procedures. +- `Enable`: Whether to enable the builder. +- `Account`: The account address to unlock of the builder. +- `DelayLeftOver`: Submit bid no later than `DelayLeftOver` before the next block time. +- `Validators`: A list of validators to bid for. + - `Address`: The address of the validator. + - `URL`: The URL of the validator. ## License diff --git a/README.original.md b/README.original.md new file mode 100644 index 0000000000..ad112fcdd4 --- /dev/null +++ b/README.original.md @@ -0,0 +1,325 @@ +## BNB Smart Chain + +The goal of BNB Smart Chain is to bring programmability and interoperability to BNB Beacon Chain. In order to embrace the existing popular community and advanced technology, it will bring huge benefits by staying compatible with all the existing smart contracts on Ethereum and Ethereum tooling. And to achieve that, the easiest solution is to develop based on go-ethereum fork, as we respect the great work of Ethereum very much. + +BNB Smart Chain starts its development based on go-ethereum fork. So you may see many toolings, binaries and also docs are based on Ethereum ones, such as the name “geth”. + +[![API Reference]( +https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667 +)](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc) +[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/z2VpC455eU) + +But from that baseline of EVM compatible, BNB Smart Chain introduces a system of 21 validators with Proof of Staked Authority (PoSA) consensus that can support short block time and lower fees. The most bonded validator candidates of staking will become validators and produce blocks. The double-sign detection and other slashing logic guarantee security, stability, and chain finality. + +Cross-chain transfer and other communication are possible due to native support of interoperability. Relayers and on-chain contracts are developed to support that. BNB Beacon Chain DEX remains a liquid venue of the exchange of assets on both chains. 
This dual-chain architecture will be ideal for users to take advantage of the fast trading on one side and build their decentralized apps on the other side. **The BNB Smart Chain** will be: + +- **A self-sovereign blockchain**: Provides security and safety with elected validators. +- **EVM-compatible**: Supports all the existing Ethereum tooling along with faster finality and cheaper transaction fees. +- **Interoperable**: Comes with efficient native dual chain communication; Optimized for scaling high-performance dApps that require fast and smooth user experience. +- **Distributed with on-chain governance**: Proof of Staked Authority brings in decentralization and community participants. As the native token, BNB will serve as both the gas of smart contract execution and tokens for staking. + +More details in [White Paper](https://www.bnbchain.org/en#smartChain). + +## Key features + +### Proof of Staked Authority +Although Proof-of-Work (PoW) has been approved as a practical mechanism to implement a decentralized network, it is not friendly to the environment and also requires a large size of participants to maintain the security. + +Proof-of-Authority(PoA) provides some defense to 51% attack, with improved efficiency and tolerance to certain levels of Byzantine players (malicious or hacked). +Meanwhile, the PoA protocol is most criticized for being not as decentralized as PoW, as the validators, i.e. the nodes that take turns to produce blocks, have all the authorities and are prone to corruption and security attacks. + +Other blockchains, such as EOS and Cosmos both, introduce different types of Deputy Proof of Stake (DPoS) to allow the token holders to vote and elect the validator set. It increases the decentralization and favors community governance. + +To combine DPoS and PoA for consensus, BNB Smart Chain implement a novel consensus engine called Parlia that: + +1. Blocks are produced by a limited set of validators. +2. Validators take turns to produce blocks in a PoA manner, similar to Ethereum's Clique consensus engine. +3. Validator set are elected in and out based on a staking based governance on BNB Beacon Chain. +4. The validator set change is relayed via a cross-chain communication mechanism. +5. Parlia consensus engine will interact with a set of [system contracts](https://docs.bnbchain.org/docs/learn/system-contract) to achieve liveness slash, revenue distributing and validator set renewing func. + + +### Light Client of BNB Beacon Chain + +To achieve the cross-chain communication from BNB Beacon Chain to BNB Smart Chain, need introduce a on-chain light client verification algorithm. +It contains two parts: + +1. [Stateless Precompiled contracts](https://github.com/bnb-chain/bsc/blob/master/core/vm/contracts_lightclient.go) to do tendermint header verification and Merkle Proof verification. +2. [Stateful solidity contracts](https://github.com/bnb-chain/bsc-genesis-contract/blob/master/contracts/TendermintLightClient.sol) to store validator set and trusted appHash. + +## Native Token + +BNB will run on BNB Smart Chain in the same way as ETH runs on Ethereum so that it remains as `native token` for BSC. This means, +BNB will be used to: + +1. pay `gas` to deploy or invoke Smart Contract on BSC +2. perform cross-chain operations, such as transfer token assets across BNB Smart Chain and BNB Beacon Chain. + +## Building the source + +Many of the below are the same as or similar to go-ethereum. 
+ +For prerequisites and detailed build instructions please read the [Installation Instructions](https://geth.ethereum.org/docs/getting-started/installing-geth). + +Building `geth` requires both a Go (version 1.20 or later) and a C compiler (GCC 5 or higher). You can install +them using your favourite package manager. Once the dependencies are installed, run + +```shell +make geth +``` + +or, to build the full suite of utilities: + +```shell +make all +``` + +If you get such error when running the node with self built binary: +```shell +Caught SIGILL in blst_cgo_init, consult /bindinds/go/README.md. +``` +please try to add the following environment variables and build again: +```shell +export CGO_CFLAGS="-O -D__BLST_PORTABLE__" +export CGO_CFLAGS_ALLOW="-O -D__BLST_PORTABLE__" +``` + +## Executables + +The bsc project comes with several wrappers/executables found in the `cmd` +directory. + +| Command | Description | +| :--------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **`geth`** | Main BNB Smart Chain client binary. It is the entry point into the BSC network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It has the same and more RPC and other interface as go-ethereum and can be used by other processes as a gateway into the BSC network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI page](https://geth.ethereum.org/docs/interface/command-line-options) for command line options. | +| `clef` | Stand-alone signing tool, which can be used as a backend signer for `geth`. | +| `devp2p` | Utilities to interact with nodes on the networking layer, without running a full blockchain. | +| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://docs.soliditylang.org/en/develop/abi-spec.html) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://geth.ethereum.org/docs/dapp/native-bindings) page for details. | +| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. | +| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). 
| +| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). | + +## Running `geth` + +Going through all the possible command line flags is out of scope here (please consult our +[CLI Wiki page](https://geth.ethereum.org/docs/fundamentals/command-line-options)), +but we've enumerated a few common parameter combos to get you up to speed quickly +on how you can run your own `geth` instance. + +### Hardware Requirements + +The hardware must meet certain requirements to run a full node on mainnet: +- VPS running recent versions of Mac OS X, Linux, or Windows. +- IMPORTANT 3 TB(Dec 2023) of free disk space, solid-state drive(SSD), gp3, 8k IOPS, 500 MB/S throughput, read latency <1ms. (if node is started with snap sync, it will need NVMe SSD) +- 16 cores of CPU and 64 GB of memory (RAM) +- Suggest m5zn.6xlarge or r7iz.4xlarge instance type on AWS, c2-standard-16 on Google cloud. +- A broadband Internet connection with upload/download speeds of 5 MB/S + +The requirement for testnet: +- VPS running recent versions of Mac OS X, Linux, or Windows. +- 500G of storage for testnet. +- 4 cores of CPU and 16 gigabytes of memory (RAM). + +### Steps to Run a Fullnode + +#### 1. Download the pre-build binaries +```shell +# Linux +wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep geth_linux |cut -d\" -f4) +mv geth_linux geth +chmod -v u+x geth + +# MacOS +wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep geth_mac |cut -d\" -f4) +mv geth_macos geth +chmod -v u+x geth +``` + +#### 2. Download the config files +```shell +//== mainnet +wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep mainnet |cut -d\" -f4) +unzip mainnet.zip + +//== testnet +wget $(curl -s https://api.github.com/repos/bnb-chain/bsc/releases/latest |grep browser_ |grep testnet |cut -d\" -f4) +unzip testnet.zip +``` + +#### 3. Download snapshot +Download latest chaindata snapshot from [here](https://github.com/bnb-chain/bsc-snapshots). Follow the guide to structure your files. + +Note: if you can not download the chaindata snapshot and want to sync from genesis, you have to generate the genesis block first, you have already get the genesis.json in Step 2. +So just run: +``` shell +## It will init genesis with Hash-Base Storage Scheme by default. +geth --datadir init ./genesis.json + +## It will init genesis with Path-Base Storage Scheme. +geth --datadir --state.scheme path init ./genesis.json +``` +#### 4. Start a full node +```shell +./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0 + +## It is recommand to run fullnode with `--tries-verify-mode none` if you want high performance and care little about state consistency +## It will run with Hash-Base Storage Scheme by default +./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0 --tries-verify-mode none + +## It runs fullnode with Path-Base Storage Scheme. +## It will enable inline state prune, keeping the latest 90000 blocks' history state by default. 
+./geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --history.transactions 0 --tries-verify-mode none --state.scheme path +``` + +#### 5. Monitor node status + +Monitor the log from **./node/bsc.log** by default. When the node has started syncing, should be able to see the following output: +```shell +t=2022-09-08T13:00:27+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=177 mgas=17.317 elapsed=31.131ms mgasps=556.259 number=21,153,429 hash=0x42e6b54ba7106387f0650defc62c9ace3160b427702dab7bd1c5abb83a32d8db dirty="0.00 B" +t=2022-09-08T13:00:29+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=251 mgas=39.638 elapsed=68.827ms mgasps=575.900 number=21,153,430 hash=0xa3397b273b31b013e43487689782f20c03f47525b4cd4107c1715af45a88796e dirty="0.00 B" +t=2022-09-08T13:00:33+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=197 mgas=19.364 elapsed=34.663ms mgasps=558.632 number=21,153,431 hash=0x0c7872b698f28cb5c36a8a3e1e315b1d31bda6109b15467a9735a12380e2ad14 dirty="0.00 B" +``` + +#### 6. Interact with fullnode +Start up `geth`'s built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console), +(via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://web3js.readthedocs.io/en/) +(note: the `web3` version bundled within `geth` is very old, and not up to date with official docs), +as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/rpc/server). +This tool is optional and if you leave it out you can always attach to an already running +`geth` instance with `geth attach`. + +#### 7. More + +More details about [running a node](https://docs.bnbchain.org/docs/validator/fullnode) and [becoming a validator](https://docs.bnbchain.org/docs/validator/create-val) + +*Note: Although some internal protective measures prevent transactions from +crossing over between the main network and test network, you should always +use separate accounts for play and real money. Unless you manually move +accounts, `geth` will by default correctly separate the two networks and will not make any +accounts available between them.* + +### Configuration + +As an alternative to passing the numerous flags to the `geth` binary, you can also pass a +configuration file via: + +```shell +$ geth --config /path/to/your_config.toml +``` + +To get an idea of how the file should look like you can use the `dumpconfig` subcommand to +export your existing configuration: + +```shell +$ geth --your-favourite-flags dumpconfig +``` + +### Programmatically interfacing `geth` nodes + +As a developer, sooner rather than later you'll want to start interacting with `geth` and the +BSC network via your own programs and not manually through the console. To aid +this, `geth` has built-in support for a JSON-RPC based APIs ([standard APIs](https://ethereum.github.io/execution-apis/api-documentation/) +and [`geth` specific APIs](https://geth.ethereum.org/docs/interacting-with-geth/rpc)). +These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based +platforms, and named pipes on Windows). + +The IPC interface is enabled by default and exposes all the APIs supported by `geth`, +whereas the HTTP and WS interfaces need to manually be enabled and only expose a +subset of APIs due to security reasons. These can be turned on/off and configured as +you'd expect. 
+ +HTTP based JSON-RPC API options: + +* `--http` Enable the HTTP-RPC server +* `--http.addr` HTTP-RPC server listening interface (default: `localhost`) +* `--http.port` HTTP-RPC server listening port (default: `8545`) +* `--http.api` API's offered over the HTTP-RPC interface (default: `eth,net,web3`) +* `--http.corsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced) +* `--ws` Enable the WS-RPC server +* `--ws.addr` WS-RPC server listening interface (default: `localhost`) +* `--ws.port` WS-RPC server listening port (default: `8546`) +* `--ws.api` API's offered over the WS-RPC interface (default: `eth,net,web3`) +* `--ws.origins` Origins from which to accept WebSocket requests +* `--ipcdisable` Disable the IPC-RPC server +* `--ipcapi` API's offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,txpool,web3`) +* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it) + +You'll need to use your own programming environments' capabilities (libraries, tools, etc) to +connect via HTTP, WS or IPC to a `geth` node configured with the above flags and you'll +need to speak [JSON-RPC](https://www.jsonrpc.org/specification) on all transports. You +can reuse the same connection for multiple requests! + +**Note: Please understand the security implications of opening up an HTTP/WS based +transport before doing so! Hackers on the internet are actively trying to subvert +BSC nodes with exposed APIs! Further, all browser tabs can access locally +running web servers, so malicious web pages could try to subvert locally available +APIs!** + +### Operating a private network +- [BSC-Deploy](https://github.com/bnb-chain/node-deploy/): deploy tool for setting up both BNB Beacon Chain, BNB Smart Chain and the cross chain infrastructure between them. +- [BSC-Docker](https://github.com/bnb-chain/bsc-docker): deploy tool for setting up local BSC cluster in container. + + +## Running a bootnode + +Bootnodes are super-lightweight nodes that are not behind a NAT and are running just discovery protocol. When you start up a node it should log your enode, which is a public identifier that others can use to connect to your node. + +First the bootnode requires a key, which can be created with the following command, which will save a key to boot.key: + +``` +bootnode -genkey boot.key +``` + +This key can then be used to generate a bootnode as follows: + +``` +bootnode -nodekey boot.key -addr :30311 -network bsc +``` + +The choice of port passed to -addr is arbitrary. +The bootnode command returns the following logs to the terminal, confirming that it is running: + +``` +enode://3063d1c9e1b824cfbb7c7b6abafa34faec6bb4e7e06941d218d760acdd7963b274278c5c3e63914bd6d1b58504c59ec5522c56f883baceb8538674b92da48a96@127.0.0.1:0?discport=30311 +Note: you're using cmd/bootnode, a developer tool. +We recommend using a regular node as bootstrap node for production deployments. +INFO [08-21|11:11:30.687] New local node record seq=1,692,616,290,684 id=2c9af1742f8f85ce ip= udp=0 tcp=0 +INFO [08-21|12:11:30.753] New local node record seq=1,692,616,290,685 id=2c9af1742f8f85ce ip=54.217.128.118 udp=30311 tcp=0 +INFO [09-01|02:46:26.234] New local node record seq=1,692,616,290,686 id=2c9af1742f8f85ce ip=34.250.32.100 udp=30311 tcp=0 +``` + +## Contribution + +Thank you for considering helping out with the source code! We welcome contributions +from anyone on the internet, and are grateful for even the smallest of fixes! 
+ +If you'd like to contribute to bsc, please fork, fix, commit and send a pull request +for the maintainers to review and merge into the main code base. If you wish to submit +more complex changes though, please check up with the core devs first on [our discord channel](https://discord.gg/bnbchain) +to ensure those changes are in line with the general philosophy of the project and/or get +some early feedback which can make both your efforts much lighter as well as our review +and merge procedures quick and simple. + +Please make sure your contributions adhere to our coding guidelines: + + * Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) + guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)). + * Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) + guidelines. + * Pull requests need to be based on and opened against the `master` branch. + * Commit messages should be prefixed with the package(s) they modify. + * E.g. "eth, rpc: make trace configs optional" + +Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/geth-developer/dev-guide) +for more details on configuring your environment, managing project dependencies, and +testing procedures. + +## License + +The bsc library (i.e. all code outside of the `cmd` directory) is licensed under the +[GNU Lesser General Public License v3.0](https://www.gnu.org/licenses/lgpl-3.0.en.html), +also included in our repository in the `COPYING.LESSER` file. + +The bsc binaries (i.e. all code inside of the `cmd` directory) is licensed under the +[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html), also +included in our repository in the `COPYING` file. diff --git a/build/ci.go b/build/ci.go index 627ee38000..48da2821a9 100644 --- a/build/ci.go +++ b/build/ci.go @@ -360,7 +360,7 @@ func doLint(cmdline []string) { // downloadLinter downloads and unpacks golangci-lint. func downloadLinter(cachedir string) string { - const version = "1.52.2" + const version = "1.55.2" csdb := build.MustLoadChecksums("build/checksums.txt") arch := runtime.GOARCH diff --git a/consensus/beacon/consensus.go b/consensus/beacon/consensus.go index 14764c08e3..f99d4a6e49 100644 --- a/consensus/beacon/consensus.go +++ b/consensus/beacon/consensus.go @@ -326,6 +326,11 @@ func (beacon *Beacon) verifyHeaders(chain consensus.ChainHeaderReader, headers [ return abort, results } +// NextInTurnValidator return the next in-turn validator for header +func (beacon *Beacon) NextInTurnValidator(chain consensus.ChainHeaderReader, header *types.Header) (common.Address, error) { + return common.Address{}, errors.New("not implemented") +} + // Prepare implements consensus.Engine, initializing the difficulty field of a // header to conform to the beacon protocol. The changes are done inline. 
func (beacon *Beacon) Prepare(chain consensus.ChainHeaderReader, header *types.Header) error { diff --git a/consensus/clique/clique.go b/consensus/clique/clique.go index 2f4873a8c2..ab010c0f1f 100644 --- a/consensus/clique/clique.go +++ b/consensus/clique/clique.go @@ -498,6 +498,11 @@ func (c *Clique) verifySeal(snap *Snapshot, header *types.Header, parents []*typ return nil } +// NextInTurnValidator return the next in-turn validator for header +func (c *Clique) NextInTurnValidator(chain consensus.ChainHeaderReader, header *types.Header) (common.Address, error) { + return common.Address{}, errors.New("not implemented") +} + // Prepare implements consensus.Engine, preparing all the consensus fields of the // header for running the transactions on top. func (c *Clique) Prepare(chain consensus.ChainHeaderReader, header *types.Header) error { diff --git a/consensus/consensus.go b/consensus/consensus.go index 709622ce34..b90a9d5fb1 100644 --- a/consensus/consensus.go +++ b/consensus/consensus.go @@ -91,6 +91,9 @@ type Engine interface { // rules of a given engine. VerifyUncles(chain ChainReader, block *types.Block) error + // NextInTurnValidator return the next in-turn validator for header + NextInTurnValidator(chain ChainHeaderReader, header *types.Header) (common.Address, error) + // Prepare initializes the consensus fields of a block header according to the // rules of a particular engine. The changes are executed inline. Prepare(chain ChainHeaderReader, header *types.Header) error @@ -154,4 +157,5 @@ type PoSA interface { GetFinalizedHeader(chain ChainHeaderReader, header *types.Header) *types.Header VerifyVote(chain ChainHeaderReader, vote *types.VoteEnvelope) error IsActiveValidatorAt(chain ChainHeaderReader, header *types.Header, checkVoteKeyFn func(bLSPublicKey *types.BLSPublicKey) bool) bool + SetValidator(validator common.Address) } diff --git a/consensus/ethash/consensus.go b/consensus/ethash/consensus.go index 118dee839c..f1653edef1 100644 --- a/consensus/ethash/consensus.go +++ b/consensus/ethash/consensus.go @@ -475,6 +475,11 @@ var FrontierDifficultyCalculator = calcDifficultyFrontier var HomesteadDifficultyCalculator = calcDifficultyHomestead var DynamicDifficultyCalculator = makeDifficultyCalculator +// NextInTurnValidator return the next in-turn validator for header +func (ethash *Ethash) NextInTurnValidator(chain consensus.ChainHeaderReader, header *types.Header) (common.Address, error) { + return common.Address{}, errors.New("not implemented") +} + // Prepare implements consensus.Engine, initializing the difficulty field of a // header to conform to the ethash protocol. The changes are done inline. func (ethash *Ethash) Prepare(chain consensus.ChainHeaderReader, header *types.Header) error { diff --git a/consensus/parlia/parlia.go b/consensus/parlia/parlia.go index f2ae2d86f1..6ee406dd83 100644 --- a/consensus/parlia/parlia.go +++ b/consensus/parlia/parlia.go @@ -159,8 +159,10 @@ var ( // SignerFn is a signer callback function to request a header to be signed by a // backing account. 
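+// SignerTxFn is a signer callback function to request a transaction to be signed
+// by a backing account; Parlia uses it to sign system transactions on behalf of
+// the validator.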
-type SignerFn func(accounts.Account, string, []byte) ([]byte, error) -type SignerTxFn func(accounts.Account, *types.Transaction, *big.Int) (*types.Transaction, error) +type ( + SignerFn func(accounts.Account, string, []byte) ([]byte, error) + SignerTxFn func(accounts.Account, *types.Transaction, *big.Int) (*types.Transaction, error) +) func isToSystemContract(to common.Address) bool { return systemContracts[to] @@ -904,7 +906,7 @@ func (p *Parlia) assembleVoteAttestation(chain consensus.ChainHeaderReader, head // Prepare vote address bitset. for _, valInfo := range snap.Validators { if _, ok := voteAddrSet[valInfo.VoteAddress]; ok { - attestation.VoteAddressSet |= 1 << (valInfo.Index - 1) //Index is offset by 1 + attestation.VoteAddressSet |= 1 << (valInfo.Index - 1) // Index is offset by 1 } } validatorsBitSet := bitset.From([]uint64{uint64(attestation.VoteAddressSet)}) @@ -929,6 +931,16 @@ func (p *Parlia) assembleVoteAttestation(chain consensus.ChainHeaderReader, head return nil } +// NextInTurnValidator return the next in-turn validator for header +func (p *Parlia) NextInTurnValidator(chain consensus.ChainHeaderReader, header *types.Header) (common.Address, error) { + snap, err := p.snapshot(chain, header.Number.Uint64(), header.Hash(), nil) + if err != nil { + return common.Address{}, err + } + + return snap.inturnValidator(), nil +} + // Prepare implements consensus.Engine, preparing all the consensus fields of the // header for running the transactions on top. func (p *Parlia) Prepare(chain consensus.ChainHeaderReader, header *types.Header) error { @@ -942,7 +954,7 @@ func (p *Parlia) Prepare(chain consensus.ChainHeaderReader, header *types.Header } // Set the correct difficulty - header.Difficulty = CalcDifficulty(snap, p.val) + header.Difficulty = CalcDifficulty(snap, header.Coinbase) // Ensure the extra data has all it's components if len(header.Extra) < extraVanity-nextForkHashSize { @@ -1795,6 +1807,17 @@ func (p *Parlia) GetFinalizedHeader(chain consensus.ChainHeaderReader, header *t return chain.GetHeader(snap.Attestation.SourceHash, snap.Attestation.SourceNumber) } +// SetValidator set the validator of parlia engine +// It is used for builder +func (p *Parlia) SetValidator(val common.Address) { + if val == (common.Address{}) { + return + } + p.lock.Lock() + defer p.lock.Unlock() + p.val = val +} + // =========================== utility function ========================== // SealHash returns the hash of a block prior to it being sealed. func SealHash(header *types.Header, chainId *big.Int) (hash common.Hash) { diff --git a/consensus/parlia/snapshot.go b/consensus/parlia/snapshot.go index ddfb1811fc..0da0929e7c 100644 --- a/consensus/parlia/snapshot.go +++ b/consensus/parlia/snapshot.go @@ -338,6 +338,13 @@ func (s *Snapshot) inturn(validator common.Address) bool { return validators[offset] == validator } +// inturnValidator returns the validator at a given block height. 
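+// Concretely, it returns the validator expected to be in turn for the next block
+// (s.Number + 1), selected from the snapshot's validator list.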
+func (s *Snapshot) inturnValidator() common.Address { + validators := s.validators() + offset := (s.Number + 1) % uint64(len(validators)) + return validators[offset] +} + func (s *Snapshot) enoughDistance(validator common.Address, header *types.Header) bool { idx := s.indexOfVal(validator) if idx < 0 { diff --git a/core/blockchain.go b/core/blockchain.go index 0d6974c1dc..cd0bdfbee2 100644 --- a/core/blockchain.go +++ b/core/blockchain.go @@ -30,6 +30,7 @@ import ( exlru "github.com/hashicorp/golang-lru" "golang.org/x/crypto/sha3" + "golang.org/x/exp/slices" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common/lru" @@ -54,7 +55,6 @@ import ( "github.com/ethereum/go-ethereum/trie" "github.com/ethereum/go-ethereum/trie/triedb/hashdb" "github.com/ethereum/go-ethereum/trie/triedb/pathdb" - "golang.org/x/exp/slices" ) var ( @@ -2033,7 +2033,7 @@ func (bc *BlockChain) insertChain(chain types.Blocks, setHead bool) (int, error) go throwaway.TriePrefetchInAdvance(block, signer) } - //Process block using the parent state as reference point + // Process block using the parent state as reference point if bc.pipeCommit { statedb.EnablePipeCommit() } diff --git a/core/txpool/blobpool/blobpool.go b/core/txpool/blobpool/blobpool.go index 71cb2cb53f..6c88ee8f2b 100644 --- a/core/txpool/blobpool/blobpool.go +++ b/core/txpool/blobpool/blobpool.go @@ -28,6 +28,9 @@ import ( "sync" "time" + "github.com/holiman/billy" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/consensus/misc/eip1559" "github.com/ethereum/go-ethereum/consensus/misc/eip4844" @@ -41,8 +44,6 @@ import ( "github.com/ethereum/go-ethereum/metrics" "github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/rlp" - "github.com/holiman/billy" - "github.com/holiman/uint256" ) const ( @@ -1477,7 +1478,7 @@ func (p *BlobPool) SubscribeTransactions(ch chan<- core.NewTxsEvent) event.Subsc // SubscribeReannoTxsEvent registers a subscription of ReannoTxsEvent and // starts sending event to the given channel. -func (pool *BlobPool) SubscribeReannoTxsEvent(ch chan<- core.ReannoTxsEvent) event.Subscription { +func (p *BlobPool) SubscribeReannoTxsEvent(ch chan<- core.ReannoTxsEvent) event.Subscription { panic("not supported") } diff --git a/core/txpool/bundlepool/bundlepool.go b/core/txpool/bundlepool/bundlepool.go new file mode 100644 index 0000000000..9ba3bd2257 --- /dev/null +++ b/core/txpool/bundlepool/bundlepool.go @@ -0,0 +1,361 @@ +package bundlepool + +import ( + "container/heap" + "math/big" + "sync" + "time" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/core" + "github.com/ethereum/go-ethereum/core/state" + "github.com/ethereum/go-ethereum/core/txpool" + "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/event" + "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/metrics" + "github.com/ethereum/go-ethereum/params" +) + +const ( + // TODO: decide on a good default value + // bundleSlotSize is used to calculate how many data slots a single bundle + // takes up based on its size. The slots are used as DoS protection, ensuring + // that validating a new bundle remains a constant operation (in reality + // O(maxslots), where max slots are 4 currently). 
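+	// With the default GlobalSlots of 5120 (see config.go), this caps the pool at
+	// roughly 640 MiB of bundle data.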
+ bundleSlotSize = 128 * 1024 // 128KB + + maxMinTimestampFromNow = int64(300) // 5 minutes +) + +var ( + bundleGauge = metrics.NewRegisteredGauge("bundlepool/bundles", nil) + slotsGauge = metrics.NewRegisteredGauge("bundlepool/slots", nil) +) + +// BlockChain defines the minimal set of methods needed to back a tx pool with +// a chain. Exists to allow mocking the live chain out of tests. +type BlockChain interface { + // Config retrieves the chain's fork configuration. + Config() *params.ChainConfig + + // CurrentBlock returns the current head of the chain. + CurrentBlock() *types.Header + + // GetBlock retrieves a specific block, used during pool resets. + GetBlock(hash common.Hash, number uint64) *types.Block + + // StateAt returns a state database for a given root hash (generally the head). + StateAt(root common.Hash) (*state.StateDB, error) +} + +type BundleSimulator interface { + SimulateBundle(bundle *types.Bundle) (*big.Int, error) +} + +type BundlePool struct { + config Config + + bundles map[common.Hash]*types.Bundle + bundleHeap BundleHeap + mu sync.RWMutex + + slots uint64 // Number of slots currently allocated + + simulator BundleSimulator +} + +func New(config Config) *BundlePool { + // Sanitize the input to ensure no vulnerable gas prices are set + config = (&config).sanitize() + + pool := &BundlePool{ + config: config, + bundles: make(map[common.Hash]*types.Bundle), + bundleHeap: make(BundleHeap, 0), + } + + return pool +} + +func (p *BundlePool) SetBundleSimulator(simulator BundleSimulator) { + p.simulator = simulator +} + +func (p *BundlePool) Init(gasTip *big.Int, head *types.Header, reserve txpool.AddressReserver) error { + return nil +} + +func (p *BundlePool) FilterBundle(bundle *types.Bundle) bool { + for _, tx := range bundle.Txs { + if !p.filter(tx) { + return false + } + } + return true +} + +// AddBundle adds a mev bundle to the pool +func (p *BundlePool) AddBundle(bundle *types.Bundle) error { + if p.simulator == nil { + return txpool.ErrSimulatorMissing + } + + if bundle.MinTimestamp > uint64(time.Now().Unix()+maxMinTimestampFromNow) { + return txpool.ErrBundleTimestampTooHigh + } + + price, err := p.simulator.SimulateBundle(bundle) + if err != nil { + return err + } + if price.Cmp(p.minimalBundleGasPrice()) < 0 && p.slots+numSlots(bundle) > p.config.GlobalSlots { + return txpool.ErrBundleGasPriceLow + } + bundle.Price = price + + hash := bundle.Hash() + if _, ok := p.bundles[hash]; ok { + return txpool.ErrBundleAlreadyExist + } + for p.slots+numSlots(bundle) > p.config.GlobalSlots { + p.drop() + } + p.mu.Lock() + defer p.mu.Unlock() + p.bundles[hash] = bundle + heap.Push(&p.bundleHeap, bundle) + p.slots += numSlots(bundle) + + bundleGauge.Update(int64(len(p.bundles))) + slotsGauge.Update(int64(p.slots)) + return nil +} + +func (p *BundlePool) GetBundle(hash common.Hash) *types.Bundle { + p.mu.RUnlock() + defer p.mu.RUnlock() + + return p.bundles[hash] +} + +func (p *BundlePool) PruneBundle(hash common.Hash) { + p.mu.Lock() + defer p.mu.Unlock() + p.deleteBundle(hash) +} + +func (p *BundlePool) PendingBundles(blockNumber uint64, blockTimestamp uint64) []*types.Bundle { + p.mu.Lock() + defer p.mu.Unlock() + + ret := make([]*types.Bundle, 0) + for hash, bundle := range p.bundles { + // Prune outdated bundles + if (bundle.MaxTimestamp != 0 && blockTimestamp > bundle.MaxTimestamp) || + blockNumber > bundle.MaxBlockNumber { + p.deleteBundle(hash) + continue + } + + // Roll over future bundles + if bundle.MinTimestamp != 0 && blockTimestamp < bundle.MinTimestamp { + 
continue + } + + // return the ones that are in time + ret = append(ret, bundle) + } + + bundleGauge.Update(int64(len(p.bundles))) + slotsGauge.Update(int64(p.slots)) + return ret +} + +// AllBundles returns all the bundles currently in the pool +func (p *BundlePool) AllBundles() []*types.Bundle { + p.mu.RUnlock() + defer p.mu.RUnlock() + bundles := make([]*types.Bundle, 0, len(p.bundles)) + for _, bundle := range p.bundles { + bundles = append(bundles, bundle) + } + return bundles +} + +func (p *BundlePool) Filter(tx *types.Transaction) bool { + return false +} + +func (p *BundlePool) Close() error { + log.Info("Bundle pool stopped") + return nil +} + +func (p *BundlePool) Reset(oldHead, newHead *types.Header) { + p.reset(newHead) +} + +// SetGasTip updates the minimum price required by the subpool for a new +// transaction, and drops all transactions below this threshold. +func (p *BundlePool) SetGasTip(tip *big.Int) { + return +} + +// Has returns an indicator whether subpool has a transaction cached with the +// given hash. +func (p *BundlePool) Has(hash common.Hash) bool { + return false +} + +// Get returns a transaction if it is contained in the pool, or nil otherwise. +func (p *BundlePool) Get(hash common.Hash) *txpool.Transaction { + return nil +} + +// Add enqueues a batch of transactions into the pool if they are valid. Due +// to the large transaction churn, add may postpone fully integrating the tx +// to a later point to batch multiple ones together. +func (p *BundlePool) Add(txs []*txpool.Transaction, local bool, sync bool) []error { + return nil +} + +// Pending retrieves all currently processable transactions, grouped by origin +// account and sorted by nonce. +func (p *BundlePool) Pending(enforceTips bool) map[common.Address][]*txpool.LazyTransaction { + return nil +} + +// SubscribeTransactions subscribes to new transaction events. +func (p *BundlePool) SubscribeTransactions(ch chan<- core.NewTxsEvent) event.Subscription { + return nil +} + +// SubscribeReannoTxsEvent should return an event subscription of +// ReannoTxsEvent and send events to the given channel. +func (p *BundlePool) SubscribeReannoTxsEvent(chan<- core.ReannoTxsEvent) event.Subscription { + return nil +} + +// Nonce returns the next nonce of an account, with all transactions executable +// by the pool already applied on topool. +func (p *BundlePool) Nonce(addr common.Address) uint64 { + return 0 +} + +// Stats retrieves the current pool stats, namely the number of pending and the +// number of queued (non-executable) transactions. +func (p *BundlePool) Stats() (int, int) { + return 0, 0 +} + +// Content retrieves the data content of the transaction pool, returning all the +// pending as well as queued transactions, grouped by account and sorted by nonce. +func (p *BundlePool) Content() (map[common.Address][]*types.Transaction, map[common.Address][]*types.Transaction) { + return make(map[common.Address][]*types.Transaction), make(map[common.Address][]*types.Transaction) +} + +// ContentFrom retrieves the data content of the transaction pool, returning the +// pending as well as queued transactions of this address, grouped by nonce. +func (p *BundlePool) ContentFrom(addr common.Address) ([]*types.Transaction, []*types.Transaction) { + return []*types.Transaction{}, []*types.Transaction{} +} + +// Locals retrieves the accounts currently considered local by the pool. 
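+// The bundle pool does not track local accounts, so this always returns an
+// empty slice.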
+func (p *BundlePool) Locals() []common.Address { + return []common.Address{} +} + +// Status returns the known status (unknown/pending/queued) of a transaction +// identified by their hashes. +func (p *BundlePool) Status(hash common.Hash) txpool.TxStatus { + return txpool.TxStatusUnknown +} + +func (p *BundlePool) filter(tx *types.Transaction) bool { + switch tx.Type() { + case types.LegacyTxType, types.AccessListTxType, types.DynamicFeeTxType: + return true + default: + return false + } +} + +func (p *BundlePool) reset(newHead *types.Header) { + p.mu.Lock() + defer p.mu.Unlock() + + // Prune outdated bundles + for hash, bundle := range p.bundles { + if (bundle.MaxTimestamp != 0 && newHead.Time > bundle.MaxTimestamp) || + newHead.Number.Cmp(new(big.Int).SetUint64(bundle.MaxBlockNumber)) > 0 { + p.slots -= numSlots(p.bundles[hash]) + delete(p.bundles, hash) + } + } +} + +// deleteBundle deletes a bundle from the pool. +// It assumes that the caller holds the pool's lock. +func (p *BundlePool) deleteBundle(hash common.Hash) { + p.slots -= numSlots(p.bundles[hash]) + delete(p.bundles, hash) +} + +// drop removes the bundle with the lowest gas price from the pool. +func (p *BundlePool) drop() { + p.mu.Lock() + defer p.mu.Unlock() + for len(p.bundleHeap) > 0 { + // Pop the bundle with the lowest gas price + // the min element in the heap may not exist in the pool as it may be pruned + leastPriceBundleHash := heap.Pop(&p.bundleHeap).(*types.Bundle).Hash() + if _, ok := p.bundles[leastPriceBundleHash]; ok { + p.deleteBundle(leastPriceBundleHash) + break + } + } +} + +// minimalBundleGasPrice return the lowest gas price from the pool. +func (p *BundlePool) minimalBundleGasPrice() *big.Int { + for len(p.bundleHeap) != 0 { + leastPriceBundleHash := p.bundleHeap[0].Hash() + if bundle, ok := p.bundles[leastPriceBundleHash]; ok { + return bundle.Price + } + heap.Pop(&p.bundleHeap) + } + return new(big.Int) +} + +// ===================================================================================================================== + +// numSlots calculates the number of slots needed for a single bundle. 
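+// With bundleSlotSize = 128KB this is a ceiling division: a 1KB bundle occupies one slot, a 200KB bundle occupies two.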
+func numSlots(bundle *types.Bundle) uint64 { + return (bundle.Size() + bundleSlotSize - 1) / bundleSlotSize +} + +// ===================================================================================================================== + +type BundleHeap []*types.Bundle + +func (h *BundleHeap) Len() int { return len(*h) } + +func (h *BundleHeap) Less(i, j int) bool { + return (*h)[i].Price.Cmp((*h)[j].Price) == -1 +} + +func (h *BundleHeap) Swap(i, j int) { (*h)[i], (*h)[j] = (*h)[j], (*h)[i] } + +func (h *BundleHeap) Push(x interface{}) { + *h = append(*h, x.(*types.Bundle)) +} + +func (h *BundleHeap) Pop() interface{} { + old := *h + n := len(old) + x := old[n-1] + *h = old[0 : n-1] + return x +} diff --git a/core/txpool/bundlepool/config.go b/core/txpool/bundlepool/config.go new file mode 100644 index 0000000000..252004ded5 --- /dev/null +++ b/core/txpool/bundlepool/config.go @@ -0,0 +1,73 @@ +package bundlepool + +import ( + "time" + + "github.com/ethereum/go-ethereum/log" +) + +type Config struct { + PriceLimit uint64 // Minimum gas price to enforce for acceptance into the pool + PriceBump uint64 // Minimum price bump percentage to replace an already existing transaction (nonce) + + GlobalSlots uint64 // Maximum number of bundle slots for all accounts + GlobalQueue uint64 // Maximum number of non-executable bundle slots for all accounts + MaxBundleBlocks uint64 // Maximum number of blocks for calculating MinimalBundleGasPrice + + BundleGasPricePercentile uint8 // Percentile of the recent minimal mev gas price + BundleGasPricerExpireTime time.Duration // Store time duration amount of recent mev gas price + UpdateBundleGasPricerInterval time.Duration // Time interval to update MevGasPricePool +} + +// DefaultConfig contains the default configurations for the bundle pool. +var DefaultConfig = Config{ + PriceLimit: 1, + PriceBump: 10, + + GlobalSlots: 4096 + 1024, // urgent + floating queue capacity with 4:1 ratio + GlobalQueue: 1024, + + MaxBundleBlocks: 50, + BundleGasPricePercentile: 90, + BundleGasPricerExpireTime: time.Minute, + UpdateBundleGasPricerInterval: time.Second, +} + +// sanitize checks the provided user configurations and changes anything that's +// unreasonable or unworkable. 
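+// It returns a corrected copy and logs a warning for every field that is reset to its default.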
+func (config *Config) sanitize() Config { + conf := *config + if conf.PriceLimit < 1 { + log.Warn("Sanitizing invalid txpool price limit", "provided", conf.PriceLimit, "updated", DefaultConfig.PriceLimit) + conf.PriceLimit = DefaultConfig.PriceLimit + } + if conf.PriceBump < 1 { + log.Warn("Sanitizing invalid txpool price bump", "provided", conf.PriceBump, "updated", DefaultConfig.PriceBump) + conf.PriceBump = DefaultConfig.PriceBump + } + if conf.GlobalSlots < 1 { + log.Warn("Sanitizing invalid txpool bundle slots", "provided", conf.GlobalSlots, "updated", DefaultConfig.GlobalSlots) + conf.GlobalSlots = DefaultConfig.GlobalSlots + } + if conf.GlobalQueue < 1 { + log.Warn("Sanitizing invalid txpool global queue", "provided", conf.GlobalQueue, "updated", DefaultConfig.GlobalQueue) + conf.GlobalQueue = DefaultConfig.GlobalQueue + } + if conf.MaxBundleBlocks < 1 { + log.Warn("Sanitizing invalid txpool max bundle blocks", "provided", conf.MaxBundleBlocks, "updated", DefaultConfig.MaxBundleBlocks) + conf.MaxBundleBlocks = DefaultConfig.MaxBundleBlocks + } + if conf.BundleGasPricePercentile >= 100 { + log.Warn("Sanitizing invalid txpool bundle gas price percentile", "provided", conf.BundleGasPricePercentile, "updated", DefaultConfig.BundleGasPricePercentile) + conf.BundleGasPricePercentile = DefaultConfig.BundleGasPricePercentile + } + if conf.BundleGasPricerExpireTime < 1 { + log.Warn("Sanitizing invalid txpool bundle gas pricer expire time", "provided", conf.BundleGasPricerExpireTime, "updated", DefaultConfig.BundleGasPricerExpireTime) + conf.BundleGasPricerExpireTime = DefaultConfig.BundleGasPricerExpireTime + } + if conf.UpdateBundleGasPricerInterval < time.Second { + log.Warn("Sanitizing invalid txpool update BundleGasPricer interval", "provided", conf.UpdateBundleGasPricerInterval, "updated", DefaultConfig.UpdateBundleGasPricerInterval) + conf.UpdateBundleGasPricerInterval = DefaultConfig.UpdateBundleGasPricerInterval + } + return conf +} diff --git a/core/txpool/errors.go b/core/txpool/errors.go index bc26550f78..c3e42bddd7 100644 --- a/core/txpool/errors.go +++ b/core/txpool/errors.go @@ -19,7 +19,7 @@ package txpool import "errors" var ( - // ErrAlreadyKnown is returned if the transactions is already contained + // ErrAlreadyKnown is returned if the transaction is already contained // within the pool. ErrAlreadyKnown = errors.New("already known") @@ -54,4 +54,28 @@ var ( // ErrFutureReplacePending is returned if a future transaction replaces a pending // transaction. Future transactions should only be able to replace other future transactions. ErrFutureReplacePending = errors.New("future transaction tries to replace pending") + + // ErrInBlackList is returned if the sender or to is in black list. + ErrInBlackList = errors.New("sender or to in black list") + + // ErrTxPoolOverflow is returned if the transaction pool is full and can't accept + // another remote transaction. + ErrTxPoolOverflow = errors.New("txpool is full") + + // ErrNoActiveJournal is returned if a transaction is attempted to be inserted + // into the journal, but no such file is currently open. + ErrNoActiveJournal = errors.New("no active journal") + + // ErrSimulatorMissing is returned if the bundle simulator is missing. + ErrSimulatorMissing = errors.New("bundle simulator is missing") + + // ErrBundleTimestampTooHigh is returned if the bundle's MinTimestamp is too high. 
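+	// In the bundle pool, "too high" means a MinTimestamp more than five minutes ahead of the local clock.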
+ ErrBundleTimestampTooHigh = errors.New("bundle MinTimestamp is too high") + + // ErrBundleGasPriceLow is returned if the bundle gas price is too low. + ErrBundleGasPriceLow = errors.New("bundle gas price is too low") + + // ErrBundleAlreadyExist is returned if the bundle is already contained + // within the pool. + ErrBundleAlreadyExist = errors.New("bundle already exist") ) diff --git a/core/txpool/legacypool/journal.go b/core/txpool/legacypool/journal.go index f04ab8fc14..ea53823e03 100644 --- a/core/txpool/legacypool/journal.go +++ b/core/txpool/legacypool/journal.go @@ -23,15 +23,12 @@ import ( "os" "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/core/txpool" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/rlp" ) -// errNoActiveJournal is returned if a transaction is attempted to be inserted -// into the journal, but no such file is currently open. -var errNoActiveJournal = errors.New("no active journal") - // devNull is a WriteCloser that just discards anything written into it. Its // goal is to allow the transaction journal to write into a fake journal when // loading transactions on startup without printing warnings due to no file @@ -120,7 +117,7 @@ func (journal *journal) load(add func([]*types.Transaction) []error) error { // insert adds the specified transaction to the local disk journal. func (journal *journal) insert(tx *types.Transaction) error { if journal.writer == nil { - return errNoActiveJournal + return txpool.ErrNoActiveJournal } if err := rlp.Encode(journal.writer, tx); err != nil { return err diff --git a/core/txpool/legacypool/legacypool.go b/core/txpool/legacypool/legacypool.go index 42962e4a27..2b8becf083 100644 --- a/core/txpool/legacypool/legacypool.go +++ b/core/txpool/legacypool/legacypool.go @@ -18,7 +18,6 @@ package legacypool import ( - "errors" "math" "math/big" "sort" @@ -56,18 +55,6 @@ const ( txReannoMaxNum = 1024 ) -var ( - // ErrAlreadyKnown is returned if the transactions is already contained - // within the pool. - ErrAlreadyKnown = errors.New("already known") - - // ErrTxPoolOverflow is returned if the transaction pool is full and can't accept - // another remote transaction. - ErrTxPoolOverflow = errors.New("txpool is full") - - ErrInBlackList = errors.New("sender or to in black list") -) - var ( evictionInterval = time.Minute // Time interval to check for evictable transactions statsReportInterval = 8 * time.Second // Time interval to report transaction pool stats @@ -448,7 +435,7 @@ func (pool *LegacyPool) Close() error { } // Reset implements txpool.SubPool, allowing the legacy pool's internal state to be -// kept in sync with the main transacion pool's internal state. +// kept in sync with the main transaction pool's internal state. 
func (pool *LegacyPool) Reset(oldHead, newHead *types.Header) { wait := pool.requestReset(oldHead, newHead) <-wait @@ -631,7 +618,7 @@ func (pool *LegacyPool) validateTxBasics(tx *types.Transaction, local bool) erro for _, blackAddr := range types.NanoBlackList { if sender == blackAddr || (tx.To() != nil && *tx.To() == blackAddr) { log.Error("blacklist account detected", "account", blackAddr, "tx", tx.Hash()) - return ErrInBlackList + return txpool.ErrInBlackList } } @@ -663,7 +650,7 @@ func (pool *LegacyPool) validateTx(tx *types.Transaction, local bool) error { for _, blackAddr := range types.NanoBlackList { if sender == blackAddr || (tx.To() != nil && *tx.To() == blackAddr) { log.Error("blacklist account detected", "account", blackAddr, "tx", tx.Hash()) - return ErrInBlackList + return txpool.ErrInBlackList } } @@ -715,7 +702,7 @@ func (pool *LegacyPool) add(tx *types.Transaction, local bool) (replaced bool, e if pool.all.Get(hash) != nil { log.Trace("Discarding already known transaction", "hash", hash) knownTxMeter.Mark(1) - return false, ErrAlreadyKnown + return false, txpool.ErrAlreadyKnown } // Make the local flag. If it's from local source or it's from the network but // the sender is marked as local previously, treat it as the local transaction. @@ -767,7 +754,7 @@ func (pool *LegacyPool) add(tx *types.Transaction, local bool) (replaced bool, e // replacements to 25% of the slots if pool.changesSinceReorg > int(pool.config.GlobalSlots/4) { throttleTxMeter.Mark(1) - return false, ErrTxPoolOverflow + return false, txpool.ErrTxPoolOverflow } // New transaction is better than our worse ones, make room for it. @@ -779,7 +766,7 @@ func (pool *LegacyPool) add(tx *types.Transaction, local bool) (replaced bool, e if !isLocal && !success { log.Trace("Discarding overflown transaction", "hash", hash) overflowedTxMeter.Mark(1) - return false, ErrTxPoolOverflow + return false, txpool.ErrTxPoolOverflow } // If the new transaction is a future transaction it should never churn pending transactions @@ -1038,7 +1025,7 @@ func (pool *LegacyPool) addTxs(txs []*types.Transaction, local, sync bool) []err for i, tx := range txs { // If the transaction is known, pre-set the error slot if pool.all.Get(tx.Hash()) != nil { - errs[i] = ErrAlreadyKnown + errs[i] = txpool.ErrAlreadyKnown knownTxMeter.Mark(1) continue } @@ -1062,7 +1049,7 @@ func (pool *LegacyPool) addTxs(txs []*types.Transaction, local, sync bool) []err newErrs, dirtyAddrs := pool.addTxsLocked(news, local) pool.mu.Unlock() - var nilSlot = 0 + nilSlot := 0 for _, err := range newErrs { for errs[nilSlot] != nil { nilSlot++ diff --git a/core/txpool/subpool.go b/core/txpool/subpool.go index f1a89c5719..415be8f4db 100644 --- a/core/txpool/subpool.go +++ b/core/txpool/subpool.go @@ -69,7 +69,7 @@ type AddressReserver func(addr common.Address, reserve bool) error // production, this interface defines the common methods that allow the primary // transaction pool to manage the subpools. type SubPool interface { - // Filter is a selector used to decide whether a transaction whould be added + // Filter is a selector used to decide whether a transaction would be added // to this particular subpool. Filter(tx *types.Transaction) bool @@ -140,3 +140,21 @@ type SubPool interface { // identified by their hashes. Status(hash common.Hash) TxStatus } + +type BundleSubpool interface { + // FilterBundle is a selector used to decide whether a bundle would be added + // to this particular subpool. 
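+	// Implementations may reject bundles that contain transaction types they do not support.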
+ FilterBundle(bundle *types.Bundle) bool + + // AddBundle enqueues a bundle into the pool if it is valid. + AddBundle(bundle *types.Bundle) error + + // PendingBundles retrieves all currently processable bundles. + PendingBundles(blockNumber uint64, blockTimestamp uint64) []*types.Bundle + + // AllBundles returns all the bundles currently in the pool. + AllBundles() []*types.Bundle + + // PruneBundle removes a bundle from the pool. + PruneBundle(hash common.Hash) +} diff --git a/core/txpool/txpool.go b/core/txpool/txpool.go index 7d9c2d4e92..14ebba7663 100644 --- a/core/txpool/txpool.go +++ b/core/txpool/txpool.go @@ -40,14 +40,12 @@ const ( TxStatusIncluded ) -var ( - // reservationsGaugeName is the prefix of a per-subpool address reservation - // metric. - // - // This is mostly a sanity metric to ensure there's no bug that would make - // some subpool hog all the reservations due to mis-accounting. - reservationsGaugeName = "txpool/reservations" -) +// reservationsGaugeName is the prefix of a per-subpool address reservation +// metric. +// +// This is mostly a sanity metric to ensure there's no bug that would make +// some subpool hog all the reservations due to mis-accounting. +var reservationsGaugeName = "txpool/reservations" // BlockChain defines the minimal set of methods needed to back a tx pool with // a chain. Exists to allow mocking the live chain out of tests. @@ -70,7 +68,7 @@ type TxPool struct { reservations map[common.Address]SubPool // Map with the account to pool reservations reserveLock sync.Mutex // Lock protecting the account reservations - subs event.SubscriptionScope // Subscription scope to unscubscribe all on shutdown + subs event.SubscriptionScope // Subscription scope to unsubscribe all on shutdown quit chan chan error // Quit channel to tear down the head updater } @@ -304,6 +302,29 @@ func (p *TxPool) Add(txs []*Transaction, local bool, sync bool) []error { return errs } +// AddBundle enqueues a bundle into the pool if it is valid. +func (p *TxPool) AddBundle(bundle *types.Bundle) error { + // Try to find a sub pool that accepts the bundle + for _, subpool := range p.subpools { + if bundleSubpool, ok := subpool.(BundleSubpool); ok { + if bundleSubpool.FilterBundle(bundle) { + return bundleSubpool.AddBundle(bundle) + } + } + } + return errors.New("no subpool accepts the bundle") +} + +// PruneBundle removes a bundle from the pool. +func (p *TxPool) PruneBundle(hash common.Hash) { + for _, subpool := range p.subpools { + if bundleSubpool, ok := subpool.(BundleSubpool); ok { + bundleSubpool.PruneBundle(hash) + return // Only one subpool can have the bundle + } + } +} + // Pending retrieves all currently processable transactions, grouped by origin // account and sorted by nonce. func (p *TxPool) Pending(enforceTips bool) map[common.Address][]*LazyTransaction { @@ -316,6 +337,26 @@ func (p *TxPool) Pending(enforceTips bool) map[common.Address][]*LazyTransaction return txs } +// PendingBundles retrieves all currently processable bundles. 
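+// Only the first subpool implementing BundleSubpool is consulted; in the default setup that is the bundle pool.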
+func (p *TxPool) PendingBundles(blockNumber uint64, blockTimestamp uint64) []*types.Bundle { + for _, subpool := range p.subpools { + if bundleSubpool, ok := subpool.(BundleSubpool); ok { + return bundleSubpool.PendingBundles(blockNumber, blockTimestamp) + } + } + return nil +} + +// AllBundles returns all the bundles currently in the pool +func (p *TxPool) AllBundles() []*types.Bundle { + for _, subpool := range p.subpools { + if bundleSubpool, ok := subpool.(BundleSubpool); ok { + return bundleSubpool.AllBundles() + } + } + return nil +} + // SubscribeNewTxsEvent registers a subscription of NewTxsEvent and starts sending // events to the given channel. func (p *TxPool) SubscribeNewTxsEvent(ch chan<- core.NewTxsEvent) event.Subscription { diff --git a/core/types/bid.go b/core/types/bid.go new file mode 100644 index 0000000000..b460a03bb1 --- /dev/null +++ b/core/types/bid.go @@ -0,0 +1,44 @@ +package types + +import ( + "math/big" + "sync/atomic" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/hexutil" +) + +// TODO(roshan) refer to validator for all not in miner + +// Bid represents a bid. +type Bid struct { + BlockNumber uint64 `json:"blockNumber"` + ParentHash common.Hash `json:"parentHash"` + Txs []hexutil.Bytes `json:"txs,omitempty"` + GasUsed uint64 `json:"gasUsed"` + GasFee *big.Int `json:"gasFee"` + BuilderFee *big.Int `json:"builderFee"` + + // caches + hash atomic.Value +} + +func (b *Bid) Hash() common.Hash { + if hash := b.hash.Load(); hash != nil { + return hash.(common.Hash) + } + + var h common.Hash + h = rlpHash(b) + + b.hash.Store(h) + return h +} + +// BidArgs represents the arguments to submit a bid. +type BidArgs struct { + // bid + Bid *Bid + // signed signature of the bid + Signature string `json:"signature"` +} diff --git a/core/types/bundle.go b/core/types/bundle.go new file mode 100644 index 0000000000..040bb6621a --- /dev/null +++ b/core/types/bundle.go @@ -0,0 +1,55 @@ +package types + +import ( + "math/big" + "sync/atomic" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/rlp" +) + +type Bundle struct { + Txs Transactions + MaxBlockNumber uint64 + MinTimestamp uint64 + MaxTimestamp uint64 + RevertingTxHashes []common.Hash + + Price *big.Int // for bundle compare and prune + + // caches + hash atomic.Value + size atomic.Value +} + +type SimulatedBundle struct { + OriginalBundle *Bundle + + BundleGasFees *big.Int + BundleGasPrice *big.Int + BundleGasUsed uint64 + EthSentToSystem *big.Int +} + +func (bundle *Bundle) Size() uint64 { + if size := bundle.size.Load(); size != nil { + return size.(uint64) + } + c := writeCounter(0) + rlp.Encode(&c, bundle) + + size := uint64(c) + bundle.size.Store(size) + return size +} + +// Hash returns the bundle hash. 
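+// The RLP hash is computed on first use and cached, so repeated calls are cheap.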
+func (bundle *Bundle) Hash() common.Hash { + if hash := bundle.hash.Load(); hash != nil { + return hash.(common.Hash) + } + + h := rlpHash(bundle) + bundle.hash.Store(h) + return h +} diff --git a/eth/api_backend.go b/eth/api_backend.go index 3192823148..9d6169a456 100644 --- a/eth/api_backend.go +++ b/eth/api_backend.go @@ -20,6 +20,7 @@ import ( "context" "errors" "math/big" + "sort" "time" "github.com/ethereum/go-ethereum" @@ -290,6 +291,30 @@ func (b *EthAPIBackend) SendTx(ctx context.Context, signedTx *types.Transaction) return b.eth.txPool.Add([]*txpool.Transaction{{Tx: signedTx}}, true, false)[0] } +func (b *EthAPIBackend) SendBundle(ctx context.Context, bundle *types.Bundle) error { + return b.eth.txPool.AddBundle(bundle) +} + +func (b *EthAPIBackend) BundlePrice() *big.Int { + bundles := b.eth.txPool.AllBundles() + + sort.SliceStable(bundles, func(i, j int) bool { + return bundles[j].Price.Cmp(bundles[i].Price) < 0 + }) + + gasFloor := big.NewInt(b.eth.config.Miner.MevGasPriceFloor) + idx := len(bundles) / 2 + if bundles[idx].Price.Cmp(gasFloor) < 0 { + return gasFloor + } + + return bundles[idx].Price +} + +func (b *EthAPIBackend) UnregisterMevValidator(validator common.Address) { + b.eth.miner.UnregisterMevValidator(validator) +} + func (b *EthAPIBackend) GetPoolTransactions() (types.Transactions, error) { pending := b.eth.txPool.Pending(false) var txs types.Transactions diff --git a/eth/backend.go b/eth/backend.go index 133b2f2a86..7bf15b180e 100644 --- a/eth/backend.go +++ b/eth/backend.go @@ -37,6 +37,7 @@ import ( "github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/state/pruner" "github.com/ethereum/go-ethereum/core/txpool" + "github.com/ethereum/go-ethereum/core/txpool/bundlepool" "github.com/ethereum/go-ethereum/core/txpool/legacypool" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/vm" @@ -265,9 +266,10 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) { config.TxPool.Journal = stack.ResolvePath(config.TxPool.Journal) } legacyPool := legacypool.New(config.TxPool, eth.blockchain) + bundlePool := bundlepool.New(config.BundlePool) // TODO(Nathan): eth.txPool, err = txpool.New(new(big.Int).SetUint64(config.TxPool.PriceLimit), eth.blockchain, []txpool.SubPool{legacyPool, blobPool}) - eth.txPool, err = txpool.New(new(big.Int).SetUint64(config.TxPool.PriceLimit), eth.blockchain, []txpool.SubPool{legacyPool}) + eth.txPool, err = txpool.New(new(big.Int).SetUint64(config.TxPool.PriceLimit), eth.blockchain, []txpool.SubPool{legacyPool, bundlePool}) if err != nil { return nil, err } @@ -293,7 +295,6 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) { eth.miner = miner.New(eth, &config.Miner, eth.blockchain.Config(), eth.EventMux(), eth.engine, eth.isLocalBlock) eth.miner.SetExtra(makeExtraData(config.Miner.ExtraData)) - // Create voteManager instance if posa, ok := eth.engine.(consensus.PoSA); ok { // Create votePool instance votePool := vote.NewVotePool(eth.blockchain, posa) @@ -324,6 +325,17 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) { } log.Info("Create voteManager successfully") } + + if config.Miner.Bidder.Enable { + bundlePool.SetBundleSimulator(eth.miner) + + builderAccount, err := eth.accountManager.Find(accounts.Account{Address: config.Miner.Bidder.Account}) + if err != nil { + log.Error("Failed to find builder account", "err", err) + return nil, err + } + eth.miner.Worker.Bidder.SetWallet(builderAccount) + } } gpoParams := 
config.GPO diff --git a/eth/ethconfig/config.go b/eth/ethconfig/config.go index 248b2c5638..eca6a9ea39 100644 --- a/eth/ethconfig/config.go +++ b/eth/ethconfig/config.go @@ -30,6 +30,7 @@ import ( "github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/txpool/blobpool" + "github.com/ethereum/go-ethereum/core/txpool/bundlepool" "github.com/ethereum/go-ethereum/core/txpool/legacypool" "github.com/ethereum/go-ethereum/eth/downloader" "github.com/ethereum/go-ethereum/eth/gasprice" @@ -170,8 +171,9 @@ type Config struct { Miner miner.Config // Transaction pool options - TxPool legacypool.Config - BlobPool blobpool.Config + TxPool legacypool.Config + BlobPool blobpool.Config + BundlePool bundlepool.Config // Gas Price Oracle options GPO gasprice.Config diff --git a/eth/handler.go b/eth/handler.go index b93382402d..421af80889 100644 --- a/eth/handler.go +++ b/eth/handler.go @@ -33,7 +33,6 @@ import ( "github.com/ethereum/go-ethereum/core/monitor" "github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/txpool" - "github.com/ethereum/go-ethereum/core/txpool/legacypool" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/eth/downloader" "github.com/ethereum/go-ethereum/eth/fetcher" @@ -84,10 +83,16 @@ type txPool interface { // Add should add the given transactions to the pool. Add(txs []*txpool.Transaction, local bool, sync bool) []error + // AddBundle should add the given bundle to the pool. + AddBundle(bundle *types.Bundle) error + // Pending should return pending transactions. // The slice should be modifiable by the caller. Pending(enforceTips bool) map[common.Address][]*txpool.LazyTransaction + // PendingBundles should return pending bundles. + PendingBundles(blockNumber uint64, blockTimestamp uint64) []*types.Bundle + // SubscribeNewTxsEvent should return an event subscription of // NewTxsEvent and send events to the given channel. SubscribeNewTxsEvent(chan<- core.NewTxsEvent) event.Subscription @@ -347,7 +352,7 @@ func newHandler(config *handlerConfig) (*handler, error) { addTxs := func(peer string, txs []*txpool.Transaction) []error { errors := h.txpool.Add(txs, false, false) for _, err := range errors { - if err == legacypool.ErrInBlackList { + if err == txpool.ErrInBlackList { accountBlacklistPeerCounter.Inc(1) p := h.peers.peer(peer) if p != nil { @@ -995,7 +1000,7 @@ func (h *handler) enableSyncedFeatures() { h.acceptTxs.Store(true) // In the bsc scenario, pathdb.MaxDirtyBufferSize (256MB) will be used. // The performance is better than DefaultDirtyBufferSize (64MB). - //if h.chain.TrieDB().Scheme() == rawdb.PathScheme { + // if h.chain.TrieDB().Scheme() == rawdb.PathScheme { // h.chain.TrieDB().SetBufferSize(pathdb.DefaultDirtyBufferSize) - //} + // } } diff --git a/ethclient/ethclient.go b/ethclient/ethclient.go index 0765fd5d49..f8598ab99b 100644 --- a/ethclient/ethclient.go +++ b/ethclient/ethclient.go @@ -52,6 +52,16 @@ func DialContext(ctx context.Context, rawurl string) (*Client, error) { return NewClient(c), nil } +// DialOptions creates a new RPC client for the given URL. You can supply any of the +// pre-defined client options to configure the underlying transport. +func DialOptions(ctx context.Context, rawurl string, opts ...rpc.ClientOption) (*Client, error) { + c, err := rpc.DialOptions(ctx, rawurl, opts...) + if err != nil { + return nil, err + } + return NewClient(c), nil +} + // NewClient creates a client that uses the given RPC client. 
func NewClient(c *rpc.Client) *Client { return &Client{c} @@ -689,6 +699,11 @@ func (ec *Client) SendTransactionConditional(ctx context.Context, tx *types.Tran return ec.c.CallContext(ctx, nil, "eth_sendRawTransactionConditional", hexutil.Encode(data), opts) } +// BidBlock sends a bid for selection +func (ec *Client) BidBlock(ctx context.Context, args *types.BidArgs) error { + return ec.c.CallContext(ctx, nil, "mev_bidBlock", args) +} + func toBlockNumArg(number *big.Int) string { if number == nil { return "latest" diff --git a/go.mod b/go.mod index fbebfee8ba..8432e14728 100644 --- a/go.mod +++ b/go.mod @@ -50,6 +50,7 @@ require ( github.com/influxdata/influxdb1-client v0.0.0-20220302092344-a9ab5670611c github.com/jackpal/go-nat-pmp v1.0.2 github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e + github.com/json-iterator/go v1.1.12 github.com/julienschmidt/httprouter v1.3.0 github.com/karalabe/usb v0.0.3-0.20230711191512-61db3e06439c github.com/kylelemons/godebug v1.1.0 @@ -161,7 +162,6 @@ require ( github.com/ipfs/go-log/v2 v2.5.1 // indirect github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect github.com/jmhodges/levigo v1.0.0 // indirect - github.com/json-iterator/go v1.1.12 // indirect github.com/juju/ansiterm v0.0.0-20180109212912-720a0952cc2a // indirect github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213 // indirect github.com/kilic/bls12-381 v0.1.0 // indirect diff --git a/internal/ethapi/api_bundle.go b/internal/ethapi/api_bundle.go new file mode 100644 index 0000000000..3677d742e1 --- /dev/null +++ b/internal/ethapi/api_bundle.go @@ -0,0 +1,129 @@ +package ethapi + +import ( + "context" + "errors" + "math/big" + "time" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/hexutil" + "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/rpc" +) + +const ( + // MaxBundleAliveBlock is the max alive block for bundle + MaxBundleAliveBlock = 100 + // MaxBundleAliveTime is the max alive time for bundle + MaxBundleAliveTime = 5 * 60 // second + MaxOracleBlocks = 21 + DropBlocks = 3 + + InvalidBundleParamError = -38000 +) + +// PrivateTxBundleAPI offers an API for accepting bundled transactions +type PrivateTxBundleAPI struct { + b Backend +} + +// NewPrivateTxBundleAPI creates a new Tx Bundle API instance. +func NewPrivateTxBundleAPI(b Backend) *PrivateTxBundleAPI { + return &PrivateTxBundleAPI{b} +} + +// SendBundleArgs represents the arguments for a call. +type SendBundleArgs struct { + Txs []hexutil.Bytes `json:"txs"` + MaxBlockNumber rpc.BlockNumber `json:"maxBlockNumber"` + MinTimestamp *uint64 `json:"minTimestamp"` + MaxTimestamp *uint64 `json:"maxTimestamp"` + RevertingTxHashes []common.Hash `json:"revertingTxHashes"` +} + +func (s *PrivateTxBundleAPI) BundlePrice(ctx context.Context) *big.Int { + return s.b.BundlePrice() +} + +// SendBundle will add the signed transaction to the transaction pool. 
+// The sender is responsible for signing the transaction and using the correct nonce and ensuring validity +func (s *PrivateTxBundleAPI) SendBundle(ctx context.Context, args SendBundleArgs) error { + if len(args.Txs) == 0 { + return newBundleError(errors.New("bundle missing txs")) + } + + if args.MaxBlockNumber == 0 && (args.MaxTimestamp == nil || *args.MaxTimestamp == 0) { + maxTimeStamp := uint64(time.Now().Unix()) + MaxBundleAliveTime + args.MaxTimestamp = &maxTimeStamp + } + + currentHeader := s.b.CurrentHeader() + + if args.MaxBlockNumber != 0 && args.MaxBlockNumber.Int64() > currentHeader.Number.Int64()+MaxBundleAliveBlock { + return newBundleError(errors.New("the maxBlockNumber should not be lager than currentBlockNum + 100")) + } + + if args.MaxTimestamp != nil && args.MinTimestamp != nil && *args.MaxTimestamp != 0 && *args.MinTimestamp != 0 { + if *args.MaxTimestamp <= *args.MinTimestamp { + return newBundleError(errors.New("the maxTimestamp should not be less than minTimestamp")) + } + } + + if args.MaxTimestamp != nil && *args.MaxTimestamp != 0 && *args.MaxTimestamp < currentHeader.Time { + return newBundleError(errors.New("the maxTimestamp should not be less than currentBlockTimestamp")) + } + + if (args.MaxTimestamp != nil && *args.MaxTimestamp > currentHeader.Time+uint64(MaxBundleAliveTime)) || + (args.MinTimestamp != nil && *args.MinTimestamp > currentHeader.Time+uint64(MaxBundleAliveTime)) { + return newBundleError(errors.New("the minTimestamp/maxTimestamp should not be later than currentBlockTimestamp + 5 minutes")) + } + + var txs types.Transactions + + for _, encodedTx := range args.Txs { + tx := new(types.Transaction) + if err := tx.UnmarshalBinary(encodedTx); err != nil { + return err + } + txs = append(txs, tx) + } + + var minTimestamp, maxTimestamp uint64 + + if args.MinTimestamp != nil { + minTimestamp = *args.MinTimestamp + } + + if args.MaxTimestamp != nil { + maxTimestamp = *args.MaxTimestamp + } + + bundle := &types.Bundle{ + Txs: txs, + MaxBlockNumber: uint64(args.MaxBlockNumber), + MinTimestamp: minTimestamp, + MaxTimestamp: maxTimestamp, + RevertingTxHashes: args.RevertingTxHashes, + } + + return s.b.SendBundle(ctx, bundle) +} + +func newBundleError(err error) *bundleError { + return &bundleError{ + error: err, + } +} + +// bundleError is an API error that encompasses an invalid bundle with JSON error +// code and a binary data blob. +type bundleError struct { + error +} + +// ErrorCode returns the JSON error code for an invalid bundle. +// See: https://github.com/ethereum/wiki/wiki/JSON-RPC-Error-Codes-Improvement-Proposal +func (e *bundleError) ErrorCode() int { + return InvalidBundleParamError +} diff --git a/internal/ethapi/api_issue.go b/internal/ethapi/api_issue.go new file mode 100644 index 0000000000..ee90d11033 --- /dev/null +++ b/internal/ethapi/api_issue.go @@ -0,0 +1,53 @@ +package ethapi + +import ( + "context" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/log" +) + +const ( + MevNotRunningError = -38002 +) + +// IssueAPI offers an API for accepting bid issue from validator +type IssueAPI struct { + b Backend +} + +// NewIssueAPI creates a new bid issue API instance. +func NewIssueAPI(b Backend) *IssueAPI { + return &IssueAPI{b} +} + +// IssueArgs represents the arguments for a call. 
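+// It carries the reporting validator's address, the hash of the bid in question and the error returned for it.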
+type IssueArgs struct { + Validator common.Address `json:"validator"` + BidHash common.Hash `json:"bidHash"` + Error BidError `json:"message"` +} + +// BidError is an API error that encompasses an invalid bid with JSON error +// code and a binary data blob. +type BidError struct { + error + Code int +} + +// ErrorCode returns the JSON error code for an invalid bid. +// See: https://github.com/ethereum/wiki/wiki/JSON-RPC-Error-Codes-Improvement-Proposal +func (e *BidError) ErrorCode() int { + return e.Code +} + +func (s *IssueAPI) ReportIssue(ctx context.Context, args IssueArgs) error { + log.Error("received issue", "bidHash", args.BidHash, "message", args.Error.Error(), "code", args.Error.ErrorCode()) + + switch respCode := args.Error.ErrorCode(); respCode { + case MevNotRunningError: + s.b.UnregisterMevValidator(args.Validator) + } + + return nil +} diff --git a/internal/ethapi/api_test.go b/internal/ethapi/api_test.go index 7435616c9c..d189784437 100644 --- a/internal/ethapi/api_test.go +++ b/internal/ethapi/api_test.go @@ -508,6 +508,9 @@ func (b testBackend) SubscribeNewVoteEvent(ch chan<- core.NewVoteEvent) event.Su func (b testBackend) SendTx(ctx context.Context, signedTx *types.Transaction) error { panic("implement me") } +func (b testBackend) SendBundle(ctx context.Context, bundle *types.Bundle) error { + panic("implement me") +} func (b testBackend) GetTransaction(ctx context.Context, txHash common.Hash) (*types.Transaction, common.Hash, uint64, uint64, error) { tx, blockHash, blockNumber, index := rawdb.ReadTransaction(b.db, txHash) return tx, blockHash, blockNumber, index, nil diff --git a/internal/ethapi/backend.go b/internal/ethapi/backend.go index d71d7e8eba..217de4cd1d 100644 --- a/internal/ethapi/backend.go +++ b/internal/ethapi/backend.go @@ -77,6 +77,9 @@ type Backend interface { // Transaction pool API SendTx(ctx context.Context, signedTx *types.Transaction) error + SendBundle(ctx context.Context, bundle *types.Bundle) error + BundlePrice() *big.Int + UnregisterMevValidator(validator common.Address) GetTransaction(ctx context.Context, txHash common.Hash) (*types.Transaction, common.Hash, uint64, uint64, error) GetPoolTransactions() (types.Transactions, error) GetPoolTransaction(txHash common.Hash) *types.Transaction @@ -127,6 +130,12 @@ func GetAPIs(apiBackend Backend) []rpc.API { }, { Namespace: "personal", Service: NewPersonalAccountAPI(apiBackend, nonceLock), + }, { + Namespace: "eth", + Service: NewPrivateTxBundleAPI(apiBackend), + }, { + Namespace: "mev", + Service: NewIssueAPI(apiBackend), }, } } diff --git a/internal/ethapi/transaction_args_test.go b/internal/ethapi/transaction_args_test.go index fc42df3ddb..1590579cd0 100644 --- a/internal/ethapi/transaction_args_test.go +++ b/internal/ethapi/transaction_args_test.go @@ -324,6 +324,7 @@ func (b *backendMock) SubscribeNewVoteEvent(ch chan<- core.NewVoteEvent) event.S return nil } func (b *backendMock) SendTx(ctx context.Context, signedTx *types.Transaction) error { return nil } +func (b *backendMock) SendBundle(ctx context.Context, bundle *types.Bundle) error { return nil } func (b *backendMock) GetTransaction(ctx context.Context, txHash common.Hash) (*types.Transaction, common.Hash, uint64, uint64, error) { return nil, [32]byte{}, 0, 0, nil } diff --git a/les/api_backend.go b/les/api_backend.go index 9ad566f1ad..d948342a8d 100644 --- a/les/api_backend.go +++ b/les/api_backend.go @@ -200,6 +200,16 @@ func (b *LesApiBackend) SendTx(ctx context.Context, signedTx *types.Transaction) return b.eth.txPool.Add(ctx, 
signedTx) } +func (b *LesApiBackend) SendBundle(ctx context.Context, bundle *types.Bundle) error { + return b.eth.txPool.AddBundle(bundle) +} + +func (b *LesApiBackend) BundlePrice() *big.Int { + return nil +} + +func (b *LesApiBackend) UnregisterMevValidator(validator common.Address) {} + func (b *LesApiBackend) RemoveTx(txHash common.Hash) { b.eth.txPool.RemoveTx(txHash) } diff --git a/light/txpool.go b/light/txpool.go index b792d70b14..1ff44c9b5c 100644 --- a/light/txpool.go +++ b/light/txpool.go @@ -450,6 +450,11 @@ func (pool *TxPool) Add(ctx context.Context, tx *types.Transaction) error { return nil } +// AddBundle enqueues a bundle into the pool if it is valid. +func (pool *TxPool) AddBundle(bundle *types.Bundle) error { + return nil +} + // AddBatch adds all valid transactions to the pool and passes them to // the tx relay backend func (pool *TxPool) AddBatch(ctx context.Context, txs []*types.Transaction) { diff --git a/miner/bidder.go b/miner/bidder.go new file mode 100644 index 0000000000..b589f6bd58 --- /dev/null +++ b/miner/bidder.go @@ -0,0 +1,264 @@ +package miner + +import ( + "context" + "fmt" + "net" + "net/http" + "sync" + "time" + + "github.com/ethereum/go-ethereum/accounts" + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/hexutil" + "github.com/ethereum/go-ethereum/consensus" + "github.com/ethereum/go-ethereum/core" + "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/ethclient" + "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/rlp" + "github.com/ethereum/go-ethereum/rpc" +) + +const maxBid int64 = 3 + +var ( + dialer = &net.Dialer{ + Timeout: time.Second, + KeepAlive: 60 * time.Second, + } + + transport = &http.Transport{ + DialContext: dialer.DialContext, + MaxIdleConnsPerHost: 50, + MaxConnsPerHost: 50, + IdleConnTimeout: 90 * time.Second, + } + + client = &http.Client{ + Timeout: 5 * time.Second, + Transport: transport, + } +) + +type ValidatorConfig struct { + Address common.Address + URL string +} + +type BidderConfig struct { + Enable bool + Validators []ValidatorConfig + Account common.Address + DelayLeftOver uint64 +} + +type Bidder struct { + config *BidderConfig + engine consensus.Engine + chain *core.BlockChain + + validators map[common.Address]*ethclient.Client // validator address -> ethclient.Client + + bestWorks map[int64]*environment + + newBidCh chan *environment + exitCh chan struct{} + + wg sync.WaitGroup + + wallet accounts.Wallet +} + +func NewBidder(config *BidderConfig, engine consensus.Engine, chain *core.BlockChain) *Bidder { + b := &Bidder{ + config: config, + engine: engine, + chain: chain, + validators: make(map[common.Address]*ethclient.Client), + bestWorks: make(map[int64]*environment), + newBidCh: make(chan *environment, 10), + exitCh: make(chan struct{}), + } + + if !config.Enable { + return b + } + + for _, v := range config.Validators { + cl, err := ethclient.DialOptions(context.Background(), v.URL, rpc.WithHTTPClient(client)) + if err != nil { + log.Error("Bidder: failed to dial validator", "url", v.URL, "err", err) + continue + } + + b.validators[v.Address] = cl + } + + if len(b.validators) == 0 { + log.Warn("Bidder: No valid validators") + } + + return b +} + +func (b *Bidder) mainLoop() { + defer b.wg.Done() + + timer := time.NewTimer(0) + defer timer.Stop() + <-timer.C // discard the initial tick + + var ( + bidNum int64 = 0 + bidUntil time.Time + currentHeight = b.chain.CurrentBlock().Number.Int64() + ) + for { + select { + case work := 
<-b.newBidCh:
+			if work.header.Number.Int64() > currentHeight {
+				bidNum = 0
+				bidUntil = time.Unix(int64(work.header.Time+b.chain.Config().Parlia.Period-b.config.DelayLeftOver), 0)
+				currentHeight = work.header.Number.Int64()
+
+				if time.Now().After(bidUntil) {
+					timer.Reset(0)
+				} else {
+					timer.Reset(bidUntil.Sub(time.Now()) / time.Duration(maxBid))
+				}
+			}
+			if bidNum < maxBid && b.isBestWork(work) {
+				b.bestWorks[work.header.Number.Int64()] = work
+			}
+		case <-timer.C:
+			go func() {
+				if b.bestWorks[currentHeight] != nil {
+					b.bid(b.bestWorks[currentHeight])
+					b.bestWorks[currentHeight] = nil
+					bidNum++
+					if bidNum < maxBid && time.Now().Before(bidUntil) {
+						timer.Reset(bidUntil.Sub(time.Now()) / time.Duration(maxBid-bidNum))
+					}
+				}
+			}()
+		case <-b.exitCh:
+			return
+		}
+	}
+}
+
+func (b *Bidder) SetWallet(wallet accounts.Wallet) {
+	b.wallet = wallet
+}
+
+func (b *Bidder) registered(validator common.Address) bool {
+	_, ok := b.validators[validator]
+	return ok
+}
+
+func (b *Bidder) register(validator common.Address, url string) error {
+	if _, ok := b.validators[validator]; ok {
+		return fmt.Errorf("validator %s already registered", validator.String())
+	}
+
+	cl, err := ethclient.DialOptions(context.Background(), url, rpc.WithHTTPClient(client))
+	if err != nil {
+		log.Error("Bidder: failed to dial validator", "url", url, "err", err)
+		return err
+	}
+
+	b.validators[validator] = cl
+	return nil
+}
+
+func (b *Bidder) unregister(validator common.Address) {
+	if _, ok := b.validators[validator]; ok {
+		delete(b.validators, validator)
+	}
+}
+
+// bid notifies the next in-turn validator of the work:
+// 1. compute the return profit for the builder based on realtime traffic and validator commission
+// 2. send the bid to the validator
+func (b *Bidder) bid(work *environment) {
+	var (
+		cli     = b.validators[work.coinbase]
+		parent  = b.chain.CurrentBlock()
+		bidArgs *types.BidArgs
+	)
+
+	if cli == nil {
+		log.Info("Bidder: validator not integrated", "validator", work.coinbase)
+		return
+	}
+
+	// construct bid from work
+	{
+		var txs []hexutil.Bytes
+		for _, tx := range work.txs {
+			var txBytes []byte
+			var err error
+			txBytes, err = tx.MarshalBinary()
+			if err != nil {
+				log.Error("Bidder: fail to marshal tx", "tx", tx, "err", err)
+				return
+			}
+			txs = append(txs, txBytes)
+		}
+
+		bid := types.Bid{
+			BlockNumber: parent.Number.Uint64() + 1,
+			ParentHash:  parent.Hash(),
+			GasUsed:     work.header.GasUsed,
+			GasFee:      work.profit,
+			Txs:         txs,
+			// TODO: decide builderFee according to realtime traffic and validator commission
+		}
+
+		signature, err := b.signBid(&bid)
+		if err != nil {
+			log.Error("Bidder: fail to sign bid", "err", err)
+			return
+		}
+
+		bidArgs = &types.BidArgs{
+			Bid:       &bid,
+			Signature: hexutil.Encode(signature),
+		}
+	}
+
+	err := cli.BidBlock(context.Background(), bidArgs)
+	if err != nil {
+		log.Error("Bidder: bidding failed", "err", err)
+		return
+	}
+
+	log.Debug("Bidder: bidding success")
+
+	return
+}
+
+// isBestWork reports whether the work is better than the current best work for its block height.
+func (b *Bidder) isBestWork(work *environment) bool {
+	if work.profit == nil {
+		return false
+	}
+
+	best := b.bestWorks[work.header.Number.Int64()]
+	if best == nil {
+		return true
+	}
+
+	return best.profit.Cmp(work.profit) < 0
+}
+
+// signBid signs the bid with the builder's account
+func (b *Bidder) signBid(bid *types.Bid) ([]byte, error) {
+	bz, err := rlp.EncodeToBytes(bid)
+	if err != nil {
+		return nil, err
+	}
+
+	return b.wallet.SignData(accounts.Account{Address: b.config.Account}, accounts.MimetypeTextPlain, bz)
+}
+
+// isEnabled returns whether bidding is
enabled +func (b *Bidder) isEnabled() bool { + return b.config.Enable +} diff --git a/miner/bundle_cache.go b/miner/bundle_cache.go new file mode 100644 index 0000000000..37cf1ab74a --- /dev/null +++ b/miner/bundle_cache.go @@ -0,0 +1,83 @@ +package miner + +import ( + "sync" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/core/types" +) + +const ( + maxHeaders = 3 +) + +type BundleCache struct { + mu sync.Mutex + entries []*BundleCacheEntry +} + +func NewBundleCache() *BundleCache { + return &BundleCache{ + entries: make([]*BundleCacheEntry, maxHeaders), + } +} + +func (b *BundleCache) GetBundleCache(header common.Hash) *BundleCacheEntry { + b.mu.Lock() + defer b.mu.Unlock() + + for _, entry := range b.entries { + if entry != nil && entry.headerHash == header { + return entry + } + } + newEntry := newCacheEntry(header) + b.entries = b.entries[1:] + b.entries = append(b.entries, newEntry) + + return newEntry +} + +type BundleCacheEntry struct { + mu sync.Mutex + headerHash common.Hash + successfulBundles map[common.Hash]*types.SimulatedBundle + failedBundles map[common.Hash]struct{} +} + +func newCacheEntry(header common.Hash) *BundleCacheEntry { + return &BundleCacheEntry{ + headerHash: header, + successfulBundles: make(map[common.Hash]*types.SimulatedBundle), + failedBundles: make(map[common.Hash]struct{}), + } +} + +func (c *BundleCacheEntry) GetSimulatedBundle(bundle common.Hash) (*types.SimulatedBundle, bool) { + c.mu.Lock() + defer c.mu.Unlock() + + if simmed, ok := c.successfulBundles[bundle]; ok { + return simmed, true + } + + if _, ok := c.failedBundles[bundle]; ok { + return nil, true + } + + return nil, false +} + +func (c *BundleCacheEntry) UpdateSimulatedBundles(result []*types.SimulatedBundle, bundles []*types.Bundle) { + c.mu.Lock() + defer c.mu.Unlock() + + for i, simBundle := range result { + bundleHash := bundles[i].Hash() + if simBundle != nil { + c.successfulBundles[bundleHash] = simBundle + } else { + c.failedBundles[bundleHash] = struct{}{} + } + } +} diff --git a/miner/miner.go b/miner/miner.go index 4db6140803..4baa6a4de0 100644 --- a/miner/miner.go +++ b/miner/miner.go @@ -26,6 +26,7 @@ import ( "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/consensus" + "github.com/ethereum/go-ethereum/consensus/misc/eip1559" "github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/txpool" @@ -54,8 +55,12 @@ type Config struct { Recommit time.Duration // The time interval for miner to re-create mining work. VoteEnable bool // Whether to vote when mining + MevGasPriceFloor int64 `toml:",omitempty"` + NewPayloadTimeout time.Duration // The maximum time allowance for creating a new payload DisableVoteAttestation bool // Whether to skip assembling vote attestation + + Bidder BidderConfig // Bidder configuration } // DefaultConfig contains default settings for miner. 
@@ -80,7 +85,7 @@ type Miner struct { exitCh chan struct{} startCh chan struct{} stopCh chan struct{} - worker *worker + Worker *worker wg sync.WaitGroup } @@ -93,7 +98,7 @@ func New(eth Backend, config *Config, chainConfig *params.ChainConfig, mux *even exitCh: make(chan struct{}), startCh: make(chan struct{}), stopCh: make(chan struct{}), - worker: newWorker(config, chainConfig, engine, eth, mux, isLocalBlock, false), + Worker: newWorker(config, chainConfig, engine, eth, mux, isLocalBlock, false), } miner.wg.Add(1) go miner.update() @@ -128,42 +133,42 @@ func (miner *Miner) update() { switch ev.Data.(type) { case downloader.StartEvent: wasMining := miner.Mining() - miner.worker.stop() + miner.Worker.stop() canStart = false if wasMining { // Resume mining after sync was finished shouldStart = true log.Info("Mining aborted due to sync") } - miner.worker.syncing.Store(true) + miner.Worker.syncing.Store(true) case downloader.FailedEvent: canStart = true if shouldStart { - miner.worker.start() + miner.Worker.start() } - miner.worker.syncing.Store(false) + miner.Worker.syncing.Store(false) case downloader.DoneEvent: canStart = true if shouldStart { - miner.worker.start() + miner.Worker.start() } - miner.worker.syncing.Store(false) + miner.Worker.syncing.Store(false) // Stop reacting to downloader events events.Unsubscribe() } case <-miner.startCh: if canStart { - miner.worker.start() + miner.Worker.start() } shouldStart = true case <-miner.stopCh: shouldStart = false - miner.worker.stop() + miner.Worker.stop() case <-miner.exitCh: - miner.worker.close() + miner.Worker.close() return } } @@ -183,7 +188,7 @@ func (miner *Miner) Close() { } func (miner *Miner) Mining() bool { - return miner.worker.isRunning() + return miner.Worker.isRunning() } func (miner *Miner) Hashrate() uint64 { @@ -197,34 +202,34 @@ func (miner *Miner) SetExtra(extra []byte) error { if uint64(len(extra)) > params.MaximumExtraDataSize { return fmt.Errorf("extra exceeds max length. %d > %v", len(extra), params.MaximumExtraDataSize) } - miner.worker.setExtra(extra) + miner.Worker.setExtra(extra) return nil } // SetRecommitInterval sets the interval for sealing work resubmitting. func (miner *Miner) SetRecommitInterval(interval time.Duration) { - miner.worker.setRecommitInterval(interval) + miner.Worker.setRecommitInterval(interval) } // Pending returns the currently pending block and associated state. The returned // values can be nil in case the pending block is not initialized func (miner *Miner) Pending() (*types.Block, *state.StateDB) { - if miner.worker.isRunning() { - pendingBlock, pendingState := miner.worker.pending() + if miner.Worker.isRunning() { + pendingBlock, pendingState := miner.Worker.pending() if pendingState != nil && pendingBlock != nil { return pendingBlock, pendingState } } // fallback to latest block - block := miner.worker.chain.CurrentBlock() + block := miner.Worker.chain.CurrentBlock() if block == nil { return nil, nil } - stateDb, err := miner.worker.chain.StateAt(block.Root) + stateDb, err := miner.Worker.chain.StateAt(block.Root) if err != nil { return nil, nil } - return miner.worker.chain.GetBlockByHash(block.Hash()), stateDb + return miner.Worker.chain.GetBlockByHash(block.Hash()), stateDb } // PendingBlock returns the currently pending block. 
The returned block can be @@ -234,39 +239,96 @@ func (miner *Miner) Pending() (*types.Block, *state.StateDB) { // simultaneously, please use Pending(), as the pending state can // change between multiple method calls func (miner *Miner) PendingBlock() *types.Block { - if miner.worker.isRunning() { - pendingBlock := miner.worker.pendingBlock() + if miner.Worker.isRunning() { + pendingBlock := miner.Worker.pendingBlock() if pendingBlock != nil { return pendingBlock } } // fallback to latest block - return miner.worker.chain.GetBlockByHash(miner.worker.chain.CurrentBlock().Hash()) + return miner.Worker.chain.GetBlockByHash(miner.Worker.chain.CurrentBlock().Hash()) } // PendingBlockAndReceipts returns the currently pending block and corresponding receipts. // The returned values can be nil in case the pending block is not initialized. func (miner *Miner) PendingBlockAndReceipts() (*types.Block, types.Receipts) { - return miner.worker.pendingBlockAndReceipts() + return miner.Worker.pendingBlockAndReceipts() } func (miner *Miner) SetEtherbase(addr common.Address) { - miner.worker.setEtherbase(addr) + miner.Worker.setEtherbase(addr) } // SetGasCeil sets the gaslimit to strive for when mining blocks post 1559. // For pre-1559 blocks, it sets the ceiling. func (miner *Miner) SetGasCeil(ceil uint64) { - miner.worker.setGasCeil(ceil) + miner.Worker.setGasCeil(ceil) } // SubscribePendingLogs starts delivering logs from pending transactions // to the given channel. func (miner *Miner) SubscribePendingLogs(ch chan<- []*types.Log) event.Subscription { - return miner.worker.pendingLogsFeed.Subscribe(ch) + return miner.Worker.pendingLogsFeed.Subscribe(ch) } // BuildPayload builds the payload according to the provided parameters. func (miner *Miner) BuildPayload(args *BuildPayloadArgs) (*Payload, error) { - return miner.worker.buildPayload(args) + return miner.Worker.buildPayload(args) +} + +func (miner *Miner) SimulateBundle(bundle *types.Bundle) (*big.Int, error) { + parent := miner.eth.BlockChain().CurrentBlock() + timestamp := time.Now().Unix() + if parent.Time >= uint64(timestamp) { + timestamp = int64(parent.Time + 1) + } + + header := &types.Header{ + ParentHash: parent.Hash(), + Number: new(big.Int).Add(parent.Number, common.Big1), + GasLimit: core.CalcGasLimit(parent.GasLimit, miner.Worker.config.GasCeil), + Extra: miner.Worker.extra, + Time: uint64(timestamp), + Coinbase: miner.Worker.etherbase(), + } + + // Set baseFee and GasLimit if we are on an EIP-1559 chain + if miner.Worker.chainConfig.IsLondon(header.Number) { + header.BaseFee = eip1559.CalcBaseFee(miner.Worker.chainConfig, parent) + } + + if err := miner.Worker.engine.Prepare(miner.eth.BlockChain(), header); err != nil { + return nil, err + } + + state, err := miner.eth.BlockChain().StateAt(parent.Root) + if err != nil { + return nil, err + } + + env := &environment{ + header: header, + state: state.Copy(), + signer: types.MakeSigner(miner.Worker.chainConfig, header.Number, header.Time), + } + + s, err := miner.Worker.simulateBundles(env, []*types.Bundle{bundle}) + if err != nil { + return nil, err + } + return s[0].BundleGasPrice, nil +} + +func (miner *Miner) RegisterMevValidator(validator common.Address, url string) error { + if miner.Worker.Bidder != nil { + return miner.Worker.Bidder.register(validator, url) + } + + return fmt.Errorf("bidder is nil") +} + +func (miner *Miner) UnregisterMevValidator(validator common.Address) { + if miner.Worker.Bidder != nil { + miner.Worker.Bidder.unregister(validator) + } } diff --git 
a/miner/miner_test.go b/miner/miner_test.go index 489bc46a91..a3546c19fb 100644 --- a/miner/miner_test.go +++ b/miner/miner_test.go @@ -232,7 +232,7 @@ func TestMinerSetEtherbase(t *testing.T) { coinbase := common.HexToAddress("0xdeedbeef") miner.SetEtherbase(coinbase) - if addr := miner.worker.etherbase(); addr != coinbase { + if addr := miner.Worker.etherbase(); addr != coinbase { t.Fatalf("Unexpected etherbase want %x got %x", coinbase, addr) } } diff --git a/miner/worker.go b/miner/worker.go index ca3327f279..2226988bec 100644 --- a/miner/worker.go +++ b/miner/worker.go @@ -24,10 +24,11 @@ import ( "sync/atomic" "time" + lru "github.com/hashicorp/golang-lru" + "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/consensus" "github.com/ethereum/go-ethereum/consensus/misc/eip1559" - "github.com/ethereum/go-ethereum/consensus/parlia" "github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/systemcontracts" @@ -38,7 +39,6 @@ import ( "github.com/ethereum/go-ethereum/metrics" "github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/trie" - lru "github.com/hashicorp/golang-lru" ) const ( @@ -86,6 +86,8 @@ type environment struct { header *types.Header txs []*types.Transaction receipts []*types.Receipt + + profit *big.Int // block gas fee + BNBSentToSystem } // copy creates a deep copy of environment. @@ -131,6 +133,9 @@ const ( commitInterruptResubmit commitInterruptTimeout commitInterruptOutOfGas + commitInterruptBundleTxNil + commitInterruptBundleTxProtected + commitInterruptBundleCommit ) // newWorkReq represents a request for new sealing work submitting with relative interrupt notifier. @@ -219,6 +224,10 @@ type worker struct { fullTaskHook func() // Method to call before pushing the full sealing task. resubmitHook func(time.Duration, time.Duration) // Method to call upon updating resubmitting interval. recentMinedBlocks *lru.Cache + + // MEV + Bidder *Bidder + bundleCache *BundleCache } func newWorker(config *Config, chainConfig *params.ChainConfig, engine consensus.Engine, eth Backend, mux *event.TypeMux, isLocalBlock func(header *types.Header) bool, init bool) *worker { @@ -244,6 +253,8 @@ func newWorker(config *Config, chainConfig *params.ChainConfig, engine consensus exitCh: make(chan struct{}), resubmitIntervalCh: make(chan time.Duration), recentMinedBlocks: recentMinedBlocks, + Bidder: NewBidder(&config.Bidder, engine, eth.BlockChain()), + bundleCache: NewBundleCache(), } // Subscribe events for blockchain worker.chainHeadSub = eth.BlockChain().SubscribeChainHeadEvent(worker.chainHeadCh) @@ -267,11 +278,18 @@ func newWorker(config *Config, chainConfig *params.ChainConfig, engine consensus } worker.newpayloadTimeout = newpayloadTimeout - worker.wg.Add(4) + worker.wg.Add(2) go worker.mainLoop() go worker.newWorkLoop(recommit) - go worker.resultLoop() - go worker.taskLoop() + // if not builder + if !worker.Bidder.isEnabled() { + worker.wg.Add(2) + go worker.resultLoop() + go worker.taskLoop() + } else { + worker.Bidder.wg.Add(1) + go worker.Bidder.mainLoop() + } // Submit first work to initialize pending state. 
if init { @@ -418,17 +436,6 @@ func (w *worker) newWorkLoop(recommit time.Duration) { } clearPending(head.Block.NumberU64()) timestamp = time.Now().Unix() - if p, ok := w.engine.(*parlia.Parlia); ok { - signedRecent, err := p.SignRecently(w.chain, head.Block) - if err != nil { - log.Debug("Not allowed to propose block", "err", err) - continue - } - if signedRecent { - log.Info("Signed recently, must wait") - continue - } - } commit(commitInterruptNewHead) case <-timer.C: @@ -486,6 +493,10 @@ func (w *worker) mainLoop() { // System stopped case <-w.exitCh: + if w.Bidder.isEnabled() { + close(w.Bidder.exitCh) + w.Bidder.wg.Wait() + } return case <-w.chainHeadSub.Err(): return @@ -647,7 +658,8 @@ func (w *worker) resultLoop() { // makeEnv creates a new environment for the sealing block. func (w *worker) makeEnv(parent *types.Header, header *types.Header, coinbase common.Address, - prevEnv *environment) (*environment, error) { + prevEnv *environment, +) (*environment, error) { // Retrieve the parent state to execute on top and start a prefetcher for // the miner to speed block sealing up a bit state, err := w.chain.StateAtWithSharedPool(parent.Root) @@ -700,14 +712,19 @@ func (w *worker) commitTransaction(env *environment, tx *txpool.Transaction, rec env.gasPool.SetGas(gp) return nil, err } + env.txs = append(env.txs, tx.Tx) env.receipts = append(env.receipts, receipt) + gasUsed := new(big.Int).SetUint64(receipt.GasUsed) + env.profit.Add(env.profit, gasUsed.Mul(gasUsed, tx.Tx.GasPrice())) + return receipt.Logs, nil } func (w *worker) commitTransactions(env *environment, txs *transactionsByPriceAndNonce, - interruptCh chan int32, stopTimer *time.Timer) error { + interruptCh chan int32, stopTimer *time.Timer, +) error { gasLimit := env.header.GasLimit if env.gasPool == nil { env.gasPool = new(core.GasPool).AddGas(gasLimit) @@ -728,7 +745,7 @@ func (w *worker) commitTransactions(env *environment, txs *transactionsByPriceAn stopPrefetchCh := make(chan struct{}) defer close(stopPrefetchCh) - //prefetch txs from all pending txs + // prefetch txs from all pending txs txsPrefetch := txs.Copy() tx := txsPrefetch.PeekWithUnwrap() if tx != nil { @@ -991,10 +1008,34 @@ func (w *worker) commitWork(interruptCh chan int32, timestamp int64) { // Set the coinbase if the worker is running or it's required var coinbase common.Address if w.isRunning() { - coinbase = w.etherbase() - if coinbase == (common.Address{}) { - log.Error("Refusing to mine without etherbase") - return + if w.Bidder.isEnabled() { + var err error + // take the next in-turn validator as coinbase + coinbase, err = w.engine.NextInTurnValidator(w.chain, w.chain.CurrentBlock()) + if err != nil { + log.Error("Failed to get next in-turn validator", "err", err) + return + } + + // do not build work if not register to the coinbase + if !w.Bidder.registered(coinbase) { + log.Warn("Refusing to mine with unregistered validator") + return + } + + // set validator to the consensus engine + if posa, ok := w.engine.(consensus.PoSA); ok { + posa.SetValidator(coinbase) + } else { + log.Warn("Consensus engine does not support validator setting") + return + } + } else { + coinbase = w.etherbase() + if coinbase == (common.Address{}) { + log.Error("Refusing to mine without etherbase") + return + } } } @@ -1060,7 +1101,7 @@ LOOP: // Fill pending transactions from the txpool into the block. 
fillStart := time.Now() - err = w.fillTransactions(interruptCh, work, stopTimer) + err = w.fillTransactionsAndBundles(interruptCh, work, stopTimer) fillDuration := time.Since(fillStart) switch { case errors.Is(err, errBlockInterruptedByNewHead): @@ -1077,6 +1118,10 @@ LOOP: break LOOP } + if w.Bidder.isEnabled() { + w.Bidder.newBidCh <- work + } + if interruptCh == nil || stopTimer == nil { // it is single commit work, no need to try several time. log.Info("commitWork interruptCh or stopTimer is nil") @@ -1159,7 +1204,7 @@ LOOP: // Note the assumption is held that the mutation is allowed to the passed env, do // the deep copy first. func (w *worker) commit(env *environment, interval func(), update bool, start time.Time) error { - if w.isRunning() { + if w.isRunning() && !w.Bidder.isEnabled() { if interval != nil { interval() } diff --git a/miner/worker_builder.go b/miner/worker_builder.go new file mode 100644 index 0000000000..4189ac1952 --- /dev/null +++ b/miner/worker_builder.go @@ -0,0 +1,445 @@ +package miner + +import ( + "errors" + "math/big" + "sort" + "sync" + "time" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/consensus" + "github.com/ethereum/go-ethereum/core" + "github.com/ethereum/go-ethereum/core/state" + "github.com/ethereum/go-ethereum/core/txpool" + "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/params" +) + +var ( + errNonRevertingTxInBundleFailed = errors.New("non-reverting tx in bundle failed") + errBundlePriceTooLow = errors.New("bundle price too low") +) + +// commitWorkV2 +// TODO(roshan) consider whether to take bundle pool status as LOOP_WAIT condition +func (w *worker) commitWorkV2(interruptCh chan int32, timestamp int64) { +} + +// fillTransactions retrieves the pending bundles and transactions from the txpool and fills them +// into the given sealing block. The selection and ordering strategy can be extended in the future. +func (w *worker) fillTransactionsAndBundles(interruptCh chan int32, env *environment, stopTimer *time.Timer) error { + env.profit = new(big.Int) + + var ( + pending map[common.Address][]*txpool.LazyTransaction + localTxs map[common.Address][]*txpool.LazyTransaction + remoteTxs map[common.Address][]*txpool.LazyTransaction + bundles []*types.Bundle + ) + { + // Split the pending transactions into locals and remotes + // Fill the block with all available pending transactions. 
+ pending = w.eth.TxPool().Pending(false) + + localTxs, remoteTxs = make(map[common.Address][]*txpool.LazyTransaction), pending + for _, account := range w.eth.TxPool().Locals() { + if txs := remoteTxs[account]; len(txs) > 0 { + delete(remoteTxs, account) + localTxs[account] = txs + } + } + + bundles = w.eth.TxPool().PendingBundles(env.header.Number.Uint64(), env.header.Time) + + log.Info("fill bundles and transactions", "bundles_count", len(bundles), "tx_count", len(pending)) + + // if no bundles, not necessary to fill transactions + if len(bundles) == 0 { + return errors.New("no bundles in bundle pool") + } + } + + { + txs, bundle, err := w.generateOrderedBundles(env, bundles) + if err != nil { + log.Error("fail to generate ordered bundles", "err", err) + return err + } + + if err = w.commitBundles(env, txs, interruptCh, stopTimer); err != nil { + log.Error("fail to commit bundles", "err", err) + return err + } + + env.profit.Add(env.profit, bundle.EthSentToSystem) + } + + env.state.StopPrefetcher() // no need to prefetch txs for a builder + + if len(localTxs) > 0 { + txs := newTransactionsByPriceAndNonce(env.signer, localTxs, env.header.BaseFee) + err := w.commitTransactions(env, txs, interruptCh, stopTimer) + if err != nil { + return err + } + } + + if len(remoteTxs) > 0 { + txs := newTransactionsByPriceAndNonce(env.signer, remoteTxs, env.header.BaseFee) + err := w.commitTransactions(env, txs, interruptCh, stopTimer) + if err != nil { + return err + } + } + + return nil +} + +func (w *worker) commitBundles( + env *environment, + txs types.Transactions, + interruptCh chan int32, + stopTimer *time.Timer, +) error { + if env.gasPool == nil { + env.gasPool = prepareGasPool(env.header.GasLimit) + } + + var coalescedLogs []*types.Log + signal := commitInterruptNone +LOOP: + for _, tx := range txs { + // In the following three cases, we will interrupt the execution of the transaction. + // (1) new head block event arrival, the reason is 1 + // (2) worker start or restart, the reason is 1 + // (3) worker recreate the sealing block with any newly arrived transactions, the reason is 2. + // For the first two cases, the semi-finished work will be discarded. + // For the third case, the semi-finished work will be submitted to the consensus engine. + if interruptCh != nil { + select { + case signal, ok := <-interruptCh: + if !ok { + // should never be here, since interruptCh should not be read before + log.Warn("commit transactions stopped unknown") + } + return signalToErr(signal) + default: + } + } // If we don't have enough gas for any further transactions then we're done + if env.gasPool.Gas() < params.TxGas { + log.Trace("Not enough gas for further transactions", "have", env.gasPool, "want", params.TxGas) + signal = commitInterruptOutOfGas + break + } + if tx == nil { + log.Error("Unexpected nil transaction in bundle") + return signalToErr(commitInterruptBundleTxNil) + } + if stopTimer != nil { + select { + case <-stopTimer.C: + log.Info("Not enough time for further transactions", "txs", len(env.txs)) + stopTimer.Reset(0) // re-active the timer, in case it will be used later. + signal = commitInterruptTimeout + break LOOP + default: + } + } + + // Error may be ignored here. The error has already been checked + // during transaction acceptance is the transaction pool. + // + // We use the eip155 signer regardless of the current hf. + from, _ := types.Sender(env.signer, tx) + // Check whether the tx is replay protected. 
If we're not in the EIP155 hf + // phase, start ignoring the sender until we do. + if tx.Protected() && !w.chainConfig.IsEIP155(env.header.Number) { + log.Debug("Unexpected protected transaction in bundle") + return signalToErr(commitInterruptBundleTxProtected) + } + // Start executing the transaction + env.state.SetTxContext(tx.Hash(), env.tcount) + + logs, err := w.commitTransaction(env, &txpool.Transaction{Tx: tx}, core.NewReceiptBloomGenerator()) + switch err { + case core.ErrGasLimitReached: + // Pop the current out-of-gas transaction without shifting in the next from the account + log.Error("Unexpected gas limit exceeded for current block in the bundle", "sender", from) + return signalToErr(commitInterruptBundleCommit) + + case core.ErrNonceTooLow: + // New head notification data race between the transaction pool and miner, shift + log.Error("Transaction with low nonce in the bundle", "sender", from, "nonce", tx.Nonce()) + return signalToErr(commitInterruptBundleCommit) + + case core.ErrNonceTooHigh: + // Reorg notification data race between the transaction pool and miner, skip account = + log.Error("Account with high nonce in the bundle", "sender", from, "nonce", tx.Nonce()) + return signalToErr(commitInterruptBundleCommit) + + case nil: + // Everything ok, collect the logs and shift in the next transaction from the same account + coalescedLogs = append(coalescedLogs, logs...) + env.tcount++ + continue + + default: + // Strange error, discard the transaction and get the next in line (note, the + // nonce-too-high clause will prevent us from executing in vain). + log.Error("Transaction failed in the bundle", "hash", tx.Hash(), "err", err) + return signalToErr(commitInterruptBundleCommit) + } + } + + if !w.isRunning() && len(coalescedLogs) > 0 { + // We don't push the pendingLogsEvent while we are mining. The reason is that + // when we are mining, the worker will regenerate a mining block every 3 seconds. + // In order to avoid pushing the repeated pendingLog, we disable the pending log pushing. + + // make a copy, the state caches the logs and these logs get "upgraded" from pending to mined + // logs by filling in the block hash when the block was mined by the local miner. This can + // cause a race condition if a log was "upgraded" before the PendingLogsEvent is processed. + cpy := make([]*types.Log, len(coalescedLogs)) + for i, l := range coalescedLogs { + cpy[i] = new(types.Log) + *cpy[i] = *l + } + w.pendingLogsFeed.Send(cpy) + } + return signalToErr(signal) +} + +// generateOrderedBundles generates ordered txs from the given bundles. +// 1. sort bundles according to computed gas price when received. +// 2. simulate bundles based on the same state, resort. +// 3. merge resorted simulateBundles based on the iterative state. 
+func (w *worker) generateOrderedBundles(
+	env *environment,
+	bundles []*types.Bundle,
+) (types.Transactions, *types.SimulatedBundle, error) {
+	// sort bundles according to gas price computed when received
+	sort.SliceStable(bundles, func(i, j int) bool {
+		priceI, priceJ := bundles[i].Price, bundles[j].Price
+
+		return priceI.Cmp(priceJ) >= 0
+	})
+
+	// recompute bundle gas price based on the same state and current env
+	simulatedBundles, err := w.simulateBundles(env, bundles)
+	if err != nil {
+		log.Error("fail to simulate bundles based on the same state", "err", err)
+		return nil, nil, err
+	}
+
+	// sort bundles according to fresh gas price
+	sort.SliceStable(simulatedBundles, func(i, j int) bool {
+		priceI, priceJ := simulatedBundles[i].BundleGasPrice, simulatedBundles[j].BundleGasPrice
+
+		return priceI.Cmp(priceJ) >= 0
+	})
+
+	// merge bundles based on iterative state
+	includedTxs, mergedBundle, err := w.mergeBundles(env, simulatedBundles)
+	if err != nil {
+		log.Error("fail to merge bundles", "err", err)
+		return nil, nil, err
+	}
+
+	return includedTxs, mergedBundle, nil
+}
+
+func (w *worker) simulateBundles(env *environment, bundles []*types.Bundle) ([]*types.SimulatedBundle, error) {
+	headerHash := env.header.Hash()
+	simCache := w.bundleCache.GetBundleCache(headerHash)
+	simResult := make([]*types.SimulatedBundle, len(bundles))
+
+	var wg sync.WaitGroup
+	for i, bundle := range bundles {
+		if simmed, ok := simCache.GetSimulatedBundle(bundle.Hash()); ok {
+			// keep cached results at their original index so simResult stays aligned with bundles
+			simResult[i] = simmed
+			continue
+		}
+
+		wg.Add(1)
+		go func(idx int, bundle *types.Bundle, state *state.StateDB) {
+			defer wg.Done()
+
+			gasPool := prepareGasPool(env.header.GasLimit)
+			simmed, err := w.simulateBundle(env, bundle, state, gasPool, 0, true, true)
+			if err != nil {
+				log.Trace("Error computing gas for a simulateBundle", "error", err)
+				return
+			}
+
+			simResult[idx] = simmed
+		}(i, bundle, env.state.Copy())
+	}
+
+	wg.Wait()
+
+	simulatedBundles := make([]*types.SimulatedBundle, 0)
+
+	for _, bundle := range simResult {
+		if bundle == nil {
+			continue
+		}
+
+		simulatedBundles = append(simulatedBundles, bundle)
+	}
+
+	simCache.UpdateSimulatedBundles(simResult, bundles)
+
+	return simulatedBundles, nil
+}
+
+// mergeBundles merges the given simulated bundles into the given environment.
+// It returns the transactions that were included and the resulting merged bundle.
+func (w *worker) mergeBundles( + env *environment, + bundles []*types.SimulatedBundle, +) (types.Transactions, *types.SimulatedBundle, error) { + currentState := env.state.Copy() + gasPool := prepareGasPool(env.header.GasLimit) + + includedTxs := types.Transactions{} + mergedBundle := types.SimulatedBundle{ + BundleGasFees: new(big.Int), + BundleGasUsed: 0, + BundleGasPrice: new(big.Int), + EthSentToSystem: new(big.Int), + } + + for _, bundle := range bundles { + prevState := currentState.Copy() + prevGasPool := new(core.GasPool).AddGas(gasPool.Gas()) + + // the floor gas price is 99/100 what was simulated at the top of the block + floorGasPrice := new(big.Int).Mul(bundle.BundleGasPrice, big.NewInt(99)) + floorGasPrice = floorGasPrice.Div(floorGasPrice, big.NewInt(100)) + + simulatedBundle, err := w.simulateBundle(env, bundle.OriginalBundle, currentState, gasPool, len(includedTxs), true, false) + + if err != nil || simulatedBundle.BundleGasPrice.Cmp(floorGasPrice) <= 0 { + currentState = prevState + gasPool = prevGasPool + + log.Error("failed to merge bundle", "floorGasPrice", floorGasPrice, "err", err) + continue + } + + log.Info("included bundle", + "gasUsed", simulatedBundle.BundleGasUsed, + "gasPrice", simulatedBundle.BundleGasPrice, + "txcount", len(simulatedBundle.OriginalBundle.Txs)) + + includedTxs = append(includedTxs, bundle.OriginalBundle.Txs...) + + mergedBundle.BundleGasFees.Add(mergedBundle.BundleGasFees, simulatedBundle.BundleGasFees) + mergedBundle.BundleGasUsed += simulatedBundle.BundleGasUsed + } + + if len(includedTxs) == 0 { + return nil, nil, errors.New("include no txs when merge bundles") + } + + mergedBundle.BundleGasPrice.Div(mergedBundle.BundleGasFees, new(big.Int).SetUint64(mergedBundle.BundleGasUsed)) + + return includedTxs, &mergedBundle, nil +} + +// simulateBundle computes the gas price for a whole simulateBundle based on the same ctx +// named computeBundleGas in flashbots +func (w *worker) simulateBundle( + env *environment, bundle *types.Bundle, state *state.StateDB, gasPool *core.GasPool, currentTxCount int, + prune, pruneGasExceed bool, +) (*types.SimulatedBundle, error) { + var ( + tempGasUsed uint64 + bundleGasUsed uint64 + bundleGasFees = new(big.Int) + ethSentToSystem = new(big.Int) + ) + + for i, tx := range bundle.Txs { + state.SetTxContext(tx.Hash(), i+currentTxCount) + sysBalanceBefore := state.GetBalance(consensus.SystemAddress) + + receipt, err := core.ApplyTransaction(w.chainConfig, w.chain, &w.coinbase, gasPool, state, env.header, tx, + &tempGasUsed, *w.chain.GetVMConfig()) + if err != nil { + log.Warn("fail to simulate bundle", "hash", bundle.Hash().String(), "err", err) + + if prune { + if errors.Is(err, core.ErrGasLimitReached) && !pruneGasExceed { + log.Warn("bundle gas limit exceed", "hash", bundle.Hash().String()) + } else { + log.Warn("prune bundle", "hash", bundle.Hash().String(), "err", err) + w.eth.TxPool().PruneBundle(bundle.Hash()) + } + } + + return nil, err + } + + if receipt.Status == types.ReceiptStatusFailed && !containsHash(bundle.RevertingTxHashes, receipt.TxHash) { + err = errNonRevertingTxInBundleFailed + log.Warn("fail to simulate bundle", "hash", bundle.Hash().String(), "err", err) + + if prune { + w.eth.TxPool().PruneBundle(bundle.Hash()) + log.Warn("prune bundle", "hash", bundle.Hash().String()) + } + + return nil, err + } + + bundleGasUsed += receipt.GasUsed + + txGasUsed := new(big.Int).SetUint64(receipt.GasUsed) + txGasFees := new(big.Int).Mul(txGasUsed, tx.GasPrice()) + bundleGasFees.Add(bundleGasFees, txGasFees) 
+ sysBalanceAfter := state.GetBalance(consensus.SystemAddress) + sysDelta := new(big.Int).Sub(sysBalanceAfter, sysBalanceBefore) + sysDelta.Sub(sysDelta, txGasFees) + ethSentToSystem.Add(ethSentToSystem, sysDelta) + } + + bundleGasPrice := new(big.Int).Div(bundleGasFees, new(big.Int).SetUint64(bundleGasUsed)) + + if bundleGasPrice.Cmp(big.NewInt(w.config.MevGasPriceFloor)) < 0 { + err := errBundlePriceTooLow + log.Warn("fail to simulate bundle", "hash", bundle.Hash().String(), "err", err) + + if prune { + log.Warn("prune bundle", "hash", bundle.Hash().String()) + w.eth.TxPool().PruneBundle(bundle.Hash()) + } + + return nil, err + } + + return &types.SimulatedBundle{ + OriginalBundle: bundle, + BundleGasFees: bundleGasFees, + BundleGasPrice: bundleGasPrice, + BundleGasUsed: bundleGasUsed, + EthSentToSystem: ethSentToSystem, + }, nil +} + +func containsHash(arr []common.Hash, match common.Hash) bool { + for _, elem := range arr { + if elem == match { + return true + } + } + return false +} + +func prepareGasPool(gasLimit uint64) *core.GasPool { + gasPool := new(core.GasPool).AddGas(gasLimit) + gasPool.SubGas(params.SystemTxsGas) // reserve gas for system txs(keep align with mainnet) + return gasPool +}
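For reference, the pricing rules implemented by `simulateBundle` and `mergeBundles` above come down to integer arithmetic on `big.Int`: a bundle's gas price is its total gas fees divided by its total gas used, and a bundle is only merged if its re-simulated price stays above 99/100 of the price it showed when simulated at the top of the block. The standalone sketch below illustrates that arithmetic only; the `simulatedBundle` struct and the example numbers are simplified stand-ins, not types from this patch.

```go
package main

import (
	"fmt"
	"math/big"
)

// simulatedBundle is a simplified stand-in for types.SimulatedBundle,
// carrying only the fields needed for the pricing arithmetic.
type simulatedBundle struct {
	GasFees *big.Int // sum of gasUsed * gasPrice over the bundle's txs
	GasUsed uint64   // total gas used by the bundle's txs
}

// bundleGasPrice mirrors the price computation in simulateBundle:
// total fees divided by total gas used.
func bundleGasPrice(b simulatedBundle) *big.Int {
	if b.GasUsed == 0 {
		return new(big.Int)
	}
	return new(big.Int).Div(b.GasFees, new(big.Int).SetUint64(b.GasUsed))
}

// meetsFloor mirrors the 99/100 floor check in mergeBundles: the price after
// re-simulation on the iterative state must exceed 99% of the price observed
// when the bundle was simulated at the top of the block.
func meetsFloor(topOfBlockPrice, resimulatedPrice *big.Int) bool {
	floor := new(big.Int).Mul(topOfBlockPrice, big.NewInt(99))
	floor.Div(floor, big.NewInt(100))
	return resimulatedPrice.Cmp(floor) > 0
}

func main() {
	// Example: two 21000-gas transfers priced at 5 gwei and 3 gwei.
	gwei := big.NewInt(1_000_000_000)
	fees := new(big.Int).Mul(big.NewInt(21000*5+21000*3), gwei)
	b := simulatedBundle{GasFees: fees, GasUsed: 42000}

	top := bundleGasPrice(b) // 4 gwei
	fmt.Println("bundle gas price (wei):", top)

	// A re-simulated price of 3.97 gwei still clears the 3.96 gwei floor.
	resim := new(big.Int).Mul(big.NewInt(397), big.NewInt(10_000_000))
	fmt.Println("meets floor:", meetsFloor(top, resim))
}
```

Keeping the floor at 99% of the top-of-block price presumably tolerates small price erosion caused by state changes from earlier merged bundles, while still pruning bundles whose profitability collapses once they execute on the iterative state.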