+cd script
+```
+
+## Manual
+
+You can also manually set up a project. First, create a new cargo project:
+
+```bash
+cargo new script
+cd script
+```
+
+#### Cargo Manifest
+
+Inside this crate, add the `sp1-sdk` crate as a dependency. Your `Cargo.toml` should look as follows:
+
+```toml
+[workspace]
+[package]
+version = "0.1.0"
+name = "script"
+edition = "2021"
+
+[dependencies]
+sp1-sdk = "2.0.0"
+```
+
+The `sp1-sdk` crate includes the necessary utilities to generate, save, and verify proofs.
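+
+For orientation, a minimal script using these utilities might look like the following sketch. The ELF path and the `u32` input are illustrative assumptions, not part of the template:
+
+```rust
+use sp1_sdk::{ProverClient, SP1Stdin};
+
+// Hypothetical path: the ELF produced by `cargo prove build` in your program crate.
+pub const ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");
+
+fn main() {
+    let client = ProverClient::new();
+    let (pk, vk) = client.setup(ELF);
+
+    // Write the program's inputs.
+    let mut stdin = SP1Stdin::new();
+    stdin.write(&20u32);
+
+    // Generate a proof, save it to disk, and verify it.
+    let proof = client.prove(&pk, stdin).run().expect("proving failed");
+    proof.save("proof-with-pis.bin").expect("saving proof failed");
+    client.verify(&proof, &vk).expect("verification failed");
+}
+```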
diff --git a/book/versioned_docs/version-3.4.0/generating-proofs/sp1-sdk-faq.md b/book/versioned_docs/version-3.4.0/generating-proofs/sp1-sdk-faq.md
new file mode 100644
index 0000000000..ca2a0a0596
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/generating-proofs/sp1-sdk-faq.md
@@ -0,0 +1,15 @@
+# FAQ
+
+## Logging and Tracing Information
+
+You can use `sp1_sdk::utils::setup_logger()` to enable logging information. You can set the logging level with the `RUST_LOG` environment variable.
+
+```rust
+sp1_sdk::utils::setup_logger();
+```
+
+Example of setting the logging level to `info` (other options are `debug`, `trace`, and `warn`):
+
+```bash
+RUST_LOG=info cargo run --release
+```
\ No newline at end of file
diff --git a/book/versioned_docs/version-3.4.0/getting-started/hardware-requirements.md b/book/versioned_docs/version-3.4.0/getting-started/hardware-requirements.md
new file mode 100644
index 0000000000..828ea96080
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/getting-started/hardware-requirements.md
@@ -0,0 +1,41 @@
+# Proof Generation Requirements
+
+
+We recommend that developers who want to use SP1 for non-trivial programs generate proofs on our prover network. The prover network generates SP1 proofs across multiple machines, reducing latency, and runs SP1 on optimized hardware instances that result in faster and cheaper proof generation times.
+
+We recommend that for any production benchmarking, you use the prover network to estimate latency and costs of proof generation.
+
+
+
+## Local Proving
+
+If you want to generate SP1 proofs locally, this section has an overview of the required hardware. These requirements depend on which [types of proofs](../generating-proofs/proof-types.md) you want to generate and can also change over time as the design of the zkVM evolves.
+
+**The most important requirement is CPU for performance/latency and RAM to prevent running out of memory.**
+
+| | Mock / Network | Core / Compress | Groth16 and PLONK (EVM) |
+| -------------- | ---------------------------- | ---------------------------------- | ----------------------- |
+| CPU | 1+, single-core perf matters | 16+, more is better | 16+, more is better |
+| Memory | 8GB+, more is better | 16GB+, more if you have more cores | 16GB+, more is better |
+| Disk | 10GB+ | 10GB+ | 10GB+ |
+| EVM Compatible | ✅ | ❌ | ✅ |
+
+### CPU
+
+The execution & trace generation of the zkVM is mostly CPU bound, so high single-core
+performance is recommended to accelerate these steps. The rest of the prover is mostly bound by hashing/field operations,
+which can be parallelized with multiple cores.
+
+### Memory
+
+Our prover requires keeping large matrices (i.e., traces) in memory to generate the proofs. Certain steps of the prover
+have a minimum memory requirement, meaning that if you have less than this amount of memory, the process will OOM.
+
+This effect is most noticeable when using the Groth16 or PLONK provers.
+
+### Disk
+
+Disk is required to install the SP1 zkVM toolchain and to install the circuit artifacts, if you
+plan to build the Groth16 or PLONK provers locally.
+
+Furthermore, disk is used to checkpoint the state of the program execution, which is required to generate the proofs.
diff --git a/book/versioned_docs/version-3.4.0/getting-started/install.md b/book/versioned_docs/version-3.4.0/getting-started/install.md
new file mode 100644
index 0000000000..e58b9a624e
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/getting-started/install.md
@@ -0,0 +1,115 @@
+# Installation
+
+SP1 currently runs on Linux and macOS. You can either use prebuilt binaries through sp1up or
+build the Succinct [Rust toolchain](https://rust-lang.github.io/rustup/concepts/toolchains.html) and CLI from source.
+
+## Requirements
+
+- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
+- [Rust (Nightly)](https://www.rust-lang.org/tools/install)
+- [Docker](https://docs.docker.com/get-docker/)
+
+## Option 1: Prebuilt Binaries (Recommended)
+
+`sp1up` is the SP1 toolchain installer. Open your terminal and run the following command:
+
+```bash
+curl -L https://sp1.succinct.xyz | bash
+```
+
+Then simply follow the instructions on-screen, which will make the `sp1up` command available in your CLI.
+
+After following the instructions, you can run `sp1up` to install the toolchain and the `cargo prove` CLI:
+
+```bash
+sp1up
+```
+
+This will install two things:
+
+1. The `succinct` Rust toolchain which has support for the `riscv32im-succinct-zkvm-elf` compilation target.
+2. The `cargo prove` CLI tool that provides convenient commands for compiling SP1 programs and other helper functionality.
+
+You can verify the installation of the CLI by running `cargo prove --version`:
+
+```bash
+cargo prove --version
+```
+
+You can check the version of the Succinct Rust toolchain by running:
+
+```bash
+RUSTUP_TOOLCHAIN=succinct cargo --version
+```
+or equivalently:
+
+```bash
+cargo +succinct --version
+```
+
+If this works, go to the [next section](./quickstart.md) to compile and prove a simple zkVM program.
+
+### Troubleshooting
+
+#### Rate-limiting
+
+If you experience [rate-limiting](https://docs.github.com/en/rest/using-the-rest-api/getting-started-with-the-rest-api?apiVersion=2022-11-28#rate-limiting) when using the `sp1up` command, you can resolve this by using the `--token` flag and providing your GitHub token. To create a GitHub token, follow the instructions [here](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic).
+
+
+
+#### Unsupported OS Architectures
+
+Currently our prebuilt binaries are built on Ubuntu 20.04 (22.04 on ARM) and macOS. If your OS uses an older GLIBC version, it's possible these may not work and you will need to [build the toolchain from source](#option-2-building-from-source).
+
+#### Conflicting `cargo-prove` installations
+
+If you have installed `cargo-prove` from source, it may conflict with `sp1up`'s `cargo-prove` installation or vice versa. You can remove the `cargo-prove` that was installed from source with the following command:
+
+```bash
+rm ~/.cargo/bin/cargo-prove
+```
+
+Or, you can remove the `cargo-prove` that was installed through `sp1up`:
+
+```bash
+rm ~/.sp1/bin/cargo-prove
+```
+
+
+## Option 2: Building from Source
+
+
+Warning: This option will take a long time to build and is only recommended for advanced users.
+
+
+Make sure you have installed the [dependencies](https://github.com/rust-lang/rust/blob/master/INSTALL.md#dependencies) needed to build the Rust toolchain from source.
+
+Clone the `sp1` repository and navigate to the root directory.
+
+```bash
+git clone git@github.com:succinctlabs/sp1.git
+cd sp1/crates/cli
+cargo install --locked --path .
+cd ~
+cargo prove build-toolchain
+```
+
+Building the toolchain can take a while, ranging from 30 minutes to an hour depending on your machine. If you're on a machine that we have prebuilt binaries for (ARM Mac, or x86 or ARM Linux), you can use the following command to download a prebuilt version.
+
+```bash
+cargo prove install-toolchain
+```
+
+To verify the installation of the toolchain, run the following and make sure you see `succinct`:
+
+```bash
+rustup toolchain list
+```
+
+You can delete your existing installation of the toolchain with:
+
+```bash
+rustup toolchain remove succinct
+```
diff --git a/book/versioned_docs/version-3.4.0/getting-started/project-template.md b/book/versioned_docs/version-3.4.0/getting-started/project-template.md
new file mode 100644
index 0000000000..b6cbd8d733
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/getting-started/project-template.md
@@ -0,0 +1,5 @@
+# Project Template
+
+Another option for getting started with SP1 is to use the [SP1 Project Template](https://github.com/succinctlabs/sp1-project-template/tree/main).
+
+You can use this as a GitHub template to create a new repository that has an SP1 program, a script to generate proofs, and a contracts folder that contains a Solidity contract that can verify SP1 proofs on any EVM chain.
diff --git a/book/versioned_docs/version-3.4.0/getting-started/quickstart.md b/book/versioned_docs/version-3.4.0/getting-started/quickstart.md
new file mode 100644
index 0000000000..0d7f46821e
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/getting-started/quickstart.md
@@ -0,0 +1,124 @@
+# Quickstart
+
+In this section, we will show you how to create a simple Fibonacci program using the SP1 zkVM.
+
+## Create an SP1 Project
+
+### Option 1: Cargo Prove New CLI (Recommended)
+
+You can use the `cargo prove` CLI to create a new project using the `cargo prove new <--bare|--evm> <name>` command. The `--bare` option sets up a basic SP1 project for standalone zkVM programs, while `--evm` adds additional components, including Solidity contracts for on-chain proof verification.
+
+This command will create a new folder in your current directory, which (with `--evm`) includes Solidity smart contracts for onchain integration.
+
+```bash
+cargo prove new --evm fibonacci
+cd fibonacci
+```
+
+### Option 2: Project Template (Solidity Contracts for Onchain Verification)
+
+If you want to use SP1 to generate proofs that will eventually be verified on an EVM chain, you should use the [SP1 project template](https://github.com/succinctlabs/sp1-project-template/tree/main). This GitHub template is scaffolded with an SP1 program, a script to generate proofs, and a contracts folder that contains a Solidity contract that can verify SP1 proofs on any EVM chain.
+
+Either fork the project template repository or clone it:
+
+```bash
+git clone https://github.com/succinctlabs/sp1-project-template.git
+```
+
+### Project Overview
+
+Your new project will have the following structure (ignoring the `contracts` folder, if you are using the project template):
+
+```
+.
+├── program
+│   ├── Cargo.lock
+│   ├── Cargo.toml
+│   ├── elf
+│   │   └── riscv32im-succinct-zkvm-elf
+│   └── src
+│       └── main.rs
+├── rust-toolchain
+└── script
+    ├── Cargo.lock
+    ├── Cargo.toml
+    ├── build.rs
+    └── src
+        └── bin
+            ├── prove.rs
+            └── vkey.rs
+
+6 directories, 4 files
+```
+
+There are 2 directories (each a crate) in the project:
+- `program`: the source code that will be proven inside the zkVM (a minimal sketch is shown below this list).
+- `script`: the code that generates and verifies proofs.
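+
+A minimal `program/src/main.rs` looks roughly like this sketch (the doubling computation is just an illustration):
+
+```rust
+// Programs run inside the zkVM, so the usual Rust `main` harness is replaced
+// by the SP1 entrypoint macro.
+#![no_main]
+sp1_zkvm::entrypoint!(main);
+
+pub fn main() {
+    // Read a private input provided by the script.
+    let n = sp1_zkvm::io::read::<u32>();
+
+    // Commit a public output that the verifier can see.
+    sp1_zkvm::io::commit(&(n * 2));
+}
+```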
+
+**We recommend you install the [rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer) extension.**
+Note that if you use `cargo prove new` inside a monorepo, you will need to add the manifest file to `rust-analyzer.linkedProjects` to get full IDE support.
+
+## Build
+
+Before we can run the program inside the zkVM, it must be compiled to a RISC-V executable using the `succinct` Rust toolchain. This is called an [ELF (Executable and Linkable Format)](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format). To compile the program, you can run the following command:
+
+```bash
+cd program && cargo prove build
+```
+
+which will output the compiled ELF to the file `program/elf/riscv32im-succinct-zkvm-elf`.
+
+Note: the `build.rs` file in the `script` directory will run the above command automatically to build the ELF, meaning you don't have to manually run `cargo prove build` every time you make a change to the program!
+
+## Execute
+
+To test your program, you can first execute it without generating a proof. In general, this is helpful for iterating on your program and verifying that it is correct.
+
+```bash
+cd ../script
+RUST_LOG=info cargo run --release -- --execute
+```
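+
+Under the hood, the script's `--execute` path typically calls the SDK's `execute` method. A minimal sketch (the ELF path and input value are illustrative):
+
+```rust
+use sp1_sdk::{ProverClient, SP1Stdin};
+
+// Hypothetical ELF path; the template's build script produces this file.
+pub const ELF: &[u8] = include_bytes!("../program/elf/riscv32im-succinct-zkvm-elf");
+
+fn main() {
+    sp1_sdk::utils::setup_logger();
+
+    let client = ProverClient::new();
+    let mut stdin = SP1Stdin::new();
+    stdin.write(&20u32);
+
+    // Execute without proving: fast, and returns a cycle-count report.
+    let (public_values, report) = client.execute(ELF, stdin).run().expect("execution failed");
+    println!("total cycles: {}", report.total_instruction_count());
+    println!("public values: {} bytes", public_values.as_slice().len());
+}
+```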
+
+## Prove
+
+When you are ready to generate a proof, run the script with the `--prove` flag:
+
+```bash
+cd ../script
+RUST_LOG=info cargo run --release -- --prove
+```
+
+The output should show something like this:
+```
+n: 20
+2024-07-23T17:07:07.874856Z INFO prove_core:collect_checkpoints: clk = 0 pc = 0x2017e8
+2024-07-23T17:07:07.876264Z INFO prove_core:collect_checkpoints: close time.busy=2.00ms time.idle=1.50µs
+2024-07-23T17:07:07.913304Z INFO prove_core:shard: close time.busy=32.2ms time.idle=791ns
+2024-07-23T17:07:10.724280Z INFO prove_core:commit: close time.busy=2.81s time.idle=1.25µs
+2024-07-23T17:07:10.725923Z INFO prove_core:prove_checkpoint: clk = 0 pc = 0x2017e8 num=0
+2024-07-23T17:07:10.729130Z INFO prove_core:prove_checkpoint: close time.busy=3.68ms time.idle=1.17µs num=0
+2024-07-23T17:07:14.648146Z INFO prove_core: execution report (totals): total_cycles=9329, total_syscall_cycles=20
+2024-07-23T17:07:14.648180Z INFO prove_core: execution report (opcode counts):
+2024-07-23T17:07:14.648197Z INFO prove_core: 1948 add
+...
+2024-07-23T17:07:14.648277Z INFO prove_core: execution report (syscall counts):
+2024-07-23T17:07:14.648408Z INFO prove_core: 8 commit
+...
+2024-07-23T17:07:14.648858Z INFO prove_core: summary: cycles=9329, e2e=9.193968459, khz=1014.69, proofSize=1419780
+2024-07-23T17:07:14.653193Z INFO prove_core: close time.busy=9.20s time.idle=12.2µs
+Successfully generated proof!
+fib(n): 10946
+```
+
+By default, the program is quite small, so proof generation will only take a few seconds locally. After the proof is generated, it will be verified for correctness.
+
+**Note:** When benchmarking proof generation times locally, it is important to note that there is a fixed overhead for proving, which means that the proof generation time for programs with a small number of cycles is not representative of the performance of larger programs (which often have better performance characteristics as the overhead is amortized across many cycles).
+
+## Recommended Workflow
+
+Please see the [Recommended Workflow](../generating-proofs/recommended-workflow) section for more details on how to develop your SP1 program and generate proofs.
+
+We *strongly recommend* that developers who want to use SP1 for non-trivial programs generate proofs on the beta version of our [Prover Network](../generating-proofs/prover-network.md). The prover network generates SP1 proofs across multiple machines, reducing latency, and runs SP1 on optimized hardware instances that result in faster and cheaper proof generation times.
+
+We recommend that for any production benchmarking, you use the prover network to estimate latency and costs of proof generation. We would also love to chat with your team directly to help you get started with the prover network; please fill out this [form](https://partner.succinct.xyz/).
+
diff --git a/book/versioned_docs/version-3.4.0/introduction.md b/book/versioned_docs/version-3.4.0/introduction.md
new file mode 100644
index 0000000000..f3a645786c
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/introduction.md
@@ -0,0 +1,33 @@
+# Introduction
+
+*Documentation for SP1 users and developers*.
+
+[![Telegram Chat][tg-badge]][tg-url]
+
+![](./sp1.png)
+
+
+SP1 is a performant, open-source zero-knowledge virtual machine (zkVM) that verifies the execution of arbitrary Rust (or any LLVM-compiled language) programs.
+
+[tg-badge]: https://img.shields.io/endpoint?color=neon&logo=telegram&label=chat&url=https%3A%2F%2Ftg.sumanjay.workers.dev%2Fsuccinct%5Fsp1
+[tg-url]: https://t.me/+AzG4ws-kD24yMGYx
+
+SP1 has undergone multiple audits from leading ZK security firms and is currently used in production by many top blockchain teams.
+
+## The future of ZK is writing normal code
+
+Zero-knowledge proofs (ZKPs) are one of the most critical technologies to blockchain scaling, interoperability and privacy. But historically, building ZKP systems was extremely complicated, requiring large teams with specialized cryptography expertise and taking years to go to production.
+
+SP1 provides a performant, general-purpose zkVM that enables **any developer** to use ZKPs by writing normal code (in Rust), and get cheap and fast proofs. SP1 will enable ZKPs to become mainstream, introducing a new era of verifiability for all of blockchain infrastructure and beyond.
+
+
+## SP1 enables a diversity of use-cases
+
+ZKPs enable a diversity of use-cases in blockchain and beyond, including:
+
+* Rollups: Use SP1 to generate a ZKP for the state transition function of your rollup and connect to Ethereum, Bitcoin or other chains with full validity proofs or ZK fraud proofs.
+* Interoperability: Use SP1 for fast-finality, cross-rollup interoperability.
+* Bridges: Use SP1 to generate a ZKP for verifying consensus of L1s, including Tendermint, Ethereum’s Light Client protocol and more, for bridging between chains.
+* Oracles: Use SP1 for large scale computations with onchain state, including consensus data and storage data.
+* Aggregation: Use SP1 to aggregate and verify other ZKPs for reduced onchain verification costs.
+* Privacy: Use SP1 for onchain privacy, including private transactions and private state.
diff --git a/book/versioned_docs/version-3.4.0/sp1.png b/book/versioned_docs/version-3.4.0/sp1.png
new file mode 100644
index 0000000000..78576befe3
Binary files /dev/null and b/book/versioned_docs/version-3.4.0/sp1.png differ
diff --git a/book/versioned_docs/version-3.4.0/theme/head.hbs b/book/versioned_docs/version-3.4.0/theme/head.hbs
new file mode 100644
index 0000000000..2e2be7a19f
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/theme/head.hbs
@@ -0,0 +1,6 @@
+
+
+
+
+
\ No newline at end of file
diff --git a/book/verification/off-chain-verification.md b/book/versioned_docs/version-3.4.0/verification/off-chain-verification.md
similarity index 92%
rename from book/verification/off-chain-verification.md
rename to book/versioned_docs/version-3.4.0/verification/off-chain-verification.md
index 100e554eb4..783ee7d1bc 100644
--- a/book/verification/off-chain-verification.md
+++ b/book/versioned_docs/version-3.4.0/verification/off-chain-verification.md
@@ -1,3 +1,6 @@
+import ProgramMain from "@site/static/examples_groth16_program_src_main.rs.mdx";
+import ProgramScript from "@site/static/examples_groth16_script_src_main.rs.mdx";
+
# Offchain Verification
## Rust `no_std` Verification
@@ -26,20 +29,16 @@ and [`PlonkVerifier::verify_proof`](https://docs.rs/sp1-verifier/latest/sp1_veri
keys correspond to the current SP1 version's official Groth16 and Plonk verifying keys, which are used for verifying proofs generated
using docker or the prover network.
-First, generate your groth16/plonk proof with the SP1 SDK. See [here](./onchain/getting-started.md#generating-sp1-proofs-for-onchain-verification)
+First, generate your groth16/plonk proof with the SP1 SDK. See [here](./onchain/getting-started#generating-sp1-proofs-for-onchain-verification)
for more information -- `sp1-verifier` and the solidity verifier expect inputs in the same format.
Next, verify the proof with `sp1-verifier`. The following snippet is from the [Groth16 example program](https://github.com/succinctlabs/sp1/tree/dev/examples/groth16/), which verifies a Groth16 proof within SP1 using `sp1-verifier`.
-```rust,noplayground
-{{#include ../../examples/groth16/program/src/main.rs}}
-```
+<ProgramMain />
Here, the proof, public inputs, and vkey hash are read from stdin. See the following snippet to see how these values are generated.
-```rust,noplayground
-{{#include ../../examples/groth16/script/src/main.rs:12:34}}
-```
+<ProgramScript />
> Note that the SP1 SDK itself is *not* `no_std` compatible.
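+
+For orientation, Groth16 verification with `sp1-verifier` looks roughly like the sketch below. The exact signatures may differ between versions; treat this as an assumption and consult the crate docs:
+
+```rust
+use sp1_verifier::Groth16Verifier;
+
+/// A sketch: verifies a Groth16 proof against SP1's embedded Groth16 verifying key.
+/// All three inputs are produced by the proving script.
+fn is_valid(proof: &[u8], sp1_public_inputs: &[u8], sp1_vkey_hash: &str) -> bool {
+    let groth16_vk = *sp1_verifier::GROTH16_VK_BYTES;
+    Groth16Verifier::verify(proof, sp1_public_inputs, sp1_vkey_hash, groth16_vk).is_ok()
+}
+```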
diff --git a/book/versioned_docs/version-3.4.0/verification/onchain/contract-addresses.md b/book/versioned_docs/version-3.4.0/verification/onchain/contract-addresses.md
new file mode 100644
index 0000000000..0a23f6ab2e
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/verification/onchain/contract-addresses.md
@@ -0,0 +1,101 @@
+# Contract Addresses
+
+To verify SP1 proofs on-chain, we recommend using our deployed canonical verifier gateways. The
+[SP1VerifierGateway](https://github.com/succinctlabs/sp1-contracts/blob/main/contracts/src/ISP1VerifierGateway.sol)
+will automatically route your SP1 proof to the correct verifier based on the SP1 version used.
+
+## Canonical Verifier Gateways
+
+There is a different verifier gateway for each proof system: Groth16 and PLONK. This means that you
+must use the correct verifier gateway depending on whether you are verifying a Groth16 or PLONK proof.
+
+### Groth16
+
+| Chain ID | Chain | Gateway |
+| -------- | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| 1 | Mainnet | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://etherscan.io/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 11155111 | Sepolia | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://sepolia.etherscan.io/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 17000 | Holesky | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://holesky.etherscan.io/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 42161 | Arbitrum One | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://arbiscan.io/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 421614 | Arbitrum Sepolia | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://sepolia.arbiscan.io/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 8453 | Base | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://basescan.org/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 84532 | Base Sepolia | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://sepolia.basescan.org/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 10 | Optimism | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://optimistic.etherscan.io/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 11155420 | Optimism Sepolia | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://sepolia-optimism.etherscan.io/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 534351 | Scroll Sepolia | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://sepolia.scrollscan.com/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+| 534352 | Scroll | [0x397A5f7f3dBd538f23DE225B51f532c34448dA9B](https://scrollscan.com/address/0x397A5f7f3dBd538f23DE225B51f532c34448dA9B) |
+
+### PLONK
+
+| Chain ID | Chain | Gateway |
+| -------- | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| 1 | Mainnet | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://etherscan.io/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 11155111 | Sepolia | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://sepolia.etherscan.io/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 17000 | Holesky | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://holesky.etherscan.io/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 42161 | Arbitrum One | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://arbiscan.io/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 421614 | Arbitrum Sepolia | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://sepolia.arbiscan.io/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 8453 | Base | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://basescan.org/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 84532 | Base Sepolia | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://sepolia.basescan.org/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 10 | Optimism | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://optimistic.etherscan.io/address/0x3b6041173b80e77f038f3f2c0f9744f04837185e) |
+| 11155420 | Optimism Sepolia | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://sepolia-optimism.etherscan.io/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 534351 | Scroll Sepolia | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://sepolia.scrollscan.com/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+| 534352 | Scroll | [0x3B6041173B80E77f038f3F2C0f9744f04837185e](https://scrollscan.com/address/0x3B6041173B80E77f038f3F2C0f9744f04837185e) |
+
+The most up-to-date reference on each chain can be found in the
+[deployments](https://github.com/succinctlabs/sp1-contracts/blob/main/contracts/deployments)
+directory in the
+SP1 contracts repository, where each chain has a dedicated JSON file with each verifier's address.
+
+## Versioning Policy
+
+Whenever a verifier for a new SP1 version is deployed, the gateway contract will be updated to
+support it with
+[addRoute()](https://github.com/succinctlabs/sp1-contracts/blob/main/contracts/src/ISP1VerifierGateway.sol#L65).
+If a verifier for an SP1 version has an issue, the route will be frozen with
+[freezeRoute()](https://github.com/succinctlabs/sp1-contracts/blob/main/contracts/src/ISP1VerifierGateway.sol#L71).
+
+On mainnets, only official versioned releases are deployed and added to the gateway. On testnets,
+`rc` versions of the verifier are deployed in addition to the official versions.
+
+## Deploying to other Chains
+
+If you need to use a chain that is not listed above, you can deploy your own
+verifier contract by following the instructions in the
+[SP1 Contracts Repo](https://github.com/succinctlabs/sp1-contracts/blob/main/README.md#deployments).
+
+Since both the `SP1VerifierGateway` and each `SP1Verifier` implement the [ISP1Verifier
+interface](https://github.com/succinctlabs/sp1-contracts/blob/main/contracts/src/ISP1Verifier.sol), you can choose to either:
+
+* Deploy the `SP1VerifierGateway` and add `SP1Verifier` contracts to it. Then point to the
+  `SP1VerifierGateway` address in your contracts.
+* Deploy just the `SP1Verifier` contract that you want to use. Then point to the `SP1Verifier`
+  address in your contracts.
+
+If you want support for a canonical verifier on your chain, contact us [here](https://t.me/+AzG4ws-kD24yMGYx). We often deploy canonical verifiers on new chains if there's enough demand.
+
+## ISP1Verifier Interface
+
+All verifiers implement the [ISP1Verifier](https://github.com/succinctlabs/sp1-contracts/blob/main/contracts/src/ISP1Verifier.sol) interface.
+
+```c++
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.20;
+
+/// @title SP1 Verifier Interface
+/// @author Succinct Labs
+/// @notice This contract is the interface for the SP1 Verifier.
+interface ISP1Verifier {
+ /// @notice Verifies a proof with given public values and vkey.
+ /// @dev It is expected that the first 4 bytes of proofBytes must match the first 4 bytes of
+ /// target verifier's VERIFIER_HASH.
+ /// @param programVKey The verification key for the RISC-V program.
+ /// @param publicValues The public values encoded as bytes.
+ /// @param proofBytes The proof of the program execution the SP1 zkVM encoded as bytes.
+ function verifyProof(
+ bytes32 programVKey,
+ bytes calldata publicValues,
+ bytes calldata proofBytes
+ ) external view;
+}
+```
diff --git a/book/verification/onchain/getting-started.md b/book/versioned_docs/version-3.4.0/verification/onchain/getting-started.mdx
similarity index 93%
rename from book/verification/onchain/getting-started.md
rename to book/versioned_docs/version-3.4.0/verification/onchain/getting-started.mdx
index 715b100d58..e38fb1de75 100644
--- a/book/verification/onchain/getting-started.md
+++ b/book/versioned_docs/version-3.4.0/verification/onchain/getting-started.mdx
@@ -1,3 +1,5 @@
+import Example from "@site/static/examples_fibonacci_script_bin_groth16_bn254.rs.mdx";
+
# Onchain Verification: Setup
The best way to get started with verifying SP1 proofs on-chain is to refer to the [SP1 Project Template](https://github.com/succinctlabs/sp1-project-template/tree/main).
@@ -6,7 +8,7 @@ The best way to get started with verifying SP1 proofs on-chain is to refer to th
- The template [script](https://github.com/succinctlabs/sp1-project-template/blob/main/script/src/bin/prove.rs) shows how to generate the proof using the SDK and save it to a file.
- The template [contract](https://github.com/succinctlabs/sp1-project-template/blob/main/contracts/src/Fibonacci.sol) shows how to verify the proof onchain using Solidity.
-Refer to the section on [Contract Addresses](./contract-addresses.md#contract-addresses) for the addresses of the deployed verifiers.
+Refer to the section on [Contract Addresses](./contract-addresses) for the addresses of the deployed verifiers.
## Generating SP1 Proofs for Onchain Verification
@@ -19,9 +21,7 @@ By default, the proofs generated by SP1 are not verifiable onchain, as they are
### Example
-```rust,noplayground
-{{#include ../../examples/fibonacci/script/bin/groth16_bn254.rs}}
-```
+<Example />
You can run the above script with `RUST_LOG=info cargo run --bin groth16_bn254 --release` in `examples/fibonacci/script`.
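+
+The core of such a script is the `groth16()` proof mode on the SDK's prove builder. A condensed sketch (the ELF path and input are illustrative, and `hex` is assumed as a dependency):
+
+```rust
+use sp1_sdk::{ProverClient, SP1Stdin};
+
+// Hypothetical ELF path; adjust to your program's output.
+pub const ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");
+
+fn main() {
+    let client = ProverClient::new();
+    let (pk, vk) = client.setup(ELF);
+    let mut stdin = SP1Stdin::new();
+    stdin.write(&20u32);
+
+    // Request a Groth16 proof, which is cheap to verify onchain.
+    let proof = client.prove(&pk, stdin).groth16().run().expect("proving failed");
+
+    // Print the three values the Solidity verifier needs.
+    println!("vkey: {}", vk.bytes32());
+    println!("public values: 0x{}", hex::encode(proof.public_values.as_slice()));
+    println!("proof: 0x{}", hex::encode(proof.bytes()));
+}
+```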
diff --git a/book/versioned_docs/version-3.4.0/verification/onchain/solidity-sdk.md b/book/versioned_docs/version-3.4.0/verification/onchain/solidity-sdk.md
new file mode 100644
index 0000000000..822ab620b9
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/verification/onchain/solidity-sdk.md
@@ -0,0 +1,122 @@
+# Solidity Verifier
+
+We maintain a suite of [contracts](https://github.com/succinctlabs/sp1-contracts/tree/main) used for verifying SP1 proofs onchain. We highly recommend using [Foundry](https://book.getfoundry.sh/).
+
+## Installation
+
+To install the latest release version:
+
+```bash
+forge install succinctlabs/sp1-contracts
+```
+
+To install a specific version:
+
+```bash
+forge install succinctlabs/sp1-contracts@<version>
+```
+
+Finally, add `@sp1-contracts/=lib/sp1-contracts/contracts/src/` in `remappings.txt`.
+
+### Usage
+
+Once installed, you can use the contracts in the library by importing them:
+
+```c++
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.20;
+
+import {ISP1Verifier} from "@sp1-contracts/ISP1Verifier.sol";
+
+/// @title Fibonacci.
+/// @author Succinct Labs
+/// @notice This contract implements a simple example of verifying the proof of computing a
+/// fibonacci number.
+contract Fibonacci {
+ /// @notice The address of the SP1 verifier contract.
+ /// @dev This can either be a specific SP1Verifier for a specific version, or the
+ /// SP1VerifierGateway which can be used to verify proofs for any version of SP1.
+ /// For the list of supported verifiers on each chain, see:
+ /// https://docs.succinct.xyz/onchain-verification/contract-addresses
+ address public verifier;
+
+ /// @notice The verification key for the fibonacci program.
+ bytes32 public fibonacciProgramVKey;
+
+ constructor(address _verifier, bytes32 _fibonacciProgramVKey) {
+ verifier = _verifier;
+ fibonacciProgramVKey = _fibonacciProgramVKey;
+ }
+
+ /// @notice The entrypoint for verifying the proof of a fibonacci number.
+ /// @param _proofBytes The encoded proof.
+ /// @param _publicValues The encoded public values.
+ function verifyFibonacciProof(bytes calldata _publicValues, bytes calldata _proofBytes)
+ public
+ view
+ returns (uint32, uint32, uint32)
+ {
+ ISP1Verifier(verifier).verifyProof(fibonacciProgramVKey, _publicValues, _proofBytes);
+ (uint32 n, uint32 a, uint32 b) = abi.decode(_publicValues, (uint32, uint32, uint32));
+ return (n, a, b);
+ }
+}
+```
+
+### Finding your program vkey
+
+The program vkey (`fibonacciProgramVKey` in the example above) is passed into the `ISP1Verifier` along with the public values and proof bytes. You
+can find your program vkey by going through the following steps:
+
+1. Find what version of SP1 crates you are using.
+2. Use the version from the previous step to run this command: `sp1up --version <version>`
+3. Use the vkey command to get the program vkey: `cargo prove vkey --elf <path/to/elf>`
+
+Alternatively, you can set up a simple script to do this using the `sp1-sdk` crate:
+
+```rust
+use sp1_sdk::ProverClient;
+
+/// The compiled ELF for your program (this path is illustrative).
+pub const FIBONACCI_ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");
+
+fn main() {
+ // Setup the logger.
+ sp1_sdk::utils::setup_logger();
+
+ // Setup the prover client.
+ let client = ProverClient::new();
+
+ // Setup the program.
+ let (_, vk) = client.setup(FIBONACCI_ELF);
+
+ // Print the verification key.
+ println!("Program Verification Key: {}", vk.bytes32());
+}
+```
+
+### Testing
+
+To test the contract, we recommend setting up [Foundry
+Tests](https://book.getfoundry.sh/forge/tests). We have an example of such a test in the [SP1
+Project
+Template](https://github.com/succinctlabs/sp1-project-template/blob/dev/contracts/test/Fibonacci.t.sol).
+
+### Solidity Versions
+
+The officially deployed contracts are built using Solidity 0.8.20 and exist on the
+[sp1-contracts main](https://github.com/succinctlabs/sp1-contracts/tree/main) branch.
+
+If you need to use different versions that are compatible with your contracts, there are also other
+branches you can install that contain different versions. For example, the branch
+[main-0.8.15](https://github.com/succinctlabs/sp1-contracts/tree/main-0.8.15)
+contains the contracts with:
+
+```c++
+pragma solidity ^0.8.15;
+```
+
+and you can install it with:
+
+```sh
+forge install succinctlabs/sp1-contracts@main-0.8.15
+```
+
+If there are versions you need that don't have branches yet, please ask in
+the [SP1 Telegram](https://t.me/+AzG4ws-kD24yMGYx).
diff --git a/book/verification/supported-versions.md b/book/versioned_docs/version-3.4.0/verification/supported-versions.md
similarity index 100%
rename from book/verification/supported-versions.md
rename to book/versioned_docs/version-3.4.0/verification/supported-versions.md
diff --git a/book/versioned_docs/version-3.4.0/what-is-a-zkvm.md b/book/versioned_docs/version-3.4.0/what-is-a-zkvm.md
new file mode 100644
index 0000000000..4f91fa3213
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/what-is-a-zkvm.md
@@ -0,0 +1,35 @@
+# What is a zkVM?
+
+A zero-knowledge virtual machine (zkVM) is a zero-knowledge proof system that allows developers to prove the execution of arbitrary Rust (or other LLVM-compiled language) programs.
+
+Conceptually, you can think of the SP1 zkVM as proving the evaluation of a function `f(x) = y` by following the steps below:
+
+- Define `f` using normal Rust code and compile it to an ELF (covered in the [writing programs](./writing-programs/setup.md) section).
+- Set up a proving key (`pk`) and verifying key (`vk`) for the program given the ELF.
+- Generate a proof `π` using the SP1 zkVM that `f(x) = y` with `prove(pk, x)`.
+- Verify the proof `π` using `verify(vk, x, y, π)`.
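+
+In terms of the `sp1-sdk` API, these steps map onto roughly the following sketch (the ELF path and input `x` are illustrative):
+
+```rust
+use sp1_sdk::{ProverClient, SP1Stdin};
+
+// Hypothetical ELF for a program implementing f.
+pub const F_ELF: &[u8] = include_bytes!("../program/elf/riscv32im-succinct-zkvm-elf");
+
+fn main() {
+    let client = ProverClient::new();
+
+    // Setup: derive (pk, vk) from the ELF.
+    let (pk, vk) = client.setup(F_ELF);
+
+    // Prove: π = prove(pk, x).
+    let mut stdin = SP1Stdin::new();
+    stdin.write(&10u32); // x
+    let proof = client.prove(&pk, stdin).run().expect("proving failed");
+
+    // Verify: verify(vk, x, y, π); x and y travel with the proof as public values.
+    client.verify(&proof, &vk).expect("verification failed");
+}
+```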
+
+As a practical example, `f` could be a simple Fibonacci [program](https://github.com/succinctlabs/sp1/blob/main/examples/fibonacci/program/src/main.rs). The process of generating a proof and verifying it can be seen [here](https://github.com/succinctlabs/sp1/blob/main/examples/fibonacci/script/src/main.rs).
+
+For blockchain applications, the verification usually happens inside of a [smart contract](https://github.com/succinctlabs/sp1-project-template/blob/main/contracts/src/Fibonacci.sol).
+
+## How does SP1 Work?
+
+At a high level, SP1 works with the following steps:
+
+* Write a program in Rust that defines the logic of your computation for which you want to generate a ZKP.
+* Compile the program to the RISC-V ISA (a standard Rust compilation target) using the `cargo prove` CLI tool (installation instructions [here](./getting-started/install.md)) and generate a RISC-V ELF file.
+* SP1 will prove the correct execution of arbitrary RISC-V programs by generating a STARK proof of execution.
+* Developers can leverage the `sp1-sdk` crate to generate proofs with their ELF and input data. Under the hood, the `sp1-sdk` crate will either generate proofs locally or use a beta version of Succinct's prover network.
+
+SP1 leverages performant STARK recursion that allows us to prove the execution of arbitrarily long programs and also has a STARK -> SNARK "wrapping system" that allows us to generate small SNARK proofs that can be efficiently verified on EVM chains.
+
+## Proof System
+
+For more technical details, check out the SP1 technical note that explains our proof system in detail. In short, we use:
+
+* STARKs + FRI over the Baby Bear field
+* Performant STARK recursion that allows us to prove the execution of arbitrarily long programs
+* A system of performant precompiles that accelerate hash functions and cryptographic signature verification, yielding substantial performance gains on blockchain workloads
+
+
diff --git a/book/versioned_docs/version-3.4.0/why-use-sp1.md b/book/versioned_docs/version-3.4.0/why-use-sp1.md
new file mode 100644
index 0000000000..44db0ab6bc
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/why-use-sp1.md
@@ -0,0 +1,46 @@
+# Why use SP1?
+
+## Use-Cases
+
+Zero-knowledge proofs (ZKPs) are a powerful primitive that enable **verifiable computation**. With ZKPs, anyone can verify a cryptographic proof that a program has executed correctly, without needing to trust the prover, re-execute the program or even know the inputs to the program.
+
+Historically, building ZKP systems has been extremely complicated, requiring large teams with specialized cryptography expertise and taking years to go to production. SP1 is a performant, general-purpose zkVM that solves this problem and creates a future where all blockchain infrastructure, including rollups, bridges, coprocessors, and more, utilize ZKPs **via maintainable software written in Rust**.
+
+SP1 is especially powerful in blockchain contexts which rely on verifiable computation. Example applications include:
+- [Rollups](https://ethereum.org/en/developers/docs/scaling/zk-rollups/): SP1 can be used in combination with existing node infrastructure like [Reth](https://github.com/paradigmxyz/reth) to build rollups with ZKP validity proofs or ZK fraud proofs.
+- [Coprocessors](https://crypto.mirror.xyz/BFqUfBNVZrqYau3Vz9WJ-BACw5FT3W30iUX3mPlKxtA): SP1 can be used to outsource onchain computation to offchain provers to enable use cases such as large-scale computation over historical state and onchain machine learning, dramatically reducing gas costs.
+- [Light Clients](https://ethereum.org/en/developers/docs/nodes-and-clients/light-clients/): SP1 can be used to build light clients that can verify the state of other chains, facilitating interoperability between different blockchains without relying on any trusted third parties.
+
+SP1 has already been integrated in many of these applications, including but not limited to:
+
+- [SP1 Tendermint](https://github.com/succinctlabs/sp1-tendermint-example): An example of a ZK Tendermint light client on Ethereum powered by SP1.
+- [SP1 Reth](https://github.com/succinctlabs/rsp): A performant, type-1 zkEVM written in Rust & SP1 using Reth.
+- [SP1 Contract Call](https://github.com/succinctlabs/sp1-contract-call): A lightweight library to generate ZKPs of Ethereum smart contract execution.
+- and many more!
+
+SP1 is used by protocols in production today:
+
+- [SP1 Blobstream](https://github.com/succinctlabs/sp1-blobstream): A bridge that verifies [Celestia](https://celestia.org/) “data roots” (a commitment to all data blobs posted in a range of Celestia blocks) on Ethereum and other EVM chains.
+- [SP1 Vector](https://github.com/succinctlabs/sp1-vector): A bridge that relays [Avail's](https://www.availproject.org/) merkle root to Ethereum and also functions as a token bridge from Avail to Ethereum.
+
+
+## 100x developer productivity
+
+SP1 enables teams to use ZKPs in production with minimal overhead and fast timelines.
+
+**Maintainable:** With SP1, you can reuse existing Rust crates, like `revm`, `reth`, `tendermint-rs`, `serde` and more, to write your ZKP logic in maintainable, Rust code.
+
+**Go to market faster:** By reusing existing crates and expressing ZKP logic in regular code, SP1 significantly reduces audit surface area and complexity, enabling teams to go to market with ZKPs faster.
+
+## Blazing Fast Performance
+
+SP1 is the fastest zkVM and has blazing fast performance on a variety of realistic blockchain workloads, including light clients and rollups. With SP1, ZKP proving costs are an order of magnitude less than alternative zkVMs or even circuits, making it cost-effective and fast for practical use.
+
+Read more about our benchmarking results [here](https://blog.succinct.xyz/sp1-benchmarks-8-6-24).
+
+## Open Source
+
+SP1 is 100% open-source (MIT / Apache 2.0) with no code obfuscation and built to be contributor friendly, with all development done in the open. Unlike existing zkVMs whose constraint logic is closed-source and impossible to audit or modify, SP1 is modularly architected and designed to be customizable from day one. This customizability (unique to SP1) allows for users to add “precompiles” to the core zkVM logic that yield substantial performance gains, making SP1’s performance not only SOTA vs. existing zkVMs, but also competitive with circuits in a variety of use-cases.
+
+
+
diff --git a/book/versioned_docs/version-3.4.0/writing-programs/basics.mdx b/book/versioned_docs/version-3.4.0/writing-programs/basics.mdx
new file mode 100644
index 0000000000..23c7c027ad
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/writing-programs/basics.mdx
@@ -0,0 +1,15 @@
+import Example from "@site/static/examples_fibonacci_program_src_main.rs.mdx";
+
+# Basics
+
+The easiest way to understand how to write programs for the SP1 zkVM is to look at some examples.
+
+## Example: Fibonacci
+
+This program is from the `examples` [directory](https://github.com/succinctlabs/sp1/tree/main/examples) in the SP1 repo which contains several example programs of varying complexity.
+
+<Example />
+
+As you can see, writing programs is as simple as writing normal Rust.
+
+After you've written your program, you must compile it to an ELF that the SP1 zkVM can prove. To read more about compiling programs, refer to the section on [Compiling Programs](./compiling). To read more about how inputs and outputs work, refer to the section on [Inputs & Outputs](./inputs-and-outputs).
diff --git a/book/versioned_docs/version-3.4.0/writing-programs/compiling.mdx b/book/versioned_docs/version-3.4.0/writing-programs/compiling.mdx
new file mode 100644
index 0000000000..0041ba8fae
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/writing-programs/compiling.mdx
@@ -0,0 +1,100 @@
+import Example from "@site/static/examples_fibonacci_script_build.rs.mdx";
+
+# Compiling Programs
+
+Once you have written an SP1 program, you must compile it to an ELF file that can be executed in the zkVM. The `cargo prove` CLI tool (downloaded during installation) provides convenient commands for compiling SP1 programs.
+
+## Development Builds
+
+> WARNING: This may not generate a reproducible ELF which is necessary for verifying that your binary corresponds to given source code.
+>
+> Use the [reproducible build system](#production-builds) for production builds.
+
+To build a program while developing, simply run the following command in the crate that contains your SP1 program:
+
+```bash
+cargo prove build
+```
+
+This will compile the ELF that can be executed in the zkVM and put it in the file `elf/riscv32im-succinct-zkvm-elf`. The output from the command will look something like this:
+
+```bash
+[sp1] Compiling version_check v0.9.4
+[sp1] Compiling proc-macro2 v1.0.86
+[sp1] Compiling unicode-ident v1.0.12
+[sp1] Compiling cfg-if v1.0.0
+...
+[sp1] Compiling sp1-lib v1.0.1
+[sp1] Compiling sp1-zkvm v1.0.1
+[sp1] Compiling fibonacci-program v0.1.0 (/Users/username/Documents/fibonacci/program)
+[sp1] Finished `release` profile [optimized] target(s) in 8.33s
+```
+
+Under the hood, this CLI command calls `cargo build` with the `riscv32im-succinct-zkvm-elf` target and other required environment variables and flags. The logic for this command is defined in the [sp1-build](https://github.com/succinctlabs/sp1/tree/main/build) crate.
+
+### Advanced Build Options
+
+You can pass additional arguments to the `cargo prove build` command to customize the build process, like configuring what features are enabled, customizing the output directory and more. To see all available options, run `cargo prove build --help`. Many of these options mirror the options available in the `cargo build` command.
+
+## Production Builds
+
+For production builds of programs, you can build your program inside a Docker container which will generate a **reproducible ELF** on all platforms. To do so, just use the `--docker` flag and optionally the `--tag` flag with the release version you want to use (defaults to `latest`). For example:
+
+```bash
+cargo prove build --docker --tag v1.0.1
+```
+
+To verify that your build is reproducible, you can compute the SHA-512 hash of the ELF on different platforms and systems with:
+
+```bash
+$ shasum -a 512 elf/riscv32im-succinct-zkvm-elf
+f9afb8caaef10de9a8aad484c4dd3bfa54ba7218f3fc245a20e8a03ed40b38c617e175328515968aecbd3c38c47b2ca034a99e6dbc928512894f20105b03a203
+```
+
+## Build Script
+
+If you want your program crate to be built automatically whenever you build/run your script crate, you can add a `build.rs` file inside of `script/` (at the same level as `Cargo.toml` of your script crate) that utilizes the `sp1-build` crate:
+
+<Example />
+
+The path passed in to `build_program` should point to the directory containing the `Cargo.toml` file for your program. Make sure to also add `sp1-build` as a build dependency in `script/Cargo.toml`:
+
+```toml
+[build-dependencies]
+sp1-build = "2.0.0"
+```
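+
+For reference, the `build.rs` itself can be as small as this sketch (assuming the template's directory layout):
+
+```rust
+// script/build.rs
+fn main() {
+    // Rebuilds ../program into an ELF whenever the script crate is built.
+    sp1_build::build_program("../program");
+}
+```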
+
+You will see output like the following from the build script if the program has changed, indicating that the program was rebuilt:
+
+```
+[fibonacci-script 0.1.0] cargo:rerun-if-changed=../program/src
+[fibonacci-script 0.1.0] cargo:rerun-if-changed=../program/Cargo.toml
+[fibonacci-script 0.1.0] cargo:rerun-if-changed=../program/Cargo.lock
+[fibonacci-script 0.1.0] cargo:warning=fibonacci-program built at 2024-03-02 22:01:26
+[fibonacci-script 0.1.0] [sp1] Compiling fibonacci-program v0.1.0 (/Users/umaroy/Documents/fibonacci/program)
+[fibonacci-script 0.1.0] [sp1] Finished release [optimized] target(s) in 0.15s
+warning: fibonacci-script@0.1.0: fibonacci-program built at 2024-03-02 22:01:26
+```
+
+The above output was generated by running `RUST_LOG=info cargo run --release -vv` for the `script` folder of the Fibonacci example.
+
+### Advanced Build Options
+
+To configure the build process when using the `sp1-build` crate, you can pass a [`BuildArgs`](https://docs.rs/sp1-build/1.2.0/sp1_build/struct.BuildArgs.html) struct to the [`build_program_with_args`](https://docs.rs/sp1-build/1.2.0/sp1_build/fn.build_program_with_args.html) function. The build arguments are the same as the ones available from the `cargo prove build` command.
+
+As an example, you could use the following code to build the Fibonacci example with the `docker` flag set to `true` and a custom output directory for the generated ELF:
+
+```rust
+use sp1_build::{build_program_with_args, BuildArgs};
+
+fn main() {
+ let args = BuildArgs {
+ docker: true,
+ output_directory: "./fibonacci-program".to_string(),
+ ..Default::default()
+ };
+ build_program_with_args("../program", &args);
+}
+```
+
+**Note:** If you want reproducible builds with the `build.rs` approach, you should use the `docker` flag and the `build_program_with_args` function, as shown in the example above.
diff --git a/book/writing-programs/cycle-tracking.md b/book/versioned_docs/version-3.4.0/writing-programs/cycle-tracking.mdx
similarity index 97%
rename from book/writing-programs/cycle-tracking.md
rename to book/versioned_docs/version-3.4.0/writing-programs/cycle-tracking.mdx
index e7168c5723..f29c303a08 100644
--- a/book/writing-programs/cycle-tracking.md
+++ b/book/versioned_docs/version-3.4.0/writing-programs/cycle-tracking.mdx
@@ -1,3 +1,5 @@
+import Example from "@site/static/examples_cycle-tracking_program_bin_normal.rs.mdx";
+
# Cycle Tracking
When writing a program, it is useful to know how many RISC-V cycles a portion of the program takes to identify potential performance bottlenecks. SP1 provides a way to track the number of cycles spent in a portion of the program.
@@ -6,15 +8,13 @@ When writing a program, it is useful to know how many RISC-V cycles a portion of
To track the number of cycles spent in a portion of the program, you can either put `println!("cycle-tracker-start: block name")` + `println!("cycle-tracker-end: block name")` statements (block name must be same between start and end) around the portion of your program you want to profile or use the `#[sp1_derive::cycle_tracker]` macro on a function. An example is shown below:
-```rust,noplayground
-{{#include ../../examples/cycle-tracking/program/bin/normal.rs}}
-```
+<Example />
Note that to use the macro, you must add the `sp1-derive` crate to your dependencies for your program.
```toml
[dependencies]
-sp1-derive = "3.0.0"
+sp1-derive = "2.0.0"
```
In the script for proof generation, setup the logger with `utils::setup_logger()` and run the script with `RUST_LOG=info cargo run --release`. You should see the following output:
@@ -46,7 +46,7 @@ Note that we elegantly handle nested cycle tracking, as you can see above.
To include tracked cycle counts in the `ExecutionReport` when using `ProverClient::execute`, use the following annotations:
-```rust,noplayground
+```rust
fn main() {
println!("cycle-tracker-report-start: block name");
// ...
diff --git a/book/versioned_docs/version-3.4.0/writing-programs/inputs-and-outputs.mdx b/book/versioned_docs/version-3.4.0/writing-programs/inputs-and-outputs.mdx
new file mode 100644
index 0000000000..f8116ed317
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/writing-programs/inputs-and-outputs.mdx
@@ -0,0 +1,67 @@
+import Example from "@site/static/examples_io_program_src_main.rs.mdx";
+
+# Inputs and Outputs
+
+In real world applications of zero-knowledge proofs, you almost always want to verify your proof in the context of some inputs and outputs. For example:
+
+- **Rollups**: Given a list of transactions, prove the new state of the blockchain.
+- **Coprocessors**: Given a block header, prove the historical state of some storage slot inside a smart contract.
+- **Attested Images**: Given a signed image, prove that you made a restricted set of image transformations.
+
+In this section, we cover how you pass inputs and outputs to the zkVM and create new types that support serialization.
+
+## Reading Data
+
+Data that is read is not public to the verifier by default. Use the `sp1_zkvm::io::read::<T>` method:
+
+```rust
+let a = sp1_zkvm::io::read::<u32>();
+let b = sp1_zkvm::io::read::<u64>();
+let c = sp1_zkvm::io::read::<String>();
+```
+
+Note that `T` must implement the `serde::Serialize` and `serde::Deserialize` traits. If you want to read bytes directly, you can also use the `sp1_zkvm::io::read_vec` method.
+
+```rust
+let my_vec = sp1_zkvm::io::read_vec();
+```
+
+## Committing Data
+
+Committing to data makes the data public to the verifier. Use the `sp1_zkvm::io::commit::<T>` method:
+
+```rust
+sp1_zkvm::io::commit::<u32>(&a);
+sp1_zkvm::io::commit::<u64>(&b);
+sp1_zkvm::io::commit::<String>(&c);
+```
+
+Note that `T` must implement the `Serialize` and `Deserialize` traits. If you want to write bytes directly, you can also use the `sp1_zkvm::io::commit_slice` method:
+
+```rust
+let my_slice = [0_u8; 32];
+sp1_zkvm::io::commit_slice(&my_slice);
+```
+
+## Creating Serializable Types
+
+Typically, you can implement the `Serialize` and `Deserialize` traits using a simple derive macro on a struct.
+
+```rust
+use serde::{Serialize, Deserialize};
+
+#[derive(Serialize, Deserialize)]
+struct MyStruct {
+ a: u32,
+ b: u64,
+ c: String
+}
+```
+
+For more complex use cases, refer to the [Serde docs](https://serde.rs/).
+
+## Example
+
+Here is a basic example of using inputs and outputs with more complex types.
+
+<Example />
diff --git a/book/versioned_docs/version-3.4.0/writing-programs/patched-crates.md b/book/versioned_docs/version-3.4.0/writing-programs/patched-crates.md
new file mode 100644
index 0000000000..818190f2ee
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/writing-programs/patched-crates.md
@@ -0,0 +1,234 @@
+# Patched Crates
+
+We maintain forks of commonly used libraries in blockchain infrastructure to significantly accelerate the execution of certain operations.
+Under the hood, we use [precompiles](./precompiles) to achieve tremendous performance improvements in proof generation time.
+
+**If you know of a library or library version that you think should be patched, please open an issue or a pull request!**
+
+## Supported Libraries
+
+| Crate Name | Repository | Notes | Versions |
+|---------------------|---------------------------------------------------------------------------------------|------------------|-----------------------|
+| sha2 | [sp1-patches/RustCrypto-hashes](https://github.com/sp1-patches/RustCrypto-hashes) | sha256 | 0.9.8, 0.10.6, 0.10.8 |
+| sha3 | [sp1-patches/RustCrypto-hashes](https://github.com/sp1-patches/RustCrypto-hashes) | keccak256 | 0.9.8, 0.10.6, 0.10.8 |
+| bigint | [sp1-patches/RustCrypto-bigint](https://github.com/sp1-patches/RustCrypto-bigint) | bigint | 0.5.5 |
+| tiny-keccak | [sp1-patches/tiny-keccak](https://github.com/sp1-patches/tiny-keccak) | keccak256 | 2.0.2 |
+| curve25519-dalek | [sp1-patches/curve25519-dalek](https://github.com/sp1-patches/curve25519-dalek) | ed25519 verify | 4.1.3, 3.2.0 |
+| curve25519-dalek-ng | [sp1-patches/curve25519-dalek-ng](https://github.com/sp1-patches/curve25519-dalek-ng) | ed25519 verify | 4.1.1 |
+| ed25519-consensus | [sp1-patches/ed25519-consensus](http://github.com/sp1-patches/ed25519-consensus) | ed25519 verify | 2.1.0 |
+| ed25519-dalek | [sp1-patches/ed25519-dalek](http://github.com/sp1-patches/ed25519-dalek) | ed25519 verify | 1.0.1 |
+| ecdsa-core | [sp1-patches/signatures](http://github.com/sp1-patches/signatures) | secp256k1 verify | 0.16.8, 0.16.9 |
+| secp256k1 | [sp1-patches/rust-secp256k1](http://github.com/sp1-patches/rust-secp256k1) | secp256k1 verify | 0.29.0, 0.29.1 |
+| substrate-bn | [sp1-patches/bn](https://github.com/sp1-patches/bn) | BN254 | 0.6.0 |
+| bls12_381 | [sp1-patches/bls12_381](https://github.com/sp1-patches/bls12_381) | BLS12-381 | 0.8.0 |
+
+## Using Patched Crates
+
+To use the patched libraries, you can use corresponding patch entries in your program's `Cargo.toml` such as:
+
+```toml
+[patch.crates-io]
+sha2-v0-9-8 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha2", tag = "sha2-v0.9.8-patch-v1" }
+sha2-v0-10-6 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha2", tag = "sha2-v0.10.6-patch-v1" }
+sha2-v0-10-8 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha2", tag = "sha2-v0.10.8-patch-v1" }
+sha3-v0-9-8 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha3", tag = "sha3-v0.9.8-patch-v1" }
+sha3-v0-10-6 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha3", tag = "sha3-v0.10.6-patch-v1" }
+sha3-v0-10-8 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha3", tag = "sha3-v0.10.8-patch-v1" }
+crypto-bigint = { git = "https://github.com/sp1-patches/RustCrypto-bigint", tag = "crypto_bigint-v0.5.5-patch-v1" }
+tiny-keccak = { git = "https://github.com/sp1-patches/tiny-keccak", tag = "tiny_keccak-v2.0.2-patch-v1" }
+substrate-bn = { git = "https://github.com/sp1-patches/bn", tag = "substrate_bn-v0.6.0-patch-v1" }
+bls12_381 = { git = "https://github.com/sp1-patches/bls12_381", tag = "bls12_381-v0.8.0-patch-v1" }
+
+# For sp1 versions >= 3.4.0
+curve25519-dalek = { git = "https://github.com/sp1-patches/curve25519-dalek", tag = "patch-v4.1.3-v3.4.0" }
+# For sp1 versions < 3.4.0
+curve25519-dalek = { git = "https://github.com/sp1-patches/curve25519-dalek", tag = "curve25519_dalek-v4.1.3-patch-v1" }
+curve25519-dalek-ng = { git = "https://github.com/sp1-patches/curve25519-dalek-ng", tag = "curve25519_dalek_ng-v4.1.1-patch-v1" }
+ed25519-consensus = { git = "https://github.com/sp1-patches/ed25519-consensus", tag = "ed25519_consensus-v2.1.0-patch-v1" }
+# For sp1 versions >= 3.3.0
+ecdsa-core = { git = "https://github.com/sp1-patches/signatures", package = "ecdsa", tag = "ecdsa-v0.16.9-patch-v3.3.0" }
+secp256k1 = { git = "https://github.com/sp1-patches/rust-secp256k1", tag = "secp256k1-v0.29.0-patch-v3.3.0" }
+# For sp1 versions < 3.3.0
+ecdsa-core = { git = "https://github.com/sp1-patches/signatures", package = "ecdsa", tag = "ecdsa-v0.16.9-patch-v1" }
+secp256k1 = { git = "https://github.com/sp1-patches/rust-secp256k1", tag = "secp256k1-v0.29.0-patch-v1" }
+```
+
+If you are patching a crate from GitHub instead of from crates.io, you need to specify the
+repository in the patch section. For example:
+
+```toml
+[patch."https://github.com/RustCrypto/hashes"]
+sha3 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha3", tag = "sha3-v0.10.8-patch-v1" }
+```
+
+An example of using patched crates is available in our [Tendermint Example](https://github.com/succinctlabs/sp1/blob/main/examples/tendermint/program/Cargo.toml#L22-L25).
+
+## Ed25519 Acceleration
+
+To accelerate Ed25519 operations, you'll need to apply a patch depending on whether you're using the `ed25519-consensus` or `ed25519-dalek` library in your program or its dependencies.
+
+Generally, `ed25519-consensus` outperforms `ed25519-dalek` by roughly a factor of 2. A usage sketch follows the list of patches below.
+
+### Patches
+
+Apply the following patches based on what crates are in your dependencies.
+
+- `ed25519-consensus`
+
+ ```toml
+ ed25519-consensus = { git = "https://github.com/sp1-patches/ed25519-consensus", tag = "ed25519_consensus-v2.1.0-patch-v1" }
+ ```
+
+ Note: The curve operations for Ed25519 occur mainly inside of `curve25519-dalek-ng`, but the crate also exposes
+ a `u32_backend` feature flag which accelerates signature recovery by 10% over the default `u64_backend`, which is why
+ `ed25519-consensus` is patched rather than `ed25519-dalek`.
+
+- `ed25519-dalek`
+
+ If using `ed25519-dalek` version `2.1`, you can patch it with the following:
+
+ ```toml
+ curve25519-dalek = { git = "https://github.com/sp1-patches/curve25519-dalek", tag = "curve25519_dalek-v4.1.3-patch-v1" }
+ ```
+
+ If using `ed25519-dalek` version `1.0.1`, you can patch it with the following:
+
+ ```toml
+ ed25519-dalek = { git = "https://github.com/sp1-patches/ed25519-dalek", tag = "ed25519_dalek-v1.0.1-patch-v1" }
+ ```
+
+ Note: We need to patch the underlying Ed25519 curve operations in the `curve25519-dalek` crate. `ed25519-dalek`
+ version `2.1` uses `curve25519-dalek` version `4.1.3`, while `1.0.1` uses `3.2.0`. For version `2.1`, we patch
+ `curve25519-dalek` directly, while for version `1.0.1`, we patch `ed25519-dalek`.
+
+- `curve25519-dalek`
+
+ ```toml
+ curve25519-dalek = { git = "https://github.com/sp1-patches/curve25519-dalek", tag = "curve25519_dalek-v4.1.3-patch-v1" }
+ ```
+
+- `curve25519-dalek-ng`
+
+ ```toml
+ curve25519-dalek-ng = { git = "https://github.com/sp1-patches/curve25519-dalek-ng", tag = "curve25519_dalek_ng-v4.1.1-patch-v1" }
+ ```
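+
+After applying the relevant patch, your program verifies signatures through the crate's normal API and the precompile is invoked transparently. Here is a minimal sketch with `ed25519-consensus`, assuming the key, signature, and message are provided to the program via the input stream:
+
+```rust
+use ed25519_consensus::{Signature, VerificationKey};
+
+// Read the raw key, signature, and message bytes from the input stream.
+let vk_bytes: [u8; 32] = sp1_zkvm::io::read_vec().try_into().expect("expected 32 bytes");
+let sig_bytes: [u8; 64] = sp1_zkvm::io::read_vec().try_into().expect("expected 64 bytes");
+let msg = sp1_zkvm::io::read_vec();
+
+// The patched crate routes the underlying curve arithmetic through SP1's precompile.
+let vk = VerificationKey::try_from(vk_bytes).expect("invalid verification key");
+vk.verify(&Signature::from(sig_bytes), &msg).expect("invalid signature");
+```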
+
+## Secp256k1 Acceleration
+
+To accelerate Secp256k1 operations, you'll need to patch `k256` or `secp256k1` depending on your usage.
+
+Generally, if a crate you're using (e.g., `revm`) has support for using `k256` instead of `secp256k1`, you should use `k256`.
+
+### Patches
+
+Apply the following patches based on what crates are in your dependencies.
+
+- `k256`
+
+ ```toml
+ ecdsa-core = { git = "https://github.com/sp1-patches/signatures", package = "ecdsa", tag = "ecdsa-v0.16.9-patch-v1" }
+ ```
+
+ Note: The curve operations for `k256` live inside the `ecdsa-core` crate, so you don't need to patch `k256` itself; patching `ecdsa-core` is enough. A usage sketch follows this list.
+
+- `secp256k1`
+
+ ```toml
+ secp256k1 = { git = "https://github.com/sp1-patches/rust-secp256k1", tag = "secp256k1-v0.29.0-patch-v1" }
+ ```
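+
+As with Ed25519, your verification code is unchanged after patching. A minimal sketch of the `k256` verification path, assuming the inputs are provided via the input stream:
+
+```rust
+use k256::ecdsa::{signature::Verifier, Signature, VerifyingKey};
+
+// Read the SEC1-encoded public key, the signature, and the message.
+let vk_bytes = sp1_zkvm::io::read_vec();
+let sig_bytes = sp1_zkvm::io::read_vec();
+let msg = sp1_zkvm::io::read_vec();
+
+// The patched `ecdsa-core` accelerates the curve operations inside `verify`.
+let vk = VerifyingKey::from_sec1_bytes(&vk_bytes).expect("invalid public key");
+let sig = Signature::from_slice(&sig_bytes).expect("invalid signature");
+vk.verify(&msg, &sig).expect("invalid signature");
+```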
+
+## BN254 Acceleration
+
+To accelerate BN254 (also known as BN128 or alt-BN128), you will need to patch the `substrate-bn` crate.
+
+### Patches
+
+Apply the patch by adding the following to the `[patch.crates-io]` section of your `Cargo.toml`:
+
+```toml
+substrate-bn = { git = "https://github.com/sp1-patches/bn", tag = "substrate_bn-v0.6.0-patch-v1" }
+```
+
+### Performance Benchmarks for Patched `substrate-bn` in `revm`
+
+| Operation | Standard `substrate-bn` Cycles | Patched `substrate-bn` Cycles | Times Faster |
+| --------- | ------------------------------ | ----------------------------- | ------------ |
+| run-add | 170,298 | 111,615 | 1.52 |
+| run-mul | 1,860,836 | 243,830 | 7.64 |
+| run-pair | 255,627,625 | 11,528,503 | 22.15 |
+
+Note: The operations `run-add`, `run-mul`, and `run-pair` are from the `revm` crate, specifically from the file `crates/precompile/src/bn128.rs` on GitHub. In the patched version of the `substrate-bn` crate, these functions utilize SP1's BN254 Fp precompiles.
+
+To accelerate [revm](https://github.com/bluealloy/revm) in SP1 using the BN254 patched crate, replace the `substrate-bn` crate with the patched crate by adding the following to `crates/precompile/Cargo.toml`:
+
+```toml
+bn = { git = "https://github.com/sp1-patches/bn", package = "substrate-bn", tag = "substrate_bn-v0.6.0-patch-v1" }
+```
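+
+For reference, here is a minimal sketch of the operations the patch accelerates, written against `substrate-bn`'s API (the crate's library name is `bn`; treat the exact names as an assumption of this sketch):
+
+```rust
+use bn::{pairing, Fr, Group, G1, G2};
+
+// Scalar multiplication on G1 and G2 (the `run-mul` style operations).
+let p = G1::one() * Fr::one();
+let q = G2::one() * Fr::one();
+
+// A pairing, the most expensive of the benchmarked operations (`run-pair`).
+let _gt = pairing(p, q);
+```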
+
+## BLS12-381 Acceleration
+
+To accelerate BLS12-381 operations, you'll need to patch the `bls12_381` crate. Apply the patch by adding the following to the `[patch.crates-io]` section of your `Cargo.toml`:
+
+```toml
+bls12_381 = { git = "https://github.com/sp1-patches/bls12_381", tag = "bls12_381-v0.8.0-patch-v1" }
+```
+
+This patch significantly improves the performance of BLS12-381 operations, making it essential for applications that rely heavily on these cryptographic primitives.
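+
+A minimal sketch of what the accelerated path looks like in program code, assuming the standard `bls12_381` API:
+
+```rust
+use bls12_381::{pairing, G1Affine, G2Affine, Scalar};
+
+// Scalar multiplication and a pairing; both are accelerated once the patch is applied.
+let p = G1Affine::from(G1Affine::generator() * Scalar::from(7u64));
+let _gt = pairing(&p, &G2Affine::generator());
+```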
+
+### Performance Benchmarks for Patched `bls12_381` in [`kzg-rs`](https://github.com/succinctlabs/kzg-rs)
+
+| Test | Unpatched Cycles | Patched Cycles | Improvement (x faster) |
+| -------------------------------------- | ---------------- | -------------- | ---------------------- |
+| Verify blob KZG proof | 265,322,934 | 27,166,173 | 9.77x |
+| Verify blob KZG proof batch (10 blobs) | 1,228,277,089 | 196,571,578 | 6.25x |
+| Evaluate polynomial in evaluation form | 90,717,711 | 59,370,556 | 1.53x |
+| Compute challenge | 63,400,511 | 57,341,532 | 1.11x |
+| Verify KZG proof | 212,708,597 | 9,390,640 | 22.65x |
+
+## Troubleshooting
+
+### Verifying Patch Usage: Cargo
+
+You can check whether the patch was applied by using Cargo's `tree` subcommand to print the dependencies of the crate you patched.
+
+```bash
+cargo tree -p sha2@0.9.8
+```
+
+Next to the package name, it should show a link to the GitHub repository that you patched with.
+
+For example:
+
+```text
+sha2 v0.9.8 (https://github.com/sp1-patches/RustCrypto-hashes?branch=patch-sha2-v0.9.8#afdbfb09)
+├── ...
+```
+
+### Verifying Patch Usage: SP1
+
+To check if a precompile is used by your program, you can inspect SP1's `ExecutionReport`, which is returned when executing a program with `execute`. Its `syscall_counts` map records how many times each syscall was invoked.
+
+For example, if you wanted to check whether `sha256` was used, you would look for `SHA_EXTEND` and `SHA_COMPRESS` in `syscall_counts`.
+
+An example of this is available in our [Patch Testing Example](https://github.com/succinctlabs/sp1/blob/dd032eb23949828d244d1ad1f1569aa78155837c/examples/patch-testing/script/src/main.rs).
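+
+A minimal script-side sketch (here `ELF` is a placeholder for your program's ELF bytes):
+
+```rust
+use sp1_sdk::{ProverClient, SP1Stdin};
+
+let client = ProverClient::new();
+let stdin = SP1Stdin::new();
+
+// Execute the program without proving and inspect the report.
+let (_public_values, report) = client.execute(ELF, stdin).run().unwrap();
+
+// Non-zero SHA_EXTEND / SHA_COMPRESS counts confirm the sha256 precompile was hit.
+println!("syscall counts: {:?}", report.syscall_counts);
+```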
+
+### Cargo Version Issues
+
+If you encounter issues with the versions or commits of your patches, try updating the patched crate manually:
+
+```bash
+cargo update -p <patched-crate>
+```
+
+If you encounter issues relating to cargo / git, you can try setting `CARGO_NET_GIT_FETCH_WITH_CLI`:
+
+```bash
+CARGO_NET_GIT_FETCH_WITH_CLI=true cargo update -p <patched-crate>
+```
+
+You can permanently set this value in `~/.cargo/config`:
+
+```toml
+[net]
+git-fetch-with-cli = true
+```
diff --git a/book/versioned_docs/version-3.4.0/writing-programs/precompiles.mdx b/book/versioned_docs/version-3.4.0/writing-programs/precompiles.mdx
new file mode 100644
index 0000000000..0f7324f623
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/writing-programs/precompiles.mdx
@@ -0,0 +1,23 @@
+import Example from "@site/static/crates_zkvm_lib_src_lib.rs.mdx";
+
+# Precompiles
+
+Precompiles are built into the SP1 zkVM and accelerate commonly used operations such as elliptic curve arithmetic and hashing. Under the hood, precompiles are implemented as custom STARK tables dedicated to proving one or a few specific operations. **They typically improve the performance of executing expensive operations in SP1 by a few orders of magnitude.**
+
+Inside the zkVM, precompiles are exposed as system calls executed through the `ecall` RISC-V instruction.
+Each precompile has a unique system call number and implements an interface for the computation.
+
+SP1 has also been designed specifically to make it easy for external contributors to create and extend the zkVM with their own precompiles.
+To learn more about this, you can look at implementations of existing precompiles in the [precompiles](https://github.com/succinctlabs/sp1/tree/main/crates/core/executor/src/events/precompiles) folder. More documentation on this will be coming soon.
+
+**To use precompiles, we typically recommend you interact with them through [patches](./patched-crates.md), which are crates modified
+to use these precompiles under the hood, without requiring you to call system calls directly.**
+
+## Specification
+
+If you are an advanced user, you can interact with the precompiles directly using external system calls.
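+
+For example, the sha256 extend precompile can be invoked through its wrapper in `sp1_zkvm::syscalls` (a sketch; the patched crates do this for you, and the wrapper's exact signature may differ across SP1 versions):
+
+```rust
+use sp1_zkvm::syscalls::syscall_sha256_extend;
+
+// The sha256 message schedule: words 0..16 hold the input block and the
+// syscall extends them to all 64 words in place.
+let mut w = [0u32; 64];
+unsafe { syscall_sha256_extend(&mut w) };
+```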
+
+Here is a list of all available system calls & precompiles.
+
+<Example />
diff --git a/book/versioned_docs/version-3.4.0/writing-programs/proof-aggregation.md b/book/versioned_docs/version-3.4.0/writing-programs/proof-aggregation.md
new file mode 100644
index 0000000000..dc13d6e42c
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/writing-programs/proof-aggregation.md
@@ -0,0 +1,58 @@
+# Proof Aggregation
+
+SP1 supports proof aggregation and recursion, which allows you to verify an SP1 proof within SP1. Use cases include:
+
+- Reducing on-chain verification costs by aggregating multiple SP1 proofs into a single SP1 proof.
+- Proving logic that is split into multiple proofs, such as proving a statement about a rollup's state transition function by proving each block individually and aggregating these proofs to produce a final proof of a range of blocks.
+
+**For an example of how to use proof aggregation and recursion in SP1, refer to the [aggregation example](https://github.com/succinctlabs/sp1/blob/main/examples/aggregation/script/src/main.rs).**
+
+Note that to verify an SP1 proof inside SP1, you must generate a "compressed" SP1 proof (see [Proof Types](../generating-proofs/proof-types.md) for more details).
+
+### When to use aggregation
+
+Note that by itself, SP1 can already prove arbitrarily large programs by chunking the program's execution into multiple "shards" (contiguous batches of cycles), generating proofs for each shard in parallel, and then recursively aggregating the proofs. Thus, aggregation is generally **not necessary** for most use cases, as SP1's proving for large programs is already parallelized. However, aggregation can be useful for computations that require more than the zkVM's limited (~2GB) memory or for aggregating multiple SP1 proofs from different parties into a single proof to save on on-chain verification costs.
+
+## Verifying Proofs inside the zkVM
+
+To verify a proof inside the zkVM, you can use the `sp1_zkvm::lib::verify::verify_sp1_proof` function.
+
+```rust
+sp1_zkvm::lib::verify::verify_sp1_proof(vkey, public_values_digest);
+```
+
+**You do not need to pass the proof as an input to the syscall; the prover automatically reads it from the proof input stream.**
+
+Note that you must include the `verify` feature in your `Cargo.toml` for `sp1-zkvm` to be able to use the `verify_sp1_proof` function (like [this](https://github.com/succinctlabs/sp1/blob/main/examples/aggregation/program/Cargo.toml#L11)).
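+
+For instance, the program side of an aggregation setup might look like the following sketch, which mirrors the aggregation example (`sha2` is assumed as a dependency of the program crate):
+
+```rust
+use sha2::{Digest, Sha256};
+
+pub fn main() {
+    // Read the verifying key hash and the public values of the proof to verify.
+    let vkey = sp1_zkvm::io::read::<[u32; 8]>();
+    let public_values = sp1_zkvm::io::read::<Vec<u8>>();
+
+    // The proof itself is read from the proof input stream automatically.
+    let public_values_digest = Sha256::digest(&public_values);
+    sp1_zkvm::lib::verify::verify_sp1_proof(&vkey, &public_values_digest.into());
+}
+```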
+
+## Generating Proofs with Aggregation
+
+To provide an existing proof as input to the SP1 zkVM, use the same `SP1Stdin` object
+that is already used for all other inputs to the zkVM.
+
+```rust
+// Generate the proving and verifying keys.
+let (input_pk, input_vk) = client.setup(PROOF_INPUT_ELF);
+let (aggregation_pk, aggregation_vk) = client.setup(AGGREGATION_ELF);
+
+// Generate a proof that will be recursively verified / aggregated. Note that we use the "compressed"
+// proof type, which is necessary for aggregation.
+let mut stdin = SP1Stdin::new();
+let input_proof = client
+ .prove(&input_pk, stdin)
+ .compressed()
+ .run()
+ .expect("proving failed");
+
+// Create a new stdin object to write the proof and the corresponding verifying key to.
+let mut stdin = SP1Stdin::new();
+stdin.write_proof(input_proof, input_vk);
+
+// Generate a proof that will recursively verify / aggregate the input proof.
+let aggregation_proof = client
+ .prove(&aggregation_pk, stdin)
+ .compressed()
+ .run()
+ .expect("proving failed");
+```
diff --git a/book/versioned_docs/version-3.4.0/writing-programs/setup.md b/book/versioned_docs/version-3.4.0/writing-programs/setup.md
new file mode 100644
index 0000000000..2cd677f4a9
--- /dev/null
+++ b/book/versioned_docs/version-3.4.0/writing-programs/setup.md
@@ -0,0 +1,50 @@
+# Setup
+
+In this section, we will teach you how to set up a self-contained crate that can be compiled as a program and executed inside the zkVM.
+
+## Create Project with CLI (Recommended)
+
+The recommended way to set up your first program to prove inside the zkVM is to use the method described in [Quickstart](../getting-started/quickstart.md), which will create a program folder.
+
+```bash
+cargo prove new <name>
+cd program
+```
+
+## Manual Project Setup
+
+You can also manually set up a project. First create a new Rust project using `cargo`:
+
+```bash
+cargo new program
+cd program
+```
+
+### Cargo Manifest
+
+Inside this crate, add the `sp1-zkvm` crate as a dependency. Your `Cargo.toml` should look like the following:
+
+```toml
+[workspace]
+[package]
+version = "0.1.0"
+name = "program"
+edition = "2021"
+
+[dependencies]
+sp1-zkvm = "2.0.0"
+```
+
+The `sp1-zkvm` crate includes the necessary utilities for your program, including handling inputs and outputs,
+precompiles, patches, and more.
+
+### main.rs
+
+Inside the `src/main.rs` file, make sure to include these two lines so that your program compiles to a valid SP1 program.
+
+```rust
+#![no_main]
+sp1_zkvm::entrypoint!(main);
+```
+
+These two lines of code wrap your main function with some additional logic to ensure that your program compiles correctly with the RISC-V target.
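+
+Putting it together, a complete (if trivial) program might look like the following sketch, which reads a `u32` from the input stream and commits its square as a public value:
+
+```rust
+#![no_main]
+sp1_zkvm::entrypoint!(main);
+
+pub fn main() {
+    // Read a private input provided by the script via `SP1Stdin`.
+    let n = sp1_zkvm::io::read::<u32>();
+
+    // Commit the result so it appears in the proof's public values.
+    let square = n.wrapping_mul(n);
+    sp1_zkvm::io::commit(&square);
+}
+```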
diff --git a/book/versioned_sidebars/version-3.4.0-sidebars.json b/book/versioned_sidebars/version-3.4.0-sidebars.json
new file mode 100644
index 0000000000..0e9b005f29
--- /dev/null
+++ b/book/versioned_sidebars/version-3.4.0-sidebars.json
@@ -0,0 +1,98 @@
+{
+ "docs": [
+ "introduction",
+ {
+ "type": "category",
+ "label": "Getting Started",
+ "items": [
+ "getting-started/install",
+ "getting-started/quickstart",
+ "getting-started/hardware-requirements",
+ "getting-started/project-template"
+ ],
+ "collapsed": false
+ },
+ {
+ "type": "category",
+ "label": "Writing Programs",
+ "items": [
+ "writing-programs/basics",
+ "writing-programs/compiling",
+ "writing-programs/cycle-tracking",
+ "writing-programs/inputs-and-outputs",
+ "writing-programs/patched-crates",
+ "writing-programs/precompiles",
+ "writing-programs/proof-aggregation",
+ "writing-programs/setup"
+ ],
+ "collapsed": true
+ },
+ {
+ "type": "category",
+ "label": "Generating Proofs",
+ "items": [
+ "generating-proofs/basics",
+ "generating-proofs/setup",
+ "generating-proofs/proof-types",
+ "generating-proofs/recommended-workflow",
+ "generating-proofs/sp1-sdk-faq",
+ {
+ "type": "category",
+ "label": "Hardware Acceleration",
+ "link": {
+ "type": "doc",
+ "id": "generating-proofs/hardware-acceleration"
+ },
+ "items": [
+ "generating-proofs/hardware-acceleration",
+ "generating-proofs/hardware-acceleration/avx",
+ "generating-proofs/hardware-acceleration/cuda"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Prover Network",
+ "link": {
+ "type": "doc",
+ "id": "generating-proofs/prover-network"
+ },
+ "items": [
+ "generating-proofs/prover-network/key-setup",
+ "generating-proofs/prover-network/usage",
+ "generating-proofs/prover-network/versions"
+ ]
+ },
+ "generating-proofs/advanced"
+ ],
+ "collapsed": true
+ },
+ {
+ "type": "category",
+ "label": "Verification",
+ "items": [
+ "verification/off-chain-verification",
+ {
+ "type": "category",
+ "label": "On-Chain Verification",
+ "items": [
+ "verification/onchain/getting-started",
+ "verification/onchain/contract-addresses",
+ "verification/onchain/solidity-sdk"
+ ]
+ }
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Developers",
+ "items": [
+ "developers/common-issues",
+ "developers/usage-in-ci",
+ "developers/building-circuit-artifacts",
+ "developers/rv32im-specification"
+ ]
+ },
+ "what-is-a-zkvm",
+ "why-use-sp1"
+ ]
+}
diff --git a/book/versions.json b/book/versions.json
new file mode 100644
index 0000000000..af427f44cc
--- /dev/null
+++ b/book/versions.json
@@ -0,0 +1,3 @@
+[
+ "3.4.0"
+]
diff --git a/crates/cli/Cargo.toml b/crates/cli/Cargo.toml
index cf2e8cb4b2..d291ab46ad 100644
--- a/crates/cli/Cargo.toml
+++ b/crates/cli/Cargo.toml
@@ -43,4 +43,4 @@ regex = "1.5.4"
prettytable-rs = "0.10"
textwrap = "0.16.0"
ctrlc = "3.4.2"
-cargo_metadata = "0.18.1"
\ No newline at end of file
+cargo_metadata = "0.18.1"
diff --git a/crates/cli/src/bin/cargo-prove.rs b/crates/cli/src/bin/cargo-prove.rs
index e2b87d44bd..bb84ea02ca 100644
--- a/crates/cli/src/bin/cargo-prove.rs
+++ b/crates/cli/src/bin/cargo-prove.rs
@@ -3,7 +3,7 @@ use clap::{Parser, Subcommand};
use sp1_cli::{
commands::{
build::BuildCmd, build_toolchain::BuildToolchainCmd,
- install_toolchain::InstallToolchainCmd, new::NewCmd, trace::TraceCmd, vkey::VkeyCmd,
+ install_toolchain::InstallToolchainCmd, new::NewCmd, vkey::VkeyCmd,
},
SP1_VERSION_MESSAGE,
};
@@ -27,7 +27,6 @@ pub enum ProveCliCommands {
Build(BuildCmd),
BuildToolchain(BuildToolchainCmd),
InstallToolchain(InstallToolchainCmd),
- Trace(TraceCmd),
Vkey(VkeyCmd),
}
@@ -39,7 +38,6 @@ fn main() -> Result<()> {
ProveCliCommands::Build(cmd) => cmd.run(),
ProveCliCommands::BuildToolchain(cmd) => cmd.run(),
ProveCliCommands::InstallToolchain(cmd) => cmd.run(),
- ProveCliCommands::Trace(cmd) => cmd.run(),
ProveCliCommands::Vkey(cmd) => cmd.run(),
}
}
diff --git a/crates/cli/src/commands/mod.rs b/crates/cli/src/commands/mod.rs
index fc6eb6a5ac..e17d443d05 100644
--- a/crates/cli/src/commands/mod.rs
+++ b/crates/cli/src/commands/mod.rs
@@ -2,5 +2,4 @@ pub mod build;
pub mod build_toolchain;
pub mod install_toolchain;
pub mod new;
-pub mod trace;
pub mod vkey;
diff --git a/crates/cli/src/commands/trace.rs b/crates/cli/src/commands/trace.rs
deleted file mode 100644
index 16cd2e219c..0000000000
--- a/crates/cli/src/commands/trace.rs
+++ /dev/null
@@ -1,428 +0,0 @@
-//! RISC-V tracer for SP1 traces. This tool can be used to analyze function call graphs and
-//! instruction counts from a trace file from SP1 execution by setting the `TRACE_FILE` env
-//! variable.
-//
-// Adapted from Sovereign's RISC-V tracer tool: https://github.com/Sovereign-Labs/riscv-cycle-tracer.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-// Modified by Succinct Labs on July 25, 2024.
-
-use anyhow::Result;
-use clap::Parser;
-use goblin::elf::{sym::STT_FUNC, Elf};
-use indicatif::{ProgressBar, ProgressStyle};
-use prettytable::{format, Cell, Row, Table};
-use regex::Regex;
-use rustc_demangle::demangle;
-use std::{
- cmp::Ordering,
- collections::HashMap,
- io::Read,
- process::Command,
- str,
- sync::{atomic::AtomicBool, Arc},
-};
-use textwrap::wrap;
-
-#[derive(Parser, Debug)]
-#[command(name = "trace", about = "Trace a program execution and analyze cycle counts.")]
-pub struct TraceCmd {
- /// Include the "top" number of functions.
- #[arg(short, long, default_value_t = 30)]
- top: usize,
-
- /// Don't print stack aware instruction counts
- #[arg(long)]
- no_stack_counts: bool,
-
- /// Don't print raw (stack un-aware) instruction counts.
- #[arg(long)]
- no_raw_counts: bool,
-
- /// Path to the ELF.
- #[arg(long, required = true)]
- elf: String,
-
- /// Path to the trace file. Simply run the program with `TRACE_FILE=trace.log` environment
- /// variable. File must be one u64 program counter per line
- #[arg(long, required = true)]
- trace: String,
-
- /// Strip the hashes from the function name while printing.
- #[arg(short, long)]
- keep_hashes: bool,
-
- /// Function name to target for getting stack counts.
- #[arg(short, long)]
- function_name: Option<String>,
-
- /// Exclude functions matching these patterns from display.
- ///
- /// Usage: `-e func1 -e func2 -e func3`.
- #[arg(short, long)]
- exclude_view: Vec<String>,
-}
-
-fn strip_hash(name_with_hash: &str) -> String {
- let re = Regex::new(r"::h[0-9a-fA-F]{16}").unwrap();
- let mut result = re.replace(name_with_hash, "").to_string();
- let re2 = Regex::new(r"^<(.+) as .+>").unwrap();
- result = re2.replace(&result, "$1").to_string();
- let re2 = Regex::new(r"^<(.+) as .+>").unwrap();
- result = re2.replace(&result, "$1").to_string();
- let re2 = Regex::new(r"([^\:])<.+>::").unwrap();
- result = re2.replace_all(&result, "$1::").to_string();
- result
-}
-
-fn print_instruction_counts(
- first_header: &str,
- count_vec: Vec<(String, usize)>,
- top_n: usize,
- strip_hashes: bool,
- exclude_list: Option<&[String]>,
-) {
- let mut table = Table::new();
- table.set_format(*format::consts::FORMAT_NO_LINESEP);
- table.set_titles(Row::new(vec![Cell::new(first_header), Cell::new("Instruction Count")]));
-
- let wrap_width = 120;
- let mut row_count = 0;
- for (key, value) in count_vec {
- let mut cont = false;
- if let Some(ev) = exclude_list {
- for e in ev {
- if key.contains(e) {
- cont = true;
- break;
- }
- }
- if cont {
- continue;
- }
- }
- let mut stripped_key = key.clone();
- if strip_hashes {
- stripped_key = strip_hash(&key);
- }
- row_count += 1;
- if row_count > top_n {
- break;
- }
- let wrapped_key = wrap(&stripped_key, wrap_width);
- let key_cell_content = wrapped_key.join("\n");
- table.add_row(Row::new(vec![Cell::new(&key_cell_content), Cell::new(&value.to_string())]));
- }
-
- table.printstd();
-}
-
-fn focused_stack_counts(
- function_stack: &[String],
- filtered_stack_counts: &mut HashMap<Vec<String>, usize>,
- function_name: &str,
- num_instructions: usize,
-) {
- if let Some(index) = function_stack.iter().position(|s| s == function_name) {
- let truncated_stack = &function_stack[0..=index];
- let count = filtered_stack_counts.entry(truncated_stack.to_vec()).or_insert(0);
- *count += num_instructions;
- }
-}
-
-fn _build_radare2_lookups(
- start_lookup: &mut HashMap<u64, String>,
- end_lookup: &mut HashMap<u64, String>,
- func_range_lookup: &mut HashMap<String, (u64, u64)>,
- elf_name: &str,
-) -> std::io::Result<()> {
- let output = Command::new("r2").arg("-q").arg("-c").arg("aa;afl").arg(elf_name).output()?;
-
- if output.status.success() {
- let result_str = str::from_utf8(&output.stdout).unwrap();
- for line in result_str.lines() {
- let parts: Vec<&str> = line.split_whitespace().collect();
- let address = u64::from_str_radix(&parts[0][2..], 16).unwrap();
- let size = parts[2].parse::<u64>().unwrap();
- let end_address = address + size - 4;
- let function_name = parts[3];
- start_lookup.insert(address, function_name.to_string());
- end_lookup.insert(end_address, function_name.to_string());
- func_range_lookup.insert(function_name.to_string(), (address, end_address));
- }
- } else {
- eprintln!("Error executing command: {}", str::from_utf8(&output.stderr).unwrap());
- }
- Ok(())
-}
-
-fn build_goblin_lookups(
- start_lookup: &mut HashMap<u64, String>,
- end_lookup: &mut HashMap<u64, String>,
- func_range_lookup: &mut HashMap<String, (u64, u64)>,
- elf_name: &str,
-) -> std::io::Result<()> {
- let buffer = std::fs::read(elf_name).unwrap();
- let elf = Elf::parse(&buffer).unwrap();
-
- for sym in &elf.syms {
- if sym.st_type() == STT_FUNC {
- let name = elf.strtab.get_at(sym.st_name).unwrap_or("");
- let demangled_name = demangle(name);
- let size = sym.st_size;
- let start_address = sym.st_value;
- let end_address = start_address + size - 4;
- start_lookup.insert(start_address, demangled_name.to_string());
- end_lookup.insert(end_address, demangled_name.to_string());
- func_range_lookup.insert(demangled_name.to_string(), (start_address, end_address));
- }
- }
- Ok(())
-}
-
-fn increment_stack_counts(
- instruction_counts: &mut HashMap<String, usize>,
- function_stack: &[String],
- filtered_stack_counts: &mut HashMap<Vec<String>, usize>,
- function_name: &Option<String>,
- num_instructions: usize,
-) {
- for f in function_stack {
- *instruction_counts.entry(f.clone()).or_insert(0) += num_instructions;
- }
- if let Some(f) = function_name {
- focused_stack_counts(function_stack, filtered_stack_counts, f, num_instructions)
- }
-}
-
-impl TraceCmd {
- pub fn run(&self) -> Result<()> {
- let top_n = self.top;
- let elf_path = self.elf.clone();
- let trace_path = self.trace.clone();
- let no_stack_counts = self.no_stack_counts;
- let no_raw_counts = self.no_raw_counts;
- let strip_hashes = !self.keep_hashes;
- let function_name = self.function_name.clone();
- let exclude_view = self.exclude_view.clone();
-
- let mut start_lookup = HashMap::new();
- let mut end_lookup = HashMap::new();
- let mut func_range_lookup = HashMap::new();
- build_goblin_lookups(&mut start_lookup, &mut end_lookup, &mut func_range_lookup, &elf_path)
- .unwrap();
-
- let mut function_ranges: Vec<(u64, u64, String)> =
- func_range_lookup.iter().map(|(f, &(start, end))| (start, end, f.clone())).collect();
-
- function_ranges.sort_by_key(|&(start, _, _)| start);
-
- let file = std::fs::File::open(trace_path).unwrap();
- let file_size = file.metadata().unwrap().len();
- let mut buf = std::io::BufReader::new(file);
- let mut function_stack: Vec<String> = Vec::new();
- let mut instruction_counts: HashMap<String, usize> = HashMap::new();
- let mut counts_without_callgraph: HashMap<String, usize> = HashMap::new();
- let mut filtered_stack_counts: HashMap<Vec<String>, usize> = HashMap::new();
- let total_lines = file_size / 4;
- let mut current_function_range: (u64, u64) = (0, 0);
-
- let update_interval = 1000usize;
- let pb = ProgressBar::new(total_lines);
- pb.set_style(
- ProgressStyle::default_bar()
- .template(
- "{spinner:.green} [{elapsed_precise}] [{bar:40.cyan/blue}] {pos}/{len} ({eta})",
- )
- .unwrap()
- .progress_chars("#>-"),
- );
-
- let running = Arc::new(AtomicBool::new(true));
- let r = running.clone();
-
- ctrlc::set_handler(move || {
- r.store(false, std::sync::atomic::Ordering::SeqCst);
- })
- .expect("Error setting Ctrl-C handler");
-
- for c in 0..total_lines {
- if (c as usize) % update_interval == 0 {
- pb.inc(update_interval as u64);
- if !running.load(std::sync::atomic::Ordering::SeqCst) {
- pb.finish_with_message("Interrupted");
- break;
- }
- }
-
- // Parse pc from hex.
- let mut pc_bytes = [0u8; 4];
- buf.read_exact(&mut pc_bytes).unwrap();
- let pc = u32::from_be_bytes(pc_bytes) as u64;
-
- // Only 1 instruction per opcode.
- let num_instructions = 1;
-
- // Raw counts without considering the callgraph at all we're just checking if the PC
- // belongs to a function if so we're incrementing. This would ignore the call stack
- // so for example "main" would only have a hundred instructions or so.
- if let Ok(index) = function_ranges.binary_search_by(|&(start, end, _)| {
- if pc < start {
- Ordering::Greater
- } else if pc > end {
- Ordering::Less
- } else {
- Ordering::Equal
- }
- }) {
- let (_, _, fname) = &function_ranges[index];
- *counts_without_callgraph.entry(fname.clone()).or_insert(0) += num_instructions
- } else {
- *counts_without_callgraph.entry("anonymous".to_string()).or_insert(0) +=
- num_instructions;
- }
-
- // The next section considers the callstack. We build a callstack and maintain it based
- // on some rules. Functions lower in the stack get their counts incremented.
-
- // We are still in the current function.
- if pc > current_function_range.0 && pc <= current_function_range.1 {
- increment_stack_counts(
- &mut instruction_counts,
- &function_stack,
- &mut filtered_stack_counts,
- &function_name,
- num_instructions,
- );
- continue;
- }
-
- // Jump to a new function (or the same one).
- if let Some(f) = start_lookup.get(&pc) {
- increment_stack_counts(
- &mut instruction_counts,
- &function_stack,
- &mut filtered_stack_counts,
- &function_name,
- num_instructions,
- );
-
- // Jump to a new function (not recursive).
- if !function_stack.contains(f) {
- function_stack.push(f.clone());
- current_function_range = *func_range_lookup.get(f).unwrap();
- }
- } else {
- // This means pc now points to an instruction that is
- //
- // 1. not in the current function's range
- // 2. not a new function call
- //
- // We now account for a new possibility where we're returning to a function in the
- // stack this need not be the immediate parent and can be any of the existing
- // functions in the stack due to some optimizations that the compiler can make.
- let mut unwind_point = 0;
- let mut unwind_found = false;
- for (c, f) in function_stack.iter().enumerate() {
- let (s, e) = func_range_lookup.get(f).unwrap();
- if pc > *s && pc <= *e {
- unwind_point = c;
- unwind_found = true;
- break;
- }
- }
-
- // Unwinding until the parent.
- if unwind_found {
- function_stack.truncate(unwind_point + 1);
- increment_stack_counts(
- &mut instruction_counts,
- &function_stack,
- &mut filtered_stack_counts,
- &function_name,
- num_instructions,
- );
- continue;
- }
-
- // If no unwind point has been found, that means we jumped to some random location
- // so we'll just increment the counts for everything in the stack.
- increment_stack_counts(
- &mut instruction_counts,
- &function_stack,
- &mut filtered_stack_counts,
- &function_name,
- num_instructions,
- );
- }
- }
-
- pb.finish_with_message("done");
-
- let mut raw_counts: Vec<(String, usize)> =
- instruction_counts.iter().map(|(key, value)| (key.clone(), *value)).collect();
- raw_counts.sort_by(|a, b| b.1.cmp(&a.1));
-
- println!("\n\nTotal instructions in trace: {}", total_lines);
- if !no_stack_counts {
- println!("\n\n Instruction counts considering call graph");
- print_instruction_counts(
- "Function Name",
- raw_counts,
- top_n,
- strip_hashes,
- Some(&exclude_view),
- );
- }
-
- let mut raw_counts: Vec<(String, usize)> =
- counts_without_callgraph.iter().map(|(key, value)| (key.clone(), *value)).collect();
- raw_counts.sort_by(|a, b| b.1.cmp(&a.1));
- if !no_raw_counts {
- println!("\n\n Instruction counts ignoring call graph");
- print_instruction_counts(
- "Function Name",
- raw_counts,
- top_n,
- strip_hashes,
- Some(&exclude_view),
- );
- }
-
- let mut raw_counts: Vec<(String, usize)> = filtered_stack_counts
- .iter()
- .map(|(stack, count)| {
- let numbered_stack = stack
- .iter()
- .rev()
- .enumerate()
- .map(|(index, line)| {
- let modified_line =
- if strip_hashes { strip_hash(line) } else { line.clone() };
- format!("({}) {}", index + 1, modified_line)
- })
- .collect::<Vec<String>>()
- .join("\n");
- (numbered_stack, *count)
- })
- .collect();
-
- raw_counts.sort_by(|a, b| b.1.cmp(&a.1));
- if let Some(f) = function_name {
- println!("\n\n Stack patterns for function '{f}' ");
- print_instruction_counts("Function Stack", raw_counts, top_n, strip_hashes, None);
- }
- Ok(())
- }
-}
diff --git a/crates/core/executor/Cargo.toml b/crates/core/executor/Cargo.toml
index 2cbf03a117..b344da6793 100644
--- a/crates/core/executor/Cargo.toml
+++ b/crates/core/executor/Cargo.toml
@@ -43,9 +43,23 @@ vec_map = { version = "0.8.2", features = ["serde"] }
enum-map = { version = "2.7.3", features = ["serde"] }
test-artifacts = { workspace = true, optional = true }
+# profiling
+goblin = { version = "0.9", optional = true }
+rustc-demangle = { version = "0.1.18", optional = true }
+gecko_profile = { version = "0.4.0", optional = true }
+indicatif = { version = "0.17.8", optional = true }
+serde_json = { version = "1.0.121", optional = true }
+
[dev-dependencies]
sp1-zkvm = { workspace = true, features = ["lib"] }
[features]
programs = ["dep:test-artifacts"]
bigint-rug = ["sp1-curves/bigint-rug"]
+profiling = [
+ "dep:goblin",
+ "dep:rustc-demangle",
+ "dep:gecko_profile",
+ "dep:indicatif",
+ "dep:serde_json",
+]
diff --git a/crates/core/executor/src/executor.rs b/crates/core/executor/src/executor.rs
index a9e0834a45..aaea43f0e2 100644
--- a/crates/core/executor/src/executor.rs
+++ b/crates/core/executor/src/executor.rs
@@ -1,9 +1,3 @@
-use std::{
- fs::File,
- io::{BufWriter, Write},
- sync::Arc,
-};
-
use hashbrown::HashMap;
use serde::{Deserialize, Serialize};
use sp1_stark::SP1CoreOpts;
@@ -26,6 +20,13 @@ use crate::{
Instruction, Opcode, Program, Register,
};
+#[cfg(feature = "profiling")]
+use crate::profiler::Profiler;
+#[cfg(feature = "profiling")]
+use std::{fs::File, io::BufWriter};
+
+use std::sync::Arc;
+
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
/// Whether to verify deferred proofs during execution.
pub enum DeferredProofVerification {
@@ -110,8 +111,11 @@ pub struct Executor<'a> {
/// A buffer for stdout and stderr IO.
pub io_buf: HashMap<u32, String>,
- /// A buffer for writing trace events to a file.
- pub trace_buf: Option<BufWriter<File>>,
+ /// The ZKVM program profiler.
+ ///
+ /// Keeps track of the number of cycles spent in each function.
+ #[cfg(feature = "profiling")]
+ pub profiler: Option<(Profiler, BufWriter<File>)>,
/// The state of the runtime when in unconstrained mode.
pub unconstrained_state: ForkState,
@@ -190,11 +194,53 @@ impl<'a> Executor<'a> {
Self::with_context(program, opts, SP1Context::default())
}
+ /// Create a new runtime for the program, and set up the profiler if the `TRACE_FILE` env var
+ /// is set and the feature flag `profiling` is enabled.
+ #[must_use]
+ pub fn with_context_and_elf(
+ opts: SP1CoreOpts,
+ context: SP1Context<'a>,
+ elf_bytes: &[u8],
+ ) -> Self {
+ let program = Program::from(elf_bytes).expect("Failed to create program from ELF bytes");
+
+ #[cfg(not(feature = "profiling"))]
+ return Self::with_context(program, opts, context);
+
+ #[cfg(feature = "profiling")]
+ {
+ let mut this = Self::with_context(program, opts, context);
+
+ let trace_buf = std::env::var("TRACE_FILE").ok().map(|file| {
+ let file = File::create(file).unwrap();
+ BufWriter::new(file)
+ });
+
+ if let Some(trace_buf) = trace_buf {
+ println!("Profiling enabled");
+
+ let sample_rate = std::env::var("TRACE_SAMPLE_RATE")
+ .ok()
+ .and_then(|rate| {
+ println!("Profiling sample rate: {rate}");
+ rate.parse::<u32>().ok()
+ })
+ .unwrap_or(1);
+
+ this.profiler = Some((
+ Profiler::new(elf_bytes, sample_rate as u64)
+ .expect("Failed to create profiler"),
+ trace_buf,
+ ));
+ }
+
+ this
+ }
+ }
+
/// Create a new runtime from a program, options, and a context.
///
- /// # Panics
- ///
- /// This function may panic if it fails to create the trace file if `TRACE_FILE` is set.
+ /// Note: This function *will not* set up the profiler.
#[must_use]
pub fn with_context(program: Program, opts: SP1CoreOpts, context: SP1Context<'a>) -> Self {
// Create a shared reference to the program.
@@ -203,14 +249,6 @@ impl<'a> Executor<'a> {
// Create a default record with the program.
let record = ExecutionRecord::new(program.clone());
- // If `TRACE_FILE`` is set, initialize the trace buffer.
- let trace_buf = if let Ok(trace_file) = std::env::var("TRACE_FILE") {
- let file = File::create(trace_file).unwrap();
- Some(BufWriter::new(file))
- } else {
- None
- };
-
// Determine the maximum number of cycles for any syscall.
let syscall_map = default_syscall_map();
let max_syscall_cycles =
@@ -230,7 +268,8 @@ impl<'a> Executor<'a> {
shard_batch_size: opts.shard_batch_size as u32,
cycle_tracker: HashMap::new(),
io_buf: HashMap::new(),
- trace_buf,
+ #[cfg(feature = "profiling")]
+ profiler: None,
unconstrained: false,
unconstrained_state: ForkState::default(),
syscall_map,
@@ -1190,7 +1229,6 @@ impl<'a> Executor<'a> {
let instruction = self.fetch();
// Log the current state of the runtime.
- #[cfg(debug_assertions)]
self.log(&instruction);
// Execute the instruction.
@@ -1466,6 +1504,12 @@ impl<'a> Executor<'a> {
self.executor_mode = ExecutorMode::Simple;
self.print_report = true;
while !self.execute()? {}
+
+ #[cfg(feature = "profiling")]
+ if let Some((profiler, writer)) = self.profiler.take() {
+ profiler.write(writer).expect("Failed to write profile to output file");
+ }
+
Ok(())
}
@@ -1478,6 +1522,12 @@ impl<'a> Executor<'a> {
self.executor_mode = ExecutorMode::Trace;
self.print_report = true;
while !self.execute()? {}
+
+ #[cfg(feature = "profiling")]
+ if let Some((profiler, writer)) = self.profiler.take() {
+ profiler.write(writer).expect("Failed to write profile to output file");
+ }
+
Ok(())
}
@@ -1576,11 +1626,6 @@ impl<'a> Executor<'a> {
}
}
- // Flush trace buf
- if let Some(ref mut buf) = self.trace_buf {
- buf.flush().unwrap();
- }
-
// Ensure that all proofs and input bytes were read, otherwise warn the user.
if self.state.proof_stream_ptr != self.state.proof_stream.len() {
tracing::warn!(
@@ -1648,12 +1693,11 @@ impl<'a> Executor<'a> {
}
#[inline]
- #[cfg(debug_assertions)]
fn log(&mut self, _: &Instruction) {
- // Write the current program counter to the trace buffer for the cycle tracer.
- if let Some(ref mut buf) = self.trace_buf {
+ #[cfg(feature = "profiling")]
+ if let Some((ref mut profiler, _)) = self.profiler {
if !self.unconstrained {
- buf.write_all(&u32::to_be_bytes(self.state.pc)).unwrap();
+ profiler.record(self.state.global_clk, self.state.pc as u64);
}
}
diff --git a/crates/core/executor/src/lib.rs b/crates/core/executor/src/lib.rs
index a4b6a06ced..a1bccb45cf 100644
--- a/crates/core/executor/src/lib.rs
+++ b/crates/core/executor/src/lib.rs
@@ -29,6 +29,8 @@ mod instruction;
mod io;
mod memory;
mod opcode;
+#[cfg(feature = "profiling")]
+mod profiler;
mod program;
#[cfg(any(test, feature = "programs"))]
pub mod programs;
diff --git a/crates/core/executor/src/profiler.rs b/crates/core/executor/src/profiler.rs
new file mode 100644
index 0000000000..a3066455cb
--- /dev/null
+++ b/crates/core/executor/src/profiler.rs
@@ -0,0 +1,227 @@
+use gecko_profile::{Frame, ProfileBuilder, StringIndex, ThreadBuilder};
+use goblin::elf::{sym::STT_FUNC, Elf};
+use indicatif::{ProgressBar, ProgressStyle};
+use rustc_demangle::demangle;
+use std::collections::HashMap;
+
+#[derive(Debug, thiserror::Error)]
+pub enum ProfilerError {
+ #[error("Failed to read ELF file {}", .0)]
+ Io(#[from] std::io::Error),
+ #[error("Failed to parse ELF file {}", .0)]
+ Elf(#[from] goblin::error::Error),
+ #[error("Failed to serialize samples {}", .0)]
+ Serde(#[from] serde_json::Error),
+}
+
+/// During execution, the profiler always keeps track of the callstack
+/// and will occasionally save the stack according to the sample rate.
+pub struct Profiler {
+ sample_rate: u64,
+ /// `start_address`-> index in `function_ranges`
+ start_lookup: HashMap<u64, usize>,
+ /// the start and end of the function
+ function_ranges: Vec<(u64, u64, Frame)>,
+
+ /// the current known call stack
+ function_stack: Vec<Frame>,
+ /// useful for quick search as to not count recursive calls
+ function_stack_indices: Vec<usize>,
+ /// The call stacks code ranges, useful for keeping track of unwinds
+ function_stack_ranges: Vec<(u64, u64)>,
+ /// The deepest function code range
+ current_function_range: (u64, u64),
+
+ main_idx: Option<StringIndex>,
+ builder: ThreadBuilder,
+ samples: Vec,
+}
+
+struct Sample {
+ stack: Vec<Frame>,
+}
+
+impl Profiler {
+ pub(super) fn new(elf_bytes: &[u8], sample_rate: u64) -> Result<Self, ProfilerError> {
+ let elf = Elf::parse(elf_bytes)?;
+
+ let mut start_lookup = HashMap::new();
+ let mut function_ranges = Vec::new();
+ let mut builder = ThreadBuilder::new(1, 0, std::time::Instant::now(), false, false);
+
+ // We need to extract all the functions from the ELF file
+ // and their corresponding PC ranges.
+ let mut main_idx = None;
+ for sym in &elf.syms {
+ // Check if it's a function.
+ if sym.st_type() == STT_FUNC {
+ let name = elf.strtab.get_at(sym.st_name).unwrap_or("");
+ let demangled_name = demangle(name);
+ let size = sym.st_size;
+ let start_address = sym.st_value;
+ let end_address = start_address + size - 4;
+
+ // Now that we have the name let's immediately intern it so we only need to copy
+ // around a usize
+ let demangled_name = demangled_name.to_string();
+ let string_idx = builder.intern_string(&demangled_name);
+ if main_idx.is_none() && demangled_name == "main" {
+ main_idx = Some(string_idx);
+ }
+
+ let start_idx = function_ranges.len();
+ function_ranges.push((start_address, end_address, Frame::Label(string_idx)));
+ start_lookup.insert(start_address, start_idx);
+ }
+ }
+
+ Ok(Self {
+ builder,
+ main_idx,
+ sample_rate,
+ samples: Vec::new(),
+ start_lookup,
+ function_ranges,
+ function_stack: Vec::new(),
+ function_stack_indices: Vec::new(),
+ function_stack_ranges: Vec::new(),
+ current_function_range: (0, 0),
+ })
+ }
+
+ pub(super) fn record(&mut self, clk: u64, pc: u64) {
+ // We are still in the current function.
+ if pc > self.current_function_range.0 && pc <= self.current_function_range.1 {
+ if clk % self.sample_rate == 0 {
+ self.samples.push(Sample { stack: self.function_stack.clone() });
+ }
+
+ return;
+ }
+
+ // Jump to a new function (or the same one).
+ if let Some(f) = self.start_lookup.get(&pc) {
+ // Jump to a new function (not recursive).
+ if !self.function_stack_indices.contains(f) {
+ self.function_stack_indices.push(*f);
+ let (start, end, name) = self.function_ranges.get(*f).unwrap();
+ self.current_function_range = (*start, *end);
+ self.function_stack_ranges.push((*start, *end));
+ self.function_stack.push(name.clone());
+ }
+ } else {
+ // This means pc now points to an instruction that is
+ //
+ // 1. not in the current function's range
+ // 2. not a new function call
+ //
+ // We now account for a new possibility where we're returning to a function in the
+ // stack this need not be the immediate parent and can be any of the existing
+ // functions in the stack due to some optimizations that the compiler can make.
+ let mut unwind_point = 0;
+ let mut unwind_found = false;
+ for (c, (s, e)) in self.function_stack_ranges.iter().enumerate() {
+ if pc > *s && pc <= *e {
+ unwind_point = c;
+ unwind_found = true;
+ break;
+ }
+ }
+
+ // Unwinding until the parent.
+ if unwind_found {
+ self.function_stack.truncate(unwind_point + 1);
+ self.function_stack_ranges.truncate(unwind_point + 1);
+ self.function_stack_indices.truncate(unwind_point + 1);
+ }
+
+ // If no unwind point has been found, that means we jumped to some random location
+ // so we'll just increment the counts for everything in the stack.
+ }
+
+ if clk % self.sample_rate == 0 {
+ self.samples.push(Sample { stack: self.function_stack.clone() });
+ }
+ }
+
+ /// Write the captured samples so far to the `std::io::Write`. This will output a JSON gecko
+ /// profile.
+ pub(super) fn write(mut self, writer: impl std::io::Write) -> Result<(), ProfilerError> {
+ self.check_samples();
+
+ let start_time = std::time::Instant::now();
+ let mut profile_builder = ProfileBuilder::new(
+ start_time,
+ std::time::SystemTime::now(),
+ "SP1 ZKVM",
+ 0,
+ std::time::Duration::from_micros(1),
+ );
+
+ let pb = ProgressBar::new(self.samples.len() as u64);
+ pb.set_style(
+ ProgressStyle::default_bar()
+ .template(
+ "{msg} \n {spinner:.green} [{elapsed_precise}] [{bar:40.cyan/blue}] {pos}/{len} ({eta})",
+ )
+ .unwrap()
+ .progress_chars("#>-"),
+ );
+
+ pb.set_message("Creating profile");
+
+ let mut last_known_time = std::time::Instant::now();
+ for sample in self.samples.drain(..) {
+ pb.inc(1);
+
+ self.builder.add_sample(
+ last_known_time,
+ sample.stack.into_iter(),
+ // We don't have a way to know the real duration of each sample, so we
+ // assign each one `sample_rate` microseconds.
+ std::time::Duration::from_micros(self.sample_rate),
+ );
+
+ last_known_time += std::time::Duration::from_micros(self.sample_rate);
+ }
+
+ profile_builder.add_thread(self.builder);
+
+ pb.finish();
+
+ println!("Writing profile, this can take awhile");
+ serde_json::to_writer(writer, &profile_builder.to_serializable())?;
+ println!("Profile written successfully");
+
+ Ok(())
+ }
+
+ /// Simple check to make sure we have a valid `main` function that lasts
+ /// for most of the execution time.
+ fn check_samples(&self) {
+ let Some(main_idx) = self.main_idx else {
+ eprintln!("Warning: The `main` function is not present in the Elf file, this is likely caused by using the wrong Elf file");
+ return;
+ };
+
+ let main_count =
+ self.samples
+ .iter()
+ .filter(|s| {
+ s.stack.iter().any(|f| {
+ if let Frame::Label(idx) = f {
+ *idx == main_idx
+ } else {
+ false
+ }
+ })
+ })
+ .count();
+
+ #[allow(clippy::cast_precision_loss)]
+ let main_ratio = main_count as f64 / self.samples.len() as f64;
+ if main_ratio < 0.9 {
+ eprintln!("Warning: This trace appears to be invalid. The `main` function is present in only {:.2}% of the samples, this is likely caused by the using the wrong Elf file", main_ratio * 100.0);
+ }
+ }
+}
diff --git a/crates/prover/src/lib.rs b/crates/prover/src/lib.rs
index 75915e5bc0..844a5feb3c 100644
--- a/crates/prover/src/lib.rs
+++ b/crates/prover/src/lib.rs
@@ -273,9 +273,9 @@ impl SP1Prover {
mut context: SP1Context<'a>,
) -> Result<(SP1PublicValues, ExecutionReport), ExecutionError> {
context.subproof_verifier.replace(Arc::new(self));
- let program = self.get_program(elf).unwrap();
let opts = SP1CoreOpts::default();
- let mut runtime = Executor::with_context(program, opts, context);
+ let mut runtime = Executor::with_context_and_elf(opts, context, elf);
+
runtime.write_vecs(&stdin.buffer);
for (proof, vkey) in stdin.proofs.iter() {
runtime.write_proof(proof.clone(), vkey.clone());
diff --git a/crates/sdk/Cargo.toml b/crates/sdk/Cargo.toml
index f6fe5acdd3..64e102189c 100644
--- a/crates/sdk/Cargo.toml
+++ b/crates/sdk/Cargo.toml
@@ -88,6 +88,8 @@ network-v2 = [
]
cuda = ["sp1-cuda"]
+profiling = ["sp1-core-executor/profiling"]
+
[build-dependencies]
vergen = { version = "8", default-features = false, features = [
"build",
diff --git a/examples/Cargo.lock b/examples/Cargo.lock
index 53dd96cfd5..05d23d23a0 100644
--- a/examples/Cargo.lock
+++ b/examples/Cargo.lock
@@ -219,7 +219,7 @@ dependencies = [
"alloy-sol-types",
"serde",
"serde_json",
- "thiserror",
+ "thiserror 1.0.68",
"tracing",
]
@@ -241,7 +241,7 @@ dependencies = [
"async-trait",
"auto_impl",
"futures-utils-wasm",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -426,7 +426,7 @@ dependencies = [
"auto_impl",
"elliptic-curve",
"k256",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -442,7 +442,7 @@ dependencies = [
"async-trait",
"k256",
"rand 0.8.5",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -1155,7 +1155,7 @@ dependencies = [
"semver 1.0.23",
"serde",
"serde_json",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -1636,6 +1636,15 @@ dependencies = [
"rustversion",
]
+[[package]]
+name = "debugid"
+version = "0.8.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bef552e6f588e446098f6ba40d89ac146c8c7b64aade83c051ee00bb5d2bc18d"
+dependencies = [
+ "uuid",
+]
+
[[package]]
name = "der"
version = "0.5.1"
@@ -1834,7 +1843,7 @@ dependencies = [
"rand_core 0.6.4",
"serde",
"sha2 0.9.9",
- "thiserror",
+ "thiserror 1.0.68",
"zeroize",
]
@@ -2214,6 +2223,17 @@ version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d758ba1b47b00caf47f24925c0074ecb20d6dfcffe7f6d53395c0465674841a"
+[[package]]
+name = "gecko_profile"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "890852c7e1e02bc6758e325d6b1e0236e4fbf21b492f585ce4d4715be54b4c6a"
+dependencies = [
+ "debugid",
+ "serde",
+ "serde_json",
+]
+
[[package]]
name = "generic-array"
version = "0.14.7"
@@ -2271,6 +2291,17 @@ version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b"
+[[package]]
+name = "goblin"
+version = "0.8.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1b363a30c165f666402fe6a3024d3bec7ebc898f96a4a23bd1c99f8dbf3f4f47"
+dependencies = [
+ "log",
+ "plain",
+ "scroll",
+]
+
[[package]]
name = "groth16-verifier-program"
version = "1.1.0"
@@ -3846,7 +3877,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "879952a81a83930934cbf1786752d6dedc3b1f29e8f8fb2ad1d0a36f377cf442"
dependencies = [
"memchr",
- "thiserror",
+ "thiserror 1.0.68",
"ucd-trie",
]
@@ -3911,6 +3942,12 @@ version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2"
+[[package]]
+name = "plain"
+version = "0.2.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b4596b6d070b27117e987119b4dac604f3c58cfb0b191112e24771b2faeac1a6"
+
[[package]]
name = "portable-atomic"
version = "1.9.0"
@@ -4106,7 +4143,7 @@ dependencies = [
"rustc-hash 2.0.0",
"rustls",
"socket2",
- "thiserror",
+ "thiserror 1.0.68",
"tokio",
"tracing",
]
@@ -4123,7 +4160,7 @@ dependencies = [
"rustc-hash 2.0.0",
"rustls",
"slab",
- "thiserror",
+ "thiserror 1.0.68",
"tinyvec",
"tracing",
]
@@ -4289,7 +4326,7 @@ checksum = "ba009ff324d1fc1b900bd1fdb31564febe58a8ccc8a6fdbb93b543d33b13ca43"
dependencies = [
"getrandom",
"libredox",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -4407,7 +4444,7 @@ dependencies = [
"http",
"reqwest",
"serde",
- "thiserror",
+ "thiserror 1.0.68",
"tower-service",
]
@@ -4420,7 +4457,7 @@ dependencies = [
"reth-execution-errors",
"reth-primitives",
"reth-storage-errors",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -4514,7 +4551,7 @@ dependencies = [
"reth-execution-errors",
"reth-fs-util",
"reth-storage-errors",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -4598,7 +4635,7 @@ dependencies = [
"reth-revm",
"revm",
"revm-primitives",
- "thiserror",
+ "thiserror 1.0.68",
"tracing",
]
@@ -4637,7 +4674,7 @@ source = "git+https://github.com/sp1-patches/reth?tag=rsp-20240830#260c7ed2c9374
dependencies = [
"serde",
"serde_json",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -4649,7 +4686,7 @@ dependencies = [
"alloy-rlp",
"enr",
"serde_with",
- "thiserror",
+ "thiserror 1.0.68",
"url",
]
@@ -4706,7 +4743,7 @@ dependencies = [
"reth-trie-common",
"revm-primitives",
"serde",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -4741,7 +4778,7 @@ dependencies = [
"modular-bitfield",
"reth-codecs",
"serde",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -5090,7 +5127,7 @@ dependencies = [
"rlp",
"rsp-primitives",
"serde",
- "thiserror",
+ "thiserror 1.0.68",
]
[[package]]
@@ -5337,6 +5374,26 @@ version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
+[[package]]
+name = "scroll"
+version = "0.12.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6ab8598aa408498679922eff7fa985c25d58a90771bd6be794434c5277eab1a6"
+dependencies = [
+ "scroll_derive",
+]
+
+[[package]]
+name = "scroll_derive"
+version = "0.12.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7f81c2fde025af7e69b1d1420531c8a8811ca898919db177141a85313b1cb932"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.87",
+]
+
[[package]]
name = "sdd"
version = "3.0.4"
@@ -5697,8 +5754,11 @@ dependencies = [
"elf",
"enum-map",
"eyre",
+ "gecko_profile",
+ "goblin",
"hashbrown 0.14.5",
"hex",
+ "indicatif",
"itertools 0.13.0",
"log",
"nohash-hasher",
@@ -5707,13 +5767,15 @@ dependencies = [
"p3-maybe-rayon",
"rand 0.8.5",
"rrs-succinct",
+ "rustc-demangle",
"serde",
+ "serde_json",
"sp1-curves",
"sp1-primitives",
"sp1-stark",
"strum",
"strum_macros",
- "thiserror",
+ "thiserror 1.0.68",
"tiny-keccak",
"tracing",
"typenum",
@@ -5758,7 +5820,7 @@ dependencies = [
"strum",
"strum_macros",
"tempfile",
- "thiserror",
+ "thiserror 1.0.68",
"tracing",
"tracing-forest",
"tracing-subscriber",
@@ -5888,7 +5950,7 @@ dependencies = [
"sp1-recursion-core",
"sp1-recursion-gnark-ffi",
"sp1-stark",
- "thiserror",
+ "thiserror 1.0.68",
"tracing",
"tracing-subscriber",
]
@@ -5973,7 +6035,7 @@ dependencies = [
"sp1-primitives",
"sp1-stark",
"static_assertions",
- "thiserror",
+ "thiserror 1.0.68",
"tracing",
"vec_map",
"zkhash",
@@ -6046,7 +6108,7 @@ dependencies = [
"strum",
"strum_macros",
"tempfile",
- "thiserror",
+ "thiserror 1.0.68",
"tokio",
"tracing",
"twirp-rs",
@@ -6488,7 +6550,16 @@ version = "1.0.68"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "02dd99dc800bbb97186339685293e1cc5d9df1f8fae2d0aecd9ff1c77efea892"
dependencies = [
- "thiserror-impl",
+ "thiserror-impl 1.0.68",
+]
+
+[[package]]
+name = "thiserror"
+version = "2.0.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c006c85c7651b3cf2ada4584faa36773bd07bac24acfb39f3c431b36d7e667aa"
+dependencies = [
+ "thiserror-impl 2.0.3",
]
[[package]]
@@ -6502,6 +6573,17 @@ dependencies = [
"syn 2.0.87",
]
+[[package]]
+name = "thiserror-impl"
+version = "2.0.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f077553d607adc1caf65430528a576c757a71ed73944b66ebb58ef2bbd243568"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn 2.0.87",
+]
+
[[package]]
name = "thiserror-impl-no-std"
version = "2.0.2"
@@ -6758,7 +6840,7 @@ checksum = "ee40835db14ddd1e3ba414292272eddde9dad04d3d4b65509656414d1c42592f"
dependencies = [
"ansi_term",
"smallvec",
- "thiserror",
+ "thiserror 1.0.68",
"tracing",
"tracing-subscriber",
]
@@ -6814,7 +6896,7 @@ dependencies = [
"reqwest",
"serde",
"serde_json",
- "thiserror",
+ "thiserror 1.0.68",
"tokio",
"tower",
"url",
@@ -6909,6 +6991,12 @@ version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
+[[package]]
+name = "uuid"
+version = "1.11.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f8c5f0a0af699448548ad1a2fbf920fb4bee257eae39953ba95cb84891a0446a"
+
[[package]]
name = "valuable"
version = "0.1.0"