Kona Book

Documentation for the Kona project.

📖 kona is in active development, and is not yet ready for use in production. During development, this book will evolve quickly and may contain inaccuracies.

Please open an issue if you find any errors or have any suggestions for improvements, and also feel free to contribute to the project!

Introduction

Kona is a suite of libraries and build pipelines for developing verifiable Rust programs targeting Fault Proof VMs.

It is built and maintained by members of OP Labs as well as open source contributors, and is licensed under the MIT License.

Kona provides tooling and abstractions around low-level syscalls, memory management, and other common structures that authors of verifiable programs will need to interact with. It also provides build pipelines for compiling no_std Rust programs to a format that can be executed by supported Fault Proof VM targets.

Goals of Kona

1. Composability

Kona provides a common set of tools and abstractions for developing verifiable Rust programs on top of several supported Fault Proof VM targets. This is done to ensure that programs written for one supported FPVM can be easily ported to another supported FPVM, and that the ecosystem of programs built on top of these targets can be easily shared and reused.

2. Safety

Through standardization of these low-level system interfaces and build pipelines, Kona seeks to increase coverage over the low-level operations that are required to build on top of a FPVM.

3. Developer Experience

Building on top of custom Rust targets can be difficult, especially when the target is nascent and tooling is not yet mature. Kona seeks to improve this experience by standardizing and streamlining the process of developing and compiling verifiable Rust programs, targeted at supported FPVMs.

4. Performance

Kona is opinionated in that it favors no_std Rust programs for embedded FPVM development, for both performance and portability. In contrast with alternative approaches, such as the op-program using the Golang MIPS32 target, no_std Rust programs produce much smaller binaries, resulting in fewer instructions that need to be executed on the FPVM. In addition, this offers developers more low-level control over interactions with the FPVM kernel, which can be useful for optimizing performance-critical code.

Development Status

Kona is currently in active development, and is not yet ready for use in production.

Contributing

Contributors are welcome! Please see the contributing guide for more information.

Fault Proof Program Development

This chapter provides an overview of Fault Proof Program development on top of the custom FPVM targets supported by Kona.

At a high level, a Fault Proof Program is not much different from a regular no_std Rust program. A custom entrypoint is provided, and the program is compiled down to a custom target, which is then executed on the FPVM.

Fault Proof Programs are structured with 3 stages:

  1. Prologue: The bootstrapping stage, where the program is loaded into memory and the initial state is set up. During this phase, the program's initial state is written to the FPVM's memory, and the program's entrypoint is set.
  2. Execution: The main execution stage, where the program is executed on the FPVM. During this phase, the program's entrypoint is called, and the program is executed until it exits.
  3. Epilogue: The finalization stage, where the program's final state is read from the FPVM's memory. During this phase, the program's final state is inspected and properties of the state transition are verified.
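
To make this structure concrete, below is a minimal sketch of the three-stage shape. The types and helpers (`BootInfo`, `Outcome`, `read_boot_info`, `run_state_transition`, `verify_claim`) are hypothetical stand-ins for illustration, not part of Kona's API:

struct BootInfo; // inputs established during the prologue
struct Outcome;  // outputs produced by the execution stage

fn read_boot_info() -> Result<BootInfo, String> { todo!("prologue") }
fn run_state_transition(boot: &BootInfo) -> Result<Outcome, String> { todo!("execution") }
fn verify_claim(boot: &BootInfo, outcome: &Outcome) -> Result<(), String> { todo!("epilogue") }

fn client_main() -> Result<(), String> {
    let boot = read_boot_info()?;                // 1. Prologue: establish the inputs.
    let outcome = run_state_transition(&boot)?;  // 2. Execution: the verifiable computation.
    verify_claim(&boot, &outcome)                // 3. Epilogue: verify the state transition.
}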

The following sections will provide a more in-depth overview of each of these stages, as well as the tools and abstractions provided by Kona for developing your own Fault Proof Programs.

Environment

Before kicking off the development of your own Fault Proof Program, it's important to understand the environment that your program will be running in.

The FPP runs on top of a custom FPVM target, which is typically a VM with a modified subset of an existing reduced instruction set architecture and a subset of Linux syscalls. The FPVM is designed to execute verifiable programs, and commonly modifies the instruction set it is derived from as well as the internal representation of memory to support verifiable memory access, client (program) communication with the host (the FPVM), and other implementation-specific features.

Host <-> Client Communication

While the program is running on top of the FPVM, it is considered to be in the client role, while the VM is in the host role. The only way for the client and host to communicate with one another is synchronously through the Preimage ABI (specification).

In order for the client to read from the host, the read and write syscalls are modified within the FPVM to allow the client to request that the host prepare foreign data, and to read that data once it is ready.

Reading

When the client wants to read data from the host, it must first send a "hint" to the host through the hint file descriptor, which signals a request for the host to prepare the data for reading. The host will then prepare the data, and send a hint acknowledgement back to the client. The client can then read the data from the host through the designated file descriptor.

The preparation step ("hinting") is an optimization that allows the host to know ahead of time the intents of the client and the data it requires for execution. This can allow for lazy loading of data, and also prevent the need for unnecessary allocations within the host's memory. This step is a no-op on-chain, and is only run locally when the host is the native implementation of the FPVM.

sequenceDiagram
    Client->>+Host: Hint preimage (no-op on-chain / read-only mode)
    Host-->>-Client: Hint acknowledgement
    Client-->>+Host: Preimage Request
    Host-->>Host: Prepare Preimage
    Host-->>-Client: Preimage Data
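
In code, a single hint/read round trip looks roughly like the sketch below. `HintWriter` and `OracleReader` here stand in for the client-side handles of the hint and preimage pipes, and the method names are illustrative, not normative:

// Sketch of one hint/read round trip against the host.
async fn fetch_preimage(
    hint_writer: &HintWriter,
    oracle: &OracleReader,
    key: PreimageKey,
) -> anyhow::Result<Vec<u8>> {
    // 1. Hint: ask the host to prepare the preimage (no-op on-chain).
    hint_writer.write("example-hint 0xdeadbeef").await?;
    // 2. Read: fetch the prepared preimage over the preimage pipe.
    Ok(oracle.get(key).await?)
}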

Full Example

Below, we have a full architecture diagram of the op-program (source: fault proof specs), the reference implementation for the OP Stack's Fault Proof Program, which has the objective of verifying claims about the state of an OP Stack layer two.

op-program-architecture

This program performs the derivation and execution of the L2 chain internally, and ultimately the claimed state of the L2 chain is verified in the epilogue stage.

It communicates with the host for two reasons:

  1. To request preparation of L1 and L2 state data preimages.
  2. To read the L1 and L2 state data preimages that were prepared in response to the above requests.

The host is responsible for:

  1. Preparing and maintaining a store of the L1 and L2 state data preimages, as well as localized bootstrap k/v pairs.
  2. Providing the L1 and L2 state data preimages to the client for reading.

Other programs (clients) may have different requirements for communication with the host, but the above is a common pattern for programs built on top of FPVMs. In general:

  1. The client program is a state machine that is responsible for bootstrapping itself from the inputs, executing the program logic, and verifying the outcome.
  2. The host is responsible for providing the client with data it wasn't bootstrapped with, and for executing the program itself.

Supported Targets

Kona seeks to support all FPVM targets that LLVM and rustc can offer introductory support for. Below is a matrix of features that Kona offers for each FPVM target:

| Target | Build Pipeline | IO | malloc |
|--------|----------------|----|--------|
| cannon & cannon-rs | ✅ | ✅ | ✅ |
| asterisc | ✅ | ✅ | ✅ |

If there is a feature that you would like to see supported, please open an issue or consider contributing!

Asterisc (RISC-V)

Asterisc is based on the rv64gc target architecture, which defines the following extensions:

  • RV32I support - 32 bit base instruction set
    • FENCE, ECALL, EBREAK are hardwired to implement a minimal subset of syscalls of the Linux kernel
      • Work in progress. All syscalls used by the Golang risc64 runtime.
  • RV64I support
  • RV32M+RV64M: Multiplication support
  • RV32A+RV64A: Atomics support
  • RV{32,64}{D,F,Q}: no-op: No floating points support (since no IEEE754 determinism with rounding modes etc., nor worth the complexity)
  • Zifencei: FENCE.I no-op: No need for FENCE.I
  • Zicsr: no-op: some support for Control-and-status registers may come later though.
  • Ztso: no-op: no need for Total Store Ordering
  • other: revert with error code on unrecognized instructions

asterisc supports a plethora of syscalls, documented in the repository. kona offers an interface for programs to directly invoke a select few syscalls:

  1. EXIT - Terminate the process with the provided exit code.
  2. WRITE - Write the passed buffer to the passed file descriptor.
  3. READ - Read the specified number of bytes from the passed file descriptor.

Cannon (MIPS32r2)

Cannon is based on the mips32r2 target architecture, specified in MIPS32™ Architecture For Programmers Volume III: The MIPS32™ Privileged Resource Architecture.

Syscalls

Syscalls supported by cannon can be found within the cannon specification here.

Prologue

The prologue stage of the program is commonly responsible for bootstrapping the program with inputs from an external source, pulled in through the Host <-> Client communication implementation.

As a rule of thumb, the prologue implementation should be kept minimal, and should not do much more than establish the inputs for the execution phase.

Example

As an example, the prologue stage of the kona-client program runs through several steps:

  1. Pull in the boot information over the Preimage Oracle ABI, containing:
    • The L1 head hash containing all data required to reproduce the L2 safe chain at the claimed block height.
    • The latest finalized L2 output root.
    • The L2 output root claim.
    • The block number of the L2 output root claim.
    • The L2 chain ID.
  2. Pull in the RollupConfig and L2ChainConfig corresponding to the passed L2 chain ID.
  3. Validate these values.
  4. Pass the boot information to the execution phase.
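
For illustration, step 1 might look like the sketch below, where `oracle` is assumed to implement a kona-preimage-style PreimageOracleClient and the local key indices are illustrative rather than normative:

// Read each bootstrap value from the host's local key/value store.
let l1_head = oracle.get(PreimageKey::new_local(1)).await?;
let l2_output_root = oracle.get(PreimageKey::new_local(2)).await?;
let l2_claim = oracle.get(PreimageKey::new_local(3)).await?;
let l2_claim_block = oracle.get(PreimageKey::new_local(4)).await?;
let l2_chain_id = oracle.get(PreimageKey::new_local(5)).await?;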

Execution

The execution phase of the program is commonly the heaviest portion of the fault proof program, where the computation that is being verified is performed.

This phase consumes the outputs of the prologue phase, and performs the bulk of the verifiable computation. After execution has concluded, the outputs are passed along to the epilogue phase for final verification.

Example

At a high-level, in the kona-client program, the execution phase:

  1. Derives the inputs to the L2 derivation pipeline by unrolling the L1 head hash fetched in the prologue.
  2. Passes the inputs to the L2 derivation pipeline, producing the L2 execution payloads required to reproduce the L2 safe chain at the claimed height.
  3. Executes the payloads produced by the L2 derivation pipeline, producing the L2 output root at the L2 claim height.

Epilogue

The epilogue stage of the program is intended to perform the final validation on the outputs from the execution phase. In most programs, this entails comparing the outputs of the execution phase to portions of the bootstrap data made available during the prologue phase.

Generally, this phase should consist almost entirely of validation steps.

Example

In the kona-client program, the epilogue phase contains only two directives:

  1. Validate that the L2 safe chain could be produced at the claimed L2 block height.
  2. Validate that the constructed output root is equivalent to the claimed L2 output root.
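
In code, both directives reduce to simple comparisons. A minimal sketch, assuming `safe_head` and `output_root` were produced by the execution phase and `boot` holds the prologue's boot information (all hypothetical names):

// 1. The safe chain must reach the claimed block height.
assert_eq!(safe_head.number, boot.l2_claim_block, "safe chain not derived to claim height");
// 2. The constructed output root must match the claim.
assert_eq!(output_root, boot.l2_claim, "output root mismatch");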

Kona SDK

Welcome to the Kona SDK, a powerful set of libraries designed to revolutionize the way developers build proofs for the OP Stack STF on top of the OP Stack's FPVMs and other verifiable backends like SP-1, Risc0, Intel TDX, and AMD SEV-SNP. At its core, Kona is built on the principles of modularity, extensibility, and developer empowerment.

A Foundation of Flexibility

The kona repository is more than a fault proof program for the OP Stack — it's an ecosystem of interoperable components, each crafted with reusability and extensibility as primary goals. While we provide Fault Proof VM and "online" backends for key components like kona-derive and kona-executor, the true power of kona lies in its adaptability.

Extend Without Forking

One of Kona's standout features is its ability to support custom features and data sources without requiring you to fork the entire project. Through careful use of Rust's powerful trait system and abstract interfaces, we've created a framework that allows you to plug in your own features and ideas seamlessly.

What You'll Learn

In this section of the developer book, we'll dive deep into the Kona SDK, covering:

  • Building on the FPVM Backend: Learn how to leverage the Fault Proof VM tooling to create your own fault proof programs.
  • Creating Custom Backends: Discover the process of designing and implementing your own backend to run kona-client or a variation of it on different targets.
  • Extending Core Components: Explore techniques for creating new constructs that integrate smoothly with crates like kona-derive and kona-executor.

Whether you're looking to use Kona as-is, extend its functionality, or create entirely new programs based on its libraries, this guide is intended to provide you with the knowledge and tools you need to succeed.

FPVM Backend

📖 Before reading this section of the book, it is advised to read the Fault Proof Program Environment section to familiarize yourself with the PreimageOracle IO pattern.

Kona is effectively split into two parts:

  • OP Stack state transition logic (kona-derive, kona-executor, kona-mpt)
  • Fault Proof VM IO and utilities (kona-common, kona-common-proc, kona-preimage)

This section of the book focuses on the usage of kona-common and kona-preimage to facilitate host<->client communication for programs running on top of the FPVM targets.

Host <-> Client Communication API

The FPVM system API is built on several layers. In this document, we'll cover these layers, from lowest-level to highest-level API.

kona-common

kona-common implements raw syscall dispatch, a default global memory allocator, and a blocking async runtime. kona-common relies on a minimal Linux backend to function, supporting only the syscalls required to implement the PreimageOracle ABI (read, write, exit_group).

These syscalls are exposed to the user through the io module directly, with each supported platform implementing the BasicKernelInterface trait.

To directly dispatch these syscalls, the io module exposes a safe API:

use kona_common::{io, FileDescriptor};

// Print to `stdout`. Infallible, will panic if dispatch fails.
io::print("Hello, world!");

// Print to `stderr`. Infallible, will panic if dispatch fails.
io::print_err("Goodbye, world!");

// Read from or write to a specified file descriptor. Returns a result with the
// return value or syscall errno.
let _ = io::write(FileDescriptor::StdOut, "Hello, world!".as_bytes());
let mut buf = [0u8; 8];
let _ = io::read(FileDescriptor::StdIn, &mut buf);

// Exit the program with a specified exit code.
io::exit(0);

With this library, you can implement a custom host<->client communication protocol, or extend the existing PreimageOracle ABI. However, for most developers, we recommend sticking with kona-preimage when developing programs that target the FPVMs, barring needs like printing directly to stdout.

kona-preimage

kona-preimage is an implementation of the PreimageOracle ABI, built on top of kona-common. This crate enables synchronous communication between the host and client program, described in Host <-> Client Communication in the FPP Dev environment section of the book.

The crate is built around the PipeHandle, which serves as a single end of a bidirectional pipe (see: pipe manpage).

Through this handle, the higher-level constructs can read and write data to the counterparty holding on to the other end of the pipe, following the protocol below:

sequenceDiagram
    Client->>+Host: Hint preimage (no-op on-chain / read-only mode)
    Host-->>-Client: Hint acknowledgement
    Client-->>+Host: Preimage Request
    Host-->>Host: Prepare Preimage
    Host-->>-Client: Preimage Data

The interfaces of each part of the above protocol are described by the following traits:

  • PreimageOracleClient - the client's interface for requesting and reading preimage data from the host.
  • HintWriterClient - the client's interface for sending hints to the host.
  • PreimageOracleServer - the host's interface for serving preimage requests.
  • HintReaderServer - the host's interface for receiving and routing the client's hints.

Each of these traits, however, can be re-implemented to redefine the host<->client communication protocol if the needs of the consumer are not covered by the to-spec implementations.
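
As a sketch of how a client might wire these up, the snippet below constructs the preimage and hint pipes from kona-common's reserved file descriptors and wraps them in the crate's client-side types. The constructor names mirror kona at the time of writing, but treat the details as illustrative:

use kona_common::FileDescriptor;
use kona_preimage::{HintWriter, OracleReader, PipeHandle};

// Each PipeHandle wraps a read and a write file descriptor.
let preimage_pipe = PipeHandle::new(FileDescriptor::PreimageRead, FileDescriptor::PreimageWrite);
let hint_pipe = PipeHandle::new(FileDescriptor::HintRead, FileDescriptor::HintWrite);

// Client-side implementations of the preimage + hint protocols.
let oracle = OracleReader::new(preimage_pipe);
let hint_writer = HintWriter::new(hint_pipe);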

kona-client - Oracle-backed sources (example)

Finally, in kona-client, the data source traits from kona-derive and kona-executor are implemented to pull in untyped data from the host by PreimageKey. These data source traits are covered in more detail within the Custom Backend section, but we'll quickly gloss over them here to build intuition.

Let's take, for example, OracleL1ChainProvider. The ChainProvider trait in kona-derive defines a simple interface for fetching information about the L1 chain. In the OracleL1ChainProvider, this information is pulled in over the PreimageOracle ABI. There are many other examples of these data source traits, namely the L2ChainProvider, BlobProvider, TrieProvider, and TrieHinter, which enable the creation of different data-source backends.

As an example, let's look at OracleL1ChainProvider::header_by_hash, built on top of the CommsClient trait, which is a composition of the PreimageOracleClient + HintWriterClient traits outlined above.

#[async_trait]
impl<T: CommsClient + Sync + Send> ChainProvider for OracleL1ChainProvider<T> {
    type Error = anyhow::Error;

    async fn header_by_hash(&mut self, hash: B256) -> Result<Header> {
        // Send a hint for the block header.
        self.oracle.write(&HintType::L1BlockHeader.encode_with(&[hash.as_ref()])).await?;

        // Fetch the header RLP from the oracle.
        let header_rlp =
            self.oracle.get(PreimageKey::new(*hash, PreimageKeyType::Keccak256)).await?;

        // Decode the header RLP into a Header.
        Header::decode(&mut header_rlp.as_slice())
            .map_err(|e| anyhow!("Failed to decode header RLP: {e}"))
    }

    // - snip -
}

In header_by_hash, we use the HintWriterClient implementation to send a hint to the host, requesting that it prepare the preimage of the block hash. Then, once we've received an acknowledgement from the host that the preimage has been prepared, we request the RLP (which is the preimage of the hash). After the RLP is received, we decode the Header type and return it to the user.

Custom Backends

Understanding the OP Stack STF

The OP Stack state transition comprises two primary components:

  • The derivation pipeline (kona-derive)
    • Responsible for deriving L2 chain state from the DA layer.
  • The execution engine (kona-executor)
    • Responsible for the execution of transactions and state commitments.
    • Ensures correct application of derived L2 state.

To prove the correctness of the state transition, Kona composes these two components:

  • It combines the derivation of the L2 chain with its execution in the same process.
  • It pulls in necessary data from sources to complete the STF, verifiably unrolling the input commitments along the way.

kona-client serves as an implementation of this process, capable of deriving and executing a single L2 block in a verifiable manner.
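
Compressed into a sketch (assuming a constructed `pipeline` and `executor` as shown in later sections, and eliding origin advancement and error handling), the composition looks like:

// Step the pipeline until payload attributes for the disputed block are ready.
loop {
    if matches!(pipeline.step(l2_safe_head).await, StepResult::PreparedAttributes) {
        break;
    }
}
let attributes = pipeline.next().expect("attributes were prepared");

// Execute the derived attributes; the resulting header feeds the output root
// that the epilogue compares against the claim.
let header = executor.execute_payload(attributes.attributes)?;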

📖 Why just a single block by default?

On the OP Stack, we employ an interactive bisection game that narrows in on the disputed block-to-block state transition before requiring a fault proof to be run. Because of this, the default implementation only serves to derive and execute the single block that the participants of the bisection game landed on.

Backend Traits

Covered in the FPVM Backend section of the book, kona-client ships with an implementation of kona-derive and kona-executor's data source traits which pull in data over the PreimageOracle ABI.

However, running kona-client on top of a different verifiable environment, i.e. a zkVM or TEE, is also possible through custom implementations of these data source traits.

op-succinct is an excellent example of both a custom backend and a custom program, implementing both kona-derive and kona-executor's data source traits backed by sp1_lib::io in order to:

  1. Execute kona-client verbatim, proving a single block's derivation and execution on SP-1.
  2. Derive and execute an entire Span Batch worth of L2 blocks, using kona-derive and kona-executor.

This section of the book outlines how you can do the same for a different platform.

Custom kona-derive sources

Before getting started, we need to create custom implementations of the following traits:

| Trait | Description |
|-------|-------------|
| ChainProvider | Describes the minimal interface for fetching data from L1 during L2 chain derivation. |
| L2ChainProvider | Describes the minimal interface for fetching data from the safe L2 chain during L2 chain derivation. |
| BlobProvider | Describes an interface for fetching EIP-4844 blobs from the L1 consensus layer during L2 chain derivation. |

Once these are implemented, constructing the pipeline is as simple as passing in the data sources to the PipelineBuilder. Keep in mind the requirements for validation of incoming data, depending on your platform. For example, programs targeting zkVMs must constrain that the incoming data is indeed valid, whereas fault proof programs can offload this validation to the on-chain implementation of the host.

let chain_provider = ...;
let l2_chain_provider = ...;
let blob_provider = ...;
let l1_origin = ...;

let cfg = Arc::new(RollupConfig::default());
let attributes = StatefulAttributesBuilder::new(
   cfg.clone(),
   l2_chain_provider.clone(),
   chain_provider.clone(),
);
let dap = EthereumDataSource::new(
   chain_provider.clone(),
   blob_provider,
   cfg.as_ref()
);

// Construct a new derivation pipeline.
let pipeline = PipelineBuilder::new()
   .rollup_config(cfg)
   .dap_source(dap)
   .l2_chain_provider(l2_chain_provider)
   .chain_provider(chain_provider)
   .builder(attributes)
   .origin(l1_origin)
   .build();

From here, a custom derivation driver is needed to produce the desired execution payload(s). An example of this for kona-client can be found in the DerivationDriver.

kona-mpt / kona-executor sources

Before getting started, we need to create custom implementations of the following traits:

| Trait | Description |
|-------|-------------|
| TrieDBFetcher | Describes the interface for fetching trie node preimages and chain information while executing a payload on the L2 chain. |
| TrieDBHinter | Describes the interface for requesting the host program to prepare trie proof preimages for the client's consumption. For targets with upfront witness generation, i.e. zkVMs, a no-op hinter is exported as NoopTrieDBHinter. |

Once we have those, the StatelessL2BlockExecutor can be constructed like so:

#![allow(unused)]
fn main() {
let cfg = RollupConfig::default();
let provider = ...;
let hinter = ...;

let executor = StatelessL2BlockExecutor::builder(&cfg, provider, hinter)
   .with_parent_header(...)
   .build();

let header = executor.execute_payload(...).expect("Failed execution");
}

Bringing it Together

Once your custom backend traits for both kona-derive and kona-executor have been implemented, your final binary may look something like kona-client's. Alternatively, if you're looking to prove a wider range of blocks, op-succinct's range program offers a good example of running the pipeline and executor across a string of contiguous blocks.

kona-executor Extensions

The kona-executor crate offers a to-spec, stateless implementation of the OP Stack STF. However, due to the power of revm's Handler abstractions, the logic of the STF can be easily modified.

To register a custom handler, for example to add a custom precompile, modify the behavior of an EVM opcode, or change the fee handling, StatelessL2BlockExecutorBuilder::with_handle_register is your friend. It accepts a KonaHandleRegister, which can be used to take full advantage of revm's Handler API.

Example - Custom Precompile

const MY_PRECOMPILE_ADDRESS: Address = u64_to_address(0xFF);

fn my_precompile(_input: &Bytes, _gas_limit: u64) -> PrecompileResult {
   Ok(PrecompileOutput::new(50, "hello, world!".as_bytes().into()))
}

fn custom_handle_register<F, H>(
    handler: &mut EvmHandler<'_, (), &mut State<&mut TrieDB<F, H>>>,
) where
   F: TrieProvider,
   H: TrieHinter,
{
   let spec_id = handler.cfg.spec_id;

   handler.pre_execution.load_precompiles = Arc::new(move || {
      let mut ctx_precompiles = spec_to_generic!(spec_id, {
         revm::optimism::load_precompiles::<SPEC, (), &mut State<&mut TrieDB<F, H>>>()
      });

      let precompile = PrecompileWithAddress(
         MY_PRECOMPILE_ADDRESS,
         Precompile::Standard(my_precompile)
      );
      ctx_precompiles.extend([precompile]);

      ctx_precompiles
   });
}

// - snip -

let cfg = RollupConfig::default();
let provider = ...;
let hinter = ...;

let executor = StatelessL2BlockExecutor::builder(&cfg, provider, hinter)
   .with_parent_header(...)
   .with_handle_register(custom_handle_register)
   .build();

The kona-derive Derivation Pipeline

kona-derive defines an entirely trait-abstracted, no_std derivation pipeline for the OP Stack. It can be used through the Pipeline trait, which is implemented for the concrete DerivationPipeline object.

This document dives into the inner workings of the derivation pipeline, its stages, and how to build and interface with Kona's pipeline. Other documents in this section will provide a comprehensive overview of Derivation Pipeline extensibility including trait-abstracted providers, custom stages, signaling, and hardfork activation including multiplexed stages.

What is a Derivation Pipeline?

Simply put, an OP Stack Derivation Pipeline transforms data on L1 into L2 payload attributes that can be executed to produce the canonical L2 block.

Within a pipeline, there are a set of stages that break up this transformation further. When composed, these stages operate over the input data, sequentially producing payload attributes.

In kona-derive, stages are architected using composition - each sequential stage owns the previous one, forming a stack. For example, let's define stage A as the first stage, accepting raw L1 input data, and stage C produces the pipeline output - payload attributes. Stage B "owns" stage A, and stage C then owns stage B. Using this example, the DerivationPipeline type in kona-derive only holds stage C, since ownership of the other stages is nested within stage C.
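
As a toy illustration of this ownership structure (the stage names here are placeholders, not kona-derive types):

struct StageA;                   // accepts raw L1 input data
struct StageB { prev: StageA }   // owns stage A
struct StageC { prev: StageB }   // owns stage B; produces payload attributes

// The pipeline only needs to hold the outermost stage.
struct ExamplePipeline { attributes: StageC }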

[!NOTE]

In a future architecture of the derivation pipeline, stages could be made standalone such that communication between stages happens through channels. In a multi-threaded, non-fault-proof environment, these stages can then run in parallel since stage ownership is decoupled.

Kona's Derivation Pipeline

The top-level stage in kona-derive that produces OpAttributesWithParent is the AttributesQueue.

Post-Holocene (after the Holocene hardfork), the following stages are composed by the DerivationPipeline, from top to bottom:

  • AttributesQueue
  • BatchProvider
  • BatchStream
  • ChannelReader
  • ChannelProvider
  • FrameQueue
  • L1Retrieval
  • L1Traversal

Notice, from top to bottom, each stage owns the stage nested below it. Where the L1Traversal stage iterates over L1 data, the AttributesQueue stage produces OpAttributesWithParent, creating a function that transforms L1 data into payload attributes.

The Pipeline interface

Now that we've broken down the stages inside the DerivationPipeline type, let's move up another level to break down how the DerivationPipeline type functions itself. At the highest level, kona-derive defines the interface for working with the pipeline through the Pipeline trait.

Pipeline provides two core methods.

  • peek() -> Option<&OpAttributesWithParent>
  • async step(cursor: L2BlockInfo) -> StepResult

Functionally, a pipeline can be "stepped" on, which attempts to derive payload attributes from input data. Steps do not guarantee that payload attributes are produced; they only attempt to advance the stages within the pipeline.

The peek() method provides a way to check if attributes are prepared. Beyond peek() returning Option::Some(&OpAttributesWithParent), the Pipeline extends the Iterator trait, providing a way to consume the generated payload attributes.
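
For example, a driver might check for prepared attributes and consume them like so (a minimal sketch, assuming a constructed pipeline):

// `peek` checks for prepared attributes without consuming them.
if pipeline.peek().is_some() {
    // Consume the attributes via the `Iterator` implementation.
    let attributes = pipeline.next().expect("peeked attributes exist");
}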

Constructing a Derivation Pipeline

kona-derive provides a PipelineBuilder to abstract the complexity of generics away from the downstream consumers. Below we provide an example for using the PipelineBuilder to instantiate a DerivationPipeline.

// Imports
use std::sync::Arc;
use op_alloy_protocol::BlockInfo;
use op_alloy_genesis::RollupConfig;
use superchain_derive::*;

// Use a default rollup config.
let rollup_config = Arc::new(RollupConfig::default());

// Providers are instantiated with localhost URLs (`127.0.0.1`).
let chain_provider =
    AlloyChainProvider::new_http("http://127.0.0.1:8545".try_into().unwrap());
let l2_chain_provider = AlloyL2ChainProvider::new_http(
    "http://127.0.0.1:9545".try_into().unwrap(),
    rollup_config.clone(),
);
let beacon_client = OnlineBeaconClient::new_http("http://127.0.0.1:5555".into());
let blob_provider = OnlineBlobProvider::new(beacon_client, None, None);
let blob_provider = OnlineBlobProviderWithFallback::new(blob_provider, None);
let dap_source =
    EthereumDataSource::new(chain_provider.clone(), blob_provider, &rollup_config);
let builder = StatefulAttributesBuilder::new(
    rollup_config.clone(),
    l2_chain_provider.clone(),
    chain_provider.clone(),
);

// This is the starting L1 block for the pipeline.
//
// To get the starting L1 block for a given L2 block,
// use the `AlloyL2ChainProvider::l2_block_info_by_number`
// method to get the `L2BlockInfo.l1_origin`. This l1_origin
// is the origin that can be passed here.
let origin = BlockInfo::default();

// Build the pipeline using the `PipelineBuilder`.
// Alternatively, use the `new_online_pipeline` helper
// method provided by the `kona-derive-alloy` crate.
let pipeline = PipelineBuilder::new()
   .rollup_config(rollup_config.clone())
   .dap_source(dap_source)
   .l2_chain_provider(l2_chain_provider)
   .chain_provider(chain_provider)
   .builder(builder)
   .origin(origin)
   .build();

assert_eq!(pipeline.rollup_config, rollup_config);
assert_eq!(pipeline.origin(), Some(origin));

Producing Payload Attributes

Since the Pipeline trait extends the Iterator trait, producing OpAttributesWithParent is as simple as calling the Iterator::next() method on the DerivationPipeline.

Extending the example from above, producing the attributes is shown below.

#![allow(unused)]
fn main() {
// Import the iterator trait to show where `.next` is sourced.
use core::iter::Iterator;

// ...
// example from above constructing the pipeline
// ...

let attributes = pipeline.next();

// Since we haven't stepped on the pipeline,
// there shouldn't be any payload attributes prepared.
assert!(attributes.is_none());
}

As demonstrated, the pipeline won't have any payload attributes without having been "stepped" on. Naively, we can continuously step on the pipeline until attributes are ready, and then consume them.

#![allow(unused)]
fn main() {
// Import the iterator trait to show where `.next` is sourced.
use core::iter::Iterator;

// ...
// example from constructing the pipeline
// ...

// Continuously step on the pipeline until attributes are prepared.
let l2_safe_head = L2BlockInfo::default();
loop {
   if matches!(pipeline.step(l2_safe_head).await, StepResult::PreparedAttributes) {
      // The pipeline has successfully prepared payload attributes, break the loop.
      break;
   }
}

// Since the loop is only broken once attributes are prepared,
// this must be `Option::Some`.
let attributes = pipeline.next().expect("Must contain payload attributes");

// The parent of the prepared payload attributes should be
// the l2 safe head that we "stepped on".
assert_eq!(attributes.parent, l2_safe_head);
}

Importantly, the above is not sufficient logic to produce payload attributes and drive the derivation pipeline. There are multiple different StepResults to handle when stepping on the pipeline, including advancing the origin, re-orgs, and pipeline resets. In the next section, pipeline resets are outlined.

For an up-to-date driver that runs the derivation pipeline as part of the fault proof program, reference kona's client driver.

Resets

When stepping on the DerivationPipeline produces a reset error, the driver of the pipeline must perform a reset on the pipeline. This is done by sending a "signal" through the DerivationPipeline. Below demonstrates this.

#![allow(unused)]
fn main() {
// Import the iterator trait to show where `.next` is sourced.
use core::iter::Iterator;

// ...
// example from constructing the pipeline
// ...

// Continuously step on the pipeline until attributes are prepared.
let l2_safe_head = L2BlockInfo::default();
loop {
   match pipeline.step(l2_safe_head).await {
      StepResult::StepFailed(e) | StepResult::OriginAdvanceErr(e) => {
         match e {
            PipelineErrorKind::Reset(e) => {
               // Get the system config from the provider.
               let system_config = l2_chain_provider
                  .system_config_by_number(
                     l2_safe_head.block_info.number,
                     rollup_config.clone(),
                  )
                  .await?;
               // Reset the pipeline to the initial L2 safe head and L1 origin.
               pipeline
                  .signal(
                      ResetSignal {
                          l2_safe_head,
                          l1_origin: pipeline
                              .origin()
                              .ok_or_else(|| anyhow!("Missing L1 origin"))?,
                          system_config: Some(system_config),
                      }
                      .signal(),
                  )
                  .await?;
               // ...
            }
            _ => { /* Handling left to the driver */ }
         }
      }
      _ => { /* Handling left to the driver */ }
   }
}
}

Learn More

kona-derive is one implementation of the OP Stack derivation pipeline.

To learn more, it is highly encouraged to read the "first" derivation pipeline written in Golang. It is often colloquially referred to as the "reference" implementation, and it provided the basis for much of Kona's derivation pipeline.

Provenance

The lore do be bountiful.

  • Bard XVIII of the Logic Gates

The kona project spawned out of the need to build a secondary fault proof for the OP Stack. Initially, we sought to re-use magi's derivation pipeline, but the Rust Ethereum ecosystem moves quickly and magi was behind by a generation of types - using ethers-rs instead of the new alloy types. Additionally, magi's derivation pipeline was not no_std compatible - a hard requirement for running a Rust fault proof program on top of the RISC-V or MIPS ISAs.

So, @clabby and @refcell stood up kona in a few months.

Trait-abstracted Providers

Kona's derivation pipeline pulls in data from sources that are trait abstracted so the pipeline can be generic over various data sources. Note, "data sources" is used interchangeably with "trait-abstracted providers" for the purpose of this document.

The key traits required for the pipeline are the following:

  • ChainProvider
  • L2ChainProvider
  • DataAvailabilityProvider

The kona-derive-alloy crate provides std implementations of these traits using Alloy's reqwest-backed providers.

Provider Usage

Although trait-abstracted Providers are used throughout the pipeline and its stages, the PipelineBuilder makes constructing the pipeline generic over the providers. An example is shown below, where the three required trait implementations are the providers stubbed with todo!().

#![allow(unused)]
fn main() {
use std::sync::Arc;
use op_alloy_genesis::RollupConfig;
use op_alloy_protocol::BlockInfo;
use kona_derive::pipeline::PipelineBuilder;
use kona_derive::attributes::StatefulAttributesBuilder;

// The rollup config for your chain.
let cfg = Arc::new(RollupConfig::default());

// Must implement the `ChainProvider` trait.
let chain_provider = todo!("your chain provider");

// Must implement the `L2ChainProvider` trait.
let l2_chain_provider = todo!("your l2 chain provider");

// Must implement the `DataAvailabilityProvider` trait.
let dap = todo!("your data availability provider");

// Generic over the providers.
let attributes = StatefulAttributesBuilder::new(
   cfg.clone(),
   l2_chain_provider.clone(),
   chain_provider.clone(),
);

// Construct a new derivation pipeline.
let pipeline = PipelineBuilder::new()
   .rollup_config(cfg)
   .dap_source(dap)
   .l2_chain_provider(l2_chain_provider)
   .chain_provider(chain_provider)
   .builder(attributes)
   .origin(BlockInfo::default())
   .build();
}

Implementing a Custom Data Availability Provider

Notice

The only required method for the DataAvailabilityProvider trait is the next method.

#![allow(unused)]
fn main() {
use async_trait::async_trait;
use alloy_primitives::Bytes;
use op_alloy_protocol::BlockInfo;
use kona_derive::traits::DataAvailabilityProvider;
use kona_derive::errors::PipelineResult;

/// ExampleAvail
///
/// An example implementation of the `DataAvailabilityProvider` trait.
#[derive(Debug)]
pub struct ExampleAvail {
   // Place your data in here
}

#[async_trait]
impl DataAvailabilityProvider for ExampleAvail {
   type Item = Bytes;

   async fn next(&self, block_ref: &BlockInfo) -> PipelineResult<Self::Item> {
      todo!("return an AsyncIterator implementation here")
   }
}
}

Swapping out a Stage

In the introduction to the derivation pipeline, the derivation pipeline is broken down to demonstrate the composition of stages, forming the transformation function from L1 data into L2 payload attributes.

What makes kona's derivation pipeline extensible is that stages are composed using trait abstraction. That is, each successive stage composes the previous stage as a generic. As such, as long as a stage satisfies two rules, it can be swapped into the pipeline seamlessly.

  1. The stage implements the trait required by the next stage.
  2. The stage uses the same trait for the previous stage as the current stage to be swapped out.

Below provides a concrete example, swapping out the L1Retrieval stage.

Example

In the current, post-Holocene hardfork DerivationPipeline, the bottom three stages of the pipeline are as follows (from top down):

  • FrameQueue
  • L1Retrieval
  • L1Traversal

In this set of stages, the L1Traversal stage sits at the bottom. It implements L1RetrievalProvider, the trait that the L1Retrieval stage requires of its previous stage. This trait exposes the generic methods that the L1Retrieval stage calls on whichever previous stage implements it.

As we go up a level, the same trait abstraction occurs. The L1Retrieval stage implements the provider trait that the FrameQueue stage requires. This trait is the FrameQueueProvider.

Now that we understand the trait abstractions, let's swap out the L1Retrieval stage for a custom DapRetrieval stage.

#![allow(unused)]
fn main() {
// ...
// imports
// ...

// We use the same "L1RetrievalProvider" trait here
// in order to seamlessly use the `L1Traversal` stage as the previous stage.

/// DapRetrieval stage
#[derive(Debug)]
pub struct DapRetrieval<P>
where
    P: L1RetrievalProvider + OriginAdvancer + OriginProvider + SignalReceiver,
{
    /// The previous stage in the pipeline.
    pub prev: P,
    provider: YourDataAvailabilityProvider,
    data: Option<YourDataIter>, // hypothetical async iterator over `Bytes`, as returned by the provider
}

#[async_trait]
impl<P> FrameQueueProvider for DapRetrieval<P>
where
    P: L1RetrievalProvider + OriginAdvancer + OriginProvider + SignalReceiver + Send,
{
    type Item = Bytes;

    async fn next_data(&mut self) -> PipelineResult<Self::Item> {
        if self.data.is_none() {
            let next = self
                .prev
                .next_l1_block()
                .await? // SAFETY: This question mark bubbles up the Eof error.
                .ok_or(PipelineError::MissingL1Data.temp())?;
            self.data = Some(self.provider.get_data(&next).await?);
        }

        match self.data.as_mut().expect("Cannot be None").next().await {
            Ok(data) => Ok(data),
            Err(e) => {
                if let PipelineErrorKind::Temporary(PipelineError::Eof) = e {
                    self.data = None;
                }
                Err(e)
            }
        }
    }
}

// ...
// impl OriginAdvancer for DapRetrieval
// impl OriginProvider for DapRetrieval
// impl SignalReceiver for DapRetrieval
// ..
}

Notice, the L1RetrievalProvider is used as a trait bound so the L1Traversal stage can be used seamlessly as the "prev" stage in the pipeline. Concretely, an instantiation of the DapRetrieval stage could be the following.

DapRetrieval<L1Traversal<..>>

Signals

Understanding signals first requires a more in-depth review of the result returned by stepping on the derivation pipeline.

The StepResult

As briefly outlined in the intro, stepping on the derivation pipeline returns a StepResult. Step results provide an extensible way for pipeline stages to signal different results to the pipeline driver. The variants of StepResult and what they signal include the following.

  • StepResult::PreparedAttributes - signals that payload attributes are ready to be consumed by the pipeline driver.
  • StepResult::AdvancedOrigin - signals that the pipeline has derived all payload attributes for the given L1 block, and the origin of the pipeline was advanced to the next canonical L1 block.
  • StepResult::OriginAdvanceErr(_) - The driver failed to advance the origin of the pipeline.
  • StepResult::StepFailed(_) - The step failed.

No action is needed when the prepared attributes step result is received. The pipeline driver may choose to consume the payload attributes however it wishes. Likewise, StepResult::AdvancedOrigin simply notifies the driver that the pipeline advanced its origin - the driver may continue stepping on the pipeline. Handling the remaining two variants of StepResult is more involved.

When either StepResult::OriginAdvanceErr(_) or StepResult::StepFailed(_) are received, the pipeline driver needs to introspect the error within these variants. Depending on the PipelineErrorKind, the driver may need to send a "signal" down through the pipeline.

The next section goes over pipeline signals by looking at the variants of the PipelineErrorKind and the driver's response.

PipelineErrorKind

There are three variants of the PipelineErrorKind, each groups the inner error based on severity (or how they should be handled).

  • PipelineErrorKind::Temporary - This is an error that's expected, and is temporary. For example, not all channel data has been posted to L1 so the pipeline doesn't have enough data yet to continue deriving payload attributes.
  • PipelineErrorKind::Critical - This is an unexpected error that breaks the derivation pipeline. It should cause the driver to error since this is behavior that is breaking the derivation of payload attributes.
  • PipelineErrorKind::Reset - When this is received, it effectively requests that the driver perform some action on the pipeline. Kona uses message passing so the driver can send a Signal down the pipeline with whatever action that needs to be performed. By allowing both the driver and individual pipeline stages to define their own behaviour around signals, they become very extensible. More on this in a later section.

The Signal Type

Continuing from the PipelineErrorKind, when the driver receives a PipelineErrorKind::Reset, it needs to send a signal down through the pipeline.

Prior to the Holocene hardfork, the pipeline only needed to be reset when the reset pipeline error was received. Holocene activation rules changed this to require Holocene-specific activation logic internal to the pipeline stages. The way kona's driver handles this activation is by sending a new ActivationSignal if the PipelineErrorKind::Reset type is a ResetError::HoloceneActivation. Otherwise, it will send the ResetSignal.

The last of the three Signal variants is the FlushChannel signal. Similar to ActivationSignal, the flush channel signal is logic introduced post-Holocene. When the driver fails to execute payload attributes and Holocene is active, a FlushChannel signal is sent down the pipeline to forwards-invalidate the associated batch and channel, and the block is replaced with a deposit-only block.
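
Putting these rules together, a driver's signal selection might look like the following sketch (mirroring the behavior described above; field and method names follow the earlier reset example, but this is not kona's literal driver code):

// On a reset error, distinguish Holocene activation from a plain reset.
let signal = match reset_err {
    ResetError::HoloceneActivation => {
        ActivationSignal { l2_safe_head, l1_origin, system_config: Some(system_config) }.signal()
    }
    _ => ResetSignal { l2_safe_head, l1_origin, system_config: Some(system_config) }.signal(),
};
pipeline.signal(signal).await?;

// On failed payload execution with Holocene active, flush the channel.
pipeline.signal(Signal::FlushChannel).await?;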

Extending the Signal Type

To extend the Signal type, all that is needed is to introduce a new variant to the Signal enum.

Once the variant is added, the segments where signals are handled need to be updated. Anywhere the SignalReceiver trait is implemented, handling needs to be updated for the new signal variant. Most notably, this is on the top-level DerivationPipeline type, as well as all the pipeline stages.

An Example

Let's create a new Signal variant that updates the RollupConfig in the L1Traversal stage. Let's call it SetConfig. The signal type would look like the following with this new variant.

#![allow(unused)]
fn main() {
/// A signal to send to the pipeline.
#[derive(Debug, Clone, PartialEq, Eq)]
#[allow(clippy::large_enum_variant)]
pub enum Signal {
    /// Reset the pipeline.
    Reset(ResetSignal),
    /// Hardfork Activation.
    Activation(ActivationSignal),
    /// Flush the currently active channel.
    FlushChannel,
    /// Updates the rollup config in the L1Traversal stage.
    UpdateConfig(ConfigUpdateSignal),
}

/// A signal that updates the `RollupConfig`.
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct ConfigUpdateSignal(Arc<RollupConfig>);
}

Next, all handling of the Signal type needs to be updated for the new UpdateConfig variant. For the sake of this example, we'll just focus on updating the L1Traversal stage.

#![allow(unused)]
fn main() {
#[async_trait]
impl<F: ChainProvider + Send> SignalReceiver for L1Traversal<F> {
    async fn signal(&mut self, signal: Signal) -> PipelineResult<()> {
        match signal {
            Signal::Reset(ResetSignal { l1_origin, system_config, .. }) |
            Signal::Activation(ActivationSignal { l1_origin, system_config, .. }) => {
                self.block = Some(l1_origin);
                self.done = false;
                self.system_config = system_config.expect("System config must be provided.");
            }
            Signal::UpdateConfig(inner) => {
               self.rollup_config = Arc::clone(&inner.0);
            }
            _ => {}
        }

        Ok(())
    }
}
}

Glossary

This document contains definitions for terms used throughout the Kona book.

Fault Proof VM

A Fault Proof VM is a virtual machine, commonly supporting a subset of the Linux kernel's syscalls and a modified subset of an existing reduced instruction set architecture, that is designed to execute verifiable programs.

Full specification for the cannon & cannon-rs FPVMs, as an example, is available in the Optimism Monorepo.

Fault Proof Program

A Fault Proof Program is a program, commonly written in a general-purpose language such as Golang, C, or Rust, that may be compiled down to a compatible Fault Proof VM target and provably executed on that target VM.

Examples of Fault Proof Programs include the OP Program, which runs on top of cannon, cannon-rs, and asterisc to verify a claim about the state of an OP Stack layer two.

Preimage ABI

The Preimage ABI is a specification for a synchronous communication protocol between a client and a host that is used to request and read data from the host's datastore. Full specifications for the Preimage ABI are available in the Optimism Monorepo.

Contributing

Thank you for wanting to contribute! Before contributing to this repository, please read through this document and discuss the change you wish to make via issue.

Dependencies

Before working with this repository locally, you'll need to install several dependencies:

Optional

Pull Request Process

  1. Before anything, create an issue to discuss the change you're wanting to make, if it is significant or changes functionality. Feel free to skip this step for trivial changes.
  2. Once your change is implemented, ensure that all checks are passing before creating a PR. The full CI pipeline can be run locally via the justfiles in the repository.
  3. Make sure to update any documentation that has gone stale as a result of the change, in the README files, this book, and in rustdoc comments.
  4. Once you have sign-off from a maintainer, you may merge your pull request yourself if you have permissions to do so. If not, the maintainer who approves your pull request will add it to the merge queue.