Sequencer State Keeper

The sequencer orders pending transactions from the mempool, applies admission checks (signatures, nonce, gas/fee limits, and protocol restrictions), and executes them on the zkEVM, which enforces signatures, nonces, intrinsic validity, and state-transition rules.

As execution proceeds, the sequencer periodically seals executed transactions into small, frequent L2 blocks (called 'miniblocks' in DB tables). A contiguous sequence of L2 blocks is then aggregated into an L1 batch, which is the unit published on L1.

Sealing an L2 block provides soft finality on the L2 chain. Data availability finality is reached once the batch publication data is included on Celestia and its inclusion is recorded. Anchoring occurs once the batch inscription reaches the configured confirmation depth on the Bitcoin blockchain. State‑correctness finality is reached after the batch’s zk‑proof is generated and validated by the verifier network.

Batches are sealed when configured criteria are met (e.g., transaction slot, gas usage, pubdata size, transaction-encoding size, together with I/O-level thresholds such as payload-encoding size and timeouts).

Publication to the Data Availability layer (Celestia) and inscription of batch/proof commitments on Bitcoin are handled by downstream components. The sequencer does not post to DA or inscribe commitments/proofs on Bitcoin; it outputs ordered, executed L2 blocks and sealed L1 batches that downstream components use for DA publication, proof generation, and commitment posting.

Transaction Lifecycle

Transaction processing loop

The state keeper runs a per-batch loop that advances in strictly ordered stages and never seals an empty L1 batch or an empty L2 block.

  • Inputs: candidate transactions from the mempool; configured thresholds (batch timeout, L2-block timeout, L2-block payload size, transaction slots, gas limits/thresholds, pubdata limits/thresholds, bootloader encoding limits/thresholds)

  • In-memory state tracked per iteration:

    • L1 batch: number, open timestamp, cumulative execution metrics (gas, pubdata), cumulative encoded size, pending L1 gas, list of constituent L2 blocks.

    • L2 blocks (miniblocks): index within the batch, open timestamp, list of executed transactions, cumulative payload-encoding size for the block.

    • Last transaction context: execution result/metrics, pubdata contribution, encoded size, and L1 gas attribution (sketched below).
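
The tracked state can be summarized with a minimal sketch, assuming illustrative field names (the real types live in the state keeper under core/node/state_keeper):

```rust
// A minimal sketch (hypothetical field names) of the state tracked per iteration.
#[derive(Default)]
struct LastTxContext {
    gas_used: u64,            // execution metrics
    pubdata_published: usize, // pubdata contribution
    encoded_size: usize,      // bootloader-encoding footprint
    l1_gas: u64,              // L1 gas attribution
}

#[derive(Default)]
struct L2BlockState {
    index_in_batch: u32,
    open_timestamp_ms: u64,
    executed_txs: Vec<LastTxContext>,
    payload_encoding_size: usize, // cumulative for this miniblock
}

#[derive(Default)]
struct L1BatchState {
    number: u32,
    open_timestamp_ms: u64,
    cumulative_gas: u64,
    cumulative_pubdata: usize,
    cumulative_encoded_size: usize,
    pending_l1_gas: u64,
    l2_blocks: Vec<L2BlockState>, // constituent miniblocks, in order
}

fn main() {
    // A freshly opened batch starts with one empty (unsealed) miniblock.
    let mut batch = L1BatchState::default();
    batch.l2_blocks.push(L2BlockState::default());
    assert_eq!(batch.l2_blocks.len(), 1);
}
```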

The loop stages:

  1. Unconditional batch-timeout check

    • Precondition: The open L1 batch has executed at least one transaction.

    • Condition: (now - batch.open_timestamp) > batch timeout.

    • Action: seal the L1 batch immediately for timeout (skip further processing in this iteration).

  2. L2 block sealing (timeout and payload-size)

    • Precondition: The open L1 batch has executed at least one transaction.

    • Conditions (either is sufficient):

      • (now - l2_block.open_timestamp) > L2-block timeout, or l2_block.payload_encoding_size > block payload encoding limit

    • Action: Seal the L2 block and immediately start the next L2 block within the same open L1 batch (with a fresh timestamp).

    Note: This L2 block sealing does not decide inclusion, exclusion, or non-executable status for the last transaction; it only determines when the miniblock is sealed.

  3. Intake and execute the next transaction

    • Wait for the next mempool transaction (with a poll window). If none arrive, loop back to re-run the sealing checks (batch-timeout and L2 block sealing, Steps 1 - 2).

    • Execute the transaction in the VM and collect:

      • Per-tx execution metrics (including pubdata published, if provided by the VM version)

      • Encoded size of the transaction

      • L1 gas attribution for this transaction

    • Tentatively update state:

      • Push the just-executed transaction onto the current L2 block's list of executed transactions.

      • Update cumulative batch/block metrics

      • Compute SealData snapshots for "batch so far" and "just-executed (last) transaction"

  4. Batch sealing decision (most-restrictive outcome wins)

    • Evaluate independent criteria over the batch state + transaction:

      • Transaction slots: caps the number of transactions per batch.

      • Gas thresholds: per-tx reject bound; per-batch "close" threshold; hard per-batch limit.

      • Pubdata thresholds: per-tx reject bound; per-batch "close" threshold; hard per-batch limit.

      • Transaction-encoding threshold: per-tx reject bound; per-batch "close" threshold; bootloader encoding capacity.

    • Combine results by precedence:

      • "unexecutable" > "exclude-and-seal" > "include-and-seal" > "no-seal".

    • Apply the final action (always with respect to the just-executed, last transaction):

      • "unexecutable": roll back and reject the last transaction; continue the loop (the batch remains open unless another criterion also requires sealing).

      • "exclude-and-seal": roll back the last transaction (do not include it); seal the L1 batch now.

      • "include-and-seal": keep the last transaction; seal the L1 batch now.

      • "no-seal": keep the last transaction; continue with the next iteration.

  5. Finalize and persist on seal

    • When sealing is triggered by step 1 or 4:

      • Finish the batch with the correct inclusion/exclusion of the last transaction.

      • Persist the sealed L1 batch and all its finalized L2 blocks to storage (including ordering and metadata).

      • Rollover: initialize a fresh L1 batch environment and continue processing.

Full control flow is implemented in the state keeper; see core/node/state_keeper/src/keeper.rs
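
The precedence rule in step 4 maps naturally onto an ordered enum. The following is a minimal sketch, assuming an illustrative SealResolution type rather than the actual seal-criteria API:

```rust
// A minimal sketch of the "most restrictive outcome wins" combination.
// Names are illustrative; the real resolution type lives in the
// seal-criteria module (core/node/state_keeper/src/seal_criteria).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum SealResolution {
    NoSeal, // least restrictive
    IncludeAndSeal,
    ExcludeAndSeal,
    Unexecutable, // most restrictive
}

fn combine(resolutions: impl IntoIterator<Item = SealResolution>) -> SealResolution {
    // Derived Ord follows declaration order: later variants are more restrictive.
    resolutions.into_iter().max().unwrap_or(SealResolution::NoSeal)
}

fn main() {
    let per_criterion = [
        SealResolution::NoSeal,         // transaction slots
        SealResolution::IncludeAndSeal, // gas crossed the "close" threshold
        SealResolution::NoSeal,         // pubdata
        SealResolution::NoSeal,         // encoding size
    ];
    assert_eq!(combine(per_criterion), SealResolution::IncludeAndSeal);
}
```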

L2 Blocks vs L1 Batches

L2 Blocks (miniblocks)

L2 blocks (miniblocks) are small, frequent sealing units that the sequencer uses to segment execution inside an open L1 batch. They rotate on I/O thresholds (timeouts, payload-encoding size) and do not decide inclusion, exclusion, or unexecutable status for the last transaction; that logic belongs to the batch-level sealing criteria. Miniblock headers are stored in the "miniblocks" table with the associated execution artifacts. See core/lib/dal/README.md.

  • Purpose

    • Short confirmation latency and API stability on L2; small, frequent sealing units that segment execution within an open L1 batch.

  • Frequency

    • High; a new miniblock starts whenever the L2-block timeout or payload-size threshold triggers, so an open L1 batch typically contains several miniblocks.

  • Size

    • Small. Typically, a few to a hundred transactions, limited by payload-encoding size and time.

  • Sealing triggers

    • Timeouts and payload size caps only. These do not decide inclusion or exclusion for the last transaction; they only determine when to seal the current miniblock and start the next one. See thresholds in the section above.

  • Finality

    • Soft finality. A sealed miniblock can be superseded by sequencer reorgs until its containing batch is sealed and anchored.

  • Storage and naming

    • Headers are recorded in the database table named "miniblocks". Associated execution artifacts are present per the invariants in core/lib/dal/README.md.

  • In-memory representation

    • The current batch maintains an ordered list of L2 blocks and their executed transactions. See BatchExecuteData

L1 batches

L1 batches aggregate miniblocks into the unit that is published to the DA layer (Celestia), anchored on Bitcoin, and proven as a single bootloader program. Batch sealing is driven by execution-metric criteria (slots, gas, pubdata, encoding size, plus a batch timeout). Data availability finality is reached when the batch’s public data is included on Celestia. Anchoring finality is reached when the batch inscription is confirmed to the configured depth on the Bitcoin blockchain. Later, the zk-proof and verifier checks establish state correctness and finality. See docs/specs/blocks_batches.md, dispatcher ViaDataAvailabilityDispatcher::dispatch(), client CelestiaClient::dispatch_blob() and confirmation policy core/lib/config/src/configs/via_btc_sender.rs.

  • Purpose

    • Aggregation unit for DA publication, Bitcoin anchoring, and proof generation. The VM executes a batch as a single program (bootloader). See docs/specs/blocks_batches.md.

  • Composition

    • A contiguous sequence of L2 blocks (miniblocks)

  • Sealing criteria (execution-metric driven)

    • Transaction slots, gas thresholds, pubdata thresholds, and transaction-encoding size, plus the unconditional batch timeout. See "Sealing criteria (batch level)" below.

  • Finality

    • Progresses through DA inclusion on Celestia, Bitcoin anchoring at the configured confirmation depth, and zk-proof verification. See the Finality section.

  • Storage

    • Headers recorded in l1_batches. Commitment parts (e.g., event queue and bootloader memory commitments) are stored in the commitments table. See core/lib/dal/README.md.

Sealing criteria (batch level)

Batch sealing is evaluated after each executed transaction. Independent criteria are applied to the current batch state and the most recently executed transaction. The most restrictive outcome wins ("unexecutable" > "exclude-and-seal" > "include-and-seal" > "no-seal").

  • Transaction slots

    • Purpose: cap the number of transactions in a batch.

    • Behavior: When the configured slot limit is reached, include the last executed transaction and seal the batch ("include-and-seal").

  • Gas usage thresholds

    • Inputs: cumulative batch gas (commit/prove/execute), gas attributable to the last tx, and configured gas bounds.

    • Behavior:

      • If the last tx's gas (without required overhead) exceeds the per-transaction reject bound, it is "unexecutable".

      • If including the last tx would push the batch above the hard per-batch gas limit, seal the batch without this tx ("exclude-and-seal").

      • If, after applying the just-executed (last) transaction, the cumulative batch gas crosses the configured “close” threshold (still below the hard per-batch limit), include that last executed transaction and seal the batch (“include-and-seal”).

      • Otherwise, continue ("no-seal").

  • Pubdata size thresholds

    • Inputs: the cumulative batch pubdata size before applying the just-executed transaction (the "last transaction"), the pubdata attributable to that last transaction, the required bootloader/batch-tip overhead, and the configured pubdata bounds (per-transaction reject bound, batch “close” threshold, and batch hard limit).

    • Behavior:

      • If the last transaction's effective pubdata (its own pubdata plus required overhead) exceeds the per-transaction reject bound ⇒ resolution = "unexecutable".

      • Otherwise, compute the prospective cumulative batch pubdata after including the last transaction. If that prospective size would exceed the hard batch pubdata limit ⇒ resolution = "exclude-and-seal".

      • Otherwise, if the prospective cumulative size is at or above the configured "close" threshold while still not exceeding the hard limit ⇒ resolution = "include-and-seal" (keep the last transaction and seal the batch)

      • Otherwise, resolution = "no-seal" (keep processing)

  • Transaction-encoding size thresholds

    • Inputs: the cumulative encoded size already used within the bootloader's encoding space before applying the last transaction, the encoded size of the last transaction, and the configured geometry percentages (per-tx reject bound, "close" threshold, and total capacity).

    • Behavior:

      • If the last transaction's encoded size exceeds the per-transaction reject bound ⇒ resolution = "unexecutable".

      • Otherwise, compute the prospective cumulative encoded size after including the last transaction. If that prospective size would exceed the bootloader's total capacity ⇒ resolution = "exclude-and-seal" (do not keep the last transaction; seal the batch).

      • Otherwise, if the prospective cumulative size is at or above the configured "close" threshold while still not exceeding the capacity ⇒ resolution = "include-and-seal" (keep the last transaction and seal the batch).

      • Otherwise, resolution = "no-seal" (keep processing)
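
The gas, pubdata, and encoding-size criteria all follow the same reject-bound / hard-limit / "close"-threshold pattern. A minimal sketch of that shared pattern, assuming illustrative names and unitless inputs:

```rust
// All names and the unitless u64 inputs are illustrative, not the real API.
#[derive(Debug, PartialEq)]
enum SealResolution {
    NoSeal,
    IncludeAndSeal,
    ExcludeAndSeal,
    Unexecutable,
}

fn resolve(
    batch_so_far: u64,    // cumulative value before the last transaction
    last_tx: u64,         // value attributable to the last transaction (incl. overhead where the criterion requires it)
    reject_bound: u64,    // per-tx reject bound
    close_threshold: u64, // per-batch "close" threshold
    hard_limit: u64,      // hard per-batch limit
) -> SealResolution {
    if last_tx > reject_bound {
        return SealResolution::Unexecutable;
    }
    let prospective = batch_so_far + last_tx;
    if prospective > hard_limit {
        SealResolution::ExcludeAndSeal
    } else if prospective >= close_threshold {
        SealResolution::IncludeAndSeal
    } else {
        SealResolution::NoSeal
    }
}

fn main() {
    // With a "close" threshold of 90 and a hard limit of 100:
    assert_eq!(resolve(85, 10, 50, 90, 100), SealResolution::IncludeAndSeal);
    assert_eq!(resolve(95, 10, 50, 90, 100), SealResolution::ExcludeAndSeal);
    assert_eq!(resolve(10, 60, 50, 90, 100), SealResolution::Unexecutable);
    assert_eq!(resolve(10, 10, 50, 90, 100), SealResolution::NoSeal);
}
```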

Decision combination

  • Each criterion yields a resolution for the current step. The final decision is the most restrictive among them, ordered: "unexecutable" > "exclude-and-seal" > "include-and-seal" > "no-seal"

  • Actions implied by the final decision (always referring to the just-executed, last transaction):

    • "Include and seal": keep the last transaction in the batch and seal it immediately.

    • "exclude and seal": roll back the last transaction from the batch (do not include it), then seal the batch immediately.

    • "unexecutable": reject the last transaction outright, depending on other criteria in the same step. The batch may or may not seal.

    • "no-seal": keep the last transaction in the batch and continue processing. the next transaction.

L2 block interval (I/O thresholds)

L2 block sealing (the miniblock interval) is determined by I/O thresholds that operate independently of the batch-level sealing criteria. These thresholds rotate L2 blocks ("miniblocks") within the same open L1 batch and do not decide inclusion, exclusion, or unexecutable status for the last transaction; they only decide when to seal the current L2 block and start the next one.

Configuration inputs: l2_block_commit_deadline_ms (L2-block timeout), l2_block_max_payload_size (maximum payload-encoding size), and block_commit_deadline_ms (batch timeout).

Timeout-based L2 block sealing:

  • Preconditions: the current L2 block contains ≥ 1 executed transaction.

    • Condition: now - current_l2_block.open_timestamp > l2_block_commit_deadline_ms as evaluated by TimeOutSealer::should_seal_l2_block in core/node/state_keeper/src/seal_criteria/mod.rs

    • Action: seal the L2 block and immediately start the next L2 block within the same L1 batch (with a fresh timestamp).

    • Outcome: The L1 batch remains open. Only the miniblock index is incremented.

Maximum L2 block payload encoding size:

  • Inputs: current_l2_block.payload_encoding_size (accumulated)

    • Condition: current_l2_block.payload_encoding_size ≥ l2_block_max_payload_size as evaluated by L2BlockMaxPayloadSizeSealer::should_seal_l2_block in core/node/state_keeper/src/seal_criteria/mod.rs

    • Action: seal the current L2 block. Start the next L2 block within the same L1 batch.

    • Outcome: prevents oversized miniblocks; the batch remains open.

Batch-level unconditional sealing (independent of the miniblock sealing interval):

  • Preconditions: the open L1 batch contains at least one executed transaction (the system never seals an empty batch).

    • Condition: now - l1_batch.open_timestamp > block_commit_deadline_ms, per TimeOutSealer::should_seal_l1_batch in core/node/state_keeper/src/seal_criteria/mod.rs

    • Action: seal the L1 batch immediately (regardless of execution metrics or miniblock size)

    • Outcome: closes the batch and rolls over to a new batch environment.

Important notes

  • L2 block thresholds never decide how to treat the "last transaction" (no include/exclude/unexecutable decisions)

  • If the current L2 block has zero executed transactions, timeout-based L2 block sealing is a no-op (do not seal empty miniblocks)

Concrete example

  • Given l2_block_commit_deadline_ms = 2000 ms and the current L2 block opened at T0 with ≥ 1 executed tx, at T0 + 2001 ms the block is sealed and the next L2 block begins, while the L1 batch remains open.

  • Given l2_block_max_payload_size = 64 KiB, once current_l2_block.payload_encoding_size reaches 64 KiB (or above), the block is sealed, and a new L2 block begins while the L1 batch remains open.
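
Both checks reduce to simple comparisons. A minimal sketch, assuming plain numeric config values and illustrative names (the real implementations are TimeOutSealer and L2BlockMaxPayloadSizeSealer in core/node/state_keeper/src/seal_criteria/mod.rs):

```rust
struct L2Block {
    open_timestamp_ms: u64,
    payload_encoding_size: usize,
    executed_tx_count: usize,
}

fn should_seal_l2_block(
    block: &L2Block,
    now_ms: u64,
    l2_block_commit_deadline_ms: u64,
    l2_block_max_payload_size: usize,
) -> bool {
    // Sealing is a no-op for empty miniblocks.
    if block.executed_tx_count == 0 {
        return false;
    }
    now_ms.saturating_sub(block.open_timestamp_ms) > l2_block_commit_deadline_ms
        || block.payload_encoding_size >= l2_block_max_payload_size
}

fn main() {
    // Timeout case: 2001 ms after opening with a 2000 ms deadline.
    let by_time = L2Block { open_timestamp_ms: 0, payload_encoding_size: 100, executed_tx_count: 1 };
    assert!(should_seal_l2_block(&by_time, 2001, 2000, 64 * 1024));

    // Payload case: encoding size reached the 64 KiB cap before the deadline.
    let by_size = L2Block { open_timestamp_ms: 0, payload_encoding_size: 64 * 1024, executed_tx_count: 1 };
    assert!(should_seal_l2_block(&by_size, 1, 2000, 64 * 1024));
}
```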

Outputs and handoff to downstream components

When a batch-level seal is triggered, the state keeper performs the following steps:

  • Finalize step (based on the most restrictive decision from the batch criteria):

    • "include-and-seal": Keep the just-executed last transaction in the batch and seal the batch immediately.

    • "exclude-and-seal": Roll back the last transaction from the batch (do not include it), then seal the batch immediately.

    • "unexecutable": Reject the last transaction outright, regardless of whether another criterion concurrently requires sealing; the batch seals now. Otherwise continue.

  • Persist:

    • Write the sealed L1 batch and all of its finalized L2 blocks to the database, preserving ordering and metadata. DB invariants for the presence of associated artifacts are documented in core/lib/dal/README.md.

    • L2 blocks are stored in the "miniblocks" table, L1 batch headers in "l1_batches" (see core/lib/dal/README.md).

  • Rollover:

    • Initialize a fresh L1 batch environment and resume transaction intake.

Downstream components (separate services)

After the state keeper seals and persists an L1 batch, the remainder of the pipeline runs asynchronously. These services do not modify the sealed batch contents. They derive commitments, publish data to the DA layer, and anchor references on Bitcoin.

Commitment generation (from sealed batches)

Produce batch commitment artifacts and verification metadata from sealed L1 batches, persist them to the database, and mark each batch ready for DA publication and proof generation.

Preconditions

  • The L1 batch is sealed and persisted by the state keeper (with its ordered L2 blocks and execution artifacts).

  • Database invariant: for any l1_batch header, associated execution artifacts (e.g., initial_writes, protective_reads) exist. See core/lib/dal/README.md.

  • The commitments table is available to store commitment parts (event queue and bootloader memory commitments). See core/lib/dal/README.md.

Flow

  1. Select pending work

    • Poll the database for sealed L1 batches that do not yet have commitments generated (status-based selection).

    • For each qualifying batch, load:

      • The batch header from l1_batches

      • The complete, ordered L2 block sequence (miniblocks) and per-tx execution artifacts, as persisted by the state keeper.

      • Execution-related artifacts required for commitment generation (e.g., storage diffs and read/write sets, event queue content), which satisfy the invariants described in core/lib/dal/README.md.

  2. Compute commitment inputs

    • Assemble commitment inputs from persisted execution data:

      • Event queue content

      • Bootloader memory initial content

      • State change summary (e.g., initial_writes, protective_reads)

      • Any protocol-version-dependent data required by the commitment algorithm.

    • Implementation entry point: commitment_generator

  3. Generate commitments and metadata:

    • Compute cryptographic commitments for the L1 batch using the configured commitment algorithm.

    • Produce auxiliary verification metadata required downstream (for DA publication, proof generation and later verification).

  4. Persist results and mark readiness:

    • Persist commitment payloads:

      • Write commitment parts (including event queue and bootloader memory commitments) into the commitment table per core/lib/dal/README.md.

    • Update batch state:

      • Mark the batch as "ready for DA publication" so the DA dispatcher can discover it

Inputs

  • Sealed L1 batch number and metadata

  • Ordered L2 blocks (miniblocks) with per-tx execution outputs

  • Execution artifacts (e.g., initial_writes, protective_reads)

Outputs

  • Commitment payloads and verification metadata persisted in DB (including entries in the commitment table)

  • Batch state updated to "ready for DA publication"

  • Stored references for the proof pipeline to start generating proofs

Implementation: commitment_generator, high-level flow at docs/via_guides/data-flow.md.
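
The flow above amounts to a status-driven pass over sealed batches. A hedged sketch, assuming a hypothetical in-memory MockDal in place of the real database layer (the actual entry point is the commitment_generator component):

```rust
// Hypothetical in-memory stand-ins for the database tables involved.
#[derive(Default)]
struct MockDal {
    sealed_without_commitments: Vec<u32>, // sealed batches lacking commitments
    ready_for_da: Vec<u32>,               // batches marked "ready for DA publication"
}

impl MockDal {
    fn run_commitment_pass(&mut self) {
        // 1. Select pending work (status-based selection).
        for batch in std::mem::take(&mut self.sealed_without_commitments) {
            // 2.-3. Load inputs (header, ordered miniblocks, execution artifacts)
            // and compute commitment parts plus verification metadata -- elided
            // here; the algorithm is protocol-version dependent.
            // 4. Persist commitment parts and mark readiness so the DA
            // dispatcher can discover the batch.
            self.ready_for_da.push(batch);
        }
    }
}

fn main() {
    let mut dal = MockDal { sealed_without_commitments: vec![42], ..Default::default() };
    dal.run_commitment_pass();
    assert_eq!(dal.ready_for_da, vec![42]);
    // The real service polls on an interval rather than running once.
}
```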

Data Availability and Settlement Integration

Data Availability (Celestia)

The Data Availability stage publishes sealed L1 batch data (and later, the batch proofs) to Celestia, allowing anyone to reconstruct the batch content and verify its availability independently of the sequencer. This stage is handled by two independent components:

  • The DA dispatcher: a service loop that selects sealed batches marked "ready for DA publication", submits their pubdata/proofs to Celestia, persists the returned blob identifiers (blob_id), and polls for inclusion. See ViaDataAvailabilityDispatcher.

  • The Celestia client: a library client that connects to the Celestia node, sets the fixed VIA namespace, and performs blob submission and inclusion queries on behalf of the dispatcher. See CelestiaClient::new().

What is published to Celestia and how is it used?

  • For each L1 batch, the dispatcher submits the batch's pubdata bytes to Celestia and stores the returned blob_id in the database. Later, when a proof is available, it submits the proof bytes and stores the respective proof blob_id. Both blob_ids are then used as follows:

  • The dispatcher continuously polls Celestia for inclusion data tied to each blob_id and persists that inclusion evidence. This provides an auditable link from an L1 batch number → blob_id → inclusion data that downstream consumers can use.

Execution and reliability

  • The dispatcher runs continuously and is idempotent with respect to batch selection: batches that have already been dispatched or already have blob_id/inclusion records are skipped. Transient network failures during submission are handled via bounded retries (see core/node/via_da_dispatcher/src/da_dispatcher.rs).

  • The sequencer/state keeper never calls Celestia directly. Its responsibility ends at sealing and persisting the batch; the dispatcher and client own publication and inclusion tracking.

Preconditions

  • The L1 batch is sealed and persisted by the state keeper, and the commitment generator has marked it "ready for DA publication".

  • The dispatcher and client are running as independent services. The sequencer does not call Celestia directly.

The DA dispatcher (selection, dispatch, and inclusion tracking)

  1. Batch selection and dispatch:

    • Query the database for sealed L1 batches that are marked "ready for DA publication" and do not yet have a recorded blob_id.

    • For each selected batch, extract its pubdata bytes from storage.

    • Dispatch the pubdata to Celestia by calling ViaDataAvailabilityDispatcher::dispatch(), which invokes the DA client with (l1_batch_number, pubdata).

    • Use retry logic with the configured maximum attempts to tolerate transient network failures. See retry implementation in core/node/via_da_dispatcher/src/da_dispatcher.rs.

  2. Record results:

    • On successful dispatch, persist the returned blob identifier (blob_id) and associated timestamps in the database.

    • Update the batch's DA status to indicate that its pubdata has been published and is awaiting inclusion confirmation.

  3. Inclusion polling:

    • For each batch with a recorded blob_id but no inclusion data, query the DA client for inclusion information using that blob_id.

    • Persist the inclusion information status and inclusion data for the blob so the downstream components can rely on availability guarantees.

  4. Proof dispatch (subsequent lifecycle):

    • Once a batch proof becomes available, dispatch the proof bytes to Celestia in the same way and record the proof blob_id and its inclusion data (see ViaDataAvailabilityDispatcher::dispatch_proofs()).

Celestia client (blob operations)

  1. Initialization:

    • On startup, CelestiaClient::new() connects to the configured Celestia node and performs a connectivity check.

    • Set the fixed "VIA" namespace identifier for all blob operations. The namespace bytes are defined at CelestiaClient::new().

    • Enforce the configured blob_size limit to bound published payloads.

  2. Blob submission:

    • Submit the payload bytes as a blob under the VIA namespace via CelestiaClient::dispatch_blob() and return the resulting blob_id to the dispatcher.

  3. Inclusion data retrieval:

    • During dispatcher polling, fetch inclusion data for a given blob_id and return it to the dispatcher.

    • Inclusion data is then persisted by the dispatcher for the downstream consumers, such as the BTC inscription path and the verifier network.

Inputs and outputs

  • Inputs to publish: l1_batch_number and pubdata bytes (and later, proof bytes).

  • Outputs from publish: blob_id per dispatch, plus inclusion data obtained later via polling. Both are stored in the database to support Bitcoin inscription and verification workflows.
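
A hedged sketch of the dispatch-and-poll contract, assuming a hypothetical DaClient trait in place of the real ViaDataAvailabilityDispatcher/CelestiaClient APIs:

```rust
// Every name below is an illustrative stand-in, not the actual API.
trait DaClient {
    /// Submit (l1_batch_number, pubdata) as a blob; returns a blob_id.
    fn dispatch_blob(&self, l1_batch_number: u32, data: &[u8]) -> Result<String, String>;
    /// Returns inclusion data once the blob is included, None while pending.
    fn get_inclusion_data(&self, blob_id: &str) -> Result<Option<Vec<u8>>, String>;
}

/// Bounded retries over transient submission failures, mirroring the retry
/// logic described for core/node/via_da_dispatcher/src/da_dispatcher.rs.
fn dispatch_with_retries(
    client: &dyn DaClient,
    l1_batch_number: u32,
    pubdata: &[u8],
    max_attempts: u32,
) -> Result<String, String> {
    let mut last_err = String::from("no attempts made");
    for _ in 0..max_attempts {
        match client.dispatch_blob(l1_batch_number, pubdata) {
            Ok(blob_id) => return Ok(blob_id), // persist blob_id, then poll inclusion
            Err(e) => last_err = e,            // transient failure: retry
        }
    }
    Err(last_err)
}

// Trivial mock so the sketch runs end to end.
struct AlwaysIncluded;

impl DaClient for AlwaysIncluded {
    fn dispatch_blob(&self, n: u32, _data: &[u8]) -> Result<String, String> {
        Ok(format!("blob-{n}"))
    }
    fn get_inclusion_data(&self, _blob_id: &str) -> Result<Option<Vec<u8>>, String> {
        Ok(Some(vec![])) // pretend inclusion evidence is immediately available
    }
}

fn main() {
    let client = AlwaysIncluded;
    let blob_id = dispatch_with_retries(&client, 7, b"pubdata", 3).unwrap();
    assert_eq!(blob_id, "blob-7");
    assert!(client.get_inclusion_data(&blob_id).unwrap().is_some());
}
```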

Bitcoin Inscription Manager and Aggregator: commit/reveal anchoring of batch commitment and proof references

Anchor each sealed batch's commitment and its subsequent proof reference on Bitcoin, embedding the Celestia blob identifiers that make the underlying data publicly retrievable. This creates a verifiable link between the DA content on Celestia and the settlement anchoring on Bitcoin.

  • Components:

    • The aggregator (ViaAggregator), which schedules commit/reveal inscriptions and constructs the inscription messages, and the inscription manager (ViaBtcInscriptionManager), which signs, broadcasts, and tracks the transactions.

  • Message types and fields:

    • L1BatchDAReference and ProofDAReference, carrying l1_batch_hash, l1_batch_index, da_identifier, blob_id, and prev_l1_batch_hash (detailed below).

  • Execution model:

    • Aggregator selects inscription work items → constructs typed messages with the fields above → InscriptionManager creates commit and reveal transactions, signs and broadcasts them → the database is updated with the inscription ID and confirmation status according to the configured policy in ViaBtcSenderConfig.

Preconditions:

  • The L1 batch is sealed, and its pubdata has been published to Celestia. A blob_id is persisted for the batch.

  • The BTC inscription services run independently of the sequencer. The state keeper (sequencer) does not talk to the Bitcoin blockchain at all: it does not open RPC/P2P connections, does not construct or sign Bitcoin transactions, and does not broadcast or monitor Bitcoin transactions or blocks.

  • The state keeper's responsibilities end at sealing and persisting L1 batches. See the batch loop in process_l1_batch(). No Bitcoin code paths exist in that module or its I/O layer MempoolIO.

The state keeper sequences and seals. ViaBTCSender posts to Bitcoin. Verifier components read from Bitcoin. No Bitcoin RPC/transactions are issued by the state keeper.

Bitcoin inscription aggregator (commit/reveal scheduler and inscription message constructor) ViaAggregator

  1. Select what to inscribe

    • Select pending work items: sealed batches whose pubdata (and later, proof) blob_id is persisted but which do not yet have a corresponding inscription.

  2. Construct the typed inscription message

    • Build a typed payload for the selected operation using ViaAggregator::construct_inscription_message().

    • Embed all required fields:

      • l1_batch_hash and l1_batch_index

      • da_identifier string for DA backend. The default is "celestia" and it is configurable in ViaBtcSenderConfig::da_identifier().

      • blob_id value that references the Celestia blobs for batch pubdata and, later, proof data.

      • prev_l1_batch_hash to chain batches by hash

      • Message schemas for L1BatchDAReference and ProofDAReference are specified in core/lib/via_btc_client/DEV.md. The Via protocol enforces signature checks for the sequencer's key inside the inscription script and encodes fields in Taproot witness data.

Inscription manager (transaction lifecycle and tracking)

  1. Create and broadcast the transactions

    • Build the commit and reveal transactions for the inscription protocol and sign them with the sequencer's key. The manager runs the loop in ViaBtcInscriptionManager::run().

    • Submit transactions to the Bitcoin network and record their identifiers.

  2. Update database status and confirmations

    • Persist the inscription identifier, then track the broadcast transactions and update each inscription's confirmation status once the configured confirmation depth is reached (per ViaBtcSenderConfig).

Inputs and outputs:

  • Inputs:

    • l1_batch_hash, l1_batch_index

    • da_identifier string from configuration

    • blob_id for the batch pubdata and later, the proof blob

    • prev_l1_batch_hash for chaining

  • Outputs:

    • Broadcast commit and reveal transactions on Bitcoin

    • Persisted inscription identifier and per-inscription confirmation status in the database
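
Putting the inputs together, a hedged sketch of the batch-reference payload, assuming illustrative field types (the authoritative schemas for L1BatchDAReference and ProofDAReference are in core/lib/via_btc_client/DEV.md):

```rust
// Field names follow the inputs listed above; the concrete types are assumptions.
struct L1BatchDAReference {
    l1_batch_hash: [u8; 32],
    l1_batch_index: u64,
    da_identifier: String,        // e.g. "celestia", from ViaBtcSenderConfig
    blob_id: String,              // Celestia blob holding the batch pubdata
    prev_l1_batch_hash: [u8; 32], // chains batches by hash
}

fn main() {
    // ProofDAReference is analogous, referencing the proof blob instead.
    let msg = L1BatchDAReference {
        l1_batch_hash: [0u8; 32],
        l1_batch_index: 1,
        da_identifier: "celestia".to_string(),
        blob_id: "blob-1".to_string(),
        prev_l1_batch_hash: [0u8; 32],
    };
    assert_eq!(msg.da_identifier, "celestia");
}
```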

Protocol note

The Via protocol enforces signature checks for the sequencer's key inside the inscription script and encodes message fields in Taproot witness data; the message schemas are specified in core/lib/via_btc_client/DEV.md.

Node composition (service boundary)

Via runs as a multi-service node. Each service is responsible for a specific stage of the pipeline and communicates through the database and shared queues, rather than through direct in-process calls. This separation ensures that sealing, commitment generation, data-availability publication, and Bitcoin anchoring can progress asynchronously and be retried without blocking the sequencer.

The default component set

  • Started by via_server with the default component list defined in Cli::components. The default includes:

    • api, btc, tree, tree_api, state_keeper, housekeeper, proof_data_handler, commitment_generator, celestia, da_dispatcher, vm_runner_protective_reads, vm_runner_bwip

Service responsibilities and boundaries

  • Sequencer / State Keeper

    • Purpose: execute transactions, rotate L2 blocks by I/O thresholds, seal L1 batches by execution-metric criteria, and persist sealed outputs.

    • Never posts to Celestia or Bitcoin.

  • Commitment Generator

    • Purpose: derive batch commitment and verification metadata from sealed batches. Mark batches ready for DA publication and proving.

  • Data Availability: DA Dispatcher + Celestia Client

    • Purpose: publish batch pubdata (and later, proofs) to Celestia, persist blob_ids, and track inclusion.

  • Bitcoin Anchor: BTC Sender + BTC Inscription Manager

    • Purpose: construct, sign, and broadcast commit/reveal inscriptions anchoring batch commitments and proof references, tracking confirmations to the configured depth.

  • Proof Data Handler and Prover-facing services

    • Purpose: expose sealed batch artifacts and commitments to the prover pipeline and ingest proofs for later DA publication.

  • VM Runner services (vm_runner_protective_reads, vm_runner_bwip)

    • Purpose: produce per-batch execution derivatives used by downstream components (for example, protective reads and basic witness inputs).

    • Data structure example (batch input and L2 blocks container): BatchExecuteData

  • Housekeeper

    • Purpose: background maintenance tasks (cleanups, metrics, etc.) that keep the node healthy and the DB consistent with service expectations.

  • Tree / Tree API

    • Purpose: maintain the state Merkle tree for sealed batches and expose it via the tree API.

Lifecycle ordering

  1. State Keeper executes transactions, rotates L2 blocks based on I/O thresholds, and seals an L1 batch when a sealing criterion triggers. It persists the sealed batch and its L2 blocks with final ordering and metadata. See process_l1_batch().

  2. Commitment Generator polls for newly sealed batches without commitments, computes commitment payloads and metadata, persists them, and marks them "ready for DA publication". See commitment_generator.

  3. DA Dispatcher selects batches marked ready-for-DA, dispatches pubdata via the Celestia client, stores blob_id and polls for inclusion. Proof blobs are dispatched later in the life cycle. See ViaDataAvailabilityDispatcher::run().

  4. BTC Sender decides when to inscribe batch commitments and proof references, constructs typed inscription messages with the batch identity and DA references, and broadcasts commit/reveal transactions, updating DB status on confirmations. See ViaAggregator::construct_inscription_message() and ViaBtcInscriptionManager::run().

  5. Prover-facing services and the verifier network consume the published data and inscriptions. Proofs are verified and may be attested, driving state-correctness finality.
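
This ordering can be summarized as a forward-only status progression. The enum below is a hedged sketch, not an actual type in the codebase (the real pipeline tracks these stages through database status fields):

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum BatchStage {
    Sealed,               // 1. state keeper sealed and persisted the batch
    ReadyForDa,           // 2. commitments generated
    PubdataDispatched,    // 3. blob_id recorded, awaiting inclusion
    IncludedOnCelestia,   // 3. inclusion data persisted
    InscriptionConfirmed, // 4. commit/reveal reached confirmation depth
    Proven,               // 5. proof published, inscribed, and verified
}

fn main() {
    // Stages only move forward; derived Ord follows declaration order.
    assert!(BatchStage::Sealed < BatchStage::IncludedOnCelestia);
    assert!(BatchStage::IncludedOnCelestia < BatchStage::Proven);
}
```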

Operational model

  • Concurrency: Services run in parallel and operate on persisted tasks and statuses. Failures and retries in one stage do not block the sequencer.

  • Backpressure: DA and BTC services use status fields (for example, blob_id presence, confirmation depth) to continue safely without reordering or duplication.

  • Configuration: The behavior of DA and BTC, such as da_identifier and confirmation depth, is controlled by the config reader code, as seen in ViaBtcSenderConfig::da_identifier() and ViaBtcSenderConfig::block_confirmations().
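
As an illustration of the confirmation-depth policy, a hedged sketch assuming a simplified ViaBtcSenderConfig with just the two accessors named above (the real struct lives in core/lib/via_btc_client and its config module):

```rust
// Simplified stand-in for the actual config type.
struct ViaBtcSenderConfig {
    da_identifier: String,
    block_confirmations: u32,
}

impl ViaBtcSenderConfig {
    fn da_identifier(&self) -> &str { &self.da_identifier }
    fn block_confirmations(&self) -> u32 { self.block_confirmations }
}

fn is_anchored(confirmations_seen: u32, cfg: &ViaBtcSenderConfig) -> bool {
    // An inscription counts as confirmed only at the configured depth.
    confirmations_seen >= cfg.block_confirmations()
}

fn main() {
    let cfg = ViaBtcSenderConfig { da_identifier: "celestia".into(), block_confirmations: 6 };
    assert_eq!(cfg.da_identifier(), "celestia");
    assert!(!is_anchored(3, &cfg));
    assert!(is_anchored(6, &cfg));
}
```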

Finality

Finality defines, for each lifecycle stage of an L2 block or L1 batch, exactly which properties are guaranteed, who or what could still change the state, and what kinds of rollbacks remain possible. It is the contract between the system and its users about immutability and correctness at that stage.

Scope and units

  • L2 blocks (miniblocks): intra-batch sealing units on L2. Headers are recorded in "miniblocks"

  • L1 batch: aggregation of consecutive L2 blocks. It is the unit for DA publication, Bitcoin anchoring, and proof generation. See docs/specs/blocks_batches.md.

L2 block (miniblock) soft finality

  • What it means

    • A sealed L2 block is stable for UX/APIs, but may be superseded by sequencer reorgs until the containing L1 batch is sealed and anchored.

  • How it is produced

    • The state keeper seals the L2 block when an I/O threshold (timeout or payload-encoding size) triggers inside the open batch.

  • Observability

    • The L2 block header is written to the "miniblocks" table, and its execution data is present in the database as required by the invariants in core/lib/dal/README.md.

Data‑availability finality (Celestia inclusion)

  • What it means

    • The batch’s data is durably available once Celestia includes the batch pubdata, and inclusion data is recorded.

  • How it is achieved

    • DA publication and inclusion via the dispatcher; inclusion info persisted (blob_id and inclusion data).

Anchoring (Bitcoin confirmations)

  • What it means

    • The L1 anchoring is considered durable once the batch inscription reaches the configured confirmation depth on the Bitcoin blockchain.

  • How it is achieved

    • The inscription manager tracks confirmations on Bitcoin and records the status.

State-correctness finality

  • What it means

    • The batch's state transition is cryptographically proven and publicly verifiable. Anyone can reconstruct the expected post-state from the published inputs and verify the batch’s zk-proof against the verification key. Correctness depends only on the proof and the published data, and is independent of trust in the sequencer.

  • How it is achieved

    • Proof generation and publication

      • The prover pipeline generates a zk-proof for the batch. The dispatcher publishes the proof as a Celestia blob using ViaDataAvailabilityDispatcher::dispatch_proofs() and records blob_id and inclusion data. The BTC sender inscribes a proof reference using the same commit and reveal pattern as for the batch.

    • Verification and attestation

      • The verifier network detects proof inscriptions on Bitcoin, fetches the public data and proof from the blobs on Celestia, and verifies the proof (see the verifier documentation).

    • Observability

      • Proof blob_id and inclusion data are persisted by the dispatcher. Proof inscription and confirmation status are persisted by the BTC sender services. Verifier attestation follows the verifier documentation.

Reorg and retry semantics

  • L2 reorgs

    • Possible until the L1 batch is sealed. After sealing, batch contents are fixed for the downstream components. See the batch loop in process_l1_batch().

  • Bitcoin reorgs

    • Inscriptions are treated as confirmed only after the configured confirmation depth is reached. The manager detects stuck or reorged inscriptions and retries. See ViaBtcInscriptionManager::run().

  • DA retries and inclusion

    • Blob submission uses bounded retries; batches with a recorded blob_id but no inclusion data are re-polled until inclusion evidence is persisted.

Finality summary

  1. Soft finality on L2

    • Trigger: L2 blocks sealed within an open batch

    • Guarantee: Subject to sequencer reorgs until batch sealing and anchoring.

  2. Data‑availability finality (Celestia inclusion)

    • Trigger: Batch pubdata included on Celestia and inclusion recorded.

    • Guarantee: Batch data is durably available; ordering commitments cannot be altered by L2 alone.

  3. Anchoring (Bitcoin confirmations)

    • Trigger: Batch inscription reaches the configured confirmation depth on the Bitcoin network.

    • Guarantee: Anchoring is durable under the assumed Bitcoin finality policy.

  4. State‑correctness finality

    • Trigger: zk‑proof published and included on Celestia; proof inscription confirmed; validation by the verifier network.

    • Guarantee: Correctness of the state transition is independently validated and publicly attestable.

Build variants and downstream components

This document targets the via_server build (Bitcoin settlement + Celestia DA). The via_server entrypoint and default component set are defined in core/bin/via_server/src/main.rs.

Last updated