Sequencer State Keeper
The sequencer orders pending transactions from the mempool, applies admission checks (signatures, nonce, gas/fee limits, and protocol constraints), and executes them on the zkEVM, which enforces signatures, nonce, intrinsic validity, and state-transition rules.
As execution proceeds, the sequencer periodically seals executed transactions into small, frequent L2 blocks (called 'miniblocks' in DB tables). A contiguous sequence of L2 blocks is then aggregated into an L1 batch, which is the unit published on L1.
Sealing an L2 block provides soft finality on the L2 chain. Data availability finality is reached once the batch publication data is included on Celestia and its inclusion is recorded. Anchoring occurs once the batch inscription reaches the configured confirmation depth on the Bitcoin blockchain. State‑correctness finality is reached after the batch’s zk‑proof is generated and validated by the verifier network.
Batches are sealed when configured criteria are met (e.g., transaction slot, gas usage, pubdata size, transaction-encoding size, together with I/O-level thresholds such as payload-encoding size and timeouts).
Publication to the Data Availability Layer (Celestia) and inscription of batch/proof commitments on Bitcoin are handled by downstream components. The sequencer does not post to DA or inscribe commitments/proofs on Bitcoin; it outputs ordered, executed L2 blocks and sealed L1 batches that downstream components use for DA publication, proof generation, and commitment posting.
Transaction Lifecycle
Transaction processing loop
The state keeper runs a per-batch loop that advances in strictly ordered stages and never seals an empty L1 batch or an empty L2 block.
Inputs: candidate transactions from the mempool; configured thresholds (batch timeout, L2-block timeout, L2-block payload size, transaction slots, gas limits/thresholds, pubdata limits/thresholds, bootloader encoding limits/thresholds)
In-memory state tracked per iteration:
L1 batch: number, open timestamp, cumulative execution metrics (gas, pubdata), cumulative encoded size, pending L1 gas, list of constituent L2 blocks.
L2 blocks (miniblocks): index within the batch, open timestamp, list of executed transactions, cumulative payload-encoding size for the block.
Last transaction context: execution result/metrics, pubdata contribution, encoded size, gas attribution.
The loop stages:
Unconditional batch-timeout check
Precondition: The open L1 batch has executed at least one transaction.
Condition: (now - batch.open_timestamp) > batch timeout.
Action: seal the L1 batch immediately for timeout (skip further processing in this iteration).
L2 block sealing (timeout and payload-size)
Precondition: The open L1 batch has executed at least one transaction.
Conditions (either is sufficient):
(now - l2_block.open_timestamp) > L2-block timeout, or l2_block.payload_encoding_size > block payload encoding limit
Action: Seal the L2 block and immediately start the next L2 block within the same open L1 batch (with a fresh timestamp).
Note: This L2 block sealing does not decide inclusion, exclusion, or non-executable status for the last transaction; it only triggers sealing of the current miniblock.
Intake and execute the next transaction
Wait for the next mempool transaction (with a poll window). If none arrives, loop back and re-run the sealing checks (batch timeout and L2 block sealing, Stages 1 and 2).
Execute the transaction in the VM and collect:
Per-tx execution metrics (including pubdata published, if provided by the VM version)
Encoded size of the transaction
L1 gas attribution for this transaction
Tentatively update state:
Push the just-executed transaction onto the current L2 block's list of executed transactions.
Update cumulative batch/blocks metrics
Compute SealData snapshots for "batch so far" and "just-executed (last) transaction"
Batch sealing decision (most-restrictive outcome wins)
Evaluate independent criteria over the batch state + transaction:
Transaction slots: caps the number of transactions per batch.
Gas thresholds: per-tx reject bound; per-batch "close" threshold; hard per-batch limit.
Pubdata thresholds: per-tx reject bound; per-batch "close" threshold; hard per-batch limit.
Transaction-encoding threshold: per-tx reject bound; per-batch "close" threshold; bootloader encoding capacity.
Combine results by precedence:
"unexecutable" > "exclude-and-seal" > "include-and-seal" > "no-seal".
Apply the final action (always with respect to the just-executed, last transaction):
"unexecutable": rollback and reject the last transaction; continue the loop (batch remains open unless another criterion is also required for sealing)
include-and-seal: keep the last transaction; seal the L1 batch now.
No-seal: Keep the last transaction; continue with the next iteration.
Finalize and persist on seal
When sealing is triggered by Stage 1 or 4:
Finish the batch with the correct inclusion/exclusion of the last transaction.
Persist the sealed L1 batch and all its finalized L2 blocks to storage (including ordering and metadata).
Rollover: initialize a fresh L1 batch environment and continue processing.
Full control flow is implemented in the state keeper; see core/node/state_keeper/src/keeper.rs.
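A minimal sketch of this loop, assuming illustrative types and stub functions (Batch, L2Block, poll_mempool, seal_and_persist) in place of the real ones in core/node/state_keeper/src/keeper.rs:

```rust
use std::time::{Duration, Instant};

struct L2Block {
    txs: Vec<u64>, // transaction ids; stand-in for executed transactions
    opened_at: Instant,
    payload_encoding_size: usize,
}

struct Batch {
    blocks: Vec<L2Block>,
    opened_at: Instant,
}

fn new_block() -> L2Block {
    L2Block { txs: Vec::new(), opened_at: Instant::now(), payload_encoding_size: 0 }
}

// Stub: returns the next transaction id and its encoded size, if any.
fn poll_mempool() -> Option<(u64, usize)> {
    None
}

fn seal_and_persist(_batch: &Batch) {}

fn run(batch_timeout: Duration, block_timeout: Duration, max_block_payload: usize) {
    let mut batch = Batch { blocks: vec![new_block()], opened_at: Instant::now() };
    loop {
        // Stage 1: unconditional batch timeout (an empty batch is never sealed).
        let batch_non_empty = batch.blocks.iter().any(|b| !b.txs.is_empty());
        if batch_non_empty && batch.opened_at.elapsed() > batch_timeout {
            seal_and_persist(&batch);
            // Rollover: fresh batch environment.
            batch = Batch { blocks: vec![new_block()], opened_at: Instant::now() };
            continue;
        }

        // Stage 2: rotate the miniblock on timeout or payload size.
        // The batch stays open; only the miniblock index advances.
        let rotate = {
            let block = batch.blocks.last().unwrap();
            !block.txs.is_empty()
                && (block.opened_at.elapsed() > block_timeout
                    || block.payload_encoding_size > max_block_payload)
        };
        if rotate {
            batch.blocks.push(new_block());
        }

        // Stage 3: intake; if the mempool is idle, loop back to the checks.
        let Some((tx, encoded_size)) = poll_mempool() else { continue };

        // Stage 4: execute and tentatively include the transaction, then
        // evaluate the batch sealing criteria (elided here; see
        // "Sealing criteria (batch level)" below).
        let block = batch.blocks.last_mut().unwrap();
        block.txs.push(tx);
        block.payload_encoding_size += encoded_size;
    }
}
```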
L2 Blocks vs L1 Batches
L2 Blocks (miniblocks)
L2 blocks (miniblocks) are small, frequent sealing units that the sequencer uses to segment execution inside an open L1 batch. They rotate on I/O thresholds (timeouts, payload-encoding size) and do not decide inclusion, exclusion, or unexecutable status for the last transaction; that logic belongs to the batch-level sealing criteria. Miniblock headers are stored in the "miniblocks" table with the associated execution artifacts. See core/lib/dal/README.md.
Purpose
Short confirmation latency and API stability on L2; small, frequent sealing units that segment execution within an open L1 batch.
Frequency
Target cadence is every few seconds, enforced by an L2-block timeout and bounded by payload-encoding size. In code: l2_block_commit_deadline_ms and the max-payload checks. See L2 block sealing in core/node/state_keeper/src/keeper.rs and the sealers in core/node/state_keeper/src/seal_criteria/mod.rs.
Size
Small: typically a few to a hundred transactions, limited by payload-encoding size and time.
Sealing triggers
Timeouts and payload size caps only. These do not decide inclusion or exclusion for the last transaction; they only determine when to seal the current miniblock and start the next one. See thresholds in the section above.
Finality
Soft finality. A sealed miniblock can be superseded by sequencer reorgs until its containing batch is sealed and anchored.
Storage and naming
Headers are recorded in the database table named "miniblocks". Associated execution artifacts are present per the invariants in core/lib/dal/README.md.
In-memory representation
The current batch maintains an ordered list of L2 blocks and their executed transactions. See BatchExecuteData.
L1 batches
L1 batches aggregate miniblocks into the unit that is published to the DA layer (Celestia), anchored on Bitcoin, and proven as a single bootloader program. Batch sealing is driven by execution-metric criteria (slots, gas, pubdata, encoding size, plus a batch timeout). Data availability finality is reached when the batch’s public data is included on Celestia. Anchoring finality is reached when the batch inscription is confirmed to the configured depth on the Bitcoin blockchain. Later, the zk-proof and verifier checks establish state correctness and finality. See docs/specs/blocks_batches.md, dispatcher ViaDataAvailabilityDispatcher::dispatch(), client CelestiaClient::dispatch_blob() and confirmation policy core/lib/config/src/configs/via_btc_sender.rs.
Purpose
Aggregation unit for DA publication, Bitcoin anchoring, and proof generation. The VM executes a batch as a single program (bootloader). See docs/specs/blocks_batches.md.
Composition
A contiguous sequence of L2 blocks (miniblocks)
Sealing criteria (execution-metric driven)
Independent criteria evaluated after each executed transaction: transaction slots, gas usage, pubdata size, transaction-encoding size, and a batch timeout. The most restrictive outcome wins ("unexecutable" > "exclude-and-seal" > "include-and-seal" > "no-seal"). See batch sealing in core/node/state_keeper/src/keeper.rs and the criteria implementations in core/node/state_keeper/src/seal_criteria/conditional_sealer.rs.
Finality
Data‑availability finality when the batch pubdata has a blob_id and its inclusion data is recorded on Celestia. Anchoring finality when the batch inscription is confirmed to the configured depth on Bitcoin. State‑correctness finality when the proof is published to Celestia (with blob_id and inclusion), inscribed on Bitcoin, and validated by the verifier network (see the "Finality semantics" section). See dispatcher and Celestia client in core/node/via_da_dispatcher/src/da_dispatcher.rs and core/lib/via_da_clients/src/celestia/client.rs, and Bitcoin confirmation policy in core/lib/config/src/configs/via_btc_sender.rs.
Storage
Headers are recorded in l1_batches. Commitment parts (e.g., event queue and bootloader memory commitments) are stored in commitments. See core/lib/dal/README.md.
Sealing criteria (batch level)
Batch sealing is evaluated after each executed transaction. Independent criteria are applied to the current batch state and the most recently executed transaction. The most restrictive outcome wins ("unexecutable" > "exclude-and-seal" > "include-and-seal" > "no-seal").
Transaction slots
Purpose: cap the number of transactions in a batch.
Behavior: When the configured slot limit is reached, include the last executed transaction and seal the batch ("include-and-seal").
Gas usage thresholds
Inputs: cumulative batch gas (commit/prove/execute), gas attributable to the last tx, and configured gas bounds.
Behavior:
If the last tx's gas (without required overhead) exceeds the per-transaction reject bound, it is "unexecutable".
If including the last tx would push the batch above the hard per-batch gas limit, seal the batch without this tx ("exclude-and-seal").
If, after applying the just-executed (last) transaction, the cumulative batch gas crosses the configured “close” threshold (still below the hard per-batch limit), include that last executed transaction and seal the batch (“include-and-seal”).
Otherwise, continue ("no-seal").
Pubdata size thresholds
Inputs: the cumulative batch pubdata size before applying the just-executed transaction (the "last transaction"), the pubdata attributable to that last transaction, the required bootloader/batch-tip overhead, and the configured pubdata bounds (per-transaction reject bound, batch “close” threshold, and batch hard limit).
Behavior:
If the last transaction's effective pubdata (its own pubdata plus required overhead) exceeds the per-transaction reject bound ⇒ resolution = "unexecutable".
Otherwise, compute the prospective cumulative batch pubdata after including the last transaction. If that prospective size would exceed the hard batch pubdata limit ⇒ resolution = "exclude-and-seal".
Otherwise, if the prospective cumulative size is at or above the configured "close" threshold while still not exceeding the hard limit ⇒ resolution = "include-and-seal" (keep the last transaction and seal the batch)
Otherwise, resolution = "no-seal" (keep processing)
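This per-transaction reject bound / hard limit / "close" threshold shape is shared by the gas and transaction-encoding criteria as well. A minimal sketch of the pubdata decision, with illustrative parameter names rather than actual configuration fields:

```rust
#[derive(Debug, PartialEq)]
enum SealResolution {
    NoSeal,
    IncludeAndSeal,
    ExcludeAndSeal,
    Unexecutable,
}

fn pubdata_resolution(
    batch_pubdata_before: usize, // cumulative batch pubdata before the last tx
    tx_pubdata: usize,           // pubdata attributable to the last tx
    overhead: usize,             // required bootloader/batch-tip overhead
    reject_bound: usize,         // per-transaction reject bound
    close_threshold: usize,      // per-batch "close" threshold
    hard_limit: usize,           // per-batch hard limit
) -> SealResolution {
    // Effective pubdata of the last transaction exceeds the reject bound.
    if tx_pubdata + overhead > reject_bound {
        return SealResolution::Unexecutable;
    }
    // Prospective cumulative pubdata after including the last transaction.
    let prospective = batch_pubdata_before + tx_pubdata + overhead;
    if prospective > hard_limit {
        SealResolution::ExcludeAndSeal
    } else if prospective >= close_threshold {
        SealResolution::IncludeAndSeal
    } else {
        SealResolution::NoSeal
    }
}
```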
Transaction-encoding size thresholds
Inputs: the cumulative encoded size already used within the bootloader's encoding space before applying the last transaction, the encoded size of the last transaction, and the configured geometry percentages (per-tx reject bound, "close" threshold, and total capacity).
Behavior:
If the last transaction's encoded size exceeds the per-transaction reject bound ⇒ resolution = "unexecutable".
Otherwise, compute the prospective cumulative encoded size after including the last transaction. If that prospective size would exceed the bootloader's total capacity ⇒ resolution = "exclude-and-seal" (do not keep the last transaction; seal the batch).
Otherwise, if the prospective cumulative size is at or above the configured "close" threshold while still not exceeding the capacity ⇒ resolution = "include-and-seal" (keep the last transaction and seal the batch).
Otherwise, resolution = "no-seal" (keep processing)
Decision combination
Each criterion yields a resolution of the current step. The final decision is the most restrictive among them, ordered: "unexecutable" > "exclude-and-seal" > "include-and-seal" > "no-seal"
Actions implied by the final decision (always referring to the just-executed, last transaction):
"Include and seal": keep the last transaction in the batch and seal it immediately.
"exclude and seal": roll back the last transaction from the batch (do not include it), then seal the batch immediately.
"unexecutable": reject the last transaction outright, depending on other criteria in the same step. The batch may or may not seal.
"no-seal": keep the last transaction in the batch and continue processing. the next transaction.
L2 block interval (I/O thresholds)
L2 block sealing (the miniblock interval) is driven by I/O thresholds that operate independently of the batch-level sealing criteria. These thresholds rotate L2 blocks ("miniblocks") within the same open L1 batch; they never decide inclusion, exclusion, or unexecutable status for the last transaction. They only decide when to seal the current L2 block and start the next one.
Configuration inputs:
l2_block_commit_deadline_ms: max open time for an L2 block before timeout-based sealing, enforced by TimeoutSealer in core/node/state_keeper/src/seal_criteria/mod.rs.
l2_block_max_payload_size: maximum cumulative payload-encoding size allowed per L2 block, enforced by L2BlockMaxPayloadSizeSealer in core/node/state_keeper/src/seal_criteria/mod.rs.
Timeout-based L2 block sealing:
Preconditions: the current L2 block contains ≥ 1 executed transaction.
Condition:
now - current_l2_block.open_timestamp > l2_block_commit_deadline_ms, as evaluated by TimeoutSealer::should_seal_l2_block in core/node/state_keeper/src/seal_criteria/mod.rs. Action: seal the L2 block and immediately start the next L2 block within the same L1 batch (with a fresh timestamp).
Outcome: The L1 batch remains open. Only the miniblock index is incremented.
Maximum L2 block payload encoding size:
Inputs:
current_l2_block.payload_encoding_size (accumulated). Condition:
current_l2_block.payload_encoding_size ≥ l2_block_max_payload_size, as evaluated by L2BlockMaxPayloadSizeSealer::should_seal_l2_block in core/node/state_keeper/src/seal_criteria/mod.rs. Action: seal the current L2 block and start the next L2 block within the same L1 batch.
Outcome: prevents oversized miniblocks; the batch remains open.
Batch-level unconditional sealing (independent of the miniblock sealing interval):
Preconditions: the open L1 batch contains at least one executed transaction (the system never seals an empty batch).
Condition:
now - l1_batch.open_timestamp > block_commit_deadline_ms, per TimeoutSealer::should_seal_l1_batch in core/node/state_keeper/src/seal_criteria/mod.rs. Action: seal the L1 batch immediately (regardless of execution metrics or miniblock size).
Outcome: closes the batch and rolls over to a new batch environment.
Important notes
L2 block thresholds never decide how to treat the "last transaction" (no include/exclude/unexecutable decisions).
If the current L2 block has zero executed transactions, timeout-based L2 block sealing is a no-op (empty miniblocks are never sealed).
Concrete example
Given
Given l2_block_commit_deadline_ms = 2000 ms and the current L2 block opened at T0 with ≥ 1 executed tx, at T0 + 2001 ms the block is sealed and the next L2 block begins, while the L1 batch remains open.
Given l2_block_max_payload_size = 64 KiB, once current_l2_block.payload_encoding_size reaches 64 KiB (or above), the block is sealed and a new L2 block begins, while the L1 batch remains open.
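A compact sketch of both rotation checks under these thresholds; the type and function are illustrative stand-ins for TimeoutSealer and L2BlockMaxPayloadSizeSealer in core/node/state_keeper/src/seal_criteria/mod.rs:

```rust
use std::time::Instant;

struct L2BlockState {
    executed_txs: usize,
    opened_at: Instant,
    payload_encoding_size: usize,
}

fn should_seal_l2_block(
    block: &L2BlockState,
    commit_deadline_ms: u128, // e.g. 2000 ms, as in the example above
    max_payload_size: usize,  // e.g. 64 KiB, as in the example above
) -> bool {
    // Empty miniblocks are never sealed.
    if block.executed_txs == 0 {
        return false;
    }
    block.opened_at.elapsed().as_millis() > commit_deadline_ms
        || block.payload_encoding_size >= max_payload_size
}
```

Either condition alone triggers rotation; in both cases the containing L1 batch stays open.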
Outputs and handoff to downstream components
When a batch-level seal is triggered, the state keeper performs these steps:
Finalize step (based on the most restrictive decision from the batch criteria):
"include-and-seal": Keep the just-executed last transaction in the batch and seal the batch immediately.
"exclude-and-seal": Roll back the last transaction from the batch (do not include it), then seal the batch immediately.
"unexecutable": Reject the last transaction outright, regardless of whether another criterion concurrently requires sealing; the batch seals now. Otherwise continue.
Persist:
Write the sealed L1 batch and all of its finalized L2 blocks to the database, preserving ordering and metadata. DB invariants for the presence of associated artifacts are documented in core/lib/dal/README.md.
L2 blocks are stored in the "miniblocks" table, and L1 batch headers in "l1_batches" (see core/lib/dal/README.md).
Rollover:
Initialize a fresh L1 batch environment and resume transaction intake.
Downstream components (separate services)
After the state keeper seals and persists an L1 batch, the remainder of the pipeline runs asynchronously. These services do not modify the sealed batch contents. They derive commitments, publish data to the DA layer, and anchor references on Bitcoin.
Commitment generation (from sealed batches)
Produce batch commitment artifacts and verification metadata from sealed L1 batches. Persist them to the database and mark each batch ready for DA publication and proof generation.
Preconditions
The L1 batch is sealed and persisted by the state keeper (with its ordered L2 blocks and execution artifacts).
Database invariant: for any l1_batch header, associated execution artifacts (e.g., initial_writes, protective_reads) exist. See core/lib/dal/README.md.
The commitment table is available to store commitment parts (event queue and bootloader memory commitments). See core/lib/dal/README.md.
Flow
Select pending work
Poll the database for sealed L1 batches that do not yet have commitments generated (status-based selection).
For each qualifying batch, load:
The batch header from l1_batches
The complete, ordered L2 block sequence (miniblocks) and per-tx execution artifacts, as persisted by the state keeper.
Execution-related artifacts required for commitment generation (e.g., storage diffs and read/write sets, event queue content), which satisfy the invariants described in core/lib/dal/README.md.
Compute commitment inputs
Assemble commitment inputs from persisted execution data:
Event queue content
Bootloader memory initial content
State change summary (e.g., initial_writes, protective_reads)
Any protocol-version-dependent data required by the commitment algorithm.
Implementation entry point: commitment_generator
Generate commitments and metadata:
Compute cryptographic commitments for the L1 batch using the configured commitment algorithm.
Produce auxiliary verification metadata required downstream (for DA publication, proof generation and later verification).
Persist results and mark readiness:
Persist commitment payloads:
Write commitment parts (including event queue and bootloader memory commitments) into the commitment table per core/lib/dal/README.md.
Update batch state:
Mark the batch as "ready for DA publication" so the DA dispatcher can discover it
Inputs
Sealed L1 batch number and metadata
Ordered L2 blocks (miniblocks) with per-tx execution outputs
Execution artifacts (e.g., initial_writes, protective_reads)
Outputs
Commitment payloads and verification metadata persisted in DB (including entries in the commitment table)
Batch state updated to "ready for DA publication"
Stored references for the proof pipeline to start generating proofs
Implementation: commitment_generator, high-level flow at docs/via_guides/data-flow.md.
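A high-level sketch of the poll → compute → persist → mark-ready cycle, written against a hypothetical storage trait rather than the real DAL types:

```rust
struct BatchHeader {
    number: u64,
}

// Placeholder for event queue / bootloader memory commitments and metadata.
struct CommitmentArtifacts {}

trait Storage {
    fn next_sealed_batch_without_commitment(&self) -> Option<BatchHeader>;
    fn load_execution_artifacts(&self, batch: u64) -> Vec<u8>;
    fn persist_commitment(&self, batch: u64, artifacts: CommitmentArtifacts);
    fn mark_ready_for_da_publication(&self, batch: u64);
}

// Placeholder for the configured commitment algorithm.
fn compute_commitment(_inputs: &[u8]) -> CommitmentArtifacts {
    CommitmentArtifacts {}
}

fn run_commitment_generator(db: &dyn Storage) {
    while let Some(header) = db.next_sealed_batch_without_commitment() {
        let inputs = db.load_execution_artifacts(header.number);
        let artifacts = compute_commitment(&inputs);
        db.persist_commitment(header.number, artifacts);
        // The DA dispatcher discovers the batch through this status.
        db.mark_ready_for_da_publication(header.number);
    }
}
```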
Data Availability and Settlement Integration
Data Availability (Celestia)
The Data Availability stage publishes sealed L1 batch data (and later, the batch proofs) to Celestia, allowing anyone to reconstruct the batch content and verify its availability independently of the sequencer. This stage is handled by two independent components:
The DA dispatcher: a service loop that selects sealed batches marked "ready for DA publication", submits their pubdata/proofs to Celestia, persists the returned blob identifier (blob_id), and polls for inclusion. See ViaDataAvailabilityDispatcher.
The Celestia client: a library client that connects to the Celestia node, sets the fixed VIA namespace, and performs the blob submission and inclusion queries on behalf of the dispatcher. See CelestiaClient::new().
What is published to Celestia and how is it used?
For each L1 batch, the dispatcher submits the batch's pubdata bytes to Celestia and stores the returned blob_id in the database. Later, when a proof is available, it submits the proof bytes and stores the respective proof blob_id. Both blob_ids are then used:
By the BTC Sender to embed DA references inside the inscription message (da_identifier + blob_id) for on-chain anchoring. See
ViaAggregator::construct_inscription_message().
The dispatcher continuously polls Celestia for inclusion data tied to each blob_id and persists that inclusion evidence. This provides an auditable link from an L1 batch number → blob_id → inclusion data that downstream consumers can use.
Execution and reliability
The dispatcher runs continuously and is idempotent with respect to batch selection: batches that have already been dispatched or already have blob_id/inclusion records are skipped. Transient network failures during submission are handled via bounded retries (see core/node/via_da_dispatcher/src/da_dispatcher.rs).
The sequencer/state keeper never calls Celestia directly. Its responsibility ends at sealing and persisting the batch; the dispatcher and client own publication and inclusion tracking.
Preconditions
The L1 batch is sealed and persisted by the state keeper, and the commitment generator has marked it "ready for DA publication".
The dispatcher and client are running as independent services. The sequencer does not call Celestia directly.
The DA dispatcher (selection, dispatch, and inclusion tracking)
Batch selection and dispatch:
Query the database for sealed L1 batches that are marked "ready for DA publication" and do not yet have a recorded blob_id.
For each selected batch, extract its pubdata bytes from storage.
Dispatch the pubdata to Celestia by calling ViaDataAvailabilityDispatcher::dispatch(), which invokes the DA client with (l1_batch_number, pubdata). Use retry logic with the configured maximum attempts to tolerate transient network failures; see the retry implementation in core/node/via_da_dispatcher/src/da_dispatcher.rs.
Record results:
On successful dispatch, persist the returned blob identifier (blob_id) and associated timestamps in the database.
Update the batch's DA status to indicate that its pubdata has been published and is awaiting inclusion confirmation.
Inclusion polling:
Periodically run ViaDataAvailabilityDispatcher::poll_for_inclusion(). For each batch with a recorded blob_id but no inclusion data, query the DA client for inclusion information using that blob_id.
Persist the inclusion status and data for the blob so that downstream components can rely on availability guarantees.
Proof dispatch (subsequent lifecycle):
When proofs become available for previously published batches, dispatch proof blobs using ViaDataAvailabilityDispatcher::dispatch_proofs(). Follow the same dispatch → record → inclusion-poll cycle for proof blobs as for pubdata blobs.
Celestia client (blob operations)
Initialization:
On startup, CelestiaClient::new() connects to the configured Celestia node and performs a connectivity check. It sets the fixed "VIA" namespace identifier for all blob operations; the namespace bytes are defined in CelestiaClient::new().
The configured blob_size limit is enforced to bound published payloads.
Blob submission:
When invoked by the dispatcher via CelestiaClient::dispatch_blob(), construct a Celestia Blob from the input bytes and submit it to the network (blob construction and send are done in core/lib/via_da_clients/src/celestia/client.rs). On successful submission, return a dispatch response that includes the blob_id used for subsequent tracking and inclusion queries.
Inclusion data retrieval:
During dispatcher polling, fetch inclusion data for a given blob_id and return it to the dispatcher.
Inclusion data is then persisted by the dispatcher for downstream consumers, such as the BTC inscription path and the verifier network.
Inputs and outputs
Inputs to publish: l1_batch_number and pubdata bytes (and later, proof bytes).
Outputs from publish: blob_id per dispatch, plus inclusion data obtained later via polling. Both are stored in the database to support Bitcoin inscription and verification workflows.
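A sketch of the bounded-retry dispatch and inclusion query described above, with a hypothetical trait standing in for the real DA client interface:

```rust
trait DaClient {
    // Submits pubdata (or proof bytes) and returns the blob_id on success.
    fn dispatch_blob(&self, l1_batch_number: u64, data: &[u8]) -> Result<String, String>;
    // Returns inclusion data for a blob once Celestia has included it.
    fn get_inclusion_data(&self, blob_id: &str) -> Result<Option<Vec<u8>>, String>;
}

fn dispatch_with_retries(
    client: &dyn DaClient,
    l1_batch_number: u64,
    data: &[u8],
    max_attempts: u32,
) -> Result<String, String> {
    let mut last_err = String::from("no attempts made");
    for _ in 0..max_attempts {
        match client.dispatch_blob(l1_batch_number, data) {
            // Caller persists the blob_id and later polls for inclusion
            // via get_inclusion_data(&blob_id).
            Ok(blob_id) => return Ok(blob_id),
            // Transient failure: try again up to the configured maximum.
            Err(e) => last_err = e,
        }
    }
    Err(last_err)
}
```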
Bitcoin Inscription Manager and Aggregator: commit/reveal anchoring of batch commitment and proof references
Anchor each sealed batch's commitment and its subsequent proof reference on Bitcoin, embedding the Celestia blob identifiers that make the underlying data publicly retrievable. This creates a verifiable link between the DA content on Celestia and the settlement anchoring on Bitcoin.
Components:
ViaAggregator: decides what to inscribe and when (batch commitment vs proof reference) based on time/quantity criteria and readiness checks; see the selection logic in core/node/via_btc_sender/src/aggregator.rs.
ViaBtcInscriptionManager: builds, signs, broadcasts, and tracks the commit/reveal transactions that carry the inscription payload. See core/node/via_btc_sender/src/btc_inscription_manager.rs.
Message types and fields:
L1BatchDAReference (for batches) and ProofDAReference (for proofs), constructed by ViaAggregator::construct_inscription_message(), embedding:
l1_batch_hash and l1_batch_index,
da_identifier (string, default "celestia", configurable via ViaBtcSenderConfig::da_identifier()),
Celestia blob_id values for batch pubdata and (later) proof data
prev_l1_batch_hash (hash of the previous batch's commitment, used for chaining batches).
Schema and Taproot witness encoding are specified in the DEV guide at core/lib/via_btc_client/DEV.md.
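The embedded fields, sketched as plain structs. Field names follow the prose above; the authoritative schema and Taproot witness encoding are in core/lib/via_btc_client/DEV.md:

```rust
// Sketch only: the real message types live in the via_btc_client crate.
struct L1BatchDAReference {
    l1_batch_hash: [u8; 32],
    l1_batch_index: u64,
    da_identifier: String,        // default "celestia" (ViaBtcSenderConfig::da_identifier())
    blob_id: String,              // Celestia blob holding the batch pubdata
    prev_l1_batch_hash: [u8; 32], // chains batches by commitment hash
}

struct ProofDAReference {
    l1_batch_hash: [u8; 32], // the batch the proof refers to
    da_identifier: String,
    blob_id: String, // Celestia blob holding the proof data
}
```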
Execution model:
The aggregator selects inscription work items → constructs typed messages with the fields above → the inscription manager creates commit and reveal transactions, signs and broadcasts them → the database is updated with the inscription ID and confirmation status according to the policy configured in ViaBtcSenderConfig.
Preconditions:
The L1 batch is sealed, and its pubdata has been published to Celestia. A blob_id is persisted for the batch.
The BTC inscription services run independently of the sequencer. The state keeper (sequencer) does not talk to the Bitcoin blockchain at all: it does not open RPC/P2P connections, does not construct or sign Bitcoin transactions, and does not broadcast or monitor Bitcoin transactions or blocks.
The state keeper's responsibilities end at sealing and persisting the L1 batches. See the batch loop in process_l1_batch(). No Bitcoin code paths exist in that module or its I/O layer MempoolIO.
The state keeper sequences and seals. ViaBTCSender posts to Bitcoin. Verifier components read from Bitcoin. No Bitcoin RPC/transactions are issued by the state keeper.
Bitcoin inscription aggregator (commit/reveal scheduler and inscription message constructor) ViaAggregator
Select what to inscribe
Evaluate the configured aggregation criteria to decide when to create inscriptions for:
Batch commitment on-chain.
Proof reference on-chain.
Selection logic and criteria are implemented in ViaAggregator::new(), with the readiness path at ViaAggregator::get_next_ready_operation(). Time-based thresholds use ViaBtcSenderConfig::block_time_to_commit() and ViaBtcSenderConfig::block_time_to_proof().
Construct the typed inscription message
Build a typed payload for the selected operation using ViaAggregator::construct_inscription_message().
Embed all required fields:
l1_batch_hash and l1_batch_index
da_identifier string for DA backend. The default is "celestia" and it is configurable in ViaBtcSenderConfig::da_identifier().
blob_id values that reference Celestia blobs for batch pubdata and, later, proof data.
prev_l1_batch_hash to chain batches by hash
Message schemas for L1BatchDAReference and ProofDAReference are specified in core/lib/via_btc_client/DEV.md. The Via protocol enforces signature checks for the sequencer's key inside the inscription script and encodes fields in Taproot witness data.
Inscription manager (transaction lifecycle and tracking)
Create and broadcast the transactions
Build the commit and reveal transactions for the inscription protocol and sign them with the sequencer's key. The manager runs the loop in ViaBtcInscriptionManager::run().
Submit transactions to the Bitcoin network and record their identifiers.
Update database status and confirmations
Track inflight inscriptions and update their status using the manager's iteration and status update paths, for example, the inflight set retrieval at ViaBtcInscriptionManager::update_inscription_status().
Mark inscriptions as confirmed once the configured number of confirmations is reached, per ViaBtcSenderConfig::block_confirmations().
Detect and handle stuck inscriptions according to ViaBtcSenderConfig::stuck_inscription_block_number(). Retries are performed by the manager loop.
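A sketch of the confirmation-depth check. Field names are illustrative, and counting the inclusion block as the first confirmation is an assumed convention; the real policy comes from ViaBtcSenderConfig::block_confirmations():

```rust
struct InscriptionStatus {
    // Bitcoin block height at which the reveal transaction was included, if any.
    included_at_height: Option<u64>,
}

fn is_confirmed(status: &InscriptionStatus, tip_height: u64, required_confirmations: u64) -> bool {
    match status.included_at_height {
        // Assumption: the inclusion block itself counts as one confirmation.
        Some(height) => tip_height.saturating_sub(height) + 1 >= required_confirmations,
        None => false, // not yet included; cannot be confirmed
    }
}
```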
Inputs and outputs:
Inputs:
l1_batch_hash, l1_batch_index
da_identifier string from configuration
blob_id for the batch pubdata and later, the proof blob
prev_l1_batch_hash for chaining
Outputs:
Broadcast commit and reveal transactions on Bitcoin
Persisted inscription identifier and per-inscription confirmation status in the database
Protocol note
The inscription schema for L1BatchDAReference and ProofDAReference messages is defined in the via_btc client DEV guide.
Node composition (service boundary)
Via runs as a multi-service node. Each service is responsible for a specific stage of the pipeline and communicates through the database and shared queues rather than through direct in-process calls. This separation ensures that sealing, commitment generation, data-availability publication, and Bitcoin anchoring can progress asynchronously and be retried without blocking the sequencer.
The default component set
Started by via_server with the default component list defined in Cli::components. The default includes:
api, btc, tree, tree_api, state_keeper, housekeeper, proof_data_handler, commitment_generator, celestia, da_dispatcher, vm_runner_protective_reads, vm_runner_bwip
Service responsibilities and boundaries
Sequencer / State Keeper
Purpose: execute transactions, rotate L2 blocks by I/O thresholds, seal L1 batches by execution-metric criteria, and persist sealed outputs.
Never posts to Celestia or Bitcoin.
Core loop: process_l1_batch()
Commitment Generator
Purpose: derive batch commitment and verification metadata from sealed batches. Mark batches ready for DA publication and proving.
Entry point: commitment_generator
Data Availability: DA Dispatcher + Celestia Client
Purpose: publish batch pubdata and later proof blobs, record blob_id, poll for inclusion, and persist inclusion data.
Dispatcher run loops: ViaDataAvailabilityDispatcher::run()
Blob submission: CelestiaClient::dispatch_blob(); client init and namespace: CelestiaClient::new()
Bitcoin Anchor: BTC Sender + BTC Inscription Manager
Purpose: decide when to inscribe batch commitments and proof references; construct typed inscription messages; build, sign, and broadcast commit and reveal transactions; track confirmations and update DB status.
Aggregation criteria and construction: ViaAggregator::new(), ViaAggregator::construct_inscription_message()
Inscription lifecycle: ViaBtcInscriptionManager::run()
Proof Data Handler and Prover-facing services
Purpose: expose sealed batch artifacts and commitments to the prover pipeline and ingest proofs for later DA publication.
Service listing: core/bin/via_server/src/main.rs
VM Runner services (vm_runner_protective_reads, vm_runner_bwip)
Purpose: produce per-batch execution derivatives used by downstream components (for example, protective reads and basic witness inputs).
Data structure example (batch input and L2 blocks container): BatchExecuteData
Housekeeper
Purpose: background maintenance tasks (cleanups, metrics, etc.) that keep the node healthy and the DB consistent with service expectations.
Service listings: core/bin/via_server/src/main.rs
Tree / Tree API
Purpose: Merkle tree calculations and the API exposed to dependent subsystems.
Service listings: core/bin/via_server/src/main.rs
Lifecycle ordering
State Keeper executes transactions, rotates L2 blocks based on I/O thresholds, and seals an L1 batch when a sealing criterion triggers. It persists the sealed batch and its L2 blocks with final ordering and metadata. See process_l1_batch().
Commitment Generator polls for newly sealed batches without commitments, computes commitment payloads and metadata, persists them, and marks them "ready for DA publication". See commitment_generator.
DA Dispatcher selects batches marked ready-for-DA, dispatches pubdata via the Celestia client, stores blob_id, and polls for inclusion. Proof blobs are dispatched later in the lifecycle. See ViaDataAvailabilityDispatcher::run().
BTC Sender decides when to inscribe batch commitments and proof references, constructs typed inscription messages with the batch identity and DA references, and broadcasts commit/reveal transactions, updating DB status on confirmations. See ViaAggregator::construct_inscription_message() and ViaBtcInscriptionManager::run().
Prover-facing services and the verifier network consume the published data and inscriptions. Proofs are verified and may be attested, driving state-correctness finality.
Operational model
Concurrency: Services run in parallel and operate on persisted tasks and statuses. Failures and retries in one stage do not block the sequencer.
Backpressure: DA and BTC services use status fields (for example, blob_id presence, confirmation depth) to continue safely without reordering or duplication.
Configuration: DA and BTC behavior, such as da_identifier and confirmation depth, is controlled by configuration readers such as ViaBtcSenderConfig::da_identifier() and ViaBtcSenderConfig::block_confirmations().
Finality
Finality defines, for each lifecycle stage of an L2 block or L1 batch, exactly which properties are guaranteed, who or what could still change the state, and what kinds of rollbacks remain possible. It is the contract between the system and its users about immutability and correctness at that stage.
Scope and units
L2 blocks (miniblocks): intra-batch sealing units on L2. Headers are recorded in "miniblocks".
L1 batch: aggregation of consecutive L2 blocks. It is the unit for DA publication, Bitcoin anchoring, and proof generation. See docs/specs/blocks_batches.md.
L2 block (miniblock) soft finality
What it means
A sealed L2 block is stable for UX/APIs, but may be superseded by sequencer reorgs until the containing L1 batch is sealed and anchored.
How it is produced
The state keeper seals the current L2 block when time/payload thresholds trigger. That is either:
Timeout-based block sealing: TimeoutSealer::should_seal_l2_block()
Payload-encoding size cap: L2BlockMaxPayloadSizeSealer::should_seal_l2_block()
Observability
The L2 block header is written to the "miniblocks" table, and its execution data is present in the database as required by the invariants in core/lib/dal/README.md.
Data‑availability finality (Celestia inclusion)
What it means
The batch’s data is durably available once Celestia includes the batch pubdata, and inclusion data is recorded.
How it is achieved
DA publication and inclusion via the dispatcher; inclusion info persisted (blob_id and inclusion data).
Anchoring (Bitcoin confirmations)
What it means
The L1 anchoring is considered durable once the batch inscription reaches the configured confirmation depth on the Bitcoin blockchain.
How it is achieved
The inscription manager tracks confirmations on Bitcoin and records the status.
State-correctness finality
What it means
The batch's state transition is cryptographically proven and publicly verifiable. Anyone can reconstruct the expected post-state from the published inputs and verify the batch’s zk-proof against the verification key. Correctness depends only on the proof and data, and is independent of the sequencer's trust.
How it is achieved
Proof generation and publication
The prover pipeline generates a zk-proof for the batch. The dispatcher publishes the proof as a Celestia blob using ViaDataAvailabilityDispatcher::dispatch_proofs() and records blob_id and inclusion data. The BTC sender inscribes a proof reference using the same commit and reveal pattern as for the batch.
Verification and attestation
The verifier network detects proof inscriptions on Bitcoin, fetches the public data and proof from blobs on Celestia, and verifies the proof. See "Verified by the verifier network".
Observability
Proof blob_id and inclusion data are persisted by the dispatcher. Proof, inscription, and confirmation status are persisted by the BTC sender services. Verifier attestation follows verifier documentation.
Reorg and retry semantics
L2 reorgs
Possible until the L1 batch is sealed. After sealing, batch contents are fixed for the downstream components. See the batch loop in process_l1_batch().
Bitcoin reorgs
Inscriptions are treated as confirmed only after the configured confirmation depth is reached. The manager detects stuck or reorged inscriptions and retries. See ViaBtcInscriptionManager::run().
DA retries and inclusion
The dispatcher retries blob submission on transient errors and continuously polls for inclusion. See core/node/via_da_dispatcher/src/da_dispatcher.rs.
Finality summary
Soft finality on L2
Trigger: L2 blocks sealed within an open batch
Guarantee: Subject to sequencer reorgs until batch sealing and anchoring.
Data‑availability finality (Celestia inclusion)
Trigger: Batch pubdata included on Celestia and inclusion recorded.
Guarantee: Batch data is durably available; ordering commitments cannot be altered solely by L2.
Anchoring (Bitcoin confirmations)
Trigger: Batch inscription reaches the configured confirmation depth on the Bitcoin network.
Guarantee: Anchoring is durable under the assumed Bitcoin finality policy.
State‑correctness finality
Trigger: zk‑proof published and included on Celestia; proof inscription confirmed; validation by the verifier network.
Guarantee: Correctness of the state transition is independently validated and publicly attestable.
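The four stages form an ordered progression that a batch only ever moves up; the enum below is an illustrative summary, not a type from the codebase:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum FinalityStage {
    SoftL2,           // L2 blocks sealed within an open batch
    DataAvailability, // batch pubdata included on Celestia, inclusion recorded
    Anchored,         // inscription confirmed to the configured Bitcoin depth
    StateCorrect,     // zk-proof published, inscribed, and verified
}
```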
Build variants and downstream components
This document targets the via_server build (Bitcoin settlement + Celestia DA). The via_server entrypoint and default component set are defined in core/bin/via_server/src/main.rs.
Default components include: api, btc, tree, tree_api, state_keeper, housekeeper, proof_data_handler, commitment_generator, celestia, da_dispatcher, vm_runner_protective_reads, vm_runner_bwip
Downstream components and flow:
Commitment generation: commitment_generator
DA publication (Celestia): ViaDataAvailabilityDispatcher::dispatch() via CelestiaClient::dispatch_blob()
Bitcoin anchoring: ViaAggregator::construct_inscription_message() and ViaBtcInscriptionManager::run()