01

SILICA PROTOCOL

A parallel data plane for high-throughput message publication. Payloads travel through lane committees while only compact cryptographic commitments are included in blocks.
02

HOW IT WORKS

Obsidian separates execution from data transport. EVM transactions are computationally intensive but relatively small. The Silica Protocol uses spare network capacity to carry larger, non-executable data payloads that need minimal CPU processing.

Message payloads are propagated as data sidecars. These are gossiped peer-to-peer, erasure-coded, and verified by lane committees. They are part of the blockchain protocol but intentionally kept outside the block body so they do not slow down block propagation or voting.

Where is the data stored? Validators hold erasure-coded chunks during the retention window (~18 days). After that, archive nodes preserve the full history. Anyone can run an archive node and earn from the Archive Pool (50% of PM fees) for serving historical data.

Key Benefits

  • Transactions remain fast and predictable
  • Data throughput scales independently
  • Validators verify availability without downloading all payloads
  • ~84 TB annual capacity at full utilization
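The ~84 TB figure follows directly from the slot parameters given in the specs (12 s slots, 32 MB per slot). A quick sanity check, assuming 32 MB means 32 × 10^6 bytes and decimal terabytes:

```python
# Back-of-envelope check of the ~84 TB/year capacity figure.
# Assumes decimal units (1 TB = 10^12 bytes), which matches the quoted number.
SLOT_SECONDS = 12
BYTES_PER_SLOT = 32 * 10**6          # 16 MB PM + 16 MB SM

slots_per_year = 365 * 24 * 3600 // SLOT_SECONDS   # 2,628,000 slots
tb_per_year = BYTES_PER_SLOT * slots_per_year / 10**12
print(round(tb_per_year, 1))         # ≈ 84.1 TB
```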
[Diagram: Block N]
HEADER: parent_hash, state_root, timestamp, lane_headers_root
BODY: EVM transactions; lane headers L0-L3 (commitments only)
→ references SIDECARS (off-block): Lanes 0-3, actual payloads stored by lane committees
03

PARALLEL LANES

The network is divided into parallel lanes. Each lane has a rotating committee of validators assigned to it. Messages are deterministically routed based on sender address:

lane = Hash("LANE_V2", chain_id, sender) mod lane_count

So all messages from a single sender go to the same lane. Nodes can immediately determine which lane committee should receive a message without knowing the current slot or RANDAO state.
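A minimal sketch of this routing rule, assuming SHA-256 as the hash and illustrative byte encodings for the domain tag, chain ID, and sender address (the actual serialization is defined by the protocol, not shown here):

```python
import hashlib

def assign_lane(chain_id: int, sender: str, lane_count: int) -> int:
    """lane = Hash("LANE_V2", chain_id, sender) mod lane_count."""
    h = hashlib.sha256()
    h.update(b"LANE_V2")                       # domain-separation tag
    h.update(chain_id.to_bytes(8, "big"))      # illustrative encoding
    h.update(bytes.fromhex(sender.removeprefix("0x")))
    return int.from_bytes(h.digest(), "big") % lane_count

# Every message from the same sender lands in the same lane:
lane = assign_lane(1, "0x" + "ab" * 20, lane_count=4)
assert lane == assign_lane(1, "0x" + "ab" * 20, lane_count=4)
assert 0 <= lane < 4
```

Because the inputs are all static (no slot number, no RANDAO), any node can evaluate this locally and forward the message to the right committee.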

[Diagram: Parallel lane processing. Lanes 0-2 each have a dedicated committee (Committee 0-2) that produces an availability certificate (AC0-AC2). AC = Availability Certificate (committee quorum proof)]

Message Lifecycle

01 Submit: User signs and submits message
02 Route: Protocol routes to assigned lane
03 Batch: Lane leader batches and erasure codes
04 Distribute: Chunks sent to committee members
05 Attest: Committee signs availability votes
06 Include: Proposer includes certificate in block
04

MESSAGE TYPES

PM

Priority Messages

BID > MinPMBid

Priority Messages include a fee bid and are sorted by bid amount (highest first) within each lane batch. A signed debit authorization means the fee is deducted at inclusion time.

Ordering
Sorted by bid (highest first)
Use cases: Oracle updates, trading signals, time-sensitive alerts
SM

Standard Messages

BID == 0 + VDF PROOF

No bid required. A VDF (Verifiable Delay Function) proof provides spam resistance instead. VDFs are sequential computations that cannot be parallelized, creating a natural rate limit.

Ordering
FIFO within lane
Use cases: Social posts, messaging, background sync, open participation
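The two ordering rules can be sketched side by side. This is a simplification under stated assumptions: the message fields are placeholders rather than the wire format, and whether PMs and SMs share a single batch (versus the separate 16 MB PM / 16 MB SM capacities) is glossed over here.

```python
from dataclasses import dataclass, field
from itertools import count

_arrival = count()  # monotonically increasing arrival index, for FIFO order

@dataclass
class Msg:
    sender: str
    bid: int = 0                       # > 0 for a PM, 0 for an SM
    seq: int = field(default_factory=lambda: next(_arrival))

def order_batch(msgs):
    """PMs first, sorted by bid (highest first); then SMs in FIFO order."""
    pms = sorted((m for m in msgs if m.bid > 0), key=lambda m: -m.bid)
    sms = sorted((m for m in msgs if m.bid == 0), key=lambda m: m.seq)
    return pms + sms

batch = [Msg("a", bid=5), Msg("b"), Msg("c", bid=9), Msg("d")]
assert [m.sender for m in order_batch(batch)] == ["c", "a", "b", "d"]
```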

Message Structure

Sender Address
Account submitting the message
Payload Bytes
Actual data content (opaque)
Target Block
Block targeting for inclusion
Nonce
Sequence number for deduplication
Bid or VDF Proof
Fee bid (PM) or compute proof (SM)
Signature
ECDSA proof of sender authorization
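The structure above, expressed as a hypothetical container type; field names and types are illustrative, not the protocol's serialization:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SilicaMessage:
    sender: bytes            # account submitting the message (20-byte address)
    payload: bytes           # actual data content, opaque to the protocol
    target_block: int        # block targeted for inclusion
    nonce: int               # sequence number for deduplication
    bid: int = 0             # fee bid for a Priority Message, else 0
    vdf_proof: Optional[bytes] = None  # compute proof for a Standard Message
    signature: bytes = b""   # ECDSA proof of sender authorization

    def is_priority(self) -> bool:
        return self.bid > 0

    def is_standard(self) -> bool:
        return self.bid == 0 and self.vdf_proof is not None
```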
05

AVAILABILITY MODEL

Most DA solutions use Data Availability Sampling (DAS), where light clients randomly sample chunks to probabilistically verify availability. Obsidian takes a different approach: committee attestation is the acceptance criterion.

Aspect | DAS-Primary (Ethereum) | Committee-Primary (Obsidian)
Block acceptance | Requires successful sampling | Requires committee QC (2/3 threshold)
Primary verification | Light clients sample randomly | Committee members prove chunk possession
Parallelism | Single blob space per block | Multiple lanes with dedicated committees
Acceptance | Probabilistic | Deterministic

Availability Certificate

A valid certificate proves that a supermajority of the lane committee had the data when they signed. With erasure coding, this means the data can always be reconstructed.

Lane Reference
slot, lane, sequence
Data Commitment
Merkle root of chunks
Aggregated Signatures
Committee attestations
Signer Bitmap
Which validators attested
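Verifying a certificate then reduces to counting attesters in the signer bitmap against the 2/3 quorum. Signature aggregation and chunk-possession proofs are elided; the names below are illustrative.

```python
import math

def has_quorum(signer_bitmap: int, committee_size: int) -> bool:
    """True if at least ceil(2/3 * committee_size) committee members signed."""
    signers = bin(signer_bitmap & ((1 << committee_size) - 1)).count("1")
    return signers >= math.ceil(2 * committee_size / 3)

# 128-member committee: the threshold is ceil(256/3) = 86 signers.
assert has_quorum((1 << 86) - 1, 128)      # 86 of 128 signed: quorum
assert not has_quorum((1 << 85) - 1, 128)  # 85 of 128: just short
```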
06

STORAGE & SHARDING

Data doesn't live in blocks. It lives in the Silica data plane. Blocks only contain compact cryptographic commitments (Merkle roots) that reference the actual payloads stored elsewhere.

This separation allows for sharded storage: archive nodes can specialize by storing only specific epoch ranges or lane subsets. A node might store epochs 1-1000, another epochs 1001-2000. Together they preserve the full history without any single node needing all ~84 TB/year.

Storage Tiers

Validators (retention window)
Erasure-coded chunks for availability verification

Full Archive (all history)
Complete data for all epochs and lanes

Sharded Archive (epoch ranges)
Subset of history, lower hardware requirements

Erasure Coding

Each message batch is split into data chunks and expanded with parity chunks using Reed-Solomon encoding. The original data can be reconstructed from any 50% of chunks.

[Diagram: Original batch → encoded chunks. Data chunks D0-D3 (4) plus parity chunks P0-P3 (4); 50% recovery threshold, 2x redundancy factor]
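The "any 50% reconstructs everything" property can be demonstrated with a toy Reed-Solomon-style code: k data values define a degree-(k-1) polynomial, evaluating it at 2k points yields k data plus k parity chunks, and any k surviving chunks pin the polynomial down again. This is a pedagogical sketch over a prime field, not the protocol's actual encoding (real systems typically work over GF(2^8) on byte shards).

```python
P = 2**31 - 1  # a prime modulus for the toy field

def _eval_lagrange(points, x):
    """Evaluate the polynomial interpolating `points` at x (mod P)."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

def encode(data, n):
    """Systematic encoding: chunk i is the polynomial evaluated at i."""
    return [(i, _eval_lagrange(list(enumerate(data)), i)) for i in range(n)]

def reconstruct(chunks, k):
    """Recover the k original data values from any k (index, value) pairs."""
    assert len(chunks) >= k, "below the 50% recovery threshold"
    pts = chunks[:k]
    return [_eval_lagrange(pts, x) for x in range(k)]

batch = [101, 202, 303, 404]                 # k = 4 data values
chunks = encode(batch, 8)                    # 4 data + 4 parity chunks (2x)
survivors = [chunks[1], chunks[3], chunks[5], chunks[6]]  # any 4 of 8
assert reconstruct(survivors, k=4) == batch
```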

Sharded Archive Model

At ~84 TB/year, traditional "store everything" archives become impractical. Obsidian solves this with epoch-range sharding: instead of every archive storing all history, nodes store specific epoch ranges.

Shard Group 1 | Epochs 0-1000 | Node A, Node B
Shard Group 2 | Epochs 1001-2000 | Node C, Node D
Shard Group 3 | Epochs 2001-3000 | Node E, Node F

  • Accessible participation: Run an archive with consumer hardware by storing a subset of history
  • Horizontal scaling: More epoch ranges served by adding shard groups
  • Redundancy: Multiple nodes per shard group keep data available
  • Proportional rewards: Nodes earn from the Archive Pool based on coverage and uptime
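A hypothetical lookup for epoch-range sharding; the 1000-epoch range size and 0-based grouping mirror the example groups above but are illustrative assumptions, not protocol constants.

```python
EPOCHS_PER_GROUP = 1000  # illustrative range size, not a consensus parameter

def shard_group(epoch: int) -> int:
    """Map an epoch to the archive shard group that stores it."""
    return epoch // EPOCHS_PER_GROUP

def serves(node_groups: set, epoch: int) -> bool:
    """Can a node holding `node_groups` answer a query for `epoch`?"""
    return shard_group(epoch) in node_groups

node_a = {0, 1}              # e.g. a node covering roughly epochs 0-1999
assert serves(node_a, 1500)
assert not serves(node_a, 2500)
```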
07

FEE DISTRIBUTION

Priority Message fees are distributed among protocol participants. Rather than prepaying fees, Priority Messages include a signed debit authorization. The proposer deducts the bid from the sender's balance at inclusion time.

Distribution Split

Block Proposer | 30% | Including lane headers
Lane Leader | 20% | Batching and encoding
Archive Pool | 50% | Long-term storage
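The 30/20/50 split in integer arithmetic. Routing the rounding remainder to the Archive Pool is an illustrative choice, not something the source specifies.

```python
# Basis-point shares from the Distribution Split table above.
SPLIT_BPS = {"block_proposer": 3000, "lane_leader": 2000}

def split_fee(bid: int) -> dict:
    """Divide a Priority Message bid; remainder goes to the archive pool."""
    shares = {role: bid * bps // 10_000 for role, bps in SPLIT_BPS.items()}
    shares["archive_pool"] = bid - sum(shares.values())  # 50% + remainder
    return shares

s = split_fee(1_000_003)
assert sum(s.values()) == 1_000_003    # no fee is lost to rounding
assert s["block_proposer"] == 300_000  # 30%
```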

Node Roles

Block Proposer
Creates canonical blocks containing transactions, lane headers, and availability certificates.
Validator
Participates in consensus and serves on lane committees. Assignment rotates each epoch.
Lane Leader
Collects messages, forms batches, distributes chunks, aggregates votes into quorum certificate.
Archive Node
Stores data beyond the retention window. Serves historical queries. Can be sharded by epoch range.
08

TECHNICAL SPECS

12s | Slot Time | Block production interval
32 MB | Per Slot | 16 MB PM + 16 MB SM
10K+ | Msg/sec | Throughput capacity
~84 TB | Per Year | Maximum annual storage
64 OBS | Min Stake | Validator requirement
~12.8 min | Finality | Casper FFG finalization
2/3 | Quorum | Committee threshold
Full EVM | Compatibility | Standard Ethereum tooling
Read More

Dive into the Whitepaper

Full technical specification of the Silica Protocol, consensus modifications, and economic model.