Sharding & Danksharding in Ethereum 2.0: How Proto-Danksharding (EIP-4844) Changes Ethereum Scaling

Ethereum’s long-term scaling roadmap has two pillars: execution off-chain (rollups) and data availability on-chain (blobs and eventually full Danksharding).
This masterclass explains classic sharding, why Ethereum pivoted to rollup-centric scaling, how Proto-Danksharding (EIP-4844) introduces a new “blob” resource that slashes rollup costs, and where Danksharding with data availability sampling (DAS) takes the network next. We’ll cover the design goals, economics, MEV realities, security assumptions, and practical impacts for L2 builders and everyday users.

Introduction: The Two Bottlenecks of Blockchain Throughput

To scale a public blockchain safely you must answer two questions. First: Can we verify state transitions efficiently? Second: Can everyone fetch the data necessary to reconstruct the state? The first is an execution problem; the second is a data availability (DA) problem. Ethereum’s roadmap tackles execution with rollups (do the heavy lifting off-chain and provide succinct proofs or fraud proofs). For DA, Ethereum is evolving the base layer to publish rollup data efficiently using a separate, cheaper resource today: blobs via Proto-Danksharding (EIP-4844); tomorrow: full Danksharding with massive blob bandwidth verified by data availability sampling (DAS).

Classic sharding proposed splitting the chain into many execution shards. But with the rise of rollups, Ethereum pivoted: keep consensus and settlement unified on one chain, outsource most execution to L2s, and scale DA so rollups can post their batches cheaply. This reduces complexity, avoids cross-shard synchronous headaches, centralizes security at L1, and lets L2s compete on execution VMs and UX while inheriting Ethereum’s settlement finality.

Figure: Rollups (off-chain execution) → Blobs (cheap on-chain data) → Settlement (Ethereum L1).
Rollup-centric scaling: L2s compute; L1 guarantees data availability and finality.

Classic Sharding vs Rollup-Centric Roadmap

Classic sharding (pre-rollup zeitgeist) envisioned tens or hundreds of execution shards, each processing transactions in parallel. Cross-shard communication required complex protocols to ensure atomicity and liveness. This approach promised parallel throughput but introduced significant engineering and security overheads, including validator load, state bloat, and synchronization risk.

Rollup-centric scaling flips the problem: keep L1 simple (finality, settlement, DA) while L2 rollups handle execution heterogeneity (EVM, zkVMs, app-specific runtimes). Instead of executing more per block on L1, we publish more data per block so L2s can verify and update their states trust-minimally. The question becomes: how can Ethereum supply vast, cheap, verifiable data bandwidth?

That answer unfolds in two stages: Proto-Danksharding (EIP-4844) introduces blobs as a new, gas-separated resource; Danksharding expands blob capacity and verification via DAS. Both preserve a single beacon chain and single proposer per slot, avoiding the coordination pitfalls of many parallel proposers.

Proto-Danksharding (EIP-4844): Blobs, Commitments & the New Resource

EIP-4844 adds a new transaction type that carries one or more blobs, large chunks of opaque data intended primarily for rollups. Blobs are not general EVM calldata; they are handled by the consensus layer, priced by a separate fee market, and kept only for a limited time (ephemeral availability). Rollups post their batch data in blobs; L1 stores commitments to those blobs and ensures the data was available in the network when the block was finalized.

Why blobs? Calldata is expensive because every byte is stored permanently in chain history and must remain accessible to the EVM. Blobs are cheaper because they are:

  • Ephemeral: Nodes must serve them for a set retention window (e.g., weeks), not forever, which drastically reduces long-term storage costs.
  • Separated fee market: They do not compete with gas for EVM execution; a distinct mechanism targets blob supply/demand.
  • Commitment-verified: Builders produce cryptographic commitments (e.g., KZG) that tie the block to the data; light and full nodes can verify inclusion and availability properties with those commitments.
Figure: Rollup batch → blob → commitment in block → ephemeral storage window.
Blobs decouple DA from EVM gas and enable cheaper L2 posting.
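
The binding between a blob and its block can be sketched concretely. The versioned-hash construction below follows the EIP-4844 scheme (a version byte prepended to the SHA-256 of the KZG commitment); the 48-byte commitment here is a zeroed placeholder standing in for output from a real KZG library.

```python
import hashlib

# Per EIP-4844, the execution layer references a blob via a "versioned hash":
# sha256 of the 48-byte KZG commitment with its first byte replaced by a
# version tag (0x01 for KZG), leaving room for future commitment schemes.
VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    assert len(commitment) == 48, "KZG commitments are 48-byte BLS12-381 G1 points"
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

# Placeholder commitment for illustration; a real one comes from a KZG library.
dummy_commitment = bytes(48)
vh = kzg_to_versioned_hash(dummy_commitment)
assert vh[0] == 0x01 and len(vh) == 32
```

Because the versioned hash is only 32 bytes, contracts can reference blob data cheaply while the heavy payload travels through the consensus layer.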

From a rollup’s perspective, EIP-4844 is transformative: the dominant cost of L2 transactions is the data posting, not proving. Cheap blobs translate directly to lower per-tx fees for users. Critically, the contract logic that verifies or references the blob data can remain minimal on L1; the heavy data moves through the consensus layer pathways designed for bandwidth.
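
A back-of-the-envelope comparison illustrates why. The gas constants below come from EIP-2028 and EIP-4844; the gas and blob prices are illustrative assumptions, not live market data.

```python
# Rough comparison of posting one rollup batch as calldata vs. as a blob.
CALLDATA_GAS_PER_NONZERO_BYTE = 16   # EIP-2028 calldata pricing
GAS_PER_BLOB = 131_072               # one blob = 128 KiB of blob gas

def calldata_cost_wei(num_bytes: int, gas_price_wei: int) -> int:
    # Pessimistic: treat every byte as nonzero (compressed data mostly is).
    return num_bytes * CALLDATA_GAS_PER_NONZERO_BYTE * gas_price_wei

def blob_cost_wei(num_blobs: int, blob_base_fee_wei: int) -> int:
    return num_blobs * GAS_PER_BLOB * blob_base_fee_wei

batch_bytes = 120_000                               # roughly one blob of data
print(calldata_cost_wei(batch_bytes, 20 * 10**9))   # at an assumed 20 gwei gas price
print(blob_cost_wei(1, 1 * 10**9))                  # at an assumed 1 gwei blob fee
```

At these (hypothetical) prices the blob route is more than an order of magnitude cheaper, and the gap widens whenever blob demand is below target.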

Blob Fee Market & Economics: Why Your L2 Is (Much) Cheaper

EIP-4844 introduces a separate blob base fee that adjusts according to demand, analogous to EIP-1559 for gas but applied to blobs. The base fee targets a specific number of blobs per block on average. When utilization exceeds the target, the blob base fee rises; when it’s below, it falls. The effect is a market-clearing price for bandwidth that reflects rollup demand without crowding out ordinary L1 transactions.
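
The update rule can be sketched as follows. The constants and the integer-exponential approximation mirror the EIP-4844 pseudocode, but treat this as an illustration rather than consensus code.

```python
# Sketch of the EIP-4844 blob base fee. Blocks above the target accumulate
# "excess blob gas"; the fee grows exponentially in that excess, so sustained
# over-target demand raises prices and under-target demand lets them decay.
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
TARGET_BLOB_GAS_PER_BLOCK = 393_216  # 3 blobs * 131,072 blob gas each

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer Taylor-series approximation of factor * e^(numerator/denominator).
    i, output = 1, 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# With zero excess (demand at or below target), the fee sits at the floor.
assert blob_base_fee(0) == MIN_BASE_FEE_PER_BLOB_GAS
```

Note the asymmetry with EIP-1559: the blob fee is a function of accumulated excess rather than a per-block percentage step, which makes sustained congestion expensive quickly while leaving quiet periods at the one-wei floor.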

Implications for economics:

  • Predictability: Rollups can project costs based on blob price trends rather than volatile calldata gas markets.
  • Competition: Many rollups bidding for limited blob space form a competitive landscape; efficient compression, batch sizing, and proof recursion lower each rollup’s blob needs.
  • Macro scaling: As Danksharding increases blob capacity, equilibrium prices should fall, especially if L2 usage grows slower than capacity.
  • Congestion dynamics: Peak activity (airdrops, NFT mints) can still spike blob prices, but the spike is contained within the blob market rather than spilling uncontrollably into L1 gas.
Figure: Target blobs/block → base fee adjusts → rollups optimize compression.
A dedicated 1559-style market for blobs aligns price with demand without starving L1.

What Is Danksharding? From Proto to Full Bandwidth

Danksharding is the end-state vision: one proposer per slot selects a block that includes many blobs (far more than Proto-Danksharding), while a large set of validators participate in verifying availability via data availability sampling (DAS). Unlike classic sharding with many parallel proposers, Danksharding maintains a single block builder path, simplifying block construction and mitigating cross-shard MEV games.

In practice, Danksharding means:

  • Huge blob capacity: Orders of magnitude more data per slot compared to 4844, sufficient for many L2s to post at web-scale throughput.
  • Sampling guarantees: Validators sample random chunks across all blobs; with high probability, if a block finalizes, the data must have been broadly available.
  • Unified security: No separate execution shards; one chain, one fork choice rule, one proposer per slot, which preserves Ethereum’s existing mental model.

Proto-Danksharding is the minimal subset that ships earlier: it adds blobs and the fee market but not the full sampling or massive capacity. As client teams harden networking and storage, and as sampling machinery matures, the network can lift blob caps safely.

Data Availability Sampling (DAS): Why Finality Implies Data Was Available

Data availability sampling is a trick: instead of every node downloading every byte of a block’s data, each validator randomly samples a few small chunks (with redundancy via erasure coding). If enough independent validators can pull random pieces successfully, we can be confident the entire dataset was available. With erasure coding, reconstructing the whole dataset requires a threshold of chunks; if any significant portion were missing, many samplers would fail, and the block would not finalize.

Key ingredients:

  • Erasure coding: Encode data into many chunks such that any k of n chunks can reconstruct the original. This protects against partial withholding.
  • Random sampling: Validators request random chunk positions. An attacker can’t predict all sampling choices in advance.
  • Light-client friendliness: Even low-resource nodes can sample a tiny amount and gain high confidence, enabling decentralized verification at the edge.
Figure: Erasure-coded blobs → random sampling → high-probability availability.
Many small checks by many validators → strong confidence data wasn’t withheld.
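
A toy calculation shows why a handful of samples suffices. Assuming a rate-1/2 erasure code (an attacker must withhold more than half the chunks to make the data unrecoverable) and independent uniform samples:

```python
import math

# How many random samples does one node need so that the probability of
# *missing* a withholding attack falls below a target? If the attacker must
# withhold a fraction f of chunks, each sample detects the attack with
# probability >= f, so s samples all miss with probability <= (1 - f)^s.
def samples_needed(target_failure_prob: float, withheld_fraction: float = 0.5) -> int:
    return math.ceil(math.log(target_failure_prob) / math.log(1 - withheld_fraction))

print(samples_needed(1e-9))   # ~30 samples for one-in-a-billion failure odds
```

Thirty tiny downloads per node, multiplied across thousands of independent validators, is what lets finality stand in for "the data was available", without any single participant fetching the full payload.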

Proposer-Builder Separation (PBS), MEV & Inclusion Lists

As bandwidth and block complexity grow, MEV (maximal extractable value) incentives push specialized actors to build optimal blocks. Proposer-builder separation (PBS) formalizes this: builders assemble blocks and bid; the randomly chosen proposer for the slot selects the highest bid without needing sophisticated infrastructure. PBS reduces centralization pressure on proposers and can integrate inclusion lists, a tool for proposers to guarantee certain transactions are included, improving censorship resistance.

Danksharding pairs naturally with PBS: one proposer chooses one block from competitive builders, with both EVM execution and large blob payloads considered in a unified market. Over time, on-chain PBS variants can reduce reliance on trusted relays and align incentives with protocol-level guarantees.
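
A toy model of the proposer’s side of this market, with hypothetical names throughout (this is not a real relay or builder API), shows how inclusion lists constrain the auction:

```python
from dataclasses import dataclass, field

# Builders submit sealed bids for the slot; the proposer picks the highest
# bid, but only among blocks that honor its inclusion list.
@dataclass
class BuilderBid:
    builder: str
    bid_wei: int
    included_txs: set = field(default_factory=set)

def select_block(bids: list, inclusion_list: set) -> BuilderBid:
    valid = [b for b in bids if inclusion_list <= b.included_txs]
    if not valid:
        raise RuntimeError("no bid satisfies the inclusion list; build locally")
    return max(valid, key=lambda b: b.bid_wei)

bids = [
    BuilderBid("builder-a", 5 * 10**17, {"tx1", "tx2"}),
    BuilderBid("builder-b", 7 * 10**17, {"tx1"}),   # censors tx2
]
winner = select_block(bids, inclusion_list={"tx1", "tx2"})
print(winner.builder)  # builder-a wins despite the lower bid
```

The point of the sketch: censorship resistance comes from the proposer’s constraint, not from trusting any builder, and the same selection logic extends unchanged when bids carry large blob payloads.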

Implications for Rollups: Optimistic vs ZK, Calldata vs Blobs, Validium vs Volition

For rollups, the biggest bill is data publication. EIP-4844 slashes it, regardless of whether you are an optimistic or a ZK rollup. The differences:

  • Optimistic rollups (OP): Rely on fraud proofs and challenge windows. They still need to post transaction data so challengers can reconstruct state. Blobs reduce the cost of posting that data. Latency is governed by the challenge period, not by blobs.
  • ZK rollups (ZK): Post validity proofs that immediately finalize state transitions (after L1 verification). They also need DA for verifiability and for users to reconstruct state proofs. Blobs reduce that cost dramatically; recursion can further compress proofs and data.
  • Validium/Volition: Some systems move DA off-chain to cut costs further (Validium) or choose per-tx (Volition). With cheap blobs, many teams will re-evaluate how much DA to keep on L1; blobs may be “cheap enough” to justify on-chain DA for higher-value flows while still using off-chain DA for bulk traffic.

Practical impact: expect lower L2 fees, higher throughput, and more frequent batch posting. Cheaper DA means smaller batches (lower latency) become economical. Ecosystem-wide, this makes L2 UX feel closer to Web2 while retaining L1 settlement guarantees.

Security Considerations & Tradeoffs

Proto-Danksharding doesn’t change Ethereum’s execution semantics; it adds a new data lane. Key security points:

  • Availability window: Blobs are ephemeral. Clients and rollups must fetch and archive required data within the retention period. Indexers and L2 nodes shoulder that responsibility.
  • Commitment integrity: Builders commit to blobs; consensus links commitments to blocks. If a builder withholds data, the block should not finalize under the sampling regime (full Danksharding). In Proto-Danksharding, networking and enforcement still matter; clients reject blocks missing blob data.
  • DoS & bandwidth: Higher blob capacity stresses peer-to-peer networking. Client teams throttle, prioritize, and rate-limit to maintain liveness.
  • MEV centralization: Without PBS, specialized builders can centralize. PBS helps, but careful protocol design (e.g., inclusion lists) further protects censorship resistance.
  • Bridging risks: L2 bridges must treat blob commitments as canonical sources for exit proofs. Force-inclusion and escape hatches remain essential for censorship and liveness issues.

Operations: Running Nodes in the 4844 Era

Node operators should expect:

  • More disk/network I/O: Blobs increase bandwidth and short-term storage requirements. Ensure adequate disk, IOPS, and network throughput.
  • Client diversity: Run a diverse mix of execution and consensus clients to avoid correlated bugs, especially as blob logic evolves.
  • Monitoring: Track blob import times, missing blob warnings, peer counts, and p2p health. Alert on unusual blob base fee spikes that may indicate congestion.
  • Archival strategy: If you run an L2 or indexer, archive blobs relevant to your rollup. Don’t rely solely on the network after the retention horizon.
Figure: Bandwidth; ephemeral storage; observability.
Post-4844 nodes handle more transient data; plan capacity and monitoring.
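
An archival loop might look like the sketch below. The fetch function is injected so the code stays client-agnostic; if you wire it to a beacon node, check your client’s support for the blob-sidecar Beacon API route, and treat the sidecar shape here as a simplified assumption.

```python
# Sketch: persist blob sidecars for each finalized slot before the retention
# window closes. fetch_sidecars(slot) returns a list of sidecar dicts (empty
# if the block carried no blobs); store(slot, sidecars) persists them.
def archive_blobs(slots, fetch_sidecars, store) -> int:
    archived = 0
    for slot in slots:
        sidecars = fetch_sidecars(slot)
        if sidecars:
            store(slot, sidecars)
            archived += len(sidecars)
    return archived

# Usage with an in-memory store and a stubbed fetch (replace with real
# HTTP calls to your beacon node in production):
db = {}
stub = lambda slot: [{"index": 0, "blob": b"..."}] if slot % 2 == 0 else []
count = archive_blobs(range(10), stub, lambda s, sc: db.__setitem__(s, sc))
print(count)  # 5 sidecars archived (even slots only)
```

Running two such archivers against independent beacon nodes is cheap insurance against a gap in either node’s view of the network.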

UX: Fees, Throughput & Migration Paths for dApps

For users, the headline is simple: L2 fees go down. But UX wins extend beyond price:

  • Lower latency: Cheaper blobs allow more frequent batch publishing, reducing time-to-finality (especially for ZK rollups).
  • Predictable fees: The blob market smooths volatility; wallets can show clearer estimates for L2 data fees.
  • Choice of security tiers: In Volition systems, fees and security can be tuned per transaction: use blob DA for important updates and cheaper venues for bulk traffic.
  • Migration story: dApps can start on a high-throughput L2, then move premium actions to blob-backed rollup mode without rewriting everything. Over time, as Danksharding expands capacity, more of the app can shift to on-chain DA without fee shock.

Engineering Playbook for L2 Teams

  1. Rework data pipelines for blobs: Switch calldata posting to blob transactions. Profile compression (RLP/SSZ/zstd), batch sizing, and scheduling to hit blob price minima.
  2. Recursion & proof engineering: Use recursive proofs to aggregate many batches into one verification, minimizing on-chain verification costs. Balance proof time vs batch frequency.
  3. Exit design: Bind exit proofs to blob commitments. Provide watchers to detect missing blobs and trigger emergency procedures.
  4. Sequencer neutrality: Implement forced inclusion so censored users can post directly to L1. Integrate with PBS-aware builders if possible.
  5. Data retention: Build your own blob archive service; don’t rely on the public network past the retention window.
  6. UX & comms: Label DA and finality expectations in wallets/explorers. Explain why fees are lower and what could make them spike (blob contention events).
  7. Roadmap to Danksharding: Keep parameters abstract; when blob capacity increases, your system should scale without code rewrites, just new config limits.
Figure: Blobs; proofs; exits; PBS/MEV.
Treat DA, proofs, exits, and block-building as one system, not four silos.
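
For step 1, the packing itself can be sketched as follows. A blob holds 4096 32-byte field elements; because each element must be a canonical BLS12-381 scalar, a common (but not mandated) convention packs 31 data bytes per element and leaves the top byte zero:

```python
# Pack compressed batch bytes into blob-shaped byte strings, 31 usable data
# bytes per 32-byte field element so every element stays below the scalar
# field modulus. This layout is one convention, not part of the EIP itself.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
USABLE_BYTES_PER_ELEMENT = 31
USABLE_BYTES_PER_BLOB = FIELD_ELEMENTS_PER_BLOB * USABLE_BYTES_PER_ELEMENT  # 126,976

def pack_into_blobs(data: bytes) -> list:
    blobs = []
    for start in range(0, len(data), USABLE_BYTES_PER_BLOB):
        chunk = data[start:start + USABLE_BYTES_PER_BLOB]
        blob = bytearray(FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT)
        for i in range(0, len(chunk), USABLE_BYTES_PER_ELEMENT):
            piece = chunk[i:i + USABLE_BYTES_PER_ELEMENT]
            offset = (i // USABLE_BYTES_PER_ELEMENT) * BYTES_PER_FIELD_ELEMENT
            blob[offset + 1:offset + 1 + len(piece)] = piece  # byte 0 stays zero
        blobs.append(bytes(blob))
    return blobs

batch = bytes(200_000)                 # a 200 KB compressed batch
print(len(pack_into_blobs(batch)))     # 2 blobs (126,976 usable bytes each)
```

Batch-sizing then becomes a rounding problem: since you pay for whole blobs, scheduling posts so batches land near multiples of ~127 KB of usable space avoids paying for mostly-empty blobs.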

FAQ

Does EIP-4844 help L1 users directly?

Indirectly, yes. By moving rollup data into blobs and out of calldata, L1 gas markets experience less competition from L2 posting. That can reduce fee spikes for ordinary L1 transactions during rollup surges. But the main benefit accrues to L2 users whose fees drop substantially.

Are blobs persistent forever?

No. Blobs are designed to be ephemeral. Validators and nodes must hold and serve them for a defined period, ample for rollups and indexers to download and archive. Long-term persistence is an L2 or third-party responsibility, not the L1’s job. This is a core reason blobs are cheap.

If blobs are ephemeral, how can users trust exits?

Exit proofs reference commitments that were included in finalized blocks. As long as the rollup or its ecosystem archives the blob data within the retention window, users can generate inclusion proofs for exits. Good rollups run multiple independent archives and offer force-publish/forced inclusion paths so users can always recover.

Why not just increase L1 block size?

Bigger blocks raise the cost of running a full node, centralizing the network. Blobs + DAS scale data bandwidth without forcing all nodes to store all data forever. Sampling keeps verification light while preserving decentralization.

Is Danksharding the final step?

It’s a major plateau, but Ethereum’s roadmap is iterative. Beyond Danksharding, expect refinements in PBS, inclusion lists, statelessness, verkle tries, and protocol-native MEV mitigation. The guiding principle remains: keep L1 minimal and credibly neutral; push innovation and scale to L2s.

Glossary

  • Blob: Large, ephemeral data object attached to a block via a separate fee market (EIP-4844). Used primarily by rollups to post batches.
  • Proto-Danksharding: Initial deployment of blob transactions and pricing, without full data availability sampling or maximal capacity.
  • Danksharding: Final design with high blob capacity and DAS, one proposer per slot, and unified block building.
  • DAS (Data Availability Sampling): Validators sample random chunks of erasure-coded data to infer global availability.
  • PBS (Proposer-Builder Separation): Builders construct blocks; proposers select the best bid, reducing centralization pressures.
  • MEV: Maximal extractable value; profit from transaction ordering and inclusion choices within blocks.
  • Rollup: L2 that executes transactions off-chain and posts data/proofs to L1 for security (OP uses fraud proofs; ZK uses validity proofs).

Key Takeaways

  • Two problems, two tools: Rollups solve execution scale; blobs solve data scale. Together they deliver massive throughput without sacrificing decentralization.
  • EIP-4844 is here now: Blobs create a cheaper, separate resource for rollup data and immediately lower L2 fees.
  • Danksharding scales further: With DAS and larger blob budgets, Ethereum can support web-scale L2 data publication.
  • PBS + inclusion lists: Align block building with decentralization and censorship resistance as data bandwidth grows.
  • Operational readiness: L2s should archive blobs, optimize batches, implement forced inclusion/forced publish, and communicate DA expectations clearly to users.
  • Future-proof: Design your stack so increasing blob capacity is a configuration change, not a rewrite.