Layer 2 On Ethereum

Scaling

The main goal of scalability is to increase transaction speed (faster finality) and transaction throughput (more transactions per second) without sacrificing decentralization or security. For a long time, sharding the blockchain was expected to scale Ethereum (onchain scaling). However, the rapid development of layer 2 rollups and the invention of Danksharding (adding blobs of rollup data to Ethereum blocks that validators can verify very efficiently) have led the Ethereum community to favour rollup-centric, offchain scaling over scaling by sharding.

Layer 2

Layer 2 is a collective term for solutions designed to help scale your application by handling transactions off the Ethereum Mainnet (layer 1) while taking advantage of the robust decentralized security model of Mainnet. Most layer 2 solutions are centered around a server or cluster of servers, each of which may be referred to as a node, validator, operator, sequencer, block producer, or similar term. Transactions are submitted to these layer 2 nodes instead of being submitted directly to layer 1 (Mainnet).

Rollups

Rollups execute transactions outside layer 1 and then post the transaction data to layer 1, where consensus is reached. Because the transaction data is included in layer 1 blocks, rollups are secured by native Ethereum security.

There are two types of rollups with different security models:

  • Optimistic rollups: assume transactions are valid by default and only run computation, via a fraud proof, in the event of a challenge. The rollup contract keeps track of its entire history of state roots and the hash of each batch. If anyone discovers that one batch had an incorrect post-state root, they can publish a proof to chain, proving that the batch was computed incorrectly. The contract verifies the proof, and reverts that batch and all batches after it.
  • Zero-knowledge rollups: run computation offchain and submit a validity proof to the chain. Every batch includes a cryptographic proof called a ZK-SNARK, which proves that the post-state root is the correct result of executing the batch. No matter how large the computation, the proof can be very quickly verified on-chain. A toy sketch contrasting the two models follows this list.

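To make the contrast concrete, here is a toy model of the two security models. Everything in it (the OptimisticRollup and ZkRollup classes, executeBatch, the batch shape) is invented for illustration and does not correspond to any production rollup: optimistic batches are accepted without execution and only re-executed on challenge, while ZK batches are rejected up front if the validity proof fails.

```ts
// Toy model of the two rollup security models (illustrative only, not a real rollup).

type Batch = { txs: string[]; claimedPostStateRoot: string };

// Stand-in for the real state transition and hashing logic.
function executeBatch(prevRoot: string, txs: string[]): string {
  return `hash(${prevRoot}|${txs.join(",")})`;
}

class OptimisticRollup {
  // The contract keeps the entire history of state roots and submitted batches.
  stateRoots: string[] = ["genesis"];
  batches: Batch[] = [];

  // Batches are accepted optimistically: nothing is executed on submission.
  submitBatch(batch: Batch): void {
    this.batches.push(batch);
    this.stateRoots.push(batch.claimedPostStateRoot);
  }

  // A fraud proof re-executes one batch; if the claimed post-state root is
  // wrong, that batch and every batch after it are reverted.
  challenge(batchIndex: number): boolean {
    const prevRoot = this.stateRoots[batchIndex];
    const batch = this.batches[batchIndex];
    const correctRoot = executeBatch(prevRoot, batch.txs);
    if (correctRoot === batch.claimedPostStateRoot) return false; // challenge fails
    this.batches = this.batches.slice(0, batchIndex);
    this.stateRoots = this.stateRoots.slice(0, batchIndex + 1);
    return true; // fraudulent batch and all later batches reverted
  }
}

class ZkRollup {
  stateRoots: string[] = ["genesis"];

  // Every batch carries a validity proof; an invalid batch is rejected up
  // front, so there is nothing to challenge afterwards.
  submitBatch(batch: Batch, proofIsValid: boolean): void {
    if (!proofIsValid) throw new Error("validity proof rejected");
    this.stateRoots.push(batch.claimedPostStateRoot);
  }
}
```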
Rollups move computation (and state storage) off-chain, but keep some data per transaction on-chain. To improve efficiency, they use a whole host of fancy compression tricks to replace data with computation wherever possible. The result is a system where scalability is still limited by the data bandwidth of the underlying blockchain, but at a very favorable ratio: whereas an Ethereum base-layer ERC20 token transfer costs ~45000 gas, an ERC20 token transfer in a rollup takes up 16 bytes of on-chain space and costs under 300 gas.

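The "under 300 gas" figure can be sanity-checked with calldata pricing alone: since EIP-2028, a non-zero calldata byte costs 16 gas, so a 16-byte compressed transfer pays roughly 256 gas for its on-chain data (assuming every byte is non-zero). A quick back-of-the-envelope check:

```ts
// Back-of-the-envelope check of the rollup savings quoted above.
// Assumes post-EIP-2028 calldata pricing: 16 gas per non-zero byte.

const L1_ERC20_TRANSFER_GAS = 45_000;   // approximate base-layer cost quoted above
const ROLLUP_TX_BYTES = 16;             // compressed transfer size quoted above
const GAS_PER_NONZERO_CALLDATA_BYTE = 16;

const rollupDataGas = ROLLUP_TX_BYTES * GAS_PER_NONZERO_CALLDATA_BYTE; // 256 gas
const savings = L1_ERC20_TRANSFER_GAS / rollupDataGas;                 // ~175x

console.log(`${rollupDataGas} gas of calldata per transfer (~${savings.toFixed(0)}x cheaper)`);
```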

The fact that data is on-chain is key (note: putting data “on IPFS” does not work, because IPFS does not provide consensus on whether or not any given piece of data is available; the data must go on a blockchain). Putting data on-chain and having consensus on that fact allows anyone to locally process all the operations in the rollup if they wish to, allowing them to detect fraud, initiate withdrawals, or personally start producing transaction batches. The lack of data availability issues means that a malicious or offline operator can do even less harm (e.g. they cannot cause a 1-week delay).

Danksharding

What is Proto-Danksharding?

Proto-Danksharding, also known as EIP-4844, is a way for rollups to add cheaper data to blocks.

Historically, rollups have posted their transaction data as CALLDATA, which is expensive because it is processed by all Ethereum nodes and lives onchain forever, even though rollups only need the data for a short time. Proto-Danksharding introduces data blobs that can be sent and attached to blocks. The data in these blobs is not accessible to the EVM and is automatically deleted after a fixed time period (set to 4096 epochs at the time of writing, or about 18 days).

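The "about 18 days" figure follows directly from the retention parameter: 4096 epochs, each of 32 slots, each slot 12 seconds. A quick check of the arithmetic:

```ts
// Blob retention window implied by the 4096-epoch pruning parameter.
const EPOCHS = 4096;
const SLOTS_PER_EPOCH = 32;
const SECONDS_PER_SLOT = 12;

const retentionSeconds = EPOCHS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT; // 1,572,864 s
const retentionDays = retentionSeconds / 86_400;                      // ~18.2 days

console.log(`blobs retained for ~${retentionDays.toFixed(1)} days`);
```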

Why is it OK to delete the blob data?

Rollups post commitments to their transaction data onchain and also make the actual data available in data blobs. This means provers can check that the commitments are valid or challenge data they think is wrong. At the node level, the blobs of data are held in the consensus client. The consensus clients attest that they have seen the data and that it has been propagated around the network. If the data were kept forever, these clients would bloat, leading to large hardware requirements for running nodes. Instead, the data is automatically pruned from the node after about 18 days. The consensus client attestations demonstrate that there was a sufficient opportunity for provers to verify the data. The actual data can be stored offchain by rollup operators, users or others.

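Because consensus clients prune blobs after the retention window, anyone who needs the data long term (rollup operators, indexers, users) has to fetch and archive it while it is still available. A minimal sketch, assuming a locally running consensus client that exposes the standard beacon API blob_sidecars endpoint; the port (5052) and the example slot number are placeholders:

```ts
// Fetch blob sidecars for a given slot from a consensus client before they
// are pruned, so they can be archived offchain.
// Assumes a consensus client serving the beacon API on localhost:5052.

async function fetchBlobSidecars(slot: number): Promise<unknown[]> {
  const url = `http://localhost:5052/eth/v1/beacon/blob_sidecars/${slot}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`beacon API returned ${res.status}`);
  const body = await res.json();
  return body.data; // array of { index, blob, kzg_commitment, kzg_proof, ... }
}

// Example: archive the blobs for one slot to wherever you keep long-term data.
fetchBlobSidecars(8626178)
  .then((blobs) => console.log(`archived ${blobs.length} blob sidecars`))
  .catch(console.error);
```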

How is blob data verified?

Rollups post the transactions they execute in data blobs. They also post a “commitment” to the data. They do this by fitting a polynomial function to the data. This function can then be evaluated at various points. A prover applies the same function to the data and evaluates it at the same points. If the original data is changed, the function will not be identical, and therefore neither are the values evaluated at each point. These points are defined by the random numbers generated in the KZG ceremony.

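The idea behind the check, stripped of the elliptic-curve machinery real KZG uses, is that two parties holding the same data derive the same polynomial and therefore agree at any evaluation point, while tampered data almost certainly disagrees. A toy sketch over plain numbers, purely to show the shape of the check (not cryptographically sound, and not how KZG is actually implemented):

```ts
// Toy version of the commitment check: treat the data as the values of a
// polynomial, evaluate it at a "random" point, and compare. Real KZG does this
// over elliptic-curve groups with a point fixed by the trusted-setup ceremony.

// Lagrange evaluation of the polynomial through (0, data[0]), (1, data[1]), ... at z.
function evalAt(data: number[], z: number): number {
  let result = 0;
  for (let i = 0; i < data.length; i++) {
    let term = data[i];
    for (let j = 0; j < data.length; j++) {
      if (j !== i) term *= (z - j) / (i - j);
    }
    result += term;
  }
  return result;
}

const blobData = [7, 42, 13, 99];  // the rollup's posted data
const z = 123.456;                 // evaluation point (conceptually, from the ceremony)

const commitmentValue = evalAt(blobData, z);       // published by the rollup
const proverValue = evalAt([7, 42, 13, 99], z);    // recomputed by a prover
const tamperedValue = evalAt([7, 42, 14, 99], z);  // one value changed

console.log(commitmentValue === proverValue);      // true: same data, same evaluation
console.log(commitmentValue === tamperedValue);    // false: tampering is detected
```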

What is Danksharding?

Danksharding is the full realization of the rollup scaling that began with Proto-Danksharding. However, full Danksharding is several years away. In the meantime, the KZG ceremony has concluded with over 140,000 contributions, and the EIP for Proto-Danksharding has matured. This proposal has been fully implemented in all testnets, and went live on Mainnet with the Cancun-Deneb (“Dencun”) network upgrade in March 2024.

Optimistic Rollup: Arbitrum

Overview

  • Step 1: Submitting a transaction
    Users send transactions either to the Sequencer (a specialized node that orders transactions and issues quick confirmations), via the public RPC, a third-party RPC, or an Arbitrum node following the Sequencer Feed, or to the Delayed Inbox contract on Ethereum: call sendL2Message and wait for processing (within about 10 minutes, once the transaction is finalized on the Ethereum chain), or force inclusion after 24 hours by calling the forceInclusion function on the SequencerInbox contract. A submission sketch appears after this overview.

  • Step 2: Ordering and broadcasting: The Sequencer

    • Sequencing and Broadcasting

      1. Real-Time Sequencer Feed
        By subscribing to this feed, nodes and clients can:
        • Receive Immediate Notifications
        • Process Transactions Promptly
        • Benefit from Soft Finality
      2. Batches Posted on the Parent Chain
        • Batching
        • Compression
        • Submitting to the Sequencer Inbox Contract
          This process involves the Batch Poster, an Externally Owned Account (EOA) controlled by the Sequencer. The Batch Poster is responsible for submitting the compressed transaction batches to the Sequencer Inbox Contract on the parent chain.
          Using Blobs with addSequencerL2BatchFromBlobs or Calldata with addSequencerL2Batch

      Sequencer operations

  • Step 3: Execution phase: State Transition Function
    In the STF, transactions follow a structured workflow: ArbOS first validates format and funds, then charges gas for Layer 2 execution and Layer 1 posting. Geth executes per EVM standards, after which ArbOS updates states and cross-chain elements, generating receipts and logs to conclude the process.

  • Step 4: Finality: Soft finality & Hard finality

  • Step 5: Ensuring correctness: Validation and dispute resolution
    Correctness is ensured by the BoLD (Bounded Liquidity Delay) protocol, an advanced dispute framework enabling permissionless validation.

  • Step 6: Bridging: Cross-chain communication

    • Bridging from a parent chain to a child chain

      1. Native token bridging: Refers to depositing a child chain’s native token from the parent chain to the child chain.
      2. Transaction via the Delayed Inbox
      3. Retryable tickets: Arbitrum's canonical mechanism for creating parent-to-child messages, i.e. transactions initiated on a parent chain that trigger execution on a child chain.
    • Child to parent chain messaging

      • Flow: send a message from the child to the parent chain (call sendTxToL1), then execute the message on the parent chain after the seven-day challenge period. A sketch of the first step appears after this overview.

      • Withdrawing Ether & ERC-20 tokens

  • Step 7: The Economics of execution: Gas and fees

  • Step 8: Advanced features
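The sketch below illustrates Steps 1 and 7 from the user's side: a transaction sent to Arbitrum One's public RPC (which forwards it to the Sequencer) looks like an ordinary Ethereum transaction, and the fee read from the receipt covers both the L2 execution and L1 posting components. It assumes ethers v6; the RPC URL is the public Arbitrum One endpoint, and the recipient address and key handling are placeholders.

```ts
// Step 1 / Step 7 sketch: submit a transaction to Arbitrum One through the
// public RPC (which forwards it to the Sequencer) and read the fee paid.
// Assumes ethers v6; PRIVATE_KEY and the recipient are placeholders.
import { ethers } from "ethers";

async function main() {
  const provider = new ethers.JsonRpcProvider("https://arb1.arbitrum.io/rpc");
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

  // An ordinary transfer: the Sequencer orders it and gives a soft confirmation;
  // hard finality follows once the batch containing it is posted to Ethereum.
  const tx = await wallet.sendTransaction({
    to: "0x0000000000000000000000000000000000000001", // placeholder recipient
    value: ethers.parseEther("0.001"),
  });
  const receipt = await tx.wait();

  // On Arbitrum the gas accounting folds the charge for posting the
  // transaction's data to the parent chain into the total (Steps 3 and 7).
  const feeWei = receipt!.gasUsed * receipt!.gasPrice;
  console.log(`included in L2 block ${receipt!.blockNumber}, fee ${ethers.formatEther(feeWei)} ETH`);
}

main().catch(console.error);
```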

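For the child-to-parent direction in Step 6, the message is initiated on the child chain itself. The sketch below assumes ethers v6 and the ArbSys precompile interface (address 0x…0064, sendTxToL1(address,bytes)); executing the message on the parent chain after the challenge period (typically via the Outbox) is not shown.

```ts
// Step 6 sketch: start a child-to-parent message by calling sendTxToL1 on the
// ArbSys precompile from the child chain. The message becomes executable on
// the parent chain only after the challenge period (about seven days).
// Assumes ethers v6; PRIVATE_KEY and the recipient are placeholders.
import { ethers } from "ethers";

const ARBSYS_ADDRESS = "0x0000000000000000000000000000000000000064";
const ARBSYS_ABI = [
  "function sendTxToL1(address destination, bytes data) payable returns (uint256)",
];

async function startWithdrawal() {
  const provider = new ethers.JsonRpcProvider("https://arb1.arbitrum.io/rpc");
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);
  const arbSys = new ethers.Contract(ARBSYS_ADDRESS, ARBSYS_ABI, wallet);

  // Send ETH back to an address on the parent chain; empty calldata means a
  // plain value transfer once the message is executed there.
  const tx = await arbSys.sendTxToL1(
    "0x0000000000000000000000000000000000000002", // placeholder parent-chain recipient
    "0x",
    { value: ethers.parseEther("0.001") },
  );
  const receipt = await tx.wait();
  console.log(`child-to-parent message submitted in L2 block ${receipt?.blockNumber}`);
}

startWithdrawal().catch(console.error);
```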
ZK-Rollup

Polygon Zero Transactions