
· 7 min read
Willem Olding

One way to think of a blockchain is as a system for verifying provable statements to alter a global state. Examples of provable statements include:

  • Proving you hold a private key corresponding to a public key by signing some data. This is how transactions are authorized to mutate the ledger.
  • A proof-of-work (PoW)
  • Proving you know the pre-image of a hash
  • Proving the result of executing a known, deterministic program on known data. The blockchain executes the program itself to check the result
  • A SNARK proof of the result of executing a known program on (possibly unknown) data

To verify a proof, the blockchain performs the verification computation inside its execution and this can result in changes to global state. Other on-chain applications can treat the result as if it is truth.

Some statements have a lovely property: they are succinct, meaning they are easier to verify than to prove (PoW, SNARKs). Others take the same amount of work either way (arbitrary deterministic computation).

You might have spotted that some of these statements are provable in theory but impractical to prove on a blockchain: for example, hashing an extremely large piece of data, or checking the result of a long and complex computation.

Dispute games are multi-party protocols that, under certain economic conditions, can allow a blockchain to be convinced of a statement while only having to execute a small part of a computation when disputes arise.

One-step Fraud Proving

For statements that could be proven on-chain there is an elegant solution that can avoid expensive verification costs. In a one-step fraud proof Ashley submits a statement on-chain (e.g. the result of this computation is X) along with a bond. The bond should be greater than the expected gas cost of having the chain verify the proof.

The statement is considered pending by the chain for a length of time known as the challenge period. During this time anyone (Bellamy) can pay the gas and force the chain to perform the check in full. This will then determine if Ashley was indeed correct and update the state accordingly. If Ashley is proven to be wrong their bond is slashed and given to Bellamy.

Importantly, if no one challenges Ashley during the challenge period it is safe to assume the statement is true, under the assumption that there is at least one observer who would have proven Ashley false if they could.
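The lifecycle described above can be sketched as a small state machine. This is an illustration only, not any real contract; the class, field names, and the 7-day period are all assumptions for the example:

```python
CHALLENGE_PERIOD = 7 * 24 * 3600  # hypothetical 7-day challenge period, in seconds

class OneStepFraudProof:
    """Minimal lifecycle of a one-step fraud proof claim (a sketch, not a real contract)."""

    def __init__(self, claimant, statement, bond, verify_fn, now):
        self.claimant = claimant
        self.statement = statement   # e.g. "the result of this computation is X"
        self.bond = bond             # should exceed the gas cost of full verification
        self.verify_fn = verify_fn   # the expensive check, only run if disputed
        self.deadline = now + CHALLENGE_PERIOD
        self.resolved = None

    def challenge(self, challenger, now):
        """A challenger pays gas to force the chain to run the full check."""
        assert self.resolved is None and now < self.deadline, "claim not pending"
        if self.verify_fn(self.statement):
            self.resolved = ("claim upheld", self.claimant)    # bond returned
        else:
            self.resolved = ("claim slashed", challenger)      # bond paid to challenger

    def finalize(self, now):
        """If the challenge period passes undisputed, the claim is accepted as true."""
        assert self.resolved is None and now >= self.deadline
        self.resolved = ("finalized unchallenged", self.claimant)
```

Note that the expensive `verify_fn` is only ever executed on the `challenge` path; the happy path of `finalize` costs nothing beyond bookkeeping.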

This approach is great for ongoing protocols like rollups where it is important to keep the running cost low. It is relatively simple to implement and can be permissionless.

This design was used by the early versions of Optimism such that the rollup state transition was only computed on L1 in the case of a dispute. Another clever application is in the Eth-NEAR Rainbow Bridge, where one-step fraud proofs are used to avoid performing expensive Ed25519 signature checks on Ethereum under regular operation. In recent months some projects such as Fuel have proposed using one-step fraud proofs to avoid performing expensive SNARK verification unless there is a dispute.

The downside to one-step proofs is they are not applicable in every case. It must be possible to fall back to executing the verification on-chain. Some types of computation are simply too large or require too much data for this to be feasible. What can be done in this case?

Two-party Bisection Dispute Games

The Arbitrum paper first popularized bisection dispute games in the blockchain space. To understand a bisection game you first need to understand emulators and the computation trace.

A program can be expressed as a sequence of instructions to be executed on a processor. This processor also has registers and/or memory to store state.

Executing the program and recording the registers and memory after each instruction produces an execution trace. This trace may be much longer than the original program, as instructions may branch or loop. For a program that terminates, this execution trace can be considered a very verbose proof of the final state. A program that knows how to execute all the possible instructions and update the state accordingly (an emulator) can verify this proof.

A program trace for any non-trivial program is absurdly huge, but it can be compressed using two clever tricks.

The first is that the full CPU+memory state need not be included at each step. Instead, a commitment to the state (e.g. a Merkle root) can be recorded at each step.

The second applies when there are two conflicting parties in dispute about the result of executing a program. A two-party protocol can be used to reduce the trace down to the first instruction whose result the parties disagree on. A third trusted arbitrator (usually a blockchain runtime) can then execute only a single instruction in the trace to resolve a dispute over an entire program.
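The binary search at the heart of this protocol can be sketched as follows. This is an illustration, not any production implementation; `step_fn` stands in for the arbitrator's one-instruction emulator, and the traces are simplified to lists of state commitments:

```python
def bisect_dispute(trace_a, trace_b, step_fn):
    """Find the first step where two claimed traces diverge, then let the
    arbitrator re-execute only that single instruction to pick the winner.

    trace_a, trace_b: lists of per-instruction state commitments that agree
    at index 0 (the shared initial state) but disagree at the end.
    step_fn: the arbitrator's emulator for a single instruction.
    """
    assert trace_a[0] == trace_b[0] and trace_a[-1] != trace_b[-1]
    lo, hi = 0, len(trace_a) - 1  # lo: last agreed index, hi: known disagreement
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    # The parties agree on the state at `lo` but not at `hi` = lo + 1, so one
    # on-chain execution of instruction `lo` settles the whole dispute.
    correct = step_fn(trace_a[lo])
    return "A" if correct == trace_a[hi] else "B"
```

The loop runs a logarithmic number of rounds in the trace length, which is exactly the property the text above describes. Note the sketch assumes both parties stay live to answer each round, the liveness problem discussed later.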

This last trick of framing the dispute as being between two parties allows proving faults in programs in a number of steps that is logarithmic in the length of the trace, followed by the execution of a single instruction. This is incredibly powerful and has been developed by Arbitrum and the Optimism Bedrock prototype for proving rollup state transitions, along with Cartesi for proving arbitrary computations on-chain.

Problems

The problem with this approach as described is that once two players are engaged in a dispute they must both remain live in order to resolve it. There is also no way to ensure that both participants are honest so it must be possible for multiple disputes to be open on a single claim at once for the honest minority assumption to hold.

A malicious challenger may open many dispute games on a single assertion, and although they will lose their bonds, if they have more available funds than the defender they can eventually prevent the defender from being able to participate. I have written about this issue in another article.

The usual solution to this is to limit who can participate in dispute games. This is the approach used by Arbitrum and the Optimism Bedrock prototype. This compromise places these dispute game systems somewhere between a trusted federation and fully permissionless dispute games.

Permissionless Bisection Dispute Games

So can bisection-style dispute games be made fully permissionless? A recent paper by Nehab and Teixeira proposes a modification to the dispute game in which the participants must submit a vector commitment to the entire execution trace before the game begins. Once this has been done the game becomes permissionless, as anyone can submit a step along with its inclusion proof.

This is an excellent solution; however, it has a major drawback. Execution traces are incredibly large, commonly several trillion instructions, and producing a Merkle tree and associated proofs for such a long vector is prohibitive in most cases. The authors' solution is to split the trace into stages and run the dispute hierarchically, so that a fine-grained commitment is only needed for the stage under dispute.
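The vector commitment idea can be sketched with a plain SHA-256 Merkle tree. This is an illustration only, not the construction from the paper; all function names are made up for the example:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a vector of trace-step commitments with a binary Merkle tree."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path letting anyone later prove step `index` against the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])       # sibling of the current node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def merkle_verify(root, leaf, index, path):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```

The point of the scheme is in the sizes: the commitment is one 32-byte root regardless of trace length, and each step's inclusion proof is logarithmic, but the prover must still hash the entire trace up front, which is exactly the drawback noted above.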

More recently, Optimism has proposed another design in which the dispute game is structured as an n-degree tree rather than a binary one. This allows other participants to fork off an existing dispute game when they believe the participants to be fraudulent.

Bonds are added incrementally at each step of the tree, allowing permissionless participation. Once a game has concluded, a player who asserted correctly against a branch that has been proven false can collect the bond on each of those nodes in the tree.

This design gives the best of both worlds allowing permissionless participation without needing to compute trace commitments. This is at the cost of increased complexity in implementation.

Conclusion

Dispute games are conceptually simple but designing them to be permissionless is much more challenging. Despite dispute games having been proposed for blockchain systems more than 5 years ago, and despite claims of permissionlessness, not a single permissionless dispute game has been used in production.

Optimism has made excellent progress in the last year designing dispute games that can be safe and permissionless, and these will hopefully be deployed in production in the near future.

· 6 min read
Willem Olding

In theory optimistic systems promise an honest minority assumption. Any actor can watch the state of the system progress and report fraud by challenging a particular state transition. This comes with the guarantee that they will always win a challenge game against an invalid state transition.

In practice though many of the optimistic rollups in production do not currently have this property. In Arbitrum for example the ability to challenge state transitions is limited to a permissioned validator set [1].

I spent some time thinking about some potential attacks against permissionless optimistic systems and, at least for the model system I considered, it appears the honest minority assumption comes with some strings attached when real world considerations such as gas costs come into play.

Model System

Let's consider an abstracted optimistic system. This is not a rollup but rather a general iterative computation without inputs. Each state can be transitioned to the next by executing the state transition function.

You could consider such a system similar to a programming loop with the state being the loop variables and for each execution of the loop code the hash of the new loop state is submitted on-chain.

The challenge game itself has the following properties:

  • Anyone can submit a new state commitment at a given sequence number (permissionless updates). This requires including a bond $B$
  • The first state commitment to have no open challenges after the finalization period, $T_{final}$, becomes finalized. Other submissions for the same sequence number are discarded and their bonds slashed.
  • Anyone can challenge an update that has not finalized (permissionless challenges). This also requires submitting a bond $B$. The submitter and the challenger are now in a challenge game.
  • No new challenges can be submitted after $T_{final}$. This prevents stalling attacks.
  • Any number of challenges can be opened on an update at once. This is required to ensure the honest minority assumption. If only a limited number of challenges were allowed, dishonest challengers could occupy all the slots.
  • The challenge game is a 2-party interactive game and requires on-chain submissions by both the defender (formerly the submitter) and the challenger. The gas cost for the defender is $g_d$ and for the challenger is $g_c$. Each participant has time $T_{chal}$ to respond.
  • The winner of a challenge game receives half of the loser's bond. The remainder of the loser's bond is burnt.

Let's also define some properties of the host chain:

  • Each block has a gas limit $G$
  • The block time is $t_{block}$
  • A stable gas price of $p$

First Attack - Challenger DoS

This one is nothing new but will serve as a good first example using our model.

For this attack a malicious submitter submits an invalid state transition. They then prevent any challenger from challenging it by purchasing enough blockspace in every block of the finalization period $T_{final}$ that the challenge transaction cannot fit. The cost of this attack is

$$c = (G - g_c) \frac{T_{final}}{t_{block}} p$$

It can be seen that the only system-specific parameters this depends on are the finalization time and the gas cost of a challenge response. Since the gas cost is usually fixed, this attack is typically protected against by using a very long finalization time (e.g. 7 days).

The finalization time should be selected such that the cost of executing the attack exceeds the exploitable value ever expected to be held in the system.
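To get a feel for the magnitude, here is a quick calculation with hypothetical, roughly Ethereum-like parameters. Every number below is an assumption for illustration, not a value from any live system:

```python
# Illustrative parameters (hypothetical, roughly Ethereum-like):
G = 30_000_000           # block gas limit
g_c = 1_000_000          # gas for a challenge transaction (assumed)
t_block = 12             # seconds per block
T_final = 7 * 24 * 3600  # 7-day finalization period, in seconds
p = 20e-9                # gas price: 20 gwei, expressed in ETH per gas

# c = (G - g_c) * (T_final / t_block) * p
blocks = T_final / t_block
cost_eth = (G - g_c) * blocks * p
print(f"{blocks:.0f} blocks to fill, attack cost ~ {cost_eth:,.0f} ETH")
```

Under these assumptions the attacker must fill 50,400 blocks, costing on the order of tens of thousands of ETH, which shows why a long finalization period is the standard defence.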

Second Attack - Challenge Flooding

This is an attack on an honest submitter. It is designed to steal their stake and delay finalization of the system.

It works by a malicious challenger submitting many ($N$) challenges against a single valid submission. The defender must respond to all of these challenges in order to defend their bond. The attack succeeds if the defender does not have sufficient funds to pay the gas to respond to all of them. Because the challenges overlap, the defender cannot use the winnings from one to pay the gas for the others [2].

The cost of this attack to the challenger depends on whether it succeeds. If it succeeds, the challenger receives the defender's bond and prevents the update from finalizing, at a cost of

$$c = N p g_{c}$$

If it fails the cost to the challenger is

$$c = N (B + p g_{d})$$

and the defender wins $\frac{N B}{2}$. There is also the possibility of partial success, where the defender wins some of the challenges but not others.

The conditions for this attack to be possible require an attacker with vastly more funds than the defender. The capital required to conduct this attack is

$$C_{attack} = N (B + p g_{c})$$

and to defend against it is

$$C_{defend} = N p g_{d}$$

and so to successfully conduct this attack requires an attacker with more funds than the defender by a factor of

$$\frac{B + p g_{c}}{p g_d}$$

The implications of this are quite significant. Consider the existence of mega-whales (like Coinbase): these actors could conduct this attack at very low cost against almost any submitter. By doing so repeatedly they can prevent any valid updates from being accepted and stall the system.

If we assume the gas costs cannot be changed, the only way to mitigate this attack is to increase the bond size. A sufficiently large bond relative to the defender's gas cost limits which actors are able to mount this attack, but it simultaneously limits which accounts can participate at all.
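To make the asymmetry concrete, here is a small numeric sketch of the factor $\frac{B + p g_c}{p g_d}$ derived above, showing how the bond size drives the capital an attacker needs relative to the defender. All parameter values are hypothetical:

```python
p = 20e-9          # gas price in ETH per gas (hypothetical)
g_c = 1_000_000    # challenger gas per game (assumed)
g_d = 1_500_000    # defender gas per game (assumed)

def attacker_advantage(B):
    """Factor by which the attacker's funds must exceed the defender's:
    (B + p*g_c) / (p*g_d) from the analysis above."""
    return (B + p * g_c) / (p * g_d)

ratio_small = attacker_advantage(1.0)    # 1 ETH bond
ratio_large = attacker_advantage(100.0)  # 100 ETH bond
print(f"1 ETH bond:   attacker needs {ratio_small:.0f}x the defender's funds")
print(f"100 ETH bond: attacker needs {ratio_large:.0f}x the defender's funds")
```

With these numbers a 1 ETH bond only forces the attacker to have about 34 times the defender's funds, while a 100 ETH bond pushes that into the thousands, at the cost of pricing small participants out entirely.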

From this I conclude that, for the described system, the unbounded honest minority assumption does not hold. It should be reduced to an honest minority within the set of accounts holding funds within a factor of $\frac{p g_d}{B + p g_{c}}$ of the largest gas token holder.

Looking Forward

A recent article [3] was published (after I wrote this blog post!) that proposes a modification to the interactive game that could allow honest actors to band together and share the gas costs. See my next article for how this could be used to protect against the challenge flooding attack.


  1. https://l2beat.com/scaling/projects/arbitrum/
  2. At first glance one might think that the submitter could use the reward from one challenge game to pay the gas of the next. This would be true if the challenge game were single-step (i.e. non-interactive); however, the additional timeout period introduced by the interactive game means the defender must wait to receive their half of the challenger's bond, and in that time they must respond to the other challenges or risk losing through a timeout.
  3. https://arxiv.org/pdf/2212.12439.pdf

· 10 min read
Willem Olding

For the hackathon at EthBogota the ChainSafe team developed a new bridge prototype we called Zipline. Zipline uses fault proofs, the technology behind optimistic rollups, to secure a bridge between chains. But is this actually a good idea? This article investigates some potential issues, as well as some paths forward to a practical version of such a bridge construction.

· One min read
Willem Olding

This is my talk at the Toronto Ethereum Developers Meetup (EDMTo) on ways to avoid the challenge period delay (typically 7 days) when transferring tokens from an optimistic rollup to its host chain.