“If you add even more assumptions (specifically, you require observers to also be actively watching the consensus, and not just downloading its output after the fact), you can increase fault tolerance all the way to 99%.”
So says Vitalik Buterin in a highly technical article, reviewed by Emin Gun Sirer, which suggests that instead of requiring 50% of miners or stakers to be honest, a consensus method can require only 1% of them to be honest.
The dreaded 51% attack would thus, under this consensus algorithm, become a 99% attack, making it effectively impossible.
At a high level, and as far as we understand, it does so by requiring listening nodes, or independent observers.
You’re probably familiar with the first-seen rule in bitcoin. If not, here is Satoshi Nakamoto explaining it on July 17th 2010, and we’ll quote in full:
“I believe it’ll be possible for a payment processing company to provide as a service the rapid distribution of transactions with good-enough checking in something like 10 seconds or less.
The network nodes only accept the first version of a transaction they receive to incorporate into the block they’re trying to generate. When you broadcast a transaction, if someone else broadcasts a double-spend at the same time, it’s a race to propagate to the most nodes first. If one has a slight head start, it’ll geometrically spread through the network faster and get most of the nodes.
A rough back-of-the-envelope example:
So if a double-spend has to wait even a second, it has a huge disadvantage.
The payment processor has connections with many nodes. When it gets a transaction, it blasts it out, and at the same time monitors the network for double-spends. If it receives a double-spend on any of its many listening nodes, then it alerts that the transaction is bad. A double-spent transaction wouldn’t get very far without one of the listeners hearing it.
The double-spender would have to wait until the listening phase is over, but by then, the payment processor’s broadcast has reached most nodes, or is so far ahead in propagating that the double-spender has no hope of grabbing a significant percentage of the remaining nodes.”
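The first-seen rule and the listening-node monitoring Satoshi describes above can be sketched in a few lines. This is a toy illustration, not Bitcoin's actual mempool code; the `Mempool` class and the string "outpoints" are our own simplified stand-ins.

```python
# Toy sketch of the first-seen rule: a node keeps only the first
# transaction it sees spending each output, and a payment processor's
# listening node flags any later, conflicting spend as a double-spend.

class Mempool:
    def __init__(self):
        # Maps a spent output (outpoint) to the first transaction seen spending it.
        self.first_seen = {}

    def accept(self, txid, spends):
        """Accept a transaction only if none of its inputs were already
        spent by an earlier transaction; otherwise report the conflict."""
        conflicts = [op for op in spends
                     if op in self.first_seen and self.first_seen[op] != txid]
        if conflicts:
            return False, conflicts  # alert: this transaction is a double-spend
        for op in spends:
            self.first_seen[op] = txid
        return True, []

# The first spend to propagate wins the race; the later one is flagged.
node = Mempool()
print(node.accept("tx-a", ["coin-1"]))  # (True, [])
print(node.accept("tx-b", ["coin-1"]))  # (False, ['coin-1'])
```

A processor running many such listening nodes would, as Satoshi notes, almost certainly hear the conflicting spend on at least one of them before the listening phase ends.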
Buterin is proposing something similar, but for blocks. In a statement after this article was published, he clarified that, as he makes clear in his original proposal, he did not invent 99% fault tolerant consensus; Leslie Lamport, a computer scientist who received a Turing Award for his work in distributed systems, did.
“Vitalik is proposing that if an independent observer of the network traffic (i.e. just the blockchain client a user is running, not a miner/validator) watches what’s happening in real time and pays attention to when messages appear, they can detect “foul play” by miners performing a 51% attack and this can provide additional safety guarantees that can protect against such an attack.”
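The "pays attention to when messages appear" part can be illustrated with a minimal sketch. This is not Buterin's actual algorithm, only an assumed simplification: the observer timestamps each message on arrival and refuses to count, as evidence, anything that appears later than a network latency bound `D` (a hypothetical parameter here).

```python
# Illustrative sketch: an independent observer only accepts a message as
# timely evidence if it arrived within an assumed latency bound D of the
# slot it claims to belong to. Late-appearing messages are treated as
# possible foul play rather than silently accepted.

D = 10.0  # assumed maximum network delay, in seconds (illustrative value)

def on_time(slot_start, arrival_time, bound=D):
    """Return True if the message arrived within the latency bound
    of the slot it claims to belong to."""
    return arrival_time - slot_start <= bound

# A block claiming slot t=100 that arrives at t=105 is fine;
# the same claim arriving only at t=125 is suspicious.
print(on_time(100.0, 105.0))  # True
print(on_time(100.0, 125.0))  # False
```

The point is that an ordinary client, by simply watching the clock, gains information a 51% attacker cannot retroactively forge: when things actually appeared on the network.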
Buterin himself adds that “another use case for this is that it can be used as a tool for detecting 51% censorship attacks and coordinating around using minority soft forks to get out of them, without the tool needing much human-driven social coordination to pick one action or the other in edge cases.”
That sounds like fraud proofs, a mythical creature some Bitcoin Core devs say does not exist. Nakamoto’s paper, however, mentions such fraud proofs to explain why certain data need not be kept and why light nodes can be very safe.
In a nutshell, as the name suggests, fraud proofs are proofs of fraud, deception, misbehavior, or of someone not being in consensus.
If you run a full node, you’re validating everything, so the node simply rejects misbehavior. If, however, you’re not validating everything because, say, you’re on a smartphone, then in some rare circumstances you can be fed incorrect data without having any way of knowing it is incorrect.
Fraud proofs provide such a way, and the suggestion here, if we have understood it well (corrections welcome otherwise), seems to be that you can know the data is incorrect because of the independent observers who are keeping watch.
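A toy sketch may help fix the idea, under heavily simplified assumptions of our own: a light client trusts a block until an observer hands it a proof pointing at a specific invalid transaction inside that block. Real fraud proofs involve Merkle proofs against block headers; here a plain list and a single toy validity rule stand in for all of that.

```python
# Toy fraud-proof sketch (illustrative, not a real protocol): a proof
# names one transaction in a block and the client checks two things --
# the transaction really is in the block, and it really is invalid.

def tx_is_invalid(tx):
    # Toy validity rule: a transaction may not spend more than its inputs.
    return tx.get("spend_amount", 0) > tx.get("input_amount", 0)

def verify_fraud_proof(block_txs, proof):
    """proof = (index, tx). Convincing iff the named transaction is
    actually in the block at that index and fails the validity check."""
    index, tx = proof
    in_block = index < len(block_txs) and block_txs[index] == tx
    return in_block and tx_is_invalid(tx)

block = [{"input_amount": 5, "spend_amount": 3},   # fine
         {"input_amount": 2, "spend_amount": 9}]   # spends more than it has

print(verify_fraud_proof(block, (1, block[1])))  # True: block is fraudulent
print(verify_fraud_proof(block, (0, block[0])))  # False: tx 0 is valid
```

A light client receiving a valid proof can reject the block automatically, without ever downloading or validating the rest of it.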
All of this is, of course, done by the software or the code itself. That is, the code “knows” some data is incorrect and therefore automatically rejects it. Buterin says:
“If 5% of validators are honest, there is only a roughly 1 in 1 trillion chance that none of the 512 randomly selected nodes will be honest, and so as long as the network latency plus clock disparity is less than D/2 the above algorithm will work, correctly coordinating nodes on some single finalized value, even if multiple conflicting finalized values are presented because the fault tolerance of the threshold-dependent algorithm is broken.”
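The "1 in 1 trillion" figure can be checked with back-of-the-envelope arithmetic. Treating the 512 selections as independent draws, each honest with probability 0.05, the chance that none of them is honest is 0.95 to the power 512:

```python
# Probability that none of 512 randomly selected nodes is honest,
# assuming each is independently honest with probability 0.05.
p_honest = 0.05
n = 512

p_none_honest = (1 - p_honest) ** n
print(p_none_honest)  # about 3.9e-12
```

That works out to roughly 4 in a trillion, the same order of magnitude as the quoted figure (the exact number depends on how the sampling is done, e.g. with or without replacement).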
The focus here is of course on Casper, and this proposal appears to be a suggestion of its potential incorporation into Proof of Stake. If it does work in such a context, the ethereum blockchain might become a lot more secure.
Article updated with additional comments by Buterin.