Bitcoin is splitting into two coins, Bitcoin Cash (BCC) and Bitcoin Core, with the latter retaining the name of bitcoin and the BTC ticker, at least for now. But what exactly is Bitcoin Cash and why are they splitting?
To truly understand the reason we have to go back as far as the 90s, when many expected the internet would bring digital cash. There were many attempts at it, but they all failed, to the point where it was taken as a given that decentralized digital cash was an impossible dream.
That conclusion was not reached, however, before b-money, very much just a concept which never went anywhere, established what can be called a common assumption: that if there is to be a decentralized currency, there needs to be an inefficient base layer, with efficient layers on top to be used for ordinary payments.
No formal analysis was ever provided for this assumption, neither then nor since. The efforts and discussions, at the time and often even now, were informal and chat-like, conducted through mailing lists or real-time chat channels.
As such, it is probable Nakamoto was unaware of it, because it is likely that he, or the team under that pseudonym (from now on just he, for ease of reference), came from a formal academic background.
This is suggested by the fact that, from known history, he apparently was unaware of Szabo, or many of the others, including potentially Wei Dai of b-money, and was seemingly introduced to them by Hal Finney, known for his work on PGP.
He, therefore, was probably unaware of this common assumption or did not share it, with the first relevant discussion on the matter taking place around the day Nakamoto announced the bitcoin whitepaper.
A pseudonymous cryptographer, going by the nickname of James A. Donald, argued that bitcoin could not scale because it:
“Requires each peer to have most past transactions, or most past transactions that occurred recently. If hundreds of millions of people are doing transactions, that is a lot of bandwidth – each must know all, or a substantial part thereof.”
Nakamoto rejected his argument, stating that “At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware,” with everyone connecting to it through Simplified Payment Verification wallets as detailed at section 8 of the whitepaper.
Donald wasn’t convinced and went on to argue his point in a number of posts, while also introducing something resembling what is today referred to as the Lightning Network. He wrote:
“Let us call a bitcoin bank a bink. The bitcoins stand in the same relation to account money as gold stood in the days of the gold standard. The binks, not trusting each other to be liquid when liquidity is most needed, settle out any net discrepancies with each other by moving bit coins around once every hundred thousand seconds or so, so bitcoins do not change owners that often. Most transactions cancel out at the account level. The binks demand bitcoins of each other only because they don’t want to hold account money for too long. So a relatively small amount of bitcoins infrequently transacted can support a somewhat larger amount of account money frequently transacted.”
It is an argument that was repeated many times, with Nakamoto rejecting it each time, to the point where he lost his cool and gave us one of his most famous lines:
“If you don’t believe me or don’t get it, I don’t have time to try to convince you, sorry.”
As such, everyone expected bitcoin to scale on-chain along the lines roughly laid out by Nakamoto who implemented a dual soft-limit and hard-limit.
The soft-limit was initially 250KB, which miners raised with no problem, despite the objections of some like Peter Todd and Luke-Jr, to 500KB, 750KB and then 1MB.
The 1MB hard-limit requires a hard-fork to be lifted, something some Bitcoin Core developers, like Todd, Luke-Jr and Gregory Maxwell, had since around 2013 argued should not occur, with their points being a re-hash of Donald’s arguments.
That is what led to the more than two-year-long scalability debate, as the two visions are clearly irreconcilable. You can’t have both a settlement system with everyone transacting through second layers and free on-chain capacity for anyone to use, unless the market freely chooses second layers, but that’s not the Bitcoin Core plan.
Their plan is to change bitcoin’s blockchain from a payment system to a SWIFT-like settlement system where blocks are usually, if not always, full, with only bitbanks or hubs transacting on bitcoin’s blockchain at a cost of $1,000 or maybe $10,000 in fees per transaction.
Something which is strongly opposed by big blockers who are of the view that everyone should be able to transact on-chain in a peer-to-peer manner rather than through intermediary hubs.
The debate over the past two years has been an attempt by both sides to convince miners and businesses that their approach is best and that the limit should therefore be lifted or kept. It ended in a stalemate until the “compromise” of segwit2x, which somehow tries to be both a full-blocks and a non-full-blocks system, which is nonsensical.
As such, many see the segwit2x compromise as a decision to follow the full blocks and settlement approach for two main reasons.
Firstly, miners did not activate segwit at the same time as increasing the 1MB hard-limit. Instead, they activated segwit only, with the hard-limit to be lifted in three months.
But that exact same compromise was reached last year, leading to the merging of segwit only, with the hardfork code never delivered. History, likewise, is repeating itself, with Maxwell stating the lifting of the 1MB limit will happen “at roughly the same time as hell freezing over.”
But the main reason why big blockers see it as a complete capitulation to the settlement vision is that segwit will be activated with the one parameter that has been the main, if not sole, point of contention: the 75% discount.
To slightly increase capacity, segwit uses a trick whereby signature data is counted at only a quarter of its actual size, so up to 4MB of data can count as 1MB under the rules, although of course it still adds data load and bandwidth requirements as normal.
That’s where the name comes from: segregated witness, segwit. Witnesses being signatures, simply put, the cryptographic proofs you provide to show ownership of your bitcoins.
From an on-chain scalability perspective, the problem here is considerable, because someone could attack the network by using signature-heavy transactions, thus creating blocks approaching 4MB, while for ordinary use, since signatures are around half of the data, actual on-chain capacity increases to only around 1.7MB.
At an 8MB base limit, that translates to an attack vector of 32MB. At 16MB, an attacker can create 64MB blocks. So to increase on-chain capacity after segwit is enforced, the network needs to consider whether it can handle the max-blocksize attack.
Which translates to an artificial limitation of on-chain capacity: rather than considering whether the network can handle 16MB or 32MB blocks, we’d need to consider whether it can handle 64MB, with 32MB or more not used at all except as an attack vector.
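The arithmetic above can be sketched in a few lines of Python. The weight formula is segwit’s own (per BIP141: four weight units per non-witness byte, one per witness byte, capped at 4,000,000 units); the 55% witness share is an illustrative assumption for a typical transaction mix, not a measured figure:

```python
# Segwit block weight (BIP141): 4 * non-witness bytes + 1 * witness bytes,
# capped at 4,000,000 weight units (the old 1 MB limit times four).
WEIGHT_LIMIT = 4_000_000

def max_block_bytes(witness_share: float) -> float:
    """Largest raw block (in bytes) when a fraction `witness_share` of
    all transaction data is witness (signature) data.
    total = base + witness, weight = 4*base + witness <= WEIGHT_LIMIT
    => total <= WEIGHT_LIMIT / (4 - 3 * witness_share)."""
    return WEIGHT_LIMIT / (4 - 3 * witness_share)

# Ordinary use: signatures around half the data -> blocks of roughly 1.7 MB,
# i.e. only ~1.7x the capacity of a 1 MB block despite the 4 MB ceiling.
print(round(max_block_bytes(0.55) / 1e6, 2))  # -> 1.7

# Attack case: near-pure witness data pushes a block to the full 4 MB.
print(round(max_block_bytes(1.0) / 1e6, 2))   # -> 4.0

# Scaling the base limit scales the attack vector by the same factor of four:
for base_mb in (8, 16):
    print(f"{base_mb} MB base limit -> {base_mb * 4} MB attack block")
```

The same formula also shows the comparison made below: without the discount a 4MB block simply carries four times the transactions of a 1MB block, while with it ordinary blocks stall near 1.7MB.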
From a small blocker’s perspective, the reason is obvious. They’d like on-chain capacity to be as low as possible because currently every node has to process the exact same amount of data; thus, if the data load increases, some may be unable to run a node due to lack of resources.
From a big blocker’s perspective, the discount has little technical justification and is mainly political, meant to ensure the settlement vision is followed now and for decades to come.
That’s because without the discount, a 4MB block would have four times the transaction capacity of a 1MB block, while with the segwit discount, a 4MB block has only around 1.7x the transaction capacity of a 1MB block.
Its implementation, therefore, is seen by them as a rejection of Nakamoto’s roadmap, with the network instead following the roadmap of developers who never thought this thing could work and who failed in all their attempts to create something like it.
So big blockers are rejecting segwit, chain-split hard-forking to Bitcoin Cash, which continues the soft-limit and hard-limit approach laid down by Nakamoto and followed by the bitcoin network for much of its existence.
That is, miners can now choose a 2MB soft-limit with an 8MB hard-limit, configurable through a GUI interface in the BitcoinABC client, as well as in compatible versions of Bitcoin Unlimited and Bitcoin Classic, which are soon to launch.
Miners can then increase the soft-limit to 3MB or 4MB and the hard-limit to 10-12MB, depending on demand for on-chain transactions, thus turning the matter into a non-issue, as it was for much of bitcoin’s existence.
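A minimal sketch of the dual-limit idea in Python. The function names and the 2MB/8MB values are taken from the article’s description, not from any client’s actual configuration options:

```python
# Toy model of the dual soft/hard limit scheme (illustrative only).
HARD_LIMIT = 8_000_000  # consensus rule: nodes reject blocks above this
SOFT_LIMIT = 2_000_000  # miner policy: don't *build* blocks above this

def accept_block(block_bytes: int) -> bool:
    # Every node enforces the hard limit; changing it requires a hard fork.
    return block_bytes <= HARD_LIMIT

def build_block(pending_tx_sizes: list) -> list:
    # Miners fill blocks only up to their own, freely raisable, soft limit.
    block, used = [], 0
    for size in pending_tx_sizes:
        if used + size > SOFT_LIMIT:
            break
        block.append(size)
        used += size
    return block
```

This is why the soft limit could historically be raised from 250KB to 1MB with no fork at all, while lifting the hard limit needs the whole network to upgrade.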
As regards nodes, with sharding they would not all have to process the same data. Instead of settling thousands of transactions into one blockchain transaction, sharding allows, say, 100 or 1,000 nodes to each process a share of the transactions, or parts of a transaction, with all the parts then combined into a network-wide whole, increasing capacity considerably, to the point of being as good as unlimited, while still allowing anyone to run a node on a modern laptop.
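The sharding idea can be illustrated with a toy sketch. To be clear, no deployed bitcoin client works this way, and assigning transactions to shards by hashing their ids is an assumption made purely for illustration:

```python
import hashlib

# Toy sharding sketch: partition transactions across shards, let each
# node validate only its own slice, then recombine into the whole.
def shard_of(txid: str, n_shards: int) -> int:
    # Deterministically assign each transaction to one shard.
    digest = hashlib.sha256(txid.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_shards

def process(transactions: list, n_shards: int) -> list:
    shards = [set() for _ in range(n_shards)]
    for tx in transactions:
        shards[shard_of(tx, n_shards)].add(tx)
    return shards

txs = [f"tx{i}" for i in range(1000)]
shards = process(txs, 100)

# The shards recombine into the network-wide whole...
assert set().union(*shards) == set(txs)
# ...while each node handles roughly 1/100th of the load.
print(max(len(s) for s in shards))
```

The per-node load falls roughly in proportion to the number of shards, which is the claimed resolution of the scalability/decentralization trade-off.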
That would solve the trade-off between scalability and decentralization. But small blockers don’t think it would work, despite not having tried it, while big blockers don’t think the settlement system would work.
As such, with the two visions irreconcilable, they are splitting so each can have the bitcoin they want: big blockers focused on bringing Nakamoto’s roadmap to life, small blockers on creating something very new, yet quite similar to the current financial system.
The market will then determine at each point which is best: peer-to-peer digital cash, or a payment network with an unusable blockchain for ordinary bitcoiners and with intermediary bitbanks.
Interestingly, both sides seem to welcome the split, which may mean the in-fighting will subside now that they have to compete in an open manner for the uncensorable judgment of the free market, the ultimate 51%.