After two years of arguing, the bitcoin community appears to be nearing a decision point on its most contentious debate so far – how to increase capacity.
In many ways, it’s a good problem to have. There is so much demand that bitcoin can’t serve everyone, so it has to upgrade to increase capacity. Up to this point, everyone is in agreement.
The problem is the question of how to upgrade, because the answers on the table have long-term consequences: some foreseeable, but many of them known unknowns and unknown unknowns.
In the shorter term, choosing one or the other means choosing winners and losers, because each option aids certain business models while harming, to the point of nearly obliterating, others.
A compromise, therefore, though much talked about, isn’t realistic, because neither side rejects on-chain or off-chain scaling outright. The point of contention is which one should be dominant.
Long before bitcoin was announced, Wei Dai’s b-money sketch proposed an inefficient, decentralized base layer, complemented by a more efficient, but somewhat centralized, second layer.
This two-layer proposal was never implemented, but conceptually it appears to have become dominant, to the point of its being taken for granted that a digital currency would have an inefficient base layer complemented by a second layer on top.
This suggestion was repeated to Nakamoto a number of times by a number of individuals soon after he made the announcement in the autumn of 2008 and thereafter, with Nakamoto rejecting it each time.
The argument goes something like this. All nodes must verify, process, and store all data, which means one data point has to be verified, processed and stored by all 7,000 bitcoin nodes, every one of them performing the identical task.
As such, the cost of adding one data point is multiplied by 7,000, and if too much data is added, some nodes may be unable to keep up, reducing their numbers and thus reducing decentralization.
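The replication cost described above can be sketched with some back-of-the-envelope arithmetic; the ~250-byte average transaction size is an illustrative assumption, not a figure from the article:

```python
# Toy illustration of full-node replication cost (not bitcoin's actual
# accounting): every full node verifies and stores every byte, so the
# network-wide cost of one transaction scales with the node count.

NODES = 7_000          # approximate bitcoin full-node count cited above
TX_SIZE_BYTES = 250    # assumed size of a typical transaction

def network_cost_bytes(tx_size: int, nodes: int = NODES) -> int:
    """Total bytes stored across the whole network for one transaction."""
    return tx_size * nodes

cost = network_cost_bytes(TX_SIZE_BYTES)
print(f"One {TX_SIZE_BYTES}-byte transaction costs the network "
      f"{cost / 1_000_000:.2f} MB of replicated storage")
```

The point of contention is not this arithmetic itself, but what it implies about node counts as the per-transaction cost grows.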
There are some caveats here, which will be made later, but largely everyone agrees up to this point. The contention arises from the jump from the above statements to an attempt at predicting the future.
The argument continues: as the amount of data to be processed grows, ordinary individuals will no longer have the resources to run a node. A switch will therefore happen, from individuals verifying the network to businesses running nodes in data centres.
Those businesses can then change the rules as they please or as demanded by governments, with individuals having no say, thus turning bitcoin into an inefficient version of PayPal.
It’s a compelling argument on the surface, and no one can say for certain whether it is right or wrong, because something like bitcoin has simply never existed before.
The difficulty is compounded by the fact that we necessarily have to predict the future, something which is impossible. So different versions of that future are suggested, with segwit’s version summarized above.
The Segwit Solution
To prevent a future where what happened to mining happens to nodes too, segwit proposes an upgrade which makes increasing on-chain capacity more difficult and costly, so that the base chain can be kept as small as possible.
In combination, it adds a number of new capabilities that facilitate second layer networks like the Lightning Network, Sidechains, or other extensions to bitcoin’s base blockchain.
Many of these second layers have been, and are planned to be, developed by Blockstream. Segwit, for example, had been deployed in one of their projects, Liquid, a year or so before it was proposed for bitcoin.
Sidechains, of course, are Blockstream’s big ticket product. The Lightning Network has a number of implementations, but a prominent one is by Blockstream.
These second layers, or sideways layers, can be useful in and of themselves. The Lightning Network, for example, can facilitate sub-dollar transactions, while Sidechains are interesting because they can allow for interoperability with private blockchains.
However, the Lightning Network (LN) especially, but Sidechains too, would probably greatly benefit from limited on-chain capacity as that would mean services are provided through LN or Sidechains.
The idea here seems to be for the base layer to serve the second layers, rather than the second layers being an optional and complementary service. Bitcoin’s base blockchain, in the segwit vision, becomes inaccessible to individuals, with fees perhaps planned to rise to $100, $1,000 or maybe even $10,000 for one on-chain transaction.
Normal transactions are meant to happen on the second layer, with individuals not even needing to be aware of the base layer. Those second-layer transactions are then bundled together, at some point, into one on-chain transaction. So 10,000 transactions, which would take around 2MB, become one transaction of around 400 bytes.
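A quick back-of-the-envelope check of those figures; the ~200-byte per-transaction size is simply 2MB divided by 10,000, and the 400-byte settlement size is taken from the text above:

```python
# Rough arithmetic behind the bundling claim: 10,000 off-chain
# transactions settling as a single ~400-byte on-chain transaction.

TXS = 10_000
AVG_TX_BYTES = 200        # ~2 MB / 10,000 transactions, per the figures above
SETTLEMENT_BYTES = 400    # one bundled on-chain settlement transaction

on_chain_if_unbundled = TXS * AVG_TX_BYTES   # what the chain would carry otherwise
savings_factor = on_chain_if_unbundled / SETTLEMENT_BYTES

print(f"Unbundled: {on_chain_if_unbundled / 1_000_000:.1f} MB on-chain")
print(f"Bundled:   {SETTLEMENT_BYTES} bytes ({savings_factor:,.0f}x less data)")
```

In other words, under these assumptions the chain carries roughly 5,000 times less data when the transactions settle as one bundle.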
As for sidechains, the idea is to peg other bitcoin-based sidechain tokens 1:1 with btc. The sidechain is essentially an ordinary blockchain, allowing you to set whatever rules you like, with your bitcoin convertible 1:1 into the new sidechain token.
So you could have an ethereum-based sidechain with its own token, let’s say BIT, convertible 1:1 for btc, interchangeable to and fro as you please.
That’s the idea, which Rootstock is trying to implement, but in practice there are many ifs and buts and advantages and disadvantages which are more appropriate for another article.
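The bookkeeping behind the 1:1 peg can be sketched minimally; the class and method names below are hypothetical, and this is not Rootstock’s actual protocol, just the invariant a two-way peg has to maintain:

```python
# Minimal sketch of 1:1 two-way peg bookkeeping (hypothetical, not
# Rootstock's real design): btc locked on the main chain must always
# equal the sidechain tokens in circulation.

class TwoWayPeg:
    def __init__(self) -> None:
        self.locked_btc = 0.0        # btc held on the main chain
        self.sidechain_tokens = 0.0  # tokens minted on the sidechain

    def peg_in(self, amount: float) -> None:
        """Lock btc, mint the same amount of sidechain tokens."""
        self.locked_btc += amount
        self.sidechain_tokens += amount

    def peg_out(self, amount: float) -> None:
        """Burn sidechain tokens, release the same amount of btc."""
        if amount > self.sidechain_tokens:
            raise ValueError("cannot redeem more tokens than exist")
        self.sidechain_tokens -= amount
        self.locked_btc -= amount

peg = TwoWayPeg()
peg.peg_in(1.5)   # 1.5 btc locked, 1.5 tokens minted
peg.peg_out(0.5)  # 0.5 tokens burned, 0.5 btc released
assert peg.locked_btc == peg.sidechain_tokens == 1.0
```

The hard part in practice, which the sketch glosses over, is enforcing that invariant trustlessly across two chains, hence the many ifs and buts.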
Returning to the base blockchain, its capacity will of course be increased in the segwit version too, to around 1.8MB. It will most probably be increased further later on, but any such increase will likely depend on the needs of the second layers.
A final point to be made here concerns the nodes. Since little new data, and in any event less of it, will be added on-chain, the resources needed to run a node will also be lower, so an individual with an ordinary laptop can continue running one.
In the short to medium term, at least. Eventually, however small the chain is kept, there may well come a point when running a node requires up-front investment.
The Bigger Blocks Way
Who runs, or would run, these nodes is a very interesting question. Since nodes require no identification and can be faked, we don’t really know, so we will have to analyze incentives to form a good guess.
Miners, of course, have to run a node and each miner probably runs many, say 100. Bitcoin businesses that heavily depend on bitcoin payments most probably run many nodes too.
Here segwit supporters say businesses can delegate the node-running function to just one or a few providers, and that is true. In ethereum, for example, INFURA provides such a facility and is used by a number of dapps.
Academics and universities probably run nodes too. There is increased demand for blockchain university courses, so the number of nodes run by universities, especially by computer science departments, will probably increase further, as they will want to show their students the real thing and let them play around with it.
Coders working in the bitcoin industry probably run a node too, for testing, optimization and so on. It is likely part of their work environment, so most probably necessary.
Then there are individuals who may need or want a higher level of privacy and security, who probably run a node through Tor for whatever reason, including potentially because they are in countries where bitcoin has been declared illegal.
Bigger block supporters argue that if we increase the number of these – let’s call them constituencies – then the number of nodes would also increase because there would be more individuals and businesses incentivized to run a node.
That is probably true. The Ethereum spring, for example, which has seen a considerable increase in the number of its users, has also seen a significant increase in its node numbers.
Ethereum, however, like bitcoin, does not currently require any up-front investment to run a node. So we will have to extrapolate to reach some reasonable conclusions about how incentives may change as resource requirements increase in the future.
Let us say, for example, you need an up-front investment of $1,000 to run a node, while at the same time the number of users has increased 10x. Would node numbers increase or decrease?
Both forces are at play, of course; the question is which one dominates, and by how much. That question can’t have a right or wrong answer, because we simply don’t know, but big-block supporters argue that as the number of businesses increases, the number of nodes will also increase, regardless of the costs, because those businesses simply have to run a node.
The counter argument is that they can just delegate it. The counter-counter argument is that a company like Coinbase, for example, or BitPay, would be irresponsible to delegate it.
In any event, big blockers argue, even if these nodes end up in datacentres, that would be fine because they would be spread across jurisdictions, requiring global co-ordination for any change of rules.
Datacentre nodes suggest mainstream use and adoption of bitcoin, big blockers say, which means that multinationals, household brands, medium and small businesses, universities, even government departments, across the globe, would be running a node.
That is plenty decentralized, they argue: if 10,000 nodes are spread across the globe, no one can just change the rules, because doing so would require co-ordination of all these node operators.
As for verification, there would be researchers, academics, analytics and data companies, early adopters, coders and others who would become aware of any rule change, so any change would require the consent of bitcoin holders and miners.
So, they argue, on-chain capacity should simply be increased, with second layers remaining optional, because such second layers are untested, unproven, and likely to be centralized.
Moreover, since ordinary individuals would not be using the base blockchain in the second-layer scenario, they would have no incentive to run a node, big blockers say, so node numbers may fall if on-chain capacity does not keep up with demand.
Sharding or the Ethereum Way
The joker card here is sharding, a protocol upgrade that would likely be the biggest breakthrough in this space since bitcoin’s invention, or since Ethereum’s smart contracts.
While second layers bundle many transactions into one on-chain transaction, sharding bundles nodes, giving each group of nodes a subset of transactions, or part of a transaction, to verify.
To simplify: if we have 10,000 nodes, groups of 100 of them each verify a share of the transactions, say 10%, or part of a transaction; the results of all the groups are then combined, allowing the total network of 10,000 nodes to process far more – as good as unlimited, according to Vitalik Buterin.
This addresses an inefficiency in the way blockchain networks currently operate. All of bitcoin’s 7,000 nodes and ethereum’s 30,000 nodes are processing the exact same transactions.
However, that is far too much redundant work, as the same job can be done with as good a level of security and decentralization by 100 nodes. So we split up their tasks, sharding the network into bundles of nodes, each processing x transactions or x aspect of a transaction, then combine the results. This gives the network the same level of security and decentralization as if all 30,000 nodes were processing every aspect, while allowing for far greater capacity at far lower resource requirements.
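The division of labour described above can be sketched as follows; this is a deliberate simplification of the idea (round-robin assignment, no cross-shard communication or security model), not Ethereum’s actual design:

```python
# Toy sketch of sharding's division of labour: split transactions among
# groups of nodes, with each group verifying only its own shard, instead
# of every node in the network verifying every transaction.

def shard_and_verify(transactions, total_nodes=10_000, group_size=100):
    """Assign transactions round-robin to node groups and report the
    per-node workload versus full replication."""
    groups = total_nodes // group_size           # e.g. 100 groups of 100 nodes
    shards = [[] for _ in range(groups)]
    for i, tx in enumerate(transactions):
        shards[i % groups].append(tx)            # each group gets one shard

    # Each node verifies only its group's shard, not all transactions.
    per_node_work = max(len(s) for s in shards)
    full_replication_work = len(transactions)    # the status quo workload
    return per_node_work, full_replication_work

per_node, replicated = shard_and_verify(range(10_000))
print(f"Sharded: {per_node} txs per node vs {replicated} if fully replicated")
```

Under these toy numbers, each node does 1% of the work it would do under full replication, which is where the capacity gain comes from.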
This is an approach that everyone would support, but segwit supporters say sharding either won’t work, or that it is uncertain whether it can be achieved.
Ethereum developers think it will work and will be achieved; it is on their roadmap. Bigger-block supporters would take the same approach. If bitcoin goes the segwit way, however, it is unclear whether sharding would be deployed even if it were successfully implemented in a secure and decentralized manner, as segwit supporters appear to prefer bundling transactions rather than nodes.
As you can see, the debate about bitcoin’s scalability is about different visions and approaches, with much of it relying on predictions of an unpredictable future.
No one can say either approach is right or wrong, with the question being which approach is likely to be more right or more wrong.
Neither side thinks capacity increases will be exclusively on-chain or off-chain; both sides expect both approaches to be used. The debate is about whether capacity is predominantly increased on-chain or off-chain.
Segwit supporters think the base blockchain should operate under capacity so that fees can increase to support miners, fees paid by bundled transactions.
Bigger Block supporters say on-chain capacity should remain above demand, with second layers implemented and being optional, while security is to be provided by many transactions paying a small fee rather than few transactions paying a high fee.
These two visions are likely irreconcilable. You can’t have the base chain operate both above and under capacity. It’s an either/or question, which is probably why no compromise has been reached so far.
That means either one side gives in or the chain splits. As the debate has been going on for two years, with both sides maintaining significant support, the latter is more likely and bitcoin may this summer split into two coins, Bitcoin Core and Bitcoin Unlimited.
If it does split, both coins would probably be listed on all exchanges, with the market deciding their relative value as measured by price. Both sides are likely to claim bitcoin’s name and the btc ticker, with the event followed in the short term by a probable trading frenzy and volatility the likes of which the world has never seen.
This may last for a few weeks or months, then prices will probably stabilize, but the two communities are likely to remain at each other’s throats for some time as they try to persuade the market that their approach is best.
Which Coin is Likely to Become Dominant?
That’s a difficult question because there are many factors that come into play. A big one is r/bitcoin, a centrally controlled, highly censored, public forum which consistently has far more online users than r/btc.
Then, Bitcoin Core has many of the experienced developers, who have considerable plans for a number of new features and innovations, with the downside being that many of them are employed by a for-profit company and are probably rewarded in fiat rather than bitcoins.
Bitcoin Unlimited et al. has simplicity. On-chain capacity is simply increased, with nothing else changing but the ability to make more transactions at far lower end-user cost.
Their roadmap beyond that is not very clear. They would probably implement sharding, though likely after the Ethereum developers have figured it all out. They’d probably implement LN too, but their priority would probably be increasing on-chain efficiency and capacity.
Should Bitcoin Fork?
The two communities have very different visions, so they might not have a choice but to split. However, such a fork could not come at a worse time.
It might have been a good idea back in 2015 or maybe even in 2016, but a chaotic split at this stage might significantly precipitate bitcoin’s irrelevance as everyone runs off to ethereum or fiat.
Moreover, back in 2015, when ethereum wasn’t yet a thing, the market probably required a choice between the two visions. Now, the market arguably does have that choice, because Ethereum in many ways follows the bigger-blocks approach.
It has an algorithmically adjustable blocksize. It plans to implement a version of the Lightning Network, while also increasing on-chain scalability through Proof of Stake and Sharding.
So the market can choose between ethereum’s on-chain scalability or bitcoin’s second layers, with one approach eventually becoming dominant as time and experience shows which way is best.
Ethereum, however, isn’t exactly bitcoin, so bitcoiners may want their own on-chain scalability, but the cost of two coins without a united front may be far too dear.
Sufficiently dear, perhaps, for bitcoin to become a thing of the past as ethereum’s momentum increases further, with the currency and platform racing ahead. Bitcoin’s value would probably be split too, meaning ethereum would likely become dominant in market cap, something it is very close to achieving even as things stand.
So a split might not be a great idea at this point, but it may well be that bitcoin has no choice, as a flag-day soft fork known as UASF is to be activated on August 1st, potentially with the support of some miners.
Considering around 40% of the mining hashrate signals for Bitcoin Unlimited, UASF would likely lead to a chaotic chain split, so perhaps a fork is unavoidable regardless of whether it should happen.