Bitcoin Cash Handles the Highest Level of Transactions of Any Public Blockchain – Trustnodes

Bitcoin Cash has managed to overtake ethereum’s record of 1.4 million transactions a day by mining 2.2 million during a stress test of the network that showed its limits and its capabilities.

Starting with capabilities, it looks like bitcoin cash is able to handle 10MB blocks somewhat comfortably. That translates to about 50,000 transactions a block, some 300,000 transactions an hour, or circa 7 million a day.

The network did, however, manage to mine a 21MB block, the biggest ever by any public blockchain, handling 100,000 transactions. That would give the network a capacity of about 14 million txs a day, 10x the current rate for eth and some 50x the current capacity of bitcoin.
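The throughput arithmetic above can be sketched in a few lines. The ~200-byte average transaction size is an assumption, inferred from the 50,000-transactions-per-10MB figure quoted here:

```python
# Rough throughput arithmetic from the stress test figures.
# Assumption: ~200 bytes per transaction, implied by 50,000 txs per 10MB block.
TX_BYTES = 200
BLOCKS_PER_DAY = 6 * 24  # one block roughly every ten minutes

def daily_capacity(block_mb: float) -> int:
    """Transactions per day at a given sustained block size."""
    tx_per_block = block_mb * 1_000_000 // TX_BYTES
    return int(tx_per_block * BLOCKS_PER_DAY)

print(daily_capacity(10))  # 7,200,000 -- "circa 7 million a day"
print(daily_capacity(21))  # 15,120,000 -- close to the ~14.4M implied by the 100,000-tx block
```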

The biggest block yet, Sep 2018.

That 21MB seems to be the current limit, with the mempool seemingly unable to go much above 22MB. Moreover only one such block was mined, making it somewhat of an outlier, while there were quite a few blocks at 10MB.

A small number of nodes, about 200, did face some problems, primarily those running the BitcoinABC client, while Bitcoin Unlimited (BU) nodes managed better. One apparent node operator says:

“ABC nodes that had few connected peers were doing just fine with minimal CPU usage. ABC nodes that had many (50+) peers connected were performing awfully, essentially spinning 100% on one CPU core and becoming unresponsive, even with oodles of CPU power and RAM.

I didn’t have any analytics running, but I’m guessing there was some single-threaded message processing stuff going on – most likely related to inv message spam, which would naturally be quadratic to the number of peers and transactions being sent. This is however just a qualified guess.”
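The quadratic growth the operator guesses at is easy to picture: with naive relay, every new transaction is announced via an `inv` message to every connected peer, so announcement work grows with peers times transactions. A toy model, not the actual client code:

```python
# Toy model of naive inv relay: every new transaction is announced to every
# connected peer, so announcement work grows with peers * transactions.
def inv_announcements(peers: int, new_txs: int) -> int:
    return peers * new_txs

burst = 100_000  # hypothetical burst of transactions during the stress test
print(inv_announcements(5, burst))   # 500,000 announcements for a 5-peer node
print(inv_announcements(50, burst))  # 5,000,000 for a 50-peer node -- 10x the work
```

This is consistent with the operator's observation that lightly connected nodes coped fine while 50-plus-peer nodes pinned a CPU core.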

The 21MB block itself processed without problems, but some BCH visualizers went down, as it seems they were getting their data from Blockchain, the company, which throttled bandwidth soon after the stress test started.

Yet overall there was no real problem insofar as the network kept running, with transactions confirming and some 2,000 nodes staying online.

That allowed it to process an average of 25.5 transactions a second, with miners collecting $3,749 in sub-penny fees. That's for the entire 24 hours, and it is about half of one bch block reward, currently worth circa $8,000.

Sub-penny fees are however arguably too low. One penny per transaction would still be pretty much free, and at 2.2 million transactions it would have brought in about $22,000 within 24 hours.

Capacity at 10MB stands at some 7 million transactions a day, while at 21MB it is around 14 million. If we simplify for brevity and say 22 million transactions a day at one penny each, that would give $220,000, about a fifth of the current daily block reward worth circa $1.1 million.

That suggests one-penny fees would cover half the block reward at blocks of roughly 80MB, and the full reward at roughly 160MB, but even at current capacity they would offset much of the halving that is probably to occur for BCH sometime next year, or during early 2020.

That is assuming fees would rise to one penny if demand picks up. If instead they remain at the current average of about a sixth of a penny, blocks would have to be circa 470MB for half the block reward and about 940MB for the full one.
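The fee arithmetic can be sketched as follows. The per-MB throughput figure is derived from the 21MB block that carried 100,000 transactions, and the dollar values are the rough estimates quoted in this article:

```python
# Fee/reward arithmetic from the stress-test figures quoted above.
TXS_MINED = 2_200_000          # transactions mined in 24 hours
FEES_USD = 3_749               # total fees collected over those 24 hours
DAILY_REWARD_USD = 1_100_000   # approximate value of the daily block rewards
# Throughput implied by the 21MB block that carried 100,000 transactions:
TX_PER_MB_PER_DAY = 100_000 * 144 / 21   # ~686,000 tx/day per MB of block size

avg_fee = FEES_USD / TXS_MINED           # ~$0.0017, about a sixth of a penny

def block_mb_needed(target_usd: float, fee_usd: float) -> float:
    """Sustained block size needed for daily fees to hit a revenue target."""
    return target_usd / fee_usd / TX_PER_MB_PER_DAY

print(f"{block_mb_needed(DAILY_REWARD_USD / 2, 0.01):.0f}MB")     # ~80MB: half the reward at one penny
print(f"{block_mb_needed(DAILY_REWARD_USD / 2, avg_fee):.0f}MB")  # ~471MB at the current average fee
```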

Bitcoin Cash stress test stats, September 2018.

However, the stress test has revealed some improvements that can be made, especially to Graphene, if even the current practical capacity limit of 21MB is to be maintained efficiently.

Graphene's compression numbers were shown to be pretty good, a lot better than xThin's, but there were more decoding failures than expected. A dev working on Graphene, Brian Levine of UMass Amherst, says:

“We are happy with the compression numbers, but that’s more decode failures than we would like for sure. We are working on it.”

BitcoinABC devs have proposed canonical transaction ordering (CTOR), a method that allows even greater compression and, in addition, further parallelization of block processing. Jonathan Toomim, a miner-dev, has now come out in favor of the proposal, publicly stating:

“Yesterday we observed that on average, 37 of the 43 kB per block in Graphene messages is order information that would be eliminated by CTOR. Now, 37 kB is not a lot at all, but it’s still 86%, and as we scale it eventually might grow to the point where it matters. I think this is the strongest reason for CTOR. Whether that CTOR is lexical or topological is a separate question.

Concerns have been raised that lexical orders would make block validation more difficult, most notably by Tom Zander and Awemany. I implemented a version of the outputs-then-inputs algorithm for topological block orders, and so far have found the serial version is only 0.5% slower than the standard topological algorithm.

My code has a much greater chance for parallelization, and I’m working on getting that done soon. Once parallelized, it’s plausible that the parallel version may be 350% faster on a quad core machine than the standard algorithm, but this depends on what Amdahl has to say on the matter. I think this shows the fear-mongering about the lexical ordering to be unjustified, and suggests that there will be some tangible benefits soon.”
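The order-information saving Toomim describes can be illustrated: with a canonical rule, both peers derive the same transaction order independently, so Graphene-style messages need not carry it. A toy sketch, with made-up, shortened txids:

```python
# Toy illustration of canonical (lexical) transaction ordering.
# Hypothetical, shortened txids for illustration only.
block_txids = {"9c2e", "03ab", "f1d4", "5b77"}

# Lexical CTOR: both peers sort the same set identically -- no order info needed.
canonical = sorted(block_txids)
print(canonical)  # ['03ab', '5b77', '9c2e', 'f1d4']

# Without a canonical rule, the miner's arbitrary order must be shipped
# alongside the set, e.g. as indices into the receiver's sorted list:
miner_order = ["f1d4", "03ab", "9c2e", "5b77"]
order_info = [canonical.index(txid) for txid in miner_order]
print(order_info)  # [3, 0, 2, 1] -- the bytes CTOR would eliminate
```

In a real block these indices number in the tens of thousands, which is where the 37 of 43 kB of order information in Graphene messages comes from.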

That may effectively end any technical debate on canonical ordering now that the above benchmarks have been provided, with much more work left to do to increase practical capacity even further.

Work that may assist other chains, as bitcoin cash shows the current technical limits, at least under test conditions. If ethereum, for example, were running at the equivalent of 10MB per 10 minutes, rather than the current 1MB, it would have been able to handle 17 million transactions a day, plenty to buy time for sharding.

Likewise, if Bitcoin Core had increased capacity to 8MB, it would have managed about five million transactions a day, which may have been sufficient to provide the many newcomers of November and December with an excellent experience of on-chain transacting.

Yet for real global scale, transaction sharding is probably needed, something which BCH might pursue while seemingly having plenty of capacity to accommodate many use cases in the meantime.


