Shammah Chancellor, a developer at BitcoinABC, one of the main Bitcoin Cash clients, detailed on Monday plans to implement vertical scaling in BCH by taking advantage of additional CPU cores for parallel processing. He says:
“In order to make use of additional cores effectively, the data which they use must be process localized. This process of organizing data for local processing is called sharding.
However, Bitcoin currently employs data structures for computing the Merkle root of the block header which prevent data from being localized. By changing the ordering of the Merkle root computation via a canonicalization, the data may be sharded.”
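At a high level, the idea can be sketched in Python. This is only an illustration, not BitcoinABC's code: with txids sorted lexicographically, each shard owns a contiguous range of transactions and can build its Merkle-style sub-root without touching the other shards' data. The two-level combination at the end is a stand-in for however the full tree would actually be assembled:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def sub_root(hashes):
    """Merkle-style sub-root over one shard's leaves,
    duplicating the last hash on odd levels as Bitcoin does."""
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def shard_of(txid: bytes, n_shards: int = 4) -> int:
    """With lexicographically sorted txids, the leading byte
    maps each transaction to a contiguous shard."""
    return txid[0] * n_shards // 256

# 16 fake txids, canonically (lexicographically) ordered.
txids = sorted(sha256d(bytes([i])) for i in range(16))

# Each shard is a contiguous slice of the ordered list, so its
# sub-root needs no data from the other shards; a top-level
# combine then stands in for assembling the full tree.
shards = [[t for t in txids if shard_of(t) == s] for s in range(4)]
top = sub_root([sub_root(s) for s in shards if s])
```

Without the canonical ordering, a transaction could sit anywhere in the block, so no shard could compute its part of the tree from local data alone.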
Currently, a block can be processed only on one server or laptop. Amaury Sechet, lead developer of BitcoinABC, states that some things can already be parallelized, but not the verification of the ordering. He says:
“We want to be able to process a block using several CPU, or even several machines. So when block gets bigger, you need more machines but it doesn’t take longer.
Current ordering constraint imposes a serial step in the processing. So it’s not possible. Removing current ordering constraint and replacing them by another one solve that problem.”
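As a rough illustration of that goal, here is a toy sketch, not the client's implementation, of fanning a block's transactions out across several workers; `check_tx` is a hypothetical stand-in for real validation, which would check scripts, inputs, and signatures:

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

def check_tx(raw: bytes) -> bool:
    """Hypothetical stand-in for transaction validation."""
    return len(hashlib.sha256(raw).digest()) == 32

def check_block(txs, workers=4):
    """Validate a block's transactions across several workers.
    Once no serial ordering constraint links the transactions,
    each worker's slice is independent of the others."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(check_tx, txs))

txs = [bytes([i]) * 64 for i in range(100)]
ok = check_block(txs)
```

The point of the quote is that the current required ordering forces a serial step into exactly this kind of loop, which removing and replacing the constraint would eliminate.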
Canonicalization is a proposed re-ordering of transactions in Bitcoin Cash that has been subject to some debate recently with Tom Zander of Bitcoin Classic stating “you can implement this strategy just fine without changing the transaction ordering in the Bitcoin protocol.”
Sechet had a fairly strong response to that, stating: “He is wrong or is twisting the truth. He never present his algorithm, so I suspect he is just playing stupid games.”
Tensions are high between different clients in Bitcoin Cash as arguments continue over pretty much every single ABC proposal for this November's hardfork, from canonical ordering, to an op_code, to a 100-byte minimum transaction size.
We were curious to know, however, whether there are further plans for sharding, perhaps including chain sharding, but Sechet did not have any more time.
The current BCH sharding proposal is simple sharding, or, as some call it, private sharding: it in effect allows you to add more processing units, giving you more “power.”
Far more difficult is sharding the data itself, or the nodes, with only Ethereum, as far as we are aware, working on what can be called full sharding.
Scalability, however, is a multifaceted problem with many ways to tackle it. Every gain helps. Asked when we can expect this to be implemented, Sechet says: “I don’t know, but never if datastructures used do not allow for it.”
The article says that sorting the transactions into ranges over the canonical ordering would allow certain data to be calculated from only partial knowledge, much as knowing 7 + x = 8 determines x.
That means, at a high level and in very simplified terms, that one machine can calculate some things and another can calculate others, with the results then combined.
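A minimal sketch of that idea, using made-up per-range work (summing fees) rather than real validation: each contiguous range of the ordered block is processed independently, the partial results combine into the block-level answer, and knowing the total plus all but one part determines the missing part, which is the “7 + x = 8” intuition:

```python
def process_range(fees):
    """Hypothetical per-shard work: total the fees in one
    contiguous range of the ordered block."""
    return sum(fees)

def process_block(fees, n_shards=4):
    """Split the ordered fee list into contiguous ranges,
    process each range independently, then combine the
    partial results into the block-level answer."""
    size = -(-len(fees) // n_shards)  # ceiling division
    parts = [fees[i:i + size] for i in range(0, len(fees), size)]
    return sum(process_range(p) for p in parts)

fees = [120, 250, 80, 310, 45, 500, 60, 90]
total = process_block(fees)
```

Because the ranges are contiguous slices of one canonical ordering, no worker needs to see another worker's transactions to do its share of the work.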
Sechet says this was “on ABC’s roadmap since day 1, I presented work on the topic as early as 2016.” Whether it will be fully implemented, however, remains to be seen.