Bitcoin Unlimited Merges Graphene Compression to Address Scalability – Trustnodes

After what looks like five months of work, Bitcoin Unlimited has this Tuesday merged a block compression method that tackles bandwidth bottlenecks to address scalability.

Graphene, proposed in November 2017 by Gavin Andresen, former Bitcoin Core lead developer, can take a 1MB block and reduce it to around 2.6KB.

The way it does so is fairly complex, but as we explained at the time, instead of sending the full block to nodes, you send just instructions on how to rebuild the block from data the node mostly already has.
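To get a feel for why instructions beat raw data, here is a rough back-of-envelope comparison. The transaction count and average transaction size are illustrative assumptions on our part, not measured figures; the 2.6KB number is the one quoted for Graphene.

```python
# Rough back-of-envelope comparison; the transaction count and average
# transaction size are illustrative assumptions, not measured figures.
block_bytes = 1_000_000          # a full 1 MB block
avg_tx_bytes = 400               # assumed average transaction size
n_txs = block_bytes // avg_tx_bytes

# An Xthin/compact-blocks-style scheme still sends a short hash per
# transaction; assume 8 bytes each.
xthin_like_bytes = n_txs * 8

graphene_bytes = 2_600           # the figure quoted for Graphene

print(n_txs, xthin_like_bytes, graphene_bytes)  # 2500 20000 2600
```

Even a list of short per-transaction hashes is an order of magnitude larger than what Graphene sends, which is why it moves beyond hash lists entirely.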

Peter Rizun, who describes himself as Chief Scientist at Bitcoin Unlimited, calls it “crazy math” of invertible-Bloom look-up tables (IBLTs). For the more technically inclined, he explains Graphene as follows:

“In Graphene, when a node requests a block from a peer, the node sends the size of its mempool (rather than a Bloom filter of its mempool as was the case with Xthin).

The peer then sends the node a custom Bloom filter of the block’s contents, and an IBLT of the transaction hashes in the block.

The receiving node next filters its mempool with the Bloom filter to find the transactions that are likely to be in the block, and then uses the IBLT to find exactly which transactions are in the block.

I said ‘crazy math’ in the previous paragraph because the receiving node is able to determine all of the transaction IDs in the block even if he does not already have all of the transactions.

It’s almost like the IBLT contains ‘stem cells’ that can transform into missing TXID X for Node A, but transform into a different missing TXID for Node B.”
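The flow Rizun describes can be sketched in Python as a toy model. Everything here is illustrative: the class names, the 8-byte transaction ids, and the filter/table sizes are our own simplifications, not Bitcoin Unlimited's actual implementation, which uses real Bloom filter and IBLT parameter tuning.

```python
import hashlib

def h(data, seed):
    # Deterministic 64-bit hash used for both structures (demo only).
    return int.from_bytes(hashlib.sha256(seed.to_bytes(4, "big") + data).digest()[:8], "big")

class Bloom:
    """Minimal Bloom filter over byte strings."""
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0
    def add(self, item):
        for s in range(self.k):
            self.bits |= 1 << (h(item, s) % self.m)
    def maybe_contains(self, item):
        return all((self.bits >> (h(item, s) % self.m)) & 1 for s in range(self.k))

class IBLT:
    """Minimal invertible Bloom lookup table over 8-byte transaction ids."""
    def __init__(self, m=40, k=3):
        self.m, self.k = m, k
        self.cells = [[0, 0, 0] for _ in range(m)]  # [count, idSum, hashSum]
    def _update(self, item, sign):
        key, chk = int.from_bytes(item, "big"), h(item, 99)
        for s in range(self.k):
            cell = self.cells[h(item, s) % self.m]
            cell[0] += sign
            cell[1] ^= key
            cell[2] ^= chk
    def insert(self, item): self._update(item, +1)
    def delete(self, item): self._update(item, -1)
    def peel(self):
        # Recover the symmetric difference between inserted and deleted sets:
        # ids only in the block vs. ids only in our candidate set.
        missing, extra = set(), set()
        progress = True
        while progress:
            progress = False
            for cell in self.cells:
                if cell[0] in (1, -1):
                    item = cell[1].to_bytes(8, "big")
                    if h(item, 99) == cell[2]:  # checksum confirms a "pure" cell
                        (missing if cell[0] == 1 else extra).add(item)
                        self._update(item, -cell[0])
                        progress = True
        return missing, extra

# Sender's block has 6 txids; the receiver's mempool lacks the last one
# and holds a few unrelated transactions.
block = [hashlib.sha256(bytes([i])).digest()[:8] for i in range(6)]
mempool = block[:-1] + [hashlib.sha256(b"unrelated" + bytes([i])).digest()[:8] for i in range(4)]

bloom, iblt = Bloom(), IBLT()
for t in block:
    bloom.add(t)
    iblt.insert(t)

# Receiver: filter the mempool down to likely block members, subtract those
# candidates from the IBLT, then peel to learn the exact block contents.
candidates = [t for t in mempool if bloom.maybe_contains(t)]
for t in candidates:
    iblt.delete(t)
missing, extra = iblt.peel()
recovered = (set(candidates) - extra) | missing
print(recovered == set(block))
```

Note the "stem cell" effect: the sender built one IBLT, yet peeling it against this receiver's candidate set yields exactly the txid this receiver is missing; a different receiver subtracting a different mempool would recover a different missing txid from the same bytes.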

The aim is to address one of the bottlenecks to scalability. There are quite a few, with storage being one. Storage, however, is becoming very cheap, with terabyte disks now available for a tenner. Yet SSD storage remains somewhat expensive, and the time a new node takes to synchronize grows with the amount of data to be synchronized.

Bandwidth is another aspect, especially where miners are concerned. A block needs to propagate quickly, otherwise another miner might find a block at the same time and win the “race.”

To avoid that risk of losing the race, miners might want to become bigger and bigger, an incentive they would have in any event just to capture the reward.

The way this bandwidth problem is addressed in Bitcoin Core is through a centralized network for miners called The Fast Internet Bitcoin Relay Engine (FIBRE) which is described as “a protocol and implementation designed to relay blocks within a network of nodes with almost no delay beyond the speed of light through fiber.”

Graphene does much the same, and perhaps a bit more, but in a decentralized manner, which should go some way toward addressing bandwidth bottlenecks.

A real solution to storage and many other aspects would be fraud proofs, which some current Bitcoin Core devs claim are impossible, but ethereum developers have suggested they will be working on them, and Andresen has long wanted to work on them as well.

The real solution to most of these bottlenecks is of course sharding, because 10,000 nodes validating the same thing is somewhat redundant when you can have 10 sets of 1,000 nodes for 10x scalability.

That’s a complex endeavor which ethereum is tackling and Bitcoin Cash clients may too, while Bitcoin Core, as far as we can see, isn’t really proposing any new efficiency developments of scale to address scalability beyond the Lightning Network.


