The removal of long-term history through EIP-4444 is coming to ethereum, according to a speech at EthCC by ethereum’s co-founder Vitalik Buterin.
“If we want scalability and if you want decentralization, the ability to run nodes easily, then you just can’t require nodes to store this constant ever-growing amount of space,” he said.
This makes it the first time that pruning, a long-suggested partial solution to scalability, has been semi-officially made part of ethereum’s roadmap.
From the above roadmap, purging history appears to be a medium- to long-term endeavor, perhaps five years, as Verkle trees may have to come first for state expiry.
“Verkle trees are a powerful upgrade to Merkle proofs that allow for much smaller proof sizes,” Buterin said last year.
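To get a feel for why proof sizes matter, here is a back-of-the-envelope sketch comparing a branch proof in ethereum’s current hexary Merkle-Patricia trie against a Verkle proof. The exact constants (trie depth, the Verkle commitment overhead and per-leaf cost) are rough illustrative assumptions, not protocol specifications:

```python
# Illustrative comparison of proof sizes: hexary Merkle-Patricia trie
# vs a Verkle tree. Constants are assumptions for illustration only.

HASH_SIZE = 32  # bytes per hash

def merkle_proof_size(depth=8, branching=16):
    """A Merkle branch proof must include the sibling hashes at every
    level: (branching - 1) hashes per level, times the trie depth."""
    return depth * (branching - 1) * HASH_SIZE

def verkle_proof_size(leaves=1, per_leaf=32, constant_overhead=576):
    """Verkle proofs replace sibling hashes with a single roughly
    constant-size polynomial commitment opening, plus a small per-leaf
    cost (both figures here are rough assumptions)."""
    return constant_overhead + leaves * per_leaf

print(merkle_proof_size())  # 3840 bytes for a single key
print(verkle_proof_size())  # 608 bytes, growing only slowly per key
```

The point of the sketch is the shape of the curve, not the exact numbers: Merkle proofs pay the full sibling cost at every level, while Verkle proofs amortize to a near-constant size.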
This allows for greater scalability, which purging might compound because node syncing will become much faster: a new node starts at a Weak Subjectivity Checkpoint rather than at the genesis block.
Just one year of data will be served through this Ethereum Improvement Proposal (EIP), rather than all data all the way back to 2015.
“By pruning the history, this proposal reduces the disk requirements for users. Pruning history also allows clients to remove code that processes historical blocks. This means that execution clients don’t need to maintain code paths that deal with each upgrade’s compounding changes,” the EIP says.
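The core of the idea can be sketched as a rolling retention window: a pruned node keeps only blocks younger than roughly one year and declines to serve anything older. The window size below assumes 12-second slots; the actual cutoff mechanics in clients may differ:

```python
# A minimal sketch of the EIP-4444 idea: nodes keep only a rolling
# window of recent history (roughly one year) and do not serve older
# blocks. Window size assumes ~12-second slots; an illustration only.

SECONDS_PER_SLOT = 12
BLOCKS_PER_YEAR = 365 * 24 * 3600 // SECONDS_PER_SLOT  # 2,628,000

def is_within_history_window(block_number: int, head_number: int,
                             window: int = BLOCKS_PER_YEAR) -> bool:
    """Return True if the block is recent enough that a pruned node
    would still store and serve it."""
    return head_number - block_number < window

# A hypothetical node whose chain head is block 18,000,000:
head = 18_000_000
print(is_within_history_window(17_000_000, head))  # True: ~1M blocks old
print(is_within_history_window(10_000_000, head))  # False: pruned; fetch
                                                   # from an archive instead
```

Anything outside the window would have to come from an archive service rather than the pruned node itself.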
No major blockchain has employed pruning yet, but sooner or later all blockchains will have to run on pruned history if new nodes are to sync, as the state of an ever-growing ledger is not sustainable over decades, let alone centuries.
The archive can then be served by The Graph, a somewhat decentralized solution, Buterin said, and dapps can also parse older history from block explorers.
This in itself can provide some scalability because, in setting the gas limit, stakers can target storage requirements, say 1TB at current capacity and perhaps a petabyte in future years, among other criteria like giga fibre for bandwidth.
That is unlike currently, where the gas limit, the blocksize, has to account not just for current storage capacity but also for total history, or how much storage requirements will grow in future years due to current usage.
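With a pruning window in place, stakers could work backwards from a disk budget: divide the budget by the number of blocks retained and you get the average block size, and hence roughly the gas limit, the network can afford. The 1TB budget and 12-second slot time below are assumptions for illustration:

```python
# Back-of-the-envelope: derive an affordable average block size from a
# storage budget and a one-year pruning window. Budget and slot time
# are illustrative assumptions, not protocol parameters.

TB = 10**12
BLOCKS_PER_YEAR = 365 * 24 * 3600 // 12  # ~2.63M blocks at 12s slots

def max_block_bytes(disk_budget_bytes: int, window_blocks: int) -> int:
    """Average block size that keeps one window of history in budget."""
    return disk_budget_bytes // window_blocks

budget_per_block = max_block_bytes(1 * TB, BLOCKS_PER_YEAR)
print(budget_per_block)  # roughly 380 KB per block fits 1TB per year
```

The calculation no longer depends on how old the chain is, which is exactly the decoupling the article describes: storage targeting replaces accounting for total history.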
In the half-hour-long speech, Buterin also mentioned that zkEVMs can be adopted at the base layer once they fully mature.
At the end of the above roadmap, ethereum may handle as much as 100,000 transactions a second, Buterin said, as opposed to the current 13 transactions per second.
That’s through a three-legged approach or, in less technical terms, by throwing the whole kitchen sink at the problem, as anything that can be utilized is planned to be utilized.
That is data sharding, pruning, and second layers, which are now greatly assisted by the new invention of zk tech.
The latter are coming first. We’ll get zkSync’s zkEVM in just three months. Starknet will launch theirs too. Polygon has it on testnet, with their zkEVM probably out in the first half of 2023.
At that point, and with a longer-term perspective, the conceptual problem of second layers may become clearer: they are new networks and contained ecosystems, requiring a ‘proof of bother’ to participate further after entering the eth ecosystem.
Hence the suggested solution that their tech can simply be adopted at the base layer, which would still allow second layers to act as amplifiers because they can keep compressing the now-compressed blockchain even further.
That makes the roadmap simple from a high-level view, yet coding all this and having it run on the live network may take some time, as data sharding for example has four stages, while Verkle trees require three forks.
But the journey can be half the fun and, more importantly, it’s all moving in the right direction when no one else seems to quite be moving.
In addition, with zk-tech now, it may well be that the blockchain space breaks new frontiers in how database parallelization can be done in the twenties.