“This is a very interesting topic. If a solution was found, a much better, easier, more convenient implementation of Bitcoin would be possible.”
Those were the words that bitcoin’s inventor Satoshi Nakamoto wrote about zero-knowledge (ZK) technology, before adding:
“It’s hard to think of how to apply zero-knowledge-proofs in this case.
We’re trying to prove the absence of something, which seems to require knowing about all and checking that the something isn’t included.”
In 2010, when Nakamoto made the above statements, zero-knowledge proofs were still largely theoretical, with no practical implementations to speak of.
Just a few years later, however, in 2016, Zcash implemented what Nakamoto found “hard to think of”: proving that you own a coin and can transfer it, without revealing the transfer or your address on the public blockchain.
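The core idea of proving knowledge of a secret without revealing it can be sketched with a toy Schnorr identification protocol. This is only an illustration of the zero-knowledge principle, not Zcash’s actual zk-SNARK construction, and the parameters here are illustrative, not secure:

```python
# Toy Schnorr identification: the prover convinces the verifier it knows
# a secret x with y = g^x mod p, without ever sending x.
# Illustrative sketch only -- not Zcash's construction, not secure parameters.
import random

p = 2**127 - 1                    # a Mersenne prime, toy modulus
g = 3                             # base element

x = random.randrange(2, p - 1)    # prover's secret
y = pow(g, x, p)                  # public value

# Commit: prover picks random r and sends t = g^r
r = random.randrange(2, p - 1)
t = pow(g, r, p)

# Challenge: verifier sends a random c
c = random.randrange(2, p - 1)

# Response: prover sends s = r + c*x (mod p-1)
s = (r + c * x) % (p - 1)

# Verify: g^s == t * y^c holds, yet x was never revealed
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier learns nothing about x beyond the fact that the prover knows it, which is the property Zcash generalises to whole transactions.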
Yet an even bigger breakthrough came in 2019, when Trustnodes was the first to cover the launch of new testnets that were using zk technology to scale capacity.
Ethereum’s co-founder Vitalik Buterin said at the time that ‘we didn’t even know you could do this,’ a breakthrough of sorts that changed ethereum’s long-term scaling plans from sharding to second layers.
Within months, zkSync is set to launch the first live-net zkEVM, though from the timeline we’d expect Q1 and maybe Q2 for a full, proper launch.
The excitement is contained in these pages because we’re used to being disappointed, but Jordi Baylina gave us a rare moment of enthusiastic and perhaps historic applause at EthCC in Paris two weeks ago when he revealed that Polygon has just published the open source code of its entire zkEVM, which it had been developing in stealth.
The entire speech is worth watching, as this is the first time we have seen an in-depth, though still somewhat high-level, account of just what it takes to build a zkEVM.
We don’t know what happened to Nakamoto, currently one of the richest men on earth, or whether he is still following these developments.
So it’s anyone’s guess what he’d make of it, but the speed of development in this space of hard computer science, going from nothing to the live thing in just a decade, must be impressive to any Nakamoto.
Not least because we’re soon about to see effectively the entire blockchain rewritten, though mainly in form rather than in structure, and the blockchains we’ll use in the decades to come may well be these new ones that are first going out as second layers.
As it’s breaking new ground, we’d expect them to be closer to prototypes than gigafiber initially, but we don’t often see such broadband-like promise in our space being testnet-hardened and just months away.
Baylina himself deserves some credit for making it all feel more real. Polygon has been an overlooked project, but his presentation indicates they have significant skill and competence.
They too will probably launch sometime next year, when we’ll be fully ready to be disappointed, but after seven years of trying to tackle scalability, it may well be that its time has come.
And ahead of that breakthrough, it is only apt to pay homage to the man or group that started it all, with our Nakamoto stating:
“Originally, a coin can be just a chain of signatures. With a timestamp service, the old ones could be dropped eventually before there’s too much backtrace fan-out, or coins could be kept individually or in denominations. It’s the need to check for the absence of double-spends that requires global knowledge of all transactions.
The challenge is, how do you prove that no other spends exist? It seems a node must know about all transactions to be able to verify that. If it only knows the hash of the in/outpoints, it can’t check the signatures to see if an outpoint has been spent before. Do you have any ideas on this?”
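Nakamoto’s point, that a coin can be a chain of signatures but rejecting double-spends requires global knowledge of all transactions, can be sketched in a few lines. All names here are illustrative stand-ins, not Bitcoin’s actual data structures, and a hash stands in for a real signature:

```python
# Hypothetical sketch: a coin as a chain of "signatures" (hashes here),
# with double-spends rejected only because the node tracks *every*
# spent outpoint -- the global knowledge Nakamoto describes.
import hashlib

def sign(data: str) -> str:
    """Hash stand-in for a signature over a transfer."""
    return hashlib.sha256(data.encode()).hexdigest()

class Node:
    def __init__(self):
        self.spent = set()                 # global set of spent outpoints

    def transfer(self, outpoint: str, new_owner: str):
        # Absence of a prior spend is "proved" by checking the full set;
        # there is no way to verify it from the coin alone.
        if outpoint in self.spent:
            return None                    # double-spend detected
        self.spent.add(outpoint)
        return sign(outpoint + new_owner)  # new link in the chain

node = Node()
coin = sign("genesis" + "alice")
to_bob = node.transfer(coin, "bob")        # first spend, accepted
to_carol = node.transfer(coin, "carol")    # same outpoint again, rejected
assert to_bob is not None
assert to_carol is None
```

The `spent` set is exactly what makes the scheme non-private: every node must see every transaction to maintain it, which is the obstacle zk proofs later removed.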
He is talking about hiding the transaction so the public can’t see it, but clearly some Nakamoto, or maybe the same one, saw that it wouldn’t be too different from condensing those transactions.
As to how, Baylina has plenty of ideas, as do others who have been working on this for years now.
For Baylina, the how is the same C++ as for Nakamoto, though his description comes across more like that of a circuit board than a software program.
They check for double-spends by requiring proof that you have the asset, with verification condensed through cryptography.
Clearly all this was far too new for Nakamoto. At the time a practical zk algorithm didn’t even exist, let alone one with a more concrete shape.
But the frontier has advanced, and we’re right in the middle of not knowing whether this is a historic advance or not quite of that scale.
However, if applause says anything, this one echoed some of the geekiest applause, the kind we sometimes replay.
It may well be that the excitement had such intensity just because it was all open sourced when no other zkEVM yet is, though zkSync plans to open source its code once it goes live.
Or maybe those geeks know something that we can only smell from the party in some corner of Paris.