White Knight Reveals Compound Bug, Funds Safu, But Can Natively Digital Insurance Provide Solutions?


One of ethereum’s rising dapps, which lets you natively borrow assets, had an unexploited bug of an unclear nature, according to its founder, Robert Leshner, who stated:

“A member of the community alerted our team to a potential defect in the Compound protocol. After analyzing the report, we have confirmed that (1) there is a defect that was undetected in the audit process, (2) a potential exploit exists, (3) the potential exploit has never been used, (4) it’s necessary to temporarily disable new borrowing, to eliminate risk to users.”

The smart contract had two audits, one by Trail of Bits and another by Certora, according to Leshner. They apparently didn’t catch this bug, which is seemingly implied to be in the interest rate algorithm.

Funds are safe, they say, and can be withdrawn as one pleases, with a relatively small amount of 28,000 eth contained in the smart contract.

They are now considering how to fix the bug and, once it is fixed, how to move forward. “Options include a patch, or deploying another version of the protocol,” according to Leshner.

Who revealed the bug is unknown, and it isn’t currently clear whether they want to reveal themselves. Understandably they’re in a somewhat difficult position: on the one hand there is much acclaim in being able to say they found a bug that two auditors couldn’t; on the other, if the bug is ever exploited, they might unfairly become a “suspect.”

That’s their decision and theirs alone as far as we’re concerned. Leshner said they’ll receive a bounty and further stated that “it’s a sign that the Ethereum community is rad and has a lot of good karma.”

Which is indeed something we have to rely on: that most people are good by default, especially smart people, as they can obviously be rewarded far more by using their skills honestly.

However, there might be ways of addressing this natural tension: a project is new, perhaps interesting, and something people want to play with live with real eth, while at the same time its bugs should be expected and almost guaranteed, rather than a surprise or rarity.

As time passes and more eyes look at the code, perhaps you reach a stage where you can say unhackable code does exist, but to get there it is usually a question not of whether there will be a bug, but of when it will be found.

To address that question, some have suggested decentralized, natively digital insurance. Obviously such an insurance smart contract can itself be hacked, but if we assume time has passed and it has proven itself, then you can imagine a contract into which projects deposit, by default, some amount of eth as insurance.

The amount put down by each project might be small, but if we combine all those small amounts, the total might be sufficient to cover quite a lot.
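To put very rough numbers on that, here is a minimal back-of-envelope sketch in Python; the project count, premium size and claim size are purely our own illustrative assumptions, not figures from any actual proposal:

```python
# Back-of-envelope sketch of a pooled, natively digital insurance fund.
# Every figure here is a hypothetical assumption for illustration only.

projects = 1_000        # dapps paying into the shared pool (assumed)
premium_eth = 50        # small deposit per project, in eth (assumed)
pool_eth = projects * premium_eth

# A single claim roughly the size of Compound's current holdings,
# about 28,000 eth at the time of writing.
claim_eth = 28_000

coverage = pool_eth / claim_eth
print(f"Pool size: {pool_eth} eth")
print(f"Coverage of one {claim_eth} eth claim: {coverage:.0%}")
# With these assumptions the pool absorbs one such claim with room to
# spare, though a few simultaneous hacks would quickly exhaust it.
```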

Now there was a project, seemingly since discontinued, that did much the same thing, but for bounties. The project, Bountymax, was created at a hackathon in 2016, and one of its team members, Makoto Inoue, described it back then as:

“Bountymax is a ‘smart contract bounty smart contracts’ platform. Traditionally bounty sites are non standardised, and reporting process is manual and can cause disputes (who claimed first and whether the claim is legit).

We provide a platform and sandbox environment where smart contract owners can post their smart contract (target contract) as well as another contract which checks whether the contract can be hackable or not (invariant contract) with some rewards of Ether [if it is hackable].

Any security researches (or white hackers) can submit their exploit contract to prove that it can hack the contract. Once the hack is proved, the security researcher can get reward automatically.”

The pooling of eth is the easy part, but coding the smart contract in such a fashion that it knows, through a proof, whether another smart contract can be hacked or has been hacked (and thus knows to pay automatically) sounds a bit like rocket science.
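Still, purely as an illustration of the flow described in the quote above, and with the caveat that every name below is our own invention rather than Bountymax’s actual code, the idea might be sketched like this:

```python
# Hypothetical sketch of a Bountymax-style automatic bounty payout.
# All names and structures here are invented for illustration; real
# on-chain invariant checking is far harder than this toy version.

def holds_invariant(state):
    # Example invariant a contract owner might post:
    # the contract never owes more than it holds.
    return state["liabilities"] <= state["balance"]

def judge_exploit(target_state, exploit):
    """Run a submitted exploit against a sandboxed copy of the target
    contract's state and pay the bounty if the invariant breaks."""
    sandbox = dict(target_state)   # throwaway copy, not the live contract
    exploit(sandbox)               # the researcher's code runs on the copy
    if not holds_invariant(sandbox):
        return "hack proved: pay bounty to researcher"
    return "invariant held: no payout"

# Toy usage: an exploit that drains the balance while liabilities remain.
state = {"balance": 100, "liabilities": 100}
drain = lambda s: s.update(balance=0)
print(judge_exploit(state, drain))
```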

That may be why it was discontinued, or maybe the timing was simply not right. In 2016 there were, in effect, no dapps. Now some very useful ones are beginning to prove themselves.

Another safeguarding layer might be what we would like to think happened here, even though it probably isn’t the case.

We would like to think that when we find an interesting project, like Compound, and it has a public real-time forum and we have the time, our job at that point is to engage in a “first review.”

Without revealing ourselves, we ask difficult questions to the best of our ability, the aim being the answers of course, but also the reaction.

If we’re satisfied to a fairly reasonable standard of “hmm, this is interesting and could maybe work,” or rather if in our subjective view they pass first review (and obviously sometimes we can be wrong, but let’s say in an ideal world), then we introduce the project in a feature article where we describe it as objectively as we can, along with its pros and cons as we see them, and then we might follow up.

At that point we’d like to think this then goes to “second review,” at least for those projects that appear to be very interesting, whereby expert coders (the very few of them around) go through every dot and comma of the code to see how they can break it.

The aim there is to break it, and we very much want them to break it in every possible codable way, but obviously we don’t want them to break it for real (as in exploit the bug). Then we’d want them to go to the project manager and say: this is how anon can break it, gib mune (a bounty), where the project is of a financial nature and should have, or should be able to afford, bounties.

That creates a sort of voluntary, but not quite voluntary, immune system. We say not quite voluntary because obviously we have an incentive to do what we do, and we have a paywall to be able to afford it. While voluntary of course because we love what we do and we’re obviously doing it of our own free will.

Likewise, for the second review, finding a bug that two auditors couldn’t find is quite something. If the finder is unknown, their reputation shoots up instantly, and with that come many perks that money might not be able to buy. If they are already known, it rises further. And then there is the bounty itself as an incentive, to allow them to afford the work.

In this way we kind of create a new ecosystem together. With mistakes along the way of course, but as long as we learn from those mistakes, we can and should wear them proudly.

Copyrights Trustnodes.com

 

Comment: “Hi! Completely agree with you on needing financial compensation on the bugs – but we’re not making it automated. Our approach is to provide a market-based mechanism for people to assess whether there has been a hack rather than rely on automated tools.”