How we Need to Rethink Data Protection for the 21st Century – Trustnodes


This is a guest editorial by Jason Cohen, CEO of Big Data Block. Needless to say, all views expressed below are solely those of the author.

As more and more of our daily activities move online, the need for greater data protection and security has never been more pressing. Few people, however, realise just how much of their personal data is passed on to corporate entities, which, no matter how seriously they take their data management responsibilities, will make mistakes. The sheer scale of the data silos that corporations manage makes them too big a prize for hackers to ignore, no matter how much time is spent shoring up security measures.

The only realistic solution to the enterprise data loss problem is data decentralization. I once heard a talk by a great CSO whose comment really stuck with me: hackers can fail 999 times out of 1,000 and still be very successful, but defenders have to succeed every single time or they have failed. The reality is that even the best and brightest companies get hacked, and not because they lack the skill to prevent it. It’s simply a numbers game: eventually systems grow so large, with so many entry points, that even the best teams can’t stop the one in a thousand. And once a hacker gains access to any data, they likely get all of it. Why? Because most data is stored in a single database or a single file repository. The model is flawed. The rise of blockchain technologies, and the decentralized model behind them, should be a guide to doing this better.

What does it mean to have your data decentralized? It means your processing and data reside across possibly many thousands of machines. If that sounds a little scary, remember what already happens in today’s supposedly “less scary” environment, where a single breach at Equifax can expose everyone who has any credit. Is that really a less scary model? Decentralization can be done in steps. As a start, consider moving data to nodes spread across the company and to many locations. This is still centralized to some degree, but at least an attacker must make far more hops to reach a large dataset. Imagine if the Equifax breach had exposed only a small portion of that data instead of all of it; the impact would have been far less severe.
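To make the stepwise idea concrete, here is a minimal sketch (not the author’s system; the key scheme, node count, and record names are illustrative assumptions) of spreading records across many nodes by hashing each record key, so that a breach of any one node exposes only a small fraction of the dataset:

```python
import hashlib

# Illustrative sketch: map each record to one of many nodes by hashing its
# key. A breach of a single node then exposes only that node's local shard.
NUM_NODES = 1000

def assign_node(record_key: str) -> int:
    """Deterministically map a record key to a node index."""
    digest = hashlib.sha256(record_key.encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

# Distribute 10,000 hypothetical records across the nodes.
nodes: dict[int, list[int]] = {}
for i in range(10_000):
    nodes.setdefault(assign_node(f"record-{i}"), []).append(i)

# If node 0 is compromised, only its shard leaks, not the full dataset.
exposed = len(nodes.get(0, []))
print(f"Node 0 holds {exposed} of 10000 records "
      f"({exposed / 10000:.2%} of the dataset)")
```

On average each node holds roughly a hundredth of a percent of the data here, which is the whole point: the attacker’s prize shrinks with the number of nodes.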

Building a completely decentralized and distributed system to store and process data has significant security benefits. If we accept that even the best intentions and the smartest people cannot reliably lock hackers out, then we must consider how best to limit exposure. If I am a hacker and I manage to access a company’s infrastructure or cloud instance, I have likely gained access to all the data in the exposed systems. If, instead, the data is distributed all over the world and that same hacker can only reach a single node, how much data could possibly be exposed? The other significant benefit of sharding the data is that it can be further scrambled, so access to a single node won’t expose even one complete record. At best, some components of an individual record will be exposed, but never the whole record, rendering the stolen data potentially useless.
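One standard way to achieve the “scrambling” described above (an illustrative technique, not necessarily the author’s: XOR-based secret sharing, with made-up function names and record contents) is to split each record into shares, one per node, such that every share is needed to reconstruct it and any single share reveals nothing:

```python
import os

def split_record(record: bytes, n_shares: int) -> list[bytes]:
    """Split a record into n_shares; all shares are required to rebuild it."""
    shares = [os.urandom(len(record)) for _ in range(n_shares - 1)]
    last = record
    for share in shares:  # XOR the record with every random share
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def rebuild_record(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original record."""
    record = bytes(len(shares[0]))
    for share in shares:
        record = bytes(a ^ b for a, b in zip(record, share))
    return record

record = b"ssn=123-45-6789;dob=1980-01-01"
shares = split_record(record, 5)   # one share per storage node
assert rebuild_record(shares) == record   # all five shares recover it
```

Each share on its own is indistinguishable from random bytes, so a hacker who compromises one node learns nothing about the record, matching the article’s claim that single-node access can be rendered useless.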

Another significant advantage of this approach is that threat isolation becomes much simpler. It’s easier to stop someone from accessing many systems if those systems aren’t centrally connected to one another, and this is why decentralization is appealing. If nodes are decentralized, then once an attack is recognized the affected node can be brought offline immediately and the threat is contained. In my personal experience, that isn’t the case when a breach happens at a highly centralized data center: working out where the access happened and which systems are compromised can take weeks or even months to remediate. It’s the difference between having to check one node and having to check thousands.
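As a rough sketch of that containment story (hypothetical class and method names, not a real product), quarantining a breached node in an independent-node network is a one-step operation that leaves the rest of the network serving:

```python
class Network:
    """Toy model of a network of independent storage nodes."""

    def __init__(self, node_ids):
        self.online = set(node_ids)
        self.quarantined: set[int] = set()

    def quarantine(self, node_id: int) -> None:
        """Contain a detected breach by isolating a single node."""
        self.online.discard(node_id)
        self.quarantined.add(node_id)

    def serving_capacity(self) -> float:
        """Fraction of nodes still online after containment."""
        total = len(self.online) + len(self.quarantined)
        return len(self.online) / total

net = Network(range(1000))
net.quarantine(42)   # attack detected on one node; isolate just that node
print(f"capacity after containment: {net.serving_capacity():.1%}")
```

Because no node is a gateway to the others, containment doesn’t require sweeping shared systems; the forensic scope is the one quarantined node.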

It’s time to stop pretending we can lock things down so tightly that nobody can get in. We have seen most major brands suffer successful hacks, so the best approach isn’t to assume it won’t happen but to assume it will, and to protect as much data as possible when it does. The only way to do that is to distribute the information so widely, ideally decentralized across many locations, that a single attack can cause only minimal damage. Only by reconceptualising the way data is stored, in a decentralised format, can we provide a more honest and realistic alternative for individuals’ data security.

