We trust in data, right? Figures don’t lie, right? Einstein’s general relativity, Newtonian physics – all that classic stuff – accurately describe the world we live in, right? The data of life accumulates at a staggering rate, with trillions of gigabytes piling up all the time. According to Forbes magazine, 90% of all the data in the world has been generated in just the last two years. Everything from your tax number to the trajectory of meteors is data, churned out every second of every day. Make that every nanosecond.
But what about errors?
It stands to reason that in all that data smog there will be a great deal of information that is corrupted, incorrect, or embedded in badly written code. Software is written by humans, so of course there will be errors. It’s more than that, though, because now our machines are in on the act too. AI allows machines to create new algorithms, and if there is a fault in the original programming – and there will be – then that fault can be perpetuated indefinitely. With the exponential growth of data, and therefore of data errors, it becomes ever more difficult to track where any problem actually resides. If an autonomous car crashes, is it because of a mistake in the original programming by a human, or because an AI has taken over the task and reached perfectly logical but perfectly wrong conclusions about the rules of the road?
Right now the cryptosphere is still largely ruled by the brains and hearts of entrepreneurs. People create projects because they genuinely love the idea they want to bring to the world, and they use their intelligence to make the best of their offering. Sooner or later, however, the machines will take over, because that’s just the way it goes. Look at how high-frequency trading has become the norm in some areas of banking. The rules of engagement were originally written by humans but have long since been superseded by rules written by machines, which operate at speeds and volumes far beyond the abilities of even the most lightning-fast programmers.
How will we follow the data trail in the future?
And so the same will inevitably come to apply to the cryptosphere. As the data smog around crypto grows ever denser, it gets harder and harder to trace who wrote which piece of code and where that code now resides. And no one even cares about following the data trail, because hey, this is the new world order and we’re not interested in the old-world name-and-blame stuff, right?
Crypto is already notoriously volatile, but if and when the machines take over and start making investment punts in fractions of a second, the market’s current steep rises and falls could seem like a quiet picnic in the park. Unpicking the data trail then will be quite a task.