The end of the digital rainbow may have dystopias in store for society
May 20, 2016
End of the digital rainbow: the potential dangers of a datageddon and the implications of information bubbles for society's future.
You don't need to look it up in the dictionary: "datageddon" isn't an entry (yet). In fact, the term invites more laughter than fear, having emerged in a classic quote from an HBO series. But worse than a ‘bubble’ in the world of technology that can bring great economic losses, a digital catastrophe can deliver your home address into the hands of criminal gangs - and no, we're not talking about the government.
"It will be nothing less than a catastrophe: lack of information, rationing of information, black markets of information or, more succinctly, a datageddon." The quote is from Gavin Belson, the CEO of a technological giant (but already seen as decadent) of the series "Silicon Valley". Belson is not the leader or visionary he believes himself to be, but the definition of "data armageddon" or datageddon is uncomfortably pertinent. And not fun. Datageddon is a very possible dystopian scenario.
First, let's talk about the bubble itself. The term is much better known than datageddon because economic bubbles have recurred over the last hundred years. What is a bubble? The definition from Wikipedia is close enough: "A speculative bubble forms in a market when the only thing that sustains the progression of the market is the entry of new participants, in a natural pyramid scheme. Since the number of possible participants is finite, all bubbles have a predictable end, although it is generally difficult to establish when it will come". In summary: the growth of a certain market that lacks the fundamentals needed to sustain it. That is: while the market grows, a lot of money is made, but when the bubble bursts, a gigantic amount of money ceases to exist - as much as 90% of it.
Despite being incredibly destructive and, to some extent, predictable, bubbles are part of the capitalist cycle, functioning as a correction toward reality. It is estimated that the 2008 bubble made about US$1 trillion disappear (about R$3.5 trillion, ⅓ of the Brazilian GDP).
Venture capitalists (VCs) are the promoters of bubbles in digital capitalism. They invest in promising companies and reap the rewards when they get it right in one out of ten or more attempts. They bankroll bubbles, yet rarely lose their investment. Just as in a pyramid, a frisson is created around a company or niche, pumped up by its calculated potential rather than its revenue. That is: by investing in a start-up, they not only buy part of the business but also inject into the market the optimism that will make the valuation of their investment grow.
It is a significant facet of the schizophrenia of capitalism: the mere possibility that a company could one day have surreal revenue makes it attractive enough to receive hundreds of millions of dollars in investment. Exponential growth of the user base and a neat business plan have already made many VCs billionaires and left naive investors with nothing. Companies whose market valuation exceeds US$1 billion are called unicorns because they are, technically, as rare as the mythical animal - this sensational WSJ infographic shows that even that is no longer true. Some of these unicorns climb to stratospheric valuations with zero revenue, and there are more of them now than ever before.
Few analysts dispute that there is a bubble; what remains uncertain is its impact (read: how much money will evaporate when it bursts) and when it will burst.
But set the economic issue aside - not because it's unimportant, but because in a datageddon the impact is of a different, unprecedented kind. There is a variant of the bubble: the information bubble.
If there is a massive collapse and a domino effect in Silicon Valley, unlike the first internet bubble shortly after the turn of the millennium, the companies that would fall hold surreal amounts of information. Google holds about 10 exabytes (10 billion gigabytes) of data, and Facebook approximately 3 billion items, cataloged with names, emails, addresses, degrees of relationship, friendships, and sexual preferences, at a level of detail impossible until very recently.
It was in this environment that the Center for Long-Term Cybersecurity (CLTC) at the University of California, Berkeley developed a series of five possible scenarios for a future as close as 2020. The CLTC's probable futures are gloomy possibilities worthy of science fiction that can, naturally, overlap. To a greater or lesser extent, these scenarios involve all the actors: the public, governments, companies, hackers, digital criminals, and so on.
Among the scenarios imagined by the CLTC are:
- a purposeful and manipulative integration of the "Internet of Things" that would keep users hostage;
- digitization reaching a point where the volume of data collected is so large that it yields insights into purchasing patterns that consumers themselves don't yet know, because they lack their own data organized in a way that would reveal them;
- a world in which the failure of authorities to establish functional legislation turns the digital world into a "Wild West", where individuals and organizations create their own courts and do "justice" with their own hands;
- a world in which the manipulation of emotions (for commercial or political purposes, of course) reaches an Orwellian point (Facebook has even admitted to an experiment along these lines and apologized for running it without notifying users);
- a scenario in which the bursting of the economic bubble tied to technology companies leads to huge, spectacularly rich databases being sold - legally - to other companies.
The last scenario was the subject of a really cool article in The Atlantic. Basically: after a company's bankruptcy, its databases are sold off as bankruptcy assets to buyers ranging from the unscrupulous to the openly criminal. In that case, all your preferences, ties of friendship and family, and history could end up in the hands of buyers who want to push a sale "aggressively" or force you to pay a ransom to avoid some sanction (think of the Ashley Madison scandal taken to the millionth power).
You have already - legally - handed most of this data over to these companies. If they sell it, they aren't even committing a crime.
The scenarios drawn in the study don't depend on a financial collapse. A collapse would certainly aggravate the situation, but the "bubble" here goes beyond the business sphere; these scenarios can unfold without any "disaster". An unbelievable amount of information is dammed up in various places: technology companies, banks, social networks, stores, governments, and any entity that uses data to guide or facilitate its mission. These "personal data dams" create new opportunities - which the market certainly seizes - but they have also given birth to risks that society as a whole does not perceive, because they did not exist before.
Want to do an exercise? Here it is: in the Rwandan genocide of the 1990s, about 10% of the population was killed in 100 days (roughly 700 thousand people). The tragedy escalated when one of the Hutu militias took over a government body holding a registry of the population, with names, relationships, addresses, and the ethnic group each person belonged to.
Another case: a major US retailer sends its customers offers for products they are interested in. One customer wrote to the company complaining that he was receiving offers for baby and maternity products when his children were already teenagers. The customer didn't know it, but his eldest daughter was pregnant.
One more: there is already an app that analyzes all the data tied to a certain person (cookies, emails, usernames, and so on) and builds a profile that helps its users approach that person more efficiently. For example, when making a professional contact, the profile warns that the person you are writing to dislikes blunt sentences, prefers long reasoning, doesn't appreciate irony, and rarely closes deals on the spot.
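As an illustration only - the real app's methods aren't public, and every name, source, and trait below is hypothetical - the kind of aggregation described above can be sketched as a simple tally of traits observed across scattered data sources, where the "approach tips" are just the most frequently observed traits:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Aggregated view of one person, built from scattered data points."""
    name: str
    signals: Counter = field(default_factory=Counter)

    def add_signal(self, trait: str) -> None:
        # Each data point (an email, a cookie trail, a public post)
        # contributes one observed trait to the tally.
        self.signals[trait] += 1

    def advice(self) -> list[str]:
        # The "approach tips" are simply the three most frequently
        # observed traits for this person.
        return [trait for trait, _count in self.signals.most_common(3)]

# Hypothetical scattered observations about one person
observations = [
    ("email", "prefers long reasoning"),
    ("forum post", "dislikes irony"),
    ("email", "prefers long reasoning"),
    ("chat log", "rarely closes deals immediately"),
]

p = Profile("A. Contact")
for _source, trait in observations:
    p.add_signal(trait)

print(p.advice())
```

The unsettling part, of course, is not the tallying itself but how effortlessly data scattered across unrelated services can be merged into one behavioral dossier.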
All the cases mentioned have already occurred. With the growing accumulation of data, they will become more and more frequent.
The CLTC exercise serves to remind us of two things. First, that the overwhelming majority of people have no idea how exposed they are on the Internet. Things they wouldn't reveal even to spouses or family members sit on some tech giant's server, in a data center in a faraway country.
The second warning concerns how society wants to deal with this. Unnecessary government interference harms society, but it's impossible to rely on laws written two decades ago to define what one is or isn't allowed to do with a personal database.
If in the 90s a credit card operator's customer base was worth a fortune and exposed a considerable slice of people's privacy, the files of companies like Facebook, Google, and Amazon go so far as to allow predicting actions that people themselves cannot imagine. Yes, State action is usually a problem, but in this case it is the worst possible alternative, except for all the others, as Winston Churchill would say. Technology is neither liberating nor enslaving. The human being can be either, and, statistically, the latter is far more common than the former.
PS: Eight years later, the CLTC scenarios have come true - all of them.