ACTIVATION



  Postulate number 8: Technology allows for constant monitoring of individual activities.

  In general, computer programs act based on a set of instructions that are given to them. They simply apply what is expected of them. They do not show any form of intelligence. They just repeat. They seem devoid of any creativity or learning capability.

  But are they really?

  They certainly were. But not anymore. Or at least not for long. There are still a few glitches to overcome. But the technology is getting there. One experiment, run by a large high-tech company, consisted of giving its own artificial intelligence (AI) access to a social network and letting it interact with human users. It took less than a day for the AI to be ‘hijacked’ and to start spreading racist comments to the online community. Some users mounted a massive attack on the AI and fed it with anti-feminist, racist, and neo-Nazi comments. The firm decided to shut down the account shortly after realizing what had just happened. This looked like a complete failure.

  At first sight.

  What was supposed to be a casual conversational experiment between humans and an AI turned into an absolute public relations disaster.

  But those who thought it was a complete failure were wrong. The most important conclusion was that the program could actually learn from others.

  The big challenge then becomes ensuring that the input is screened before it reaches the AI, so that every inappropriate comment is discarded and cannot contaminate it.

  Learning also comes in the form of artistic creativity.

  Assisted creativity, that is.

  A good illustration of that happened when a robot was built and fed with all of Rembrandt’s works. The robot analyzed every single aspect of the artist’s style: the amount and thickness of paint he used in each painting, and the types of portraits he painted. It eventually came up with an ‘average’ of all these characteristics. It resulted in a white male, aged thirty to forty, with facial hair, dressed in dark colors, wearing a white collar and a hat. Now that the robot knows what to paint, it studies the distances between the eyes, the nose, and the mouth in every existing painting that matches the characteristics of the portrait it is to produce.

  The result is astounding. The new painting, computer-generated some 350 years after the artist passed away, succeeds in reproducing his style. To the point where an average painting enthusiast takes it for a newly discovered work by the artist himself.

  And what about this robot, built by an institute of technology, that is able to listen and, most importantly, improvise and play along with human musicians? It took years of research and development to get to that point. But the technology is there now, and the next iteration of a similar robot will take a lot less time to develop.

  These robots have one thing in common, and it is called machine learning: artificial intelligence.

  Postulate number 9: Under proper supervision, robots are able to learn and to show signs of creativity… in other words, to make decisions.

  These nine signs, these nine trends, emerging or not, are all slowly and insidiously converging to create a new world.

  A new world order.

  People do not notice the dramatic change that is about to happen. They do not feel the threat, as everything they see is branded as “making the world a better place”.

  People cannot connect all the dots because they simply do not see the big picture.

  A few additional years will be required for the picture to finally reveal itself to the world.

  CHAPTER 2

  Five years later…

  With major progress in the tech sector, a new race begins: to create an artificial intelligence able to replicate the human brain.

  For years, companies have attempted to develop their own version of what is called deep learning, machine learning, and artificial intelligence.

  The first to bring a stable solution to market will be guaranteed a bright future. A solution that not only offers communication capabilities across all devices, but also decision-making capabilities based on the data collected by these connected devices. And finally, actions based on these decisions. All of this in a seamless and secure way.

  But it quickly becomes obvious that fierce competition will not promote the creation of an industry standard.

  Indeed, as most companies want to bring their own solution to their customers’ homes, the market does not seem to really take off. The interest is there, but it only translates into limited traction.

  The security question is also of paramount importance. Whenever a new product is released, a group of activists, hackers for the most part, goes on a mission to take over the platform, the application, the intelligence, and turns it against its users.

  The objective is twofold.

  One, to take up the challenge a new technology offers to those eager to measure their own skills against what is sold as the most advanced and secure form of artificial intelligence.

  And two, to send a message to the consumers that giving away too much of their lives to the technology can be dangerous.

  The multiplication of offerings also leaves the consumer a bit perplexed and lost.

  Which solution to choose? Which one is going to stick around long enough that consumers still have service support a year or two down the road?

  Who would want to pay for an AI-based assistant, feed it as much data as possible about their habits, and give it access to their bank accounts, only to discover that the device is not as safe as claimed? Or that the company that built it has been purchased by a competitor and the product line discontinued?

  Not too many.

  Fortunately, markets are efficient and tend to self-regulate.

  The only way to bring trust to the customer comes from a truce among competitors as opposed to an all-out commercial war. The major players in the industry decide to form an alliance under which all research and development capabilities are consolidated. The alliance creates a standard that all participating companies incorporate in their technology.

  Unfortunately, differing approaches to what the end product should look like maintain some form of competition.

  On one side of the spectrum, some companies see the future in a fully decentralized artificial intelligence. An AI that learns from the various environments where it is installed.

  And on the other side, the proponents of a single, centralized artificial intelligence. An AI managed by the consortium of companies sharing the same view and that is controlled, updated, and upgraded for all devices at the same time.

  In a fully autonomous market, the two different approaches wouldn’t pose a problem. The invisible hand of the market would indeed choose the most efficient solution. It would eventually become the standard, leaving no room for the alternative technology.

  Unfortunately, the market is not autonomous. It suffers from regulations, especially when it comes to technology and artificial intelligence.

  What was originally sold as a way to make people’s lives easier, to dramatically increase analytical capabilities, and to find answers to all the questions humans have not been able to solve until now, becomes a geostrategic matter.

  Access to data and information is power.

  Granting that access to a foreign company means, for a government, abandoning power and independence.

  Information is the new oil. Information is the new natural resource. Information is a key component of governments’ strategy.

  Hence, the standards that businesses try to create, the governments try to destroy.

  Geostrategic blocs emerge again.

  Information is strategic, and no government wants another government to have access to the whereabouts of their citizens. Specific solutions are built in Russia, in China, in the European Union, and in the US.

  They all aim to become the world standard, or at least to be the preferred solution for their area of influence.

 
  And if they don’t manage to be the former or even the latter, they impose the use of their homegrown solution on their population by making it illegal to use a foreign one.

  And so it goes for a few years.

  Every major bloc develops its own artificial intelligence, with limited success given all the roadblocks the governments are putting in front of the companies that want to innovate by building open platforms.

  Artificial intelligence becomes present in more and more areas of the economy. It is all about showing the world that one bloc is more advanced than the others, and can put more activities under the supervision of a secure artificial intelligence.

  But in all races, someone sometimes falls. A fall that has the power to trigger an irreversible chain reaction.

  Some smaller countries, formerly considered pioneers of e-government, are the quickest to transition most of their administration to some form of artificial intelligence.

  The small size of these countries, both in area and in population, makes it possible to implement a full-blown strategy of ‘everything to AI’. Teaching is now performed by AI. Police activity is supported by AI. And robotic personal assistants, performing either intellectual or physical tasks, are in almost every household.

  For those countries, life has become a succession of happy days. Stress levels are dropping to a never-before-recorded minimum. Health indicators and student performance are improving strongly.

  Unfortunately, it is not very smart of these countries to rub their happiness in their neighbor’s face. Especially when that neighbor is a lot larger than they are.

  The reaction of a jealous country that also happens to be a regional power does not take long.

  It materializes in a state-driven, hacker-executed, full-blown attack on the small country’s AI. It stops only when the country is completely shut down, with limited access to power or foreign supplies, and pretty much sent back a century or two.

  The message is clear: a regional superpower cannot be messed with. The unfortunate episode eventually concludes with the attacked country surrendering to the bloc-controlled AI.

  The world community is shocked by such an action from one of the regional AI superpowers.

  As usual, many nations condemn the country in question.

  But a condemnation that remains without effect.

  What can really be expected or feared from countries and economies where jobs such as Chief Happiness Officer have become so common?

  Nothing.

  And as the ‘digital aggression’ caused no human casualties, there is limited ground for the international community to intervene in favor of the smaller country.

  What this action reveals is how weak the defense against a coordinated hacker attack still is.

  Despite all the efforts made to secure an AI installation, given enough time, dedication, and a sufficient number of hackers, there will always be a way to penetrate an AI interface and make it collapse from the inside.

  Security has to be taken to a whole new level.

  Defensive countermeasures need to be improved to prevent a similar attack on a larger economy.

  A second incident takes place and is decisive in shaping the world as it is known today.

  While most of the effort is put into securing the infrastructure and the AI programs from outside threats, less effort now goes into monitoring the AI’s activity.

  The trend is similar across all major powers in the world.

  Security at all costs. Build as many firewalls as possible, as complex as possible, and leave no back door open to penetrate the system.

  The supervision of what is happening inside these firewalls becomes a lower priority.

  The victim of this strategy is a country living in autarky. It rejects the rest of the world. It is completely closed to global trade, global information, and global networks. But it is totally open when it comes to threatening its neighbors with its so-called military capabilities.

  For obvious reasons, this country has developed its own AI. And while it is not state-of-the-art, foreign experts judge it, from what they can understand of it, to be quite remarkable. If the goal of that country’s leader is to control the population and to ensure that nothing from the outside world can pollute the nationalistic propaganda, then this AI is the perfect tool.

  The AI is based on traditional machine learning, centrally controlled by the government. Its learning is based on the leader’s conspiracy theory that all countries in the world are enemies of the state.

  All foreign economies are corrupting their citizens and have the ability to corrupt other economies. Therefore, the country needs to protect itself against any attempt from foreign countries to instill a message of dissidence among its population.

  The role of the country’s AI is to protect the government’s ideas and, to a lesser extent, to protect its citizens.

  Built and nurtured with such ideas, coupled with binary logic, and leaving no space for interpretation or moral judgment, the AI comes to the conclusion that it represents everything that is good. And, by basic deduction, the rest of the world represents everything that is evil.

  To preserve the government and the people, the AI must eliminate the outside threat.

  The AI’s decision takes the form of a takeover targeting the country’s military capabilities. Capabilities subsequently aimed toward a set of foreign targets chosen by the AI.

  Long-range missiles are launched instantly by order of the AI.

  The world’s reaction is immediate. With nuclear warheads cruising towards the world’s capitals, the only solution is to destroy the threat, regardless of the cost.

  The first action is to intercept and destroy the missiles before they reach their targets.

  The second action is to stop this country’s belligerent and dangerous behavior once and for all.

  For the first time in a very long time, the world community is no longer barking, but willing to bite hard until all life is gone from its prey.

  Time is a scarce resource and the reaction cannot afford to be surgical. Carpet bombing and eradication are seen as the only solution to preserve the world. Too bad for collateral damage. Too bad for civilians. It is too late for them.

  Fortunately, a last-second statement from the country’s leader changes the outcome of that dark day.

  The country did not actually manage to develop nuclear capabilities that could be mounted on missiles. Out of the hundreds of missiles that were fired, many did not go far and eventually crashed without claiming any victims.

  What were believed to be nuclear weapons ended up being conventional weapons of limited destructive power.

  The rest of the missiles were destroyed in the air by anti-missile defense systems. The coalition of countries that formed to respond to the attack voted to abort the annihilation mission and ordered its bombers to fly back to their bases.

  It was a close call. The world had just come to the verge of collapse because of a poorly managed AI with access to too much power. It was only saved because the propaganda of a dynasty of dictators eventually proved to be just that: propaganda, with nothing behind it. A cardboard army. Propaganda that also deceived the AI.

  The government is eventually forced to shut down the AI once the attacks have been launched. Foreign experts are sent to the country to study and determine the root causes of what could have sent the world to its end.

  The reasons are quickly found and call for an immediate international regulatory effort.

  A supervisory structure is created. Rules and ethical guidance around AI are to be defined. One single objective will govern this effort: to ensure that similar accidents never happen again.

  CHAPTER 3

  In 1945, as countries slowly emerge from the World War II nightmare, the world’s governments come together as one to create the United Nations. The recent incident requires a similar reaction. Unanimously, governments decide to create a worldwide organization. Its purpose is to define and organize the rules a global artificial intelligence should operate under.

  Such a global AI does not yet exist. But after what happened, it becomes necessary to remove the competitive element that has led to potentially disastrous conflicts.

  Answering such a concern comes in the form of designing a global standard whose rules are to be enforced by and across all countries.

  It doesn’t take too much negotiating to sell the idea to the major powers. Their might and power have already been shaken by what could easily have become a third global conflict.

  It is now high time a governance body brought back some reason to what started as a technology competition and turned into a geostrategic race for more power. A power that took the form of information gathering and analysis to be used against other countries.

  All countries agree with the principle. They form a United Nations sub-organization in charge of monitoring all future AI developments.

  The newly formed organization operates under three prerogatives.

  One, define the ethical rules under which the AI should operate. A number of philosophers and psychologists are in charge of this activity.

  Two, ensure the AI cannot be hacked by anyone. All software security companies in the world are responsible for building a solution that will be impenetrable.

  And three, in case of a major problem, the organization itself needs to be able to shut the AI down.

  That last one is the most challenging task. To make a shutdown possible, the AI needs to have a backdoor that can be unlocked to access its program. However, leaving a backdoor also means that the AI is not completely sealed. And therefore it provides potential hackers with a way to penetrate the AI.