We all feel that robots will soon rule the earth. That will certainly affect our lifestyle, career choices, and identity, but will it also affect our goals and moral compass? Alongside today’s article I suggest reading some of the related posts on this blog.
An AI-dominated world
AI can already beat a human team in every intellectual sport I can think of: chess, Go, Dota, or trivia. Algorithms manipulate us to vote or to buy. It is only a matter of time until humans are no match for bots. Some scientists claim that AI will be smarter than any human being ever born by 2040, and smarter than all of humanity combined by 2060. These scientists are probably wrong, perhaps by 20 or 30 years, but the trend is there and remains stable. When CPU scaling started to slow down, GPUs became the main computational vehicles. Quantum computers are likely to become ubiquitous in data centers soon enough. And we can only imagine which technology will multiply our computational power afterward.
Interestingly enough, while robots often exceed humans in advanced intellectual activities, they are still well behind in everything handed to us by mother nature. AI pilots are easy to build, training an AI to drive is significantly harder, and training an AI to move like a man, a mountain goat, or a monkey is harder still. There is no fundamental limitation preventing artificial intelligence from copying animal traits with high energy efficiency, but it took nature billions of years to develop them, so the challenge is very hard.
We are no better than other highly successful animals in our athletic capabilities and situational awareness. As much as it may hurt our pride, we do not stand a chance against AI in intellectual capabilities looking 100 years ahead. Yet humanity has several tricks that put us in a unique position.
Humans have developed culture. We have arts, science, and ethics. No animal has these attributes, and no AI can yet come close to human abilities here. So if there is anything that makes us superior, our social capabilities come to mind. After all, mankind has a long and complex history of clashes between different perspectives, values, and ways of life. This is our unique cultural evolution, one that transcends any single person or people.
Robots can create art, fashion, and literature, but the quality of the pieces generated without human intervention is not very good yet. Computer-generated graphics is very good at handling small technical details: enhancing individual pixels and slightly moving the curves. But when we ask an AI to generate an image from scratch, we get something unpleasant. This is often called “deep dreaming”, and I would not put the result of such a process on my wall. Robots are effective at translating texts, summarizing them, and generating short informative texts from numerical indicators and keywords. AI can theoretically create poems, but the poems it creates appear to be full of nonsense.
It is not that artificial intelligence lacks creativity. Creativity is easy to mimic using a large bank of data. Critical skills are also present, thanks to adversarial networks. And the experience an AI system accumulates playing against other AI systems is more than people could acquire in a thousand lifetimes. There is something else present in human art that we cannot quite pinpoint or synthesize.
Maybe what makes us so unique is our connection with our animal side? All the instincts and chemistry that we perceive as emotions were developed by a very long and complex process of evolution. We can experience not just love and hate, fear and greed, but a huge range of fine-tuned emotions. As we started to live in large communities, our emotions became more complex. More civilized cultures often hide their primal emotions, showing a very fine gradation of reactions. If you are skeptical, watch a Japanese person trying to say “no” in a polite form.
The primal emotions drive us to action, pumping our blood with hormones and other chemical messengers. These emotions help creatures survive in the wild. To survive in concrete jungles or medieval courts, we learned to sublimate the primal calls into refined messages and art.
The experiment I suggest for a fellow researcher is very simple: try to recreate the social norms of a medieval court in a computer simulation. Let AIs face each other. The AI agents that do not lose their heads in this game of thrones will probably be very interesting, and potentially capable of good art. I will be happy to join such a project.
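A minimal skeleton of such a simulation might look like the sketch below. Every rule in it (reputation scores, scheming versus flattering, the elimination threshold) is invented purely for illustration and is not taken from any real multi-agent framework:

```python
import random

# A toy "medieval court": agents scheme or flatter at random, lose or gain
# reputation, and the least reputable courtier is eliminated. All parameters
# here are made up to sketch the idea.
def court_simulation(n_agents=10, rounds=50, seed=7):
    rng = random.Random(seed)
    # each agent has a fixed strategy: its probability of scheming
    scheming = {i: rng.random() for i in range(n_agents)}
    reputation = {i: 0 for i in scheming}
    for _ in range(rounds):
        a, b = rng.sample(sorted(scheming), 2)
        if rng.random() < scheming[a]:   # a schemes against b
            reputation[b] -= 1
        else:                            # a flatters b
            reputation[a] += 1
        # a courtier whose reputation sinks too low "loses their head"
        worst = min(scheming, key=lambda i: reputation[i])
        if reputation[worst] < -3:
            del scheming[worst], reputation[worst]
    return sorted(scheming)  # the survivors of this game of thrones

print(court_simulation())
```

Replacing the hard-coded strategies with learning agents would turn this toy into the actual experiment proposed above.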
Artificial intelligence does not have a moral compass. Isaac Asimov suggested “Three Laws of Robotics”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Current AIs do not actually need these laws, as they do not have free will. Instead, they try to optimize their goal functions. Strangely, stereotypes help an AI optimize its goal function. If an algorithm classifies African Americans as monkeys, it is bad publicity, bad judgment, and probably not the behavior its programmers expected. However, no AI is mistake-free, and AIs develop stereotypes because they increase accuracy over the training set.
For example, if we asked an AI to predict who is a potential terrorist, it would very quickly learn to profile Muslim Arabs. Not because it is morally right, and definitely not because of some cultural bias: it would simply be optimizing a statistic. A programmer could try to offset the inherent bias of a neural network by introducing examples that teach it differently. However, if the programmer is unaware of a bias, there is nothing they can do.
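This dynamic can be sketched with a toy example: on a skewed dataset, a classifier that predicts purely from group membership gets the best accuracy, so pure optimization rewards the stereotype. The two-group dataset and all the numbers below are invented to show the effect:

```python
# A toy dataset of (group, label) pairs. The label correlates with the
# group only because the sample is skewed, not because of any causal link.
data = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8

def accuracy(predict, dataset):
    return sum(predict(g) == y for g, y in dataset) / len(dataset)

# The "stereotyping" classifier: predict purely from group membership.
stereotype = lambda group: 1 if group == "A" else 0
print(accuracy(stereotype, data))  # 0.8 -- the shortcut pays off

# A programmer who is AWARE of the bias can rebalance the training set
# with counterexamples so the shortcut no longer wins.
counterexamples = [("A", 0)] * 6 + [("B", 1)] * 6
print(accuracy(stereotype, data + counterexamples))  # 0.5 -- shortcut neutralized
```

The point of the sketch is the last line of the paragraph above: rebalancing only works when someone notices the bias in the first place.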
Contrary to popular belief, it is possible to teach AI empathy once we divide empathy into its building blocks. Our emotions can be read by algorithms even better than by humans: algorithms can accurately estimate the sentiment of each word, facial expression, or voice modulation. In a similar way, algorithms can mirror positive emotions with similar sentiment and counter negative emotions with preprogrammed strategies. Maybe algorithms cannot replace the best highly trained and experienced psychologists just yet, but they can probably replace the vast majority of mental health practitioners, gurus, and coaches in the near future.
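As a sketch of the read-then-mirror idea, here is a crude lexicon-based sentiment scorer with canned response strategies. The word scores and replies are invented for illustration; real systems use trained models rather than word lists:

```python
# Invented word-level sentiment scores in [-1, 1].
LEXICON = {"love": 1.0, "great": 0.8, "fine": 0.2,
           "sad": -0.7, "hate": -1.0, "awful": -0.9}

def sentiment(text):
    """Average the sentiment of the known words in the text."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def respond(text):
    s = sentiment(text)
    if s > 0.3:
        return "That sounds wonderful!"        # mirror the positive emotion
    if s < -0.3:
        return "I'm sorry, that sounds hard."  # counter the negative emotion
    return "Tell me more."                     # neutral: keep them talking

print(respond("i love this great day"))  # positive -> mirrored
print(respond("i hate feeling sad"))     # negative -> countered
```

Each building block here (scoring, thresholds, response strategies) can be swapped for a learned component without changing the overall structure.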
Algorithms may also learn to “care deeply” for each other by sharing resources in order to achieve common goals. The same algorithms may turn competitive when resources become scarce and their survival is threatened. If a goal is properly defined, robots may even sacrifice themselves for the greater good.
The social behavior of artificial intelligence depends on its goal functions, the training processes and the corrective measures of its creators.
As humans, we think we have free will. Statistically, this will can be manipulated by many factors, including the temperature of the beverage we drink, the moral quality of the books we read, and the cultural biases of our friends. Yet in each particular case, each particular person may behave differently from the statistical norm. From what I understand, we cannot fully preprogram free will into artificial intelligence, but we can program it to try new strategies from time to time.
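One standard way to make a program “try new strategies from time to time” is epsilon-greedy exploration: mostly exploit the best-known strategy, but pick a random one with small probability. The sketch below uses an invented multi-armed-bandit setup with made-up payout probabilities:

```python
import random

def epsilon_greedy(payout_probs, steps=10_000, epsilon=0.1, seed=42):
    """Pull slot-machine arms: explore with probability epsilon, else exploit."""
    rng = random.Random(seed)
    counts = [0] * len(payout_probs)
    values = [0.0] * len(payout_probs)  # running average reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:                 # explore: try something new
            arm = rng.randrange(len(payout_probs))
        else:                                      # exploit: best strategy so far
            arm = values.index(max(values))
        reward = 1.0 if rng.random() < payout_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

counts = epsilon_greedy([0.2, 0.5, 0.8])
print(counts)  # the agent settles on the best arm but never stops exploring
```

This is not free will, of course, but it is the programmatic analogue of occasionally deviating from the statistical norm.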
We as engineers and scientists program AI to achieve goals that correspond to our personal plans and values. At the same time, the vast systems of Google, Facebook, and Amazon condition us to act according to the companies’ needs. Eventually, there is a feedback loop between what we program into our robots and what our robots condition us to do. As we feel our careers and social status threatened, we continue to build robots that in the long run might endanger the free will of all humanity. And I am not even talking about the various hackers, most of whom work for the shady government organizations whose names we prefer not to say.
As simple robots take the manual jobs, and advanced datacenters take more intellectual jobs, we as humans are forced to specialize. The social cost of technology is high, but sufficiently advanced technology may make us more human.
While robots are good at polishing art pieces and solving the problems we define, they cannot yet design great art pieces, create new inspiring goals and visions, or provide a moral compass that contradicts their stereotypes. As humans, we can use the things that make us different and special.
Mass production and mass consumption are AI-friendly, yet there are niche products which AI cannot see. With time, we will all probably need to become artists and entrepreneurs; otherwise, we may become the pets of our own creations.
While most science fiction writers were afraid of dehumanized production lines and a gray future for our kind, the reality appears to be far from it. AI pushes us to explore new niches and become more human, and this is probably a good thing…