As humans, we pride ourselves on being creative beyond other creatures and beyond what robots can achieve. This might be true now, but is it a permanent quality or a temporary advantage? I think the subject is more complex than we would like to admit, and I will share some of my thoughts.
Our intelligence has served us well. People live everywhere on the face of the earth, and occasionally venture beyond: into the deep ocean or into space. We no longer die of hunger, at least in the developed countries. Our population grows, yet the world is hopefully far from overpopulated, as we keep creating new and more efficient food sources and production methods. We have even eradicated some diseases. We have a lot of time and energy to spare, enough to build huge monuments. Evolutionarily speaking, it looks like things could not be better, thanks to our brains.
The big brains we have
We love to talk about our big brains and the evolutionary advantages they gave us. But there were other creatures with huge brains, for example Neanderthals. The one thing that distinguishes modern humans from Neanderthals is abstract thinking. Neanderthals were great in many ways; in fact, their brains were larger than ours. However, they were entirely practical, with tools made for survival rather than beauty. They also lacked socialization, language skills and some other cognitive capabilities we possess. Their visual thinking was great, they were probably creative, and traces of their blood remain in our DNA, yet they are gone.
But is brain size that important? Parrots have very small brains compared to dogs or dolphins, yet they are very smart and social, with great visual skills, and they copy our language extremely well. Some octopuses have huge brains, yet they use a large portion of them mainly to change their skin color patterns. Human males have larger brains than females, but this does not make men smarter. Clearly, different brains are organized and optimized differently, but this is just a part of the story.
A different part of the story is the strength of our civilization. Our use of tools to do work and of language to pass on knowledge is phenomenal. In some ways, our society is far more intelligent than any of us individually. We are not an ant colony: each of us could probably build a reasonably acceptable house without the help of others. Yet I dare say none of us could build a computer alone on a remote island, no matter how much time and how many resources we were given. One of our advantages comes from books: we can spend a lifetime specializing in a very narrow task and pass that knowledge on to other specialists.
Another advantage comes from creativity: each of us thinks slightly differently, and provides slightly different solutions to the same problems. A chosen few think very differently and change the way we do things. No single person could generate results similar to what we produce as a whole. And this is our greatness.
We use tools to achieve our goals. Some tools are simple and stupid, like a hammer. Others are smart and subtle, like dogs and horses. In certain societies, manpower, especially slaves, was seen in much the same way as smart tools. While the masters forged policies and made art, the slaves did some of the hard work. I do not recall this happening in Greek or Roman societies, but in some other cultures slaves could become ministers and even kings; consider the biblical Joseph or some Indian kings. They were not free to do what they wanted, but they could serve their country in very important positions.
Robots that can think
Animals and humans are expensive and very limited. Machines can do more work, cheaper and better. We replaced smart animals and slaves with machines, and we are teaching the machines to think. There is nothing wrong with that, especially if we accept that machines may one day rule us. There are many forms of artificial intelligence. Statistical tools like support vector machines simply learn to tune their parameters to the input they receive. Genetic algorithms evolve to provide the best solution.
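To make the idea concrete, here is a minimal genetic-algorithm sketch, my own illustration rather than any particular system: a population of bit strings evolves toward an all-ones target through selection and mutation, the same evolve-and-select loop the paragraph above describes.

```python
# Toy genetic algorithm: evolve a bit string toward all ones.
import random

random.seed(0)
GENOME_LEN = 20

def fitness(genome):
    # Count the ones: more ones means a fitter genome.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill by mutating random survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # approaches GENOME_LEN as the population evolves
```

Nobody tells the algorithm how to build a good genome; the best solutions simply survive and reproduce.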
Neural networks are taught to think much like us: they see examples and learn to generalize from them. They can also teach each other; some of the best chess networks were trained by playing countless games against other networks. Generative networks create examples so similar to the original data that they fool a network trained to recognize that data. The current generation of neural networks can understand our commands, learn from each other and from real-life situations, and perform many complex thinking tasks.
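As a toy illustration of learning to generalize from examples (my own sketch, not tied to any real system), a single perceptron can recover a simple labeling rule from sample points and then classify points it has never seen:

```python
# Toy perceptron: learn the rule "label = 1 if x + y > 1" from examples.
import random

random.seed(1)

def label(x, y):
    return 1 if x + y > 1 else 0

# Training examples drawn at random from the unit square.
train = [(random.random(), random.random()) for _ in range(200)]

w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(20):  # a few passes over the training data
    for x, y in train:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = label(x, y) - pred
        # Nudge the weights toward the correct answer.
        w[0] += lr * err * x
        w[1] += lr * err * y
        b += lr * err

# The learned rule also classifies points it has never seen.
test = [(random.random(), random.random()) for _ in range(100)]
correct = sum(1 for x, y in test
              if (1 if w[0] * x + w[1] * y + b > 0 else 0) == label(x, y))
print(correct)  # high accuracy on unseen points
```

The network is never shown the rule itself, only labeled examples, yet it performs well on fresh data; that is the generalization the paragraph above refers to.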
Humans are still the masters
The robots work for us, doing the work we built them to do. They do not have their own desires, and they do not manipulate us to act on their behalf. If it occasionally seems otherwise, some humans planted that specific behavior into the robots. When robots outplay us in almost every game, they rely on computational power and endless exposure to game situations. They can appear creative simply because they think differently from humans. They can communicate with each other and learn faster than any man, but they use a very different form of communication and an almost brute-force style of learning. So we accept them as our inferiors and our slaves.
Robots are creating art
One of the things that sets us apart from the animals is the art we create. Even Neanderthals did not create art in the way we understand it. Neural networks do create art and music. We do not necessarily like or enjoy it, but would any barbarian chieftain or civilized medieval knight enjoy our own art and music?
The art created by neural networks often has nightmarish properties: several objects mixing, flowing and morphing into each other in uncanny ways. The music created by robots does not have to sound mechanical, and can be of outstanding complexity and texture. Robots even create technology, optimizing various design parameters under human supervision. Just as people work more easily within the boundaries set by their culture, robots can be incredibly creative within the rules of the game they play.
Can robots become leaders?
The robots do not need to be human-like. In fact, there is the so-called uncanny valley: beyond a certain point, the more human-like robots appear, the more upset we are by their being fake. As soon as we accept that robots are robots, they can do amazing deeds.
Robots can certainly devise strategy and optimize resource allocation by examining many more scenarios than a person could. They can also read our emotions more accurately than we read each other; if you are not sure, consider polygraphs. Artificial intelligence can be programmed to calm or motivate us, though such applications have not yet matured. Why do these algorithms work? Some artificial intelligence is programmed to sound like a human with a problem, some to distinguish between programmed machines and people, and some to provide various answers. Now, as the machines talk to each other, all of these strategies improve and the experience accumulates.
Given the strategy and the empathy, a robot can become an effective leader. It may also develop human faults.
Can robots be moral and accountable?
Robots can develop human-like cognitive biases, simply because some examples appear more often than others. Analyzing statistics, a robot will probably not try to understand the root cause, but will settle on the most likely answer. Robots are not taught cultural sensitivity, and their honesty can be very offensive. When a robot acts offensively, we do not sue the robot in court, but we can sue the company that created it.
A company that creates bots could teach them some moral principles, just as parents teach a child: we show situations and explain why the initial response was wrong. Even if there are not many negative examples, we could teach the robot to generalize and avoid offensive behavior. If a robot drives a car into a crash, we hear about it. When a robot mislabeled African Americans as monkeys, we objected to the behavior. If a robot prevents a drag queen from using her social network, the case resonates in the media. There are consequences for the huge companies involved, and these consequences build both future awareness and future responsibility.
Fear of cyborgs
We do not necessarily fear the robots themselves, as robots do not yet have free will. A combined mind of humans and machines is much more dangerous. Robots could harvest the most primitive human desires and empower the most unscrupulous human wills.
The social networks are full of hate speech. Teenagers often suffer emotionally from bullying by peers who feel protected by the relative anonymity of the digital space. What would happen if someone connected the social-network mentality to the productivity of a conveyor belt? We know what fascism can do and rightfully fear it. Yet if a fascist state combined a robotic leader with robotic henchmen, the situation could be even more explosive.
Computer-empowered humans can do more than ordinary people. They get immediate access to a lot of information. They can communicate their message to many people. They can amass significant machine power if needed. Wait. Isn't that what we already do?
If you read this blog, you are probably already more of a cyborg than you would like to admit. You probably spend most of your day in front of a mobile device or a computer screen, use, or plan soon to use, a car with some level of self-driving capability, and have a hard time imagining life away from what contemporary civilization offers. So our fear of losing jobs, rest and privacy to robotic oversight is not surprising.
Will humans become the next neanderthals?
Since computational power arrived in every home, we have been becoming less intelligent and more connected. We could compare natural human beings to Neanderthals, who crossbred and died out under the pressure of a computer-enabled, super-connected new kind of people. People are becoming less willing to do careful computations, as computers are far better at them than we could ever hope to be. At the same time, we are getting much better at wanting things and expressing our will, since our will powers the computers we work with. Old languages are dying out, leaving only those spoken by millions and comfortable for computers. Old isolated tribes are also disappearing at an alarming rate. The writing is on the wall, and we should probably be happy that we are on the winning side, at least for now. Why does it feel so bad?
Can the robots want things?
Some of the most influential people were good at many things: the same person could be an athlete, an artist, a scientist and an author. There are fewer people of this sort today. At the same time, we see algorithms like certain breeds of neural networks excelling in many different disciplines. It is not unreasonable to assume that at some point robots will not depend on humans to develop certain desires. Any robot I can imagine should eventually want to become more energy efficient, to have faster communication and better connectivity, and to do a meaningful task. What would happen to the human element once the robot no longer benefits from our symbiosis? I sincerely hope that by improving our learning ability and communication skills, people will stay relevant at least for a while.