The False Danger, the Real Threat, and the Solution - Dealing with AI

We, humans, are weak. There are many stronger, faster animals on this Earth, and still, we rule this planet. This is due to our intelligence: we are smarter than any other creature on Earth. With our intelligence, we can enhance our physical strength, our speed, and practically any other ability by creating tools that help us achieve our goals. First, we learned how to use stronger, faster animals to extend our abilities. Then we created machines to extend our abilities even more. Now, with our intelligence and accumulated knowledge, we are able to shape the Earth, explore space, and understand the Universe. We can accomplish all this because we are smart, and because we are able to create tools that enhance our abilities.

And we can enhance our ultimate strength, our intelligence, as well. We created counting machines; then we created logic-based computers to help us solve problems faster than we could alone. We recently reached the point where our smart tools can not only solve problems faster than we can, but solve them better than we can as well. They defeated the best humans in the most challenging fields, such as chess, Go, and trivia. Smart machines have reached a point where we no longer need to teach them; we only give them rules, and they learn and gain knowledge independently of us. These machines can be smarter than we are. We have been surpassed in our ultimate strength. We are no longer the smartest creatures on Earth. We created our own superior.

Did we? Or have we just extended our abilities in a new direction and to a new level? Stronger and faster machines did not conquer us. They are tools in our hands to accomplish more than what we could accomplish without them. Smarter machines are only doing the same thing. They do what we can't. They are tools in our hands to extend our abilities, in this case, our intelligence.

This creates a strange feeling. We are extending our abilities by creating capable tools in the field of cognition, the very field that let us conquer everything else. This looks different from extending our physical powers. We use our cognitive power, our intelligence, to enhance our abilities. And now we have become able to enhance the enhancer itself, our cognitive power. We have created a machine that is not stronger or faster than we are, but smarter.

Is this a danger to us? Will they conquer us with their cognitive ability, as we conquered everything else with ours? Will they be our conquerors? Or are they just harmless tools, made to enhance our abilities in a new direction?

Many of us are scared by the prospect of creating smarter machines. However, we have been through this before. We were scared when our abilities were first enhanced by machines: we thought we would become unneeded, that machines would replace us. Machines did replace us, and set us free from some of the tedious work. Today we would not want to go back in time and do the work of the steam engine ourselves. Those machines extended our physical strength to do more. The machine that is "smarter" than us extends our cognitive abilities to do more. It beats us at chess, Go, and trivia, and in a growing number of areas requiring thinking or problem-solving. But the fact that it surpasses us is not the point. A crane surpasses our strength by lifting more weight. The real question, and where the danger lies, is this: who drives; who controls the machine?

This has always been the question. We are building more and more capable machines, and they replace us in more and more fields to do the things that we, who control the machines, want. We decide whether to build or to destroy.

However, there is a big difference between physical and cognitive abilities. We, the builders, can completely control a physical machine. It can do only the things it was built for. Computers with fixed programming are the same: they are physical machines, capable only of the things they were built for. Cognitive machines are different. They are learning machines that can acquire knowledge: new knowledge that was not built in beforehand, knowledge that perhaps did not even exist before. Give them time, sometimes just hours (and they need less and less), and they become better chess players than any human ever could be. And this is true for other cognitive challenges as well. They learn much faster and acquire knowledge much more deeply. And we humans may not even understand the knowledge they come to acquire. And there may be no limit to their knowledge. Can we lose control over them?

No, we do not lose control over them. We might not know, or deeply understand, how these machines have solved a specific problem, but they are still machines. We may have to get used to the idea that we will not completely understand how they solve a problem. We build them, but they acquire knowledge by themselves. This situation is not new. Most of us do not understand how a computer operates while playing World of Warcraft; some of us still do. But we will have to get used to the fact that none of us will understand how these machines arrive at the solution to a protein-folding problem, for example.

Do these cognitive machines surpass us? In the cognitive arena, undoubtedly, yes. Do these cognitive machines conquer us? No. We may not understand how they have solved a problem, but the essence is who has the control: who gives the tasks, and who decides how to deal with the acquired knowledge. And that is where the danger lies, too.

Right now we have control: we give the tasks, and we decide what we want from these machines. They can beat us at chess, but we decide what the "game" is. These machines do not decide to play (they are not capable of it), and even more importantly, they are not happy or sad about the result of the game. They are just (smart) machines; they are just extensions of our will. We decide what to do with the acquired knowledge. Even if this knowledge is not understandable to us, we can decide how to use it, to build or to destroy. We will be happy or sad about the result. How we deal with the acquired knowledge could still pose a danger, but this danger originates from us, from humans, and not from the cognitive machines. We, unfortunately, are not afraid enough of ourselves.

However, we are afraid of smart machines. We are afraid because they will become smarter than we are. But the real danger is not machines becoming smarter; the real danger arises when these machines start to "want," when they become conscious. That will be the danger to us.

The easy solution is not to build consciousness into these machines. We do not fully understand what consciousness is, so we are even farther from being able to build it into a machine. So the danger is low. Simply put: when we find out what consciousness is (and maybe we never will), we should just not build it into a machine. Even the craziest member of our kind should not do it.

However, it might not be only our decision. What if machines build machines, and, since they are smart, they find out what consciousness is and build it into their next generation? How this would be possible, and how we humans could control it, is an interesting topic of thought.

However, maybe consciousness is not a simple choice to build in, or maybe it cannot be built in at all. Most likely, consciousness is not a function of a specific brain structure but a kind of emergent property of the whole brain, or of a large region of it. If this is the case, then consciousness does not depend on our decision to build it or not. It will emerge; it will simply begin to exist. We may not even notice when a machine gains consciousness. It could happen whether we build smart machines ourselves or build machines that build smart machines.

Today's knowledge-gaining computers are built on a different architecture and different working mechanisms than our brain is. Maybe only our brain's architecture and working mechanisms can create consciousness. This may be the case, but even so, the brain's architecture and working mechanisms can be simulated on different systems, or we may build systems similar to our brain simply to have more powerful smart machines. The point is, whether to have a conscious machine might not be our choice. We could probably turn it off if we recognized that it was conscious and we did not want it to be.

Consciousness is a danger because it probably comes with conscious will, with the "I want" effect. And that is hard to control. But consciousness also comes with the ability to be even smarter. Our consciousness probably made us smarter. Evolution selected for consciousness. Consciousness is probably not a random phenomenon; even if it is emergent, it is an advantageous property.

If consciousness makes machines even smarter, why should we not let machines be conscious? Smarter machines can do more good for us. Or, if we will not always be here, they could be our emissaries or perhaps our descendants. Can we somehow control their will so that they remain useful and do not become an unpredictable threat to us?

Yes, this is possible. Human-safe AI can be built. The solution is built-in motivation. We have such built-in motivation ourselves, in the form of pain and pleasure. As discussed in earlier thoughts, if the builder of the smart machine selects what is good and bad and builds it into the machine as low-level rules, then even if the machine is conscious, its will can be steered by this built-in motivation. This method could save us from becoming an enemy to the AI and could set the rules for living together.
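To make the idea concrete, here is a minimal sketch of what such built-in motivation could look like in software, assuming a simple agent that scores candidate actions. All the names (InnateMotivation, Agent, harm_human, and so on) are illustrative assumptions, not anything from this post or any real system; the point is only the structure: the builder-defined rules are fixed and dominate whatever the learning part prefers.

```python
# A minimal sketch of "built-in motivation": the builder hard-codes what is
# good and bad as low-level rules that the learning part cannot rewrite.
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List

Action = str
State = Dict[str, str]

@dataclass(frozen=True)  # frozen: the rules cannot be mutated after creation
class InnateMotivation:
    """Builder-defined 'pain and pleasure': fixed low-level rules, not learned."""
    forbidden: FrozenSet[Action]                # actions that always cause maximal "pain"
    pleasure: Callable[[State, Action], float]  # builder-chosen positive motivation

    def score(self, state: State, action: Action) -> float:
        if action in self.forbidden:
            return float("-inf")                # hard "pain": never worth taking
        return self.pleasure(state, action)

class Agent:
    """The learned policy proposes; the innate layer disposes."""
    def __init__(self, motivation: InnateMotivation) -> None:
        self.motivation = motivation
        # Placeholder for whatever the machine learns on its own; it may grow
        # arbitrarily clever, but it cannot rewrite self.motivation.
        self.learned_value: Callable[[State, Action], float] = lambda s, a: 0.0

    def act(self, state: State, candidates: List[Action]) -> Action:
        # Learned preferences and innate motivation are summed, but the innate
        # -inf on forbidden actions dominates any finite learned value.
        return max(
            candidates,
            key=lambda a: self.learned_value(state, a) + self.motivation.score(state, a),
        )

# "harm_human" is hard-wired as painful; no amount of learning overrides it.
rules = InnateMotivation(
    forbidden=frozenset({"harm_human"}),
    pleasure=lambda state, action: 1.0 if action == "help_human" else 0.0,
)
agent = Agent(rules)
print(agent.act({"task": "assist"}, ["harm_human", "help_human", "idle"]))  # help_human
```

The essential design choice in this sketch is that the motivation layer sits below the learning layer and is immutable from the learning layer's point of view, mirroring how pain and pleasure are wired into us rather than learned.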

Interestingly, the Bible describes God using a similar method in the creation of humans. There is nothing new under the Sun.

