
Why do we not need to be afraid of conscious AI?

Unquestionably, artificial intelligence (AI) occupies an ever-greater place in our everyday lives. It does so because it is more and more capable of doing so. Its capabilities challenge not only our manual skills but increasingly our mental functions as well. What will happen if the capabilities of AI reach the level of our mind?

It has already achieved impressive levels of intelligence: it can drive cars, it beats us at chess and Go, and we can even hold conversations with it without noticing that our partner is not human. It has passed the Turing test. AI looks close to reaching our cognitive capabilities. However, some areas of our mind still remain unchallenged. The machine cannot create genuinely new things, it lacks creativity, and most importantly, it does not have its own will and consciousness, as was discussed in an earlier thought. But what if these properties are not theoretical limits, only engineering problems that need time to be solved? We do not know whether these features can be implemented in AI or not, because we do not yet understand the theoretical foundations of these high-level brain functions.

However, what if these brain functions can be implemented in AI? What if the machine can have its own will and consciousness? The question is real. If we do not believe in a supernatural intelligence, if consciousness is only a physical process, then we could create it artificially, provided our brain's capability is enough to figure it out. Even if God exists and created us, it may still be possible for us to find out how consciousness works. Then we could build a machine with its own will.

Could, or more importantly, would this machine want to enslave us? If yes, and if we create such a machine, then our species may be destined for extinction. The risk is real. We do not know what a conscious machine would want. However, we can remember what we, humans, did with these capabilities. If an AI with its own will behaved as we behave, then we would have every reason to be afraid of the conscious machine.

However, when we compare AI and humans, we should consider where we come from and, in contrast, how a conscious machine could come into existence. We have our roots in the animal kingdom. Our ancestors had to fight for survival. Even as we developed social behavior, we still needed selfishness to survive. We went through biological evolution, a process that is cruel at the level of the individual so that the species may survive. We still carry the properties evolution gave us for survival, even though we no longer need them, and this inheritance defines our behavior. It scares us when we extrapolate these survival-driven properties to a conscious machine.

We really should be scared if this were the case. However, AI is not going through the evolutionary process that we went through. AI does not need the behavior that would be necessary to survive in an evolutionary environment. If it does not have that behavior because it never needed it, could AI still acquire it?

Can a conscious AI be selfish? It probably can, even if we, the designers, do not build this property into the machine. Consciousness is worth nothing without sensory inputs, without a connection to the outside world. With these data, combined with the survival instinct that is necessary to want to stay alive, the machine could develop selfishness, putting itself ahead of others. And it is possible to implement survival instincts in a machine, as was discussed in earlier thoughts. Consciousness and will could exist without a survival instinct, but that would be a suicidal machine. Moreover, if a conscious AI could survive existence-threatening situations, it could learn and develop survival processes and properties by itself.

It seems that a conscious machine must unavoidably be selfish. So is it unavoidable for humanity that a selfish conscious AI will conquer us? Maybe not. Maybe we, the conscious humans, and it, the conscious AI, could coexist successfully. That could be the conclusion if we examine how we would live together.

If a conscious machine could be created, at the beginning this AI would be entirely dependent on humans. Only humans could provide the maintenance and resources for its existence. Because the machine is already conscious, it can learn the value of this coexistence. It learns that the human is not a rival but a helper. If the human designer carefully implements what the conscious machine should want (how to do that is discussed in earlier thoughts), then the conscious AI could be a partner, not a rival. This process resembles raising a child, and if it is done well, the result can benefit both sides. It may even be easier than with a human child, because the machine does not carry the inheritance of biological evolution and its effects on behavior. The machine is not born "bad".

The conscious AI would depend on human "care" as it grows up. The two partners would not just coexist but depend on each other, as humans share more and more tasks and responsibilities with their conscious counterpart. This is more than coexistence; it is symbiosis.

What if the conscious machine reaches a level of development where it no longer depends on human "care"? Would that be the start of real rivalry, with the consequence of the extinction of one of the rivals? Would we be competitors? Competition is the only reason to be rivals and to risk one of us becoming extinct. The other reason is, of course, stupidity, but that is an entirely different case. We humans can live in harmony with other species that do not compete with us. Only our stupidity and our evolutionary inheritance, selfishness and racist inclinations, could disturb this peaceful coexistence. If we could leave these negative properties behind, even competition could be manageable.

Would the existence of an independent, self-maintaining, self-reproducing conscious machine mean competition, rivalry, and the possibility of extinction for us? Maybe not, because this competition may never take place. The resources the machine and the human need are so different, and their interests and goals so distant, that maybe we will never confront each other.

Maybe we do not need to be afraid of the AI!
