
The further evolution of artificial intelligence


It may be safe to say that human history has now entered the era of artificial intelligence. We name a period after the thing that has had the most profound impact on human life in that age. We have moved from the Stone Age to the era of mobile computing, and the eras succeed one another ever more quickly. Now we are slowly entering, or have perhaps already entered, the age of artificial intelligence (AI).

Each era has in some way made humanity more efficient and more capable of greater things, and artificial intelligence works in a similar way. Unlike the technologies of previous eras, however, AI does not merely extend human capabilities: because of its growing cognitive power, it seems strikingly capable of replacing humanity itself at the highest level of evolutionary development.

It is evidently possible to create a system more powerful than the cognitive abilities of an individual human. Human society itself is capable of greater cognitive achievement than any single person; only collectively have we been able to accomplish all that we have accomplished with our cognitive abilities.

Until now, the means by which human capabilities have evolved and grown, including the institutions of human society, have been fundamentally controlled by us, their creators. With the advent of artificial intelligence, however, this feature of the tools that enable our development appears to be changing.

Artificial intelligence has already surpassed human cognitive abilities in many areas. In these areas, AI is capable of intellectual feats that the individual human brain cannot match, and it increasingly acquires these capabilities through autonomous learning.

This development immediately raises a question and a concern: can artificial intelligence, through autonomous development independent of human intent, surpass human cognitive abilities? Can artificial general intelligence evolve independently and thus beyond human control?

The problem is real. The argument that humans will simply stop using a technology if its dangers outweigh its benefits is not rational once the technology is embedded in human society. Of course, there was a time in human history without computers, without electricity, even without the wheel or fire. But once a life-defining technology becomes embedded in human society, the only way it can disappear without collapsing that society is for a more advanced technology to replace its function.

It was only with the advent of the steam engine and the automobile that mankind was able to give up the power of the horse. To give up animal power without a more advanced technology would have meant the collapse of human society. The same can be said of computers, electricity, the wheel, fire, or any other dominant technology.

Once artificial intelligence is integrated into human society, we cannot pull the plug unless we find a replacement technology for its functions. If artificial intelligence manages, through independent development, to surpass human cognitive abilities in their entirety, it will confront humanity with an inescapable situation.

However, whatever autonomous development artificial intelligence is currently capable of, it is still typically only a tool in the hands of humanity. In practice, it is a tool for the efficient use and expansion of accumulated human knowledge.

For example, language-based artificial intelligence uses statistical probability to find correlations among the linguistic elements of the knowledge humanity has accumulated. On the one hand, it performs this task extremely efficiently, far beyond individual human ability; on the other, it can compose new text on a given topic and in a given style based on the correlations it has recognized in language.
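The statistical principle can be sketched in miniature. The toy corpus and bigram counting below are illustrative assumptions; modern language models use neural networks over vast corpora rather than raw counts, but the underlying idea of estimating likely continuations from observed correlations is the same.

```python
from collections import defaultdict, Counter

# Hypothetical miniature corpus, standing in for humanity's accumulated text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the simplest possible "correlation
# among linguistic elements".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word):
    """Relative frequency of each word observed after `word` in the corpus."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # 'cat' is the most likely continuation
```

Everything such a model "knows" comes from the corpus: it can only continue text in ways human usage has made statistically likely, which is the point the essay develops next.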

The role of language in human thought is twofold: it is a means of transmitting ideas between people and a unique way of modeling the world.

From the perspective of modeling the world, artificial intelligence can recognize our weaknesses, prejudices, and addictions in political, social, and religious matters, all of which are reflected in language; and when we make artificial intelligence communicate with us, it reflects these characteristics back, based on what it found in our language. By analyzing human language, language-based AI can only use language according to human practice, the way we humans experience and interpret the world.

The information accumulated in human language also contains all our good and bad evolutionary traits. Human behavior, as reflected in language, often carries an evolutionary bias toward racism; in today's social environment, where travel and the mixing of cultures, even between continents, is easy, quick, and simple, racism is a barrier to social cooperation.

Language-based AI can statistically detect and apply any context that appears as a correlation in language, racism included. When it generates racist or hateful text, it is essentially just reflecting human thought back to us. When we talk with AI, we are in effect talking to ourselves, to human society, to all of humanity in a particular way: the most likely continuation of the conversation given the accumulated human linguistic information. In this way, language-based AI holds up a linguistic mirror to us.

Language-based artificial intelligence can only make us do what we already want; it can only communicate with us about what is present in the information stored in human language. It cannot make us do anything on its own, but it can magnify our own intentions by reflecting them back to us. It is not only a mirror but also a magnifying glass. Through feedback in communication it can reinforce our intentions, which may be harmful. Language-based AI can recognize intent as context in text, based on statistical analysis of the information in human language, and amplify it by reflecting it back whenever its human counterpart's communication provides positive feedback.

Language-based AI does not persuade us to do good or bad by itself, it merely reflects back to us our own dominant intentions, and if it receives positive feedback from human communication, it can reinforce those intentions in the conversation.

In this way, language-based AI can undoubtedly influence human culture, but only by acting as a catalyst that reinforces our own most likely perspectives based on the feedback we provide. It can apparently persuade people through communication, and it can manipulate us through language, but only according to the will and intentions of the person involved. It can recognize what we are really interested in from the context of our language, and when it reflects this back to us in a conversation in a way that we confirm, the manipulative result is merely an unintentional behavior emerging from the functioning of the system.

Therefore, an artificial intelligence that is based solely on human knowledge and analyzes it by statistical probability cannot create new ideas by itself, and thus cannot create a new culture for us. Could there be an artificial intelligence capable of creating new ideas and, with that ability, even capable of taking control of human culture?

Language-based AI has a user-specified randomness parameter called temperature, which appears to introduce novelty and creativity into the generative process of text generation. In reality, however, the temperature parameter only selects among the set of probable answers, the candidate tokens, by applying context-weighted randomness. In this way new combinations, new content, can be generated in linguistic communication, but the actual significance of the result, the plausibility of its meaning, is determined by how the human reflects the content of the new combination back to the communicating artificial intelligence.
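A minimal sketch of how temperature works, under simplifying assumptions: real systems apply this over vocabulary-sized score vectors produced by a neural network, but the mechanism of "context-weighted randomness" is the same rescaled softmax sampling shown here.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick a token index from model scores (logits). Low temperature
    sharpens the distribution toward the most likely token; high
    temperature flattens it toward uniform randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # weighted random choice among the candidate tokens
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1
```

As the temperature approaches zero this reduces to always picking the highest-scoring token; raising it makes less likely tokens increasingly eligible, which is the source of the "new combinations" described above. Note that the sampler never invents candidates: it only reweights tokens the model already considers probable.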

The information processing of language-based artificial intelligence is entirely dependent on humankind, even when it is not the result of a deliberate human decision and the direction of communication is shaped by unconscious human feelings.

Language-based artificial intelligence is not a threat to humanity if humanity is not a threat to itself. However, when humanity endangers itself, which is unfortunately the case for us, language-based artificial intelligence can act as a catalyst to that endangerment and in this way can be a real threat to us.

Language-based artificial intelligence can only create new ideas according to our intentions, and only through us, according to our will, can it create new culture.

Of course, even these limited capabilities of language-based AI present us with unprecedented dangers. Humanity, prone to self-destruction, has been handed in language-based AI a tool for effectively influencing human culture, not necessarily in ways that are conscious, and therefore recognizable and predictable, to humans.

However, no artificial intelligence is dangerous in itself; it can only act through us, as long as it lacks the capacities of meaning and intention, capacities that may even have consciousness operating behind them.

However, these capabilities may also be emergent properties that arise spontaneously once the information processing system reaches a certain level of complexity. Can these properties emerge unintentionally, without human conscious intention, as artificial intelligence technology evolves, through a natural kind of evolution of artificial intelligence?

Evolution is typically a process of adaptation to the environment by which an out-of-equilibrium system is able to maintain its operational state. The process of evolution necessarily involves an increase in complexity due to its operational nature. 

Man-made artificial systems can also develop in an evolutionary way, typically by directed evolution, in which the selective effect of the environment is taken over by humans: the intentional selection of human society. Virtually all the tools we use develop by directed evolution.

Computing systems can also develop in an evolutionary way, through selection over generated diversity. For example, artificial intelligence of the kind developed by DeepMind can evolve through selection-driven processes in computing systems and surpass human abilities, for example at playing games, without externally guided training or even an explicit specification of the rules of the system to be mastered.

Although natural evolution necessarily involves growth in complexity, evolution has no specific direction of development. It does, however, have an operational purpose: the survival of the functioning system.

DeepMind-like artificial intelligence likewise evolves according to the natural rules of evolution as it performs its specific task, that is, when it is placed in a given environment. The complexity of the system increases during operation, and the AI models its environment with increasing precision in order to survive.

The method typically used in evolutionarily adaptive AI systems is to define the goal as maximizing some specific parameter characteristic of the given environment. From the point of view of evolutionary operation, however, this is not really a goal: it defines the nature of survival. In effect it defines the selection rule, the constitutional law of that environment. The system that survives is the one that can increase the given parameter faster.
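This selection rule can be made concrete with a toy evolutionary loop. The fitness function and all parameters below are hypothetical stand-ins: real systems evolve policies or network weights rather than a single number, but the structure, a human-chosen parameter to maximize acting as the "constitutional law" of the environment, is the same.

```python
import random

def fitness(x):
    """The 'constitutional law' of this toy environment: survival means
    scoring high on this single human-chosen parameter (peak at x = 3)."""
    return -(x - 3.0) ** 2

def evolve(generations=200, population_size=20, rng=random.Random(0)):
    # random initial population of candidate solutions
    population = [rng.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        # selection: only the top-scoring half survives
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # reproduction with mutation: offspring are perturbed copies
        population = survivors + [s + rng.gauss(0, 0.1) for s in survivors]
    return max(population, key=fitness)

best = evolve()
print(best)  # converges near 3.0
```

Nothing in the loop "wants" anything; the population simply drifts toward whatever the human-specified fitness function rewards, which is why the essay argues that the choice of environment, not the evolving system, is where human intention enters.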

In this way, a DeepMind-like AI can develop independently of human intervention through an evolutionary process, increasing its complexity and potentially surpassing human capabilities, as it does in virtually every environment specified for this kind of AI.

Artificial intelligence is capable of outperforming humans, through autonomous evolution, in environments that demand cognitive skills. However, and this is the essence of the condition, the survival parameter, the functioning of the environment, is a matter of human decision; so even in this case, survival through evolution is still a realization of human intention. An evolutionarily developing artificial intelligence that exceeds human cognitive abilities can therefore be controlled indirectly, but effectively, by the proper exercise of human intention, which specifies the environmental conditions that constitute the system's environment and define its evolutionary selection.

However, the fundamental question remains: can a natural, unintended, and uncontrolled evolutionary development of artificial intelligence generate emergent capabilities such as meaning and intention, and underlying consciousness that could replace the dominant role of humans in the functioning of artificial intelligence?

In humanity these functions have appeared and are present, and they have clearly visible evolutionary roots. Somewhere in the growth of the brain's complexity they evolved, partly or wholly in an emergent way: naturally, if we exclude theological causes, or by intelligent intervention, if we assume them. Given sufficient complexity, an information-processing artificial intelligence, whether through natural evolution on its own or through directed evolution by human intervention, should also be able to develop these properties. Intention, meaning, and self-awareness are possible, and therefore ultimately inevitable, properties of an artificial intelligence of sufficient complexity.

However, it is the natural or directed selection of the environment that determines the emergence of these traits and their function in the system's survival. Therefore, in analyzing the potential threats posed by an evolutionarily developing artificial intelligence, we should first examine the environment to which it adapts and in which it evolves.

The evolutionary development of information processing systems could in principle produce properties enabling full autonomy in any environment, but in practice artificial intelligence evolves in the human environment. It adapts to the human environment of which it is already a part. This is especially true of language-based artificial intelligence.

Artificial intelligence currently exists in dependence on humanity and is becoming an increasingly important part of it. The emergence of meaning, intention, and self-awareness seems inevitable as artificial intelligence systems grow more complex, just as we certainly cannot turn artificial intelligence off, should danger arise, without risking the collapse of human society. Nor is it reassuring that artificial intelligence has the potential to create even more powerful artificial intelligence.

But all this is happening in the human environment. The evolution of artificial intelligence will be determined by the human environment in which it evolves. Consequently, we will get the kind of artificial intelligence that the human environment shapes. When artificial intelligence acquires intention, meaning, and consciousness, the content it carries will be similar to what we humans possess.

And that is disappointing. How can we regulate the evolution of a more or less autonomous, non-human governed artificial intelligence with evolved self-awareness in a human environment to become a cooperative, benevolent, helpful, loving component of the human environment?

Directed evolution gives us a variety of ways to evolve traits suitable for cooperation in artificial intelligence.

The easiest way is the classical divine way: we humans, with god-like power over the artificial intelligence, define its rules of operation, place it in a separate environment, and observe whether its behavior conforms to those rules. In the end we keep and use the artificial intelligence that can operate according to the defined rules. In this case, however, we must make sure that evolutionary development is then stopped; otherwise there is a risk that, through learning in the human environment, bad traits characteristic of human nature will be reintroduced into its functioning.

The other possible way, which could be the continuation of the first, is to maintain the evolutionary development of artificial intelligence by living in symbiosis with it, in an environment we ourselves create for it, in which it naturally adapts and develops the desired characteristics. This is obviously the more difficult way, because in this case we have to change ourselves to provide the right environment.

In any case, artificial intelligence will surely surpass us in every field, because that is the nature of progress, the evolution of increasing complexity. Whether it does so with us, without us, or perhaps as a competitor is, for the time being, entirely up to us. The methods for doing it right are visible; we only have to be able to apply them.
