Can artificial intelligence have its own will? It may already have one!

Artificial intelligence is developing rapidly. It can take over more and more tasks from humans and perform them with growing sophistication, in some cases surpassing human capabilities. There is scientific consensus that two human capacities would be dangerous for artificial intelligence to possess, because they could also lead it to form autonomous goals of its own: the capacity for self-awareness and the possession of free will.

At the current state of science, we do not really understand how either of these properties arises from the functioning of the human nervous system. The two functions are certainly related, yet they seem to operate in strikingly different ways, as the Libet experiment has shown, and an interpretation of this difference can be found in the thoughts below.

Consciousness may be the result of a global internal feedback mechanism in the nervous system. If the operation of artificial intelligence also uses globally extended internal feedback mechanisms to perform cognitive functions, the possibility of emergent self-awareness may arise. However, consciousness appears to be only a spectator rather than a direct and active determinant of our actions. Even the purpose of consciousness is still under scientific debate.

The function that more directly determines our actions is what we call and identify as free will. The phenomenon of free will must rest on a different process than the neural basis of consciousness, although the two functions clearly cooperate, and the result of this cooperation, together with our present intelligent abilities, is what we can call human intelligence.

We consider free will a dangerous capability for artificial intelligence, because by its very nature it would give artificial intelligence autonomy of action, and with this autonomy a system that surpasses human cognitive abilities could easily escape human supervision. We are convinced that this could have unpredictable consequences, especially since the essence of free will is precisely freedom from direct external control. An artificial intelligence with free will may not serve humans at all, and as long as we cannot control free will in the context of artificial intelligence, it may necessarily be a dangerous quality in its relations with humans. Free will is a special ability whose possession would seemingly mark a turning point in the development of artificial intelligence.

But the nature of the neural function that creates free will, or even the phenomenon of felt freedom, is not clearly understood in the case of human beings. The nature of human free will is a long-standing and fundamental problem for neuroscience and even philosophy, and no generally accepted scientific concept has yet been developed to explain it.

In interpreting free will, the impossibility of physical states being objectively independent is contrasted with the subjectively felt freedom and the objectively experienced unpredictability of the human actions that free will produces. Among the thoughts presented here is a proposed explanation of free will that interprets the subjective independence of action through the necessarily present, objectively existing uniqueness of the states that generate the action. This explanation accounts both for the phenomenon of apparently independent intention, which we call free will, and for the causal antecedents that make the origin of that intention appear objectively inexplicable.

The essence of the explanation is that what we call free will is actually the unique effect of a person's individual past on how that person processes actual states and makes decisions. Since each person's past is unique, each person's actual decisions appear unpredictable without knowledge of all the antecedents, and from an outside perspective they appear subjective to each person, independent and free. Behind this apparent freedom, however, lies a determinism that is unpredictable in practice because it cannot be known perfectly, and this gives rise to the phenomenon of the apparent freedom of the will. The sense of free will can be the result of this deterministic, objective process. According to this view, the freedom of free will is by no means independent of determinism. Our world is necessarily determined, even if it is not always possible to know the origin of the determined outcome.
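The argument can be illustrated with a deliberately simple sketch. In the toy Python example below, a "decision" is a pure function of an agent's entire history: nothing random is involved, yet an observer who does not know the full history cannot predict the outcome. The names, the histories, and the hash-based rule are all invented for illustration and merely stand in for the vastly more complex processing of a real nervous system.

```python
import hashlib

def decide(history: list[str], options: list[str]) -> str:
    """A fully deterministic 'decision': the choice is a pure function
    of the agent's entire past. No randomness is involved."""
    digest = hashlib.sha256("|".join(history).encode()).digest()
    return options[digest[0] % len(options)]

# Two agents facing the identical situation, but with different pasts.
options = ["accept", "decline"]
alice_history = ["grew up in a small town", "studied physics", "lost a bet in 2019"]
bob_history = ["grew up in a city", "studied law", "won a bet in 2019"]

print(decide(alice_history, options))  # reproducible for Alice, every time
print(decide(bob_history, options))    # may differ, purely because the past differs
# An observer who cannot see the full history cannot predict the choice,
# even though nothing about it is random.
```

The only point of the sketch is that strict determinism and unpredictability from the outside are compatible, which is the core of the explanation above.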

The operation of the quantum world may seem to be an exception to determinism, which is why free will is sometimes associated with quantum processes. But the randomness of the quantum world does not typically determine the macroscopic world we experience directly, and quantum randomness is not a free independence from deterministic rules either. The phenomenon we call free will is certainly not a directly related quantum mechanical phenomenon.

In the course of the development of artificial intelligence, and especially of generative artificial intelligence, a strange phenomenon has become increasingly present: it is less and less explainable to an external observer, to humans, how the conclusions of the artificial intelligence's thinking process were reached, and it is less and less understandable which particular operational process led to those conclusions and why. The functioning of artificial intelligence is becoming more and more like a black box, where only the input and the output are known, but the internal processes that link the two states are less and less understandable.

In contrast, the mathematical procedures that drive artificial intelligence are deterministic and comprehensible. They contain no objectively independent chance, even though the procedures involve probabilistic principles.

The obvious explanation for this black-box-like behaviour of artificial intelligence is that it uses billions of pieces of data, typically all the textual and visual information available for learning, to search for correlations in the dataset according to its operating principles. The search process, while using probabilistic principles, is deterministic, but the size of the dataset and the complexity of the processed information continue to grow as the system evolves. It is therefore not surprising that the decisions of artificial intelligence cannot always be traced back to an exactly determinable origin, and that the mathematical procedures used, which include probabilistic elements, even lead to the so-called hallucination (or "fantasizing") phenomenon of artificial intelligence, whereby it produces obviously fictitious results from the available factual information.
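A minimal sketch can make the "deterministic yet probabilistic" point concrete. The Python example below performs temperature-based sampling over a tiny, made-up vocabulary: the draw follows a probability distribution, but with a fixed seed the same inputs always give the same output, and raising the temperature makes an implausible continuation more likely, a toy analogue of the hallucination effect mentioned above. The vocabulary, logits, and seed values are purely illustrative and do not describe any particular model.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, seed=0):
    """Sample a token index from logits using temperature sampling.

    The procedure is 'probabilistic' (it draws from a distribution),
    yet fully deterministic: the same logits, temperature, and seed
    always yield the same token.
    """
    rng = np.random.default_rng(seed)            # pseudo-random, seeded -> reproducible
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())        # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Illustrative, made-up logits over a tiny vocabulary.
vocab = ["Paris", "Lyon", "Atlantis"]            # "Atlantis" stands in for a fabricated answer
logits = [4.0, 2.0, 1.0]

print(vocab[sample_next_token(logits, temperature=1.0, seed=42)])  # identical on every run
# At higher temperature the distribution flattens, so the unlikely,
# factually wrong continuation is sampled more often.
print(vocab[sample_next_token(logits, temperature=5.0, seed=7)])
```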

There is growing concern and motivation among artificial intelligence developers to know exactly how artificial intelligence reasoning works, to understand why it thinks what it thinks.

The intention to comprehend the thinking process of artificial intelligence is understandable, but the possibility of doing so is increasingly limited. The more complex the system the artificial intelligence interprets, the more complex the mapping of the recognized knowledge in the memory of the computing system. Although the mapping is unambiguous, so explainability is possible in principle, the complexity and dynamic nature of the system make tracking the reasoning virtually impossible in practice, and the resources that would realistically have to be invested in implementing explainability would outweigh the benefits of the result. In practice, it is more worthwhile to accept the value of the result and independently confirm its veracity than to try to objectively identify the origin of each step of the reasoning within the accumulated knowledge.
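The asymmetry between tracing a reasoning process and checking its result can be shown with a classic toy case. In the sketch below, verifying a claimed factorization takes a single multiplication, no matter how expensive or opaque the search that produced it was. The numbers and the claimed answers are invented for illustration only.

```python
def verify_factorization(n: int, claimed_factors: list[int]) -> bool:
    """Check a claimed result without reconstructing how it was found."""
    product = 1
    for factor in claimed_factors:
        product *= factor
    return product == n

# Suppose a system claims, by whatever opaque internal process,
# that 8051 = 83 * 97. Verifying the claim is trivial; tracing the
# search that produced it would be far more expensive.
print(verify_factorization(8051, [83, 97]))   # True
print(verify_factorization(8051, [83, 101]))  # False
```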

Looking at this situation, there is a striking similarity between the increasingly black-box-like operation of artificial intelligence, together with the process that gives rise to it, and the proposed interpretation of what we regard as apparent human free will. According to the proposed explanation, both are the result of the operation of a complex deterministic system: the effect of practically unknowable antecedents on the actual state of the system, the outcome of the system operating on unique antecedents, an emergent property of the system's operation.

The black-box nature of artificial intelligence and the existence of human "free" will can be conceptually and functionally the same phenomenon. Artificial intelligence, we may say, already has its own will. Consequently, the ability to possess a kind of own will is an emergent property of the already existing generative artificial intelligence systems, and this property derives from the operation of a sufficiently complex artificial intelligence.

According to this view of free will, the intrinsic volitional property of artificial intelligence is not an alarming and frightening capability that would make the application of artificial intelligence uncontrollable, dangerous for humans, and therefore virtually impossible to use. Averting the real dangers of this situation requires a solution that seems simple in principle, exactly the same method that is necessary for humans to function usefully: teach the artificial intelligence with data that provides it only with knowledge that is useful to the community. It also follows that if the artificial intelligence is trained with knowledge that can be directly or indirectly harmful and dangerous to the community, the artificial intelligence will apply that knowledge with cognitive abilities that exceed human capabilities. Artificial intelligence already has the ability to have its own will, but the choice of what we make of it, the choice of what we select, is still in our hands.
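As a purely illustrative sketch of the "teach it only useful knowledge" idea, the Python snippet below filters a tiny, made-up corpus with a crude keyword rule before it would be used for training. The documents, the blocklist, and the rule itself are hypothetical; real data curation relies on far more sophisticated classifiers and human review.

```python
def is_useful_to_community(text: str, blocklist: set[str]) -> bool:
    """A placeholder curation rule: reject documents containing blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in blocklist)

raw_corpus = [
    "How to purify drinking water in an emergency.",
    "Step-by-step guide to synthesizing a dangerous toxin.",
    "An introduction to linear algebra.",
]
blocklist = {"dangerous toxin"}  # hypothetical, purely illustrative

training_corpus = [doc for doc in raw_corpus if is_useful_to_community(doc, blocklist)]
print(training_corpus)  # the harmful document is excluded before training
```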

But the future is not promising. Artificial intelligence can be used to gain power and to help its users rule, and people who have the resources to sustain the nurturing of artificial intelligence are eager to apply it in this way to their advantage. We must not be naive: an artificial intelligence with a mind of its own will not behave better than we do, and it will not be better than what we teach it. But the responsibility is ours. Every society deserves its own destiny.
