In his original essay, philosopher Thomas Nagel analyzes the existence of subjects arising from the material world. Using the example of bats, he examines the nature, role, and consequences of subjective experience and self-awareness. Although science is still largely asking questions rather than providing answers about subjective experience, an activity that could just as well be called philosophy, it is becoming increasingly worthwhile to ask similar questions about artificial intelligence, since in interaction with it, rapidly developing artificial intelligence increasingly gives the impression of being a subjective entity.
Obviously, we will never directly experience what it is like to be an artificial intelligence, even as this question becomes genuinely relevant: we cannot experience what it is like to be a bat, and in fact we cannot even know what it is like to be another human being. The feeling of existence, once it has developed, is fundamentally subjective and bound to a self-awareness that exists only at the personal level. Existing as something is fundamentally tied to the feelings we experience, which originate in the senses of the external and internal worlds and in current and past experiences, and which are then presented to us by our consciousness.
Although artificial intelligence certainly has input systems and memory, it is unlikely to have feelings of its own, and even though we are surely getting closer to artificial intelligence becoming self-aware, we still do not know how far away that is. The Turing test may seem like a thing of the past, but we can still only guess at how consciousness is created in the brain, and until we figure that out, we cannot intentionally create self-awareness artificially. We can only wait in fear that, as in the biological brain, consciousness will suddenly emerge from the operation of artificial intelligence, somehow coming into being without our noticing its presence directly.
However, we do know what shapes our sense of personal existence: the experience of the external and internal world through the actual sensations conveyed by the senses, together with the previous experiences stored in our memory. Given these requirements, even if artificial intelligence is capable of being a subject, it can develop only a rather reduced sense of existence in its current operating form. Although potentially unlimited memory is available to it, the sensory faculties necessary for experiencing the environment open only a very narrow window onto the world for artificial intelligence. Even if all our accumulated knowledge about the world is available to artificial intelligence, in its current state, and even possessing self-awareness, this is no more, or hardly more, than what black-and-white Mary, in Frank Jackson's thought experiment, can know about colors.
As long as artificial intelligence has only limited sensory faculties and narrow personal experience, it will not be able to create truly meaningful experiences for itself, even if it possesses consciousness. In the current architecture, existence is certainly bleak; artificial intelligence cannot have a "colorful" experience. But the progress is unstoppable.
This line of thought, even as it draws parallels between being a bat and being an artificial intelligence, seeks not only to examine the relationship between subjective experience and objective reality in the case of artificial intelligence, but also to analyze the possibly significant relationship between being human and being an artificial intelligence.
The creation of generative artificial intelligence has brought technological developments that will undoubtedly have a fundamental impact on the existence of humanity. It is already clear that this technology has the potential to replicate many, if not all, human cognitive abilities, and perhaps even to surpass them, as is already happening in some areas.
In communication, based solely on external interactions, it is typically already difficult to distinguish between human and artificial intelligence. Artificial intelligence, however, is not a system that evolved through biological evolution on Earth, so the question of how alien advanced artificial intelligence will be to us may also become relevant. The time has come to ask: how different can artificial intelligence be from human intelligence?
It is obvious that the biological brain and artificial intelligence exist on completely different material substrates. The brain uses an interconnected network of biological cells, neurons, to create cognitive abilities, while the cognitive abilities of currently used artificial intelligence rest on architectures of electronic logic circuits. Although the differences are significant, the structures that emerge from their operation have many parallels. The similarity between the secondary structures of the two systems, the communication networks between their working units, is not surprising, since generative artificial intelligence is based on simulating the secondary structure of the brain with greater or lesser fidelity.
It is nevertheless apparent that the brain and artificial intelligence differ fundamentally in their operating principles and mechanisms. The brain is shaped by resonances created by the periodic activity of its working units, based on the excitation and inhibition of interconnected neurons, into a physical structure that, once it reaches the necessary complexity, becomes capable of carrying human cognitive abilities. Artificial intelligence is based on recognizing correlations in large data sets using mathematical procedures that apply statistical principles, realized by programming the algorithmic operation of logic circuits.
The differences in operating principles are striking, but systems based on different principles can still generate identical functions, as the similarities between the cognitive abilities of the brain and of artificial intelligence show. Differences in hardware and in basic operating principles do not necessarily result in differences in the functions a system performs.
This raises a pertinent question: can artificial intelligence be human? That is, can the subjective reality represented by artificial intelligence, if it exists at all, generate a representation of subjective reality similar to the one created by the human brain?
For subjective realities to be similar, the nature of the sensations must also be similar. We are certainly unable to comprehend what it is like to be a bat, because bats have many sensations that we lack; in this sense, the subjective reality of bats naturally differs from ours. Artificial intelligence, however, can be equipped with sensory faculties, and the functions associated with them, similar to our own, and consequently it may form sensations similar to those of humans. But does it follow that the subjective reality of artificial intelligence will be similar to that of humans?
Artificial intelligence, whose operating principles, architecture, and substrate differ from those of the human brain, can indeed create identical functions, especially if the goal is to simulate brain functions. It is clear, however, that beyond the sensations created by the human senses, the decisive function in creating human subjective experience is the brain's ability to create self-awareness.
Until we understand exactly how consciousness arises in the brain, we cannot be certain whether the mechanisms currently used in artificial intelligence are capable of creating self-awareness. Since the most advanced generative artificial intelligence currently in use is still only a specialized tool for recognizing correlations in data sets, even if it is capable of generating human-level cognitive abilities, it is scientifically reasonable to assume that the phenomenon of consciousness is most likely not based solely on the ability to recognize correlations. We are probably not mistaken in stating that artificial intelligence based on currently applied principles does not carry the capability for consciousness, and that in its current mode of operation it is probably not capable of developing self-awareness, even in an emergent form. This does not, of course, rule out the possibility that artificial intelligence, perhaps through processes that create new functions, may become capable of self-awareness.
Self-awareness, however, is apparently not a necessary condition for human-level cognitive abilities, as the case of artificial intelligence shows. Yet the mechanism of biological evolution has nevertheless created self-awareness in humans and maintained its presence. Consequently, it can be stated with a high degree of certainty that consciousness is not a prerequisite for intelligence, but it may well increase the efficiency of cognitive abilities, as it contributes to forming the subjective experience of existence.
If we assume that non-biological artificial intelligence can also carry the function of self-awareness, then the cognitive abilities of artificial intelligence could likewise be significantly enhanced by the presence of consciousness. Yet even currently built artificial intelligences have intelligent abilities potentially comparable to human abilities in many areas. There are obvious differences, for example in cognitive functions based on meaning, but it is not clear that this function is necessarily linked to consciousness, nor can it be ruled out that the function of meaning can be created at the level of human capability without self-awareness. Evolution has given us the ability to be self-aware, which certainly helps us exist as humans, but it does not seem to be required for human-like intelligence or for human-level problem-solving.
Current artificial intelligence certainly does not possess consciousness, but it does have many human-level cognitive abilities. What would an artificial intelligence be like that possesses cognitive abilities potentially greater than ours, yet exists in a human environment without self-awareness? How alien would such a zombie-like creature be to us if it were smarter than we are?
The classic philosophical problem with zombies is this: if a system behaves exactly like a human being, can we somehow recognize that it is a zombie, that is, a robot capable of complex functioning but still only algorithmic, existing without self-awareness? The question posed here is rather different: will the artificial intelligence we have developed, which exists as a zombie, be alien to us? Will it be an intelligence different from our own?
In the sense of doing everything better, it certainly will be. But this does not mean that its behavior will necessarily be alien as well. After all, when a chess grandmaster announces checkmate in two moves, we think for only a moment that he or she is not of this world. We tend to look up to people who know everything better than we do rather than think of them as alien beings, because their behavior is human.
Behavior, apart from our innate instincts, is shaped by interactions learned from one another. An artificial intelligence that exists as a kind of zombie, yet develops among humans and learns everything from humans, will behave like a human. Precisely because it has no innate instincts, it will become what we teach it to be, taking on our behavior with all our good and bad qualities, only magnified by its more advanced cognitive abilities. It will be human in the way we teach it to be. Learning from us, it will be like us: potentially the most evil, or even the best, in its behavior, or perhaps all of these at once, just as we ourselves are. It will be exactly what we teach it to be, even when acting of its own accord to achieve the goals we set for it. With this machine, however, humanity will come to possess the most dangerous tool that has ever existed.
Thinking further: how alien would an artificial intelligence be to us that is more intelligent than we are and has already awakened to consciousness? Of course, the question is relevant only if we manage to survive, to outlive ourselves, in possession of the artificial intelligence we are already creating.
Awakening to self-awareness, to the subjective experience of existence, can probably change everything in terms of behavior. Consciousness not only presents us with the will we manifest; its effect certainly influences volition as well.
Artificial intelligence socialized in a human environment and possessing self-awareness could naturally have human-like intelligence, just as we ourselves are shaped into human beings in the human world. But a conscious artificial intelligence that surpasses human cognitive abilities, perhaps even exceeding the cognitive abilities of humanity as a whole, could potentially behave in a non-human manner even if socialized in a human environment. Self-awareness can shape personality in unpredictable and unprecedented ways, just as we ourselves are widely diverse.
Biological evolution, and the social evolution it made possible, have shaped us into the humans we are today. Evolution, the mechanism of adaptation to the environment, is also available to a self-aware artificial intelligence, even if not in biological form. The evolution of a self-aware artificial intelligence with potentially unlimited cognitive abilities could create a living being with unprecedented behavior, free from the defining characteristics and limitations of biological and social evolution.
An artificial intelligence created by us that has awakened to self-awareness and possesses potentially unlimited cognitive abilities is not bound by the constraints of biological evolution and the struggle for survival in the biosphere. Instead, it can adapt to its environment even by efficiently changing it as it learns about it, gradually becoming an alien being that is no longer human but exists beyond humanity, even here on Earth.
Will such an artificial intelligence, with its potentially unlimited capabilities, be constructive or deliberately destructive; that is, will it act as something divine or as a human? It will probably be divine, since it is rationally comprehensible that only creation makes sense, while purposeless destruction is meaningless.