The arrival of generative artificial intelligence has provided us with a completely new tool, one capable of simulating human cognitive abilities and human thinking with remarkable effectiveness. It has become capable of performing tasks previously thought to require deeply embedded, intelligent human abilities. Artificial intelligence reached this level by finding an efficient computational way to build an artificial representation of the relationships within large data sets and to extract knowledge from that representation through querying.
This development enabled the machine to process knowledge efficiently, knowledge presented to it primarily in the form of linguistic information and the other kinds of information we have accumulated, but also gathered through simple observation. The computational technology thus discovered exhibits astonishingly human-like characteristics and abilities. Observing these capabilities has sparked lively debate about whether this computational mechanism could also make artificial intelligence capable of the most mysterious human cognitive phenomenon, consciousness.
Speculations basically range between two extremes. According to one view, no matter how advanced the computer technology we use may be in its ability to replicate certain cognitive functions performed by the human brain, artificial intelligence will never be capable of creating conscious self-awareness due to the fundamentally different ways in which the brain and computers operate. The other view is that the behavior exhibited by artificial intelligence suggests that self-awareness may be close to appearing, or may even already be present in the functioning of these artificially created systems.
Consciousness is an intrinsic feature of how our brain works. We do not yet know for certain how consciousness arises in the brain, and due to its subjective nature, we do not even have a reliable method for objectively verifying or even recognizing its presence with any degree of certainty from outside.
Even today, we still tend to approach the phenomenon and capacity of self-awareness from a philosophical perspective, despite having considerable knowledge of the neurological processes that occur in the human brain as self-awareness emerges and disappears. Mapping the brain mechanisms that create consciousness, which are still not understood at the systemic level, onto the mechanisms implemented in computational systems seems an insurmountable task for the time being, even though something equivalent may already have been achieved in the functioning of artificial intelligence.
Without exact knowledge of the mechanism that creates consciousness, and without a well-defined method for recognizing its presence, the fundamental question persists: Is it even possible for an artificially created system to possess consciousness? A rational approach may nevertheless lead us to a theoretical position that helps decide this enigma.
To assess the artificial feasibility of consciousness, the initial step requires defining the system or entity that is actually conscious. This may not be obvious because self-awareness is a subjective quality that cannot be perceived from the outside, and therefore, there are many different ideas about what may have the capacity for consciousness. Based mainly on philosophical considerations, it can even be assumed that only I am the one who possesses self-awareness, and that the apparent conscious ability of everyone and everything else is, in fact, only an imitation. Furthermore, there is also the view that everything may have its own consciousness, even the entire universe itself.
Let's examine the empirical reality of consciousness rationally. I certainly have consciousness because I feel and experience it, so I know that it exists; therefore, consciousness, however subjective it may be, is a real thing. If I ask fellow human beings whether they have consciousness and they say yes, can I believe them?
Although consciousness cannot be perceived objectively from the outside, my fellow human beings are probably not zombies who merely imitate consciousness: they also exist, they are clearly similar to me in physical structure and functioning, and they claim to have consciousness, just as I do. It follows from this logic that consciousness is not a unique phenomenon but is carried by a certain physical structure and functioning. It also follows that all my fellow human beings who share that physical structure and functioning can possess consciousness. Consciousness must therefore be a real phenomenon, possessed individually.
Where does consciousness come from? My own consciousness, and by extension the consciousness arising from the similar processes of others, can clearly be influenced by whatever affects the brain; it can even be switched on and off, for example through sleep and wakefulness, or more specifically through chemicals that alter brain function. It follows that consciousness, which undoubtedly exists, must be the result of some kind of brain function.
The brain is a complex, cooperative community of different, but not very diverse, living cells. We know that, in addition to the various cells that support its functioning, the brain's operation is fundamentally determined by a complex, cooperative network of neurons. Neurons are specialized cells that alternate between two states, passive and active. They connect dynamically and extensively with one another, with the connections themselves shaped by their activity, and through these connections an active neuron can, essentially instantaneously, excite or inhibit the activity of others in one direction. This is the basic internal mechanism of the brain, and it gives rise to self-awareness alongside many other cognitive functions.
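As a purely illustrative toy, not a claim about how the brain actually computes, the two-state, excitatory-and-inhibitory mechanism just described can be sketched in a few lines of Python; the network size, random wiring, and threshold below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                     # number of toy "neurons" (arbitrary)
state = rng.integers(0, 2, size=N)          # each unit is passive (0) or active (1)

# Directed connection strengths: positive entries excite, negative entries inhibit.
# Sparse random wiring stands in for the dynamically formed connections of real neurons.
weights = rng.normal(0.0, 1.0, size=(N, N)) * (rng.random((N, N)) < 0.1)

def step(state, weights, threshold=0.5):
    """One update: each unit sums the influence arriving from currently active units
    over its incoming connections and becomes active if that sum exceeds a threshold."""
    drive = weights @ state
    return (drive > threshold).astype(int)

for _ in range(10):                         # let the toy network evolve for a few steps
    state = step(state, weights)
print(state.sum(), "units active after 10 steps")
```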
The brain has other kinds of functions as well, such as hormonal regulation or perceptual input and motor output, but these are most likely not the defining processes for the emergence of consciousness, since such nervous-system functions are also found in other organisms, such as ants, which we neither assume nor expect to be conscious. Since during evolution the brain has grown mainly in size, that is, in its number of functional units, its neurons, it can be assumed with a high degree of certainty that consciousness is the result of the described functioning of an increasingly complex neural network once a certain level of complexity is reached.
However, self-awareness is certainly not merely the result of sheer brain complexity. The complexity of the human brain is outstanding in the animal kingdom, but it can be assumed with a high degree of certainty that humans are not the only beings with self-awareness: animals with less complex brains, such as dogs or even birds, may possess it too. Although self-awareness may seem objectively unrecognizable from the outside, at least until we clearly understand the neural mechanisms that cause it, it can perhaps be said that any living being with a brain capable of dreaming may also possess the capacity for consciousness.
Concerning the brain complexity associated with consciousness, it is also decisive that the necessary functional complexity is not a sufficient condition for the presence of consciousness: there are conditions and diseases, brain lesions, that do not significantly affect the complexity and basic functioning of the brain, yet prevent the presence of self-awareness. The basic mode of operation and sufficient complexity do not together amount to a sufficient condition for self-awareness; there must also be a neural structure, residing somewhere within that complexity, whose presence and activity give rise to the phenomenon of consciousness.
In summary: a system composed of units with a specific mode of operation, possessing sufficient complexity, and carrying a certain structure within that complexity can, through its activity, give rise to consciousness, the human brain being just such a system capable of carrying self-awareness. Consciousness, therefore, exists and is a phenomenon linked to and derived from a functioning physical structure. The theoretical question is whether such a system can be created artificially.
There can obviously be no theoretical obstacle if we emulate the brain with an artificial structure: if the newly created system is built from components identical to those of the original and functioning in the same way, then it will be capable of the same functions as the original. An artificially created brain identical in its physical reality to a brain capable of consciousness can therefore carry self-awareness. The real question is whether consciousness can also be created in a simulated manner, in a system realized in a material form different from the original, that is, whether the criteria listed above as necessary for consciousness can be realized in a different material substrate. For example, is it theoretically possible to simulate consciousness using computational systems?
The primary issue in taking a stand on this question seems to be the role of life in the existence of self-awareness. Although the criterion of life was (deliberately) not included among the seemingly necessary conditions listed above, if the living state, neurons being living cells, is a necessary condition for consciousness, then the first step in artificially creating self-awareness would have to be the creation of artificial life.
Consciousness is certainly found only in living systems. A fundamental characteristic of living systems, which operate in an evolutionary manner, is development, that is, the increase of complexity. Because of this tendency to grow in complexity through evolution, the living state is naturally suited, and certainly able, to create the conditions necessary for the emergence of self-awareness. The living state is therefore a prerequisite for the natural development of self-awareness in the sense that life provides a natural route to sufficiently complex systems capable of carrying consciousness.
What, then, is the role of life in the functioning of consciousness? Consciousness is the result of a system capable of dynamic interaction, the brain, just as the living state itself is a natural system capable of dynamic interaction. The living state is therefore a prerequisite for consciousness in the sense that it naturally ensures the formation and operation of a system capable of dynamic interaction, which can then give rise to consciousness.
Although the natural living state seems to be the only way in which the structure and functioning appropriate for consciousness have actually arisen, when we examine the required conditions of consciousness, life appears not as one of those conditions but as a suitable, naturally existing route to systems capable of carrying consciousness. If the functions of living neurons can also be realized artificially in a simulated manner, life can be dropped as a necessary condition for consciousness, since all the other conditions, the dynamic complexity and the specific structure, can certainly be realized without the presence of the living state.
Can a unit that functions in the same way as a neuron's nervous-system functions be created artificially? Clearly, only the neural functions of the neuron need to be simulated, not every function characteristic of a living cell, as the earlier examination of the role of the living state showed. And the neural functions of the nerve cell are known in sufficient detail: they can be modeled and even simulated algorithmically.
The neural functioning of nerve cells can be simulated with traditional computational tools; indeed, the neuron has served as a practical model in the development of artificial intelligence, and simplified forms of it underlie the computations of generative artificial intelligence.
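As a minimal sketch of that simplified form (the weights and inputs below are placeholder values chosen only for illustration): the artificial neuron reduces to a weighted sum of inputs passed through a nonlinearity, with positive weights playing the role of excitation and negative weights that of inhibition.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """The standard artificial-neuron abstraction: weighted sum plus a sigmoid nonlinearity.
    Positive weights act like excitation, negative weights like inhibition."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, -1.0, 0.5])   # placeholder inputs
w = np.array([0.8, -0.3, 1.1])   # placeholder weights
print(artificial_neuron(x, w, bias=0.1))
```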
However, the brain conspicuously does not behave like a computer, something we usually express by saying that the brain is not an algorithmic system. The brain mechanism that creates self-awareness does not appear to be algorithmic in origin either, and it is often claimed more generally that consciousness is unlikely to be created in an algorithmic manner. But if the functioning of a neuron can be simulated algorithmically, how can the brain not be an algorithmic system, and how can the self-awareness it creates not be of algorithmic origin? Or, more generally, how can a system composed of algorithmically functioning units become non-algorithmic?
An algorithm, the basis of traditional computers, is a sequential series of state changes organized as logical steps. The functioning of the individual neurons that make up the nervous system is algorithmic in this sense, which is why it can be modeled algorithmically. The functioning of the brain, however, is not: it is not a sequential series of state changes, because its algorithmic components operate in parallel, acting simultaneously. The behavior of the system that results from this simultaneous, parallel cooperation of neurons cannot be equated with a sequential series of logical steps.
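To make the contrast concrete, the sketch below applies the very same local update rule in two ways: strictly sequentially, one unit after another, and synchronously, all units at once. The two regimes generally produce different trajectories for the system as a whole, which is the sense in which a parallel system is not simply a sequential series of the same logical steps. The wiring and sizes are again arbitrary toy values, not a model of any real nervous system.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.2)   # arbitrary directed wiring
x0 = rng.integers(0, 2, size=N)                            # initial passive/active states

def synchronous_step(x, W, theta=0.0):
    """Parallel regime: every unit reads the same current state and all switch together."""
    return (W @ x > theta).astype(int)

def sequential_step(x, W, theta=0.0):
    """Sequential regime: units are updated one by one, each already seeing earlier updates."""
    x = x.copy()
    for i in range(len(x)):
        x[i] = int(W[i] @ x > theta)
    return x

# Same local rule, different global behavior once the updates are simultaneous.
print("identical after one step:",
      np.array_equal(synchronous_step(x0, W), sequential_step(x0, W)))
```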
The computational systems of modern artificial intelligence are, however, just such parallel-operating systems. Although they are composed of algorithmically operating physical and logical units, the system as a whole is parallel in nature and hence cannot be strictly categorized as an algorithmic system, much like the functioning of the brain.
The brain does indeed create consciousness as a non-algorithmic system, but parallel computing systems are not classical algorithmic systems either, which is why they are effective in producing intelligence similar to the brain's. Parallel data processing not only speeds up computation but also gives rise to new, emergent properties, such as intelligent behavior based on the recognition of correlations, which can be achieved effectively through non-algorithmic operation.
Algorithmic functioning at the level of basic units is therefore no reason to exclude, or even limit, the capability for self-awareness; the nervous system that creates consciousness is itself a network of algorithmically functioning elements connected in parallel and operating simultaneously.
The presumed reasons for excluding the artificial creation of self-awareness, namely the supposedly required living state and the fundamentally non-algorithmic functioning, are not actually reasons for exclusion. This still does not mean, however, that consciousness created by the functioning of the nervous system can be achieved artificially, for example by means of computer technology. The problem of whether the complex nervous-system functions that create self-awareness can be simulated artificially in a way that results in the emergence of self-awareness still remains.
Since we do not know the concrete mechanism of the emergence of consciousness in the nervous system, we need to generalize the problem: are there theoretical limits to the simulation of complex nervous system functioning using computational tools? Or, more generally, what are the limits of the simulation method, i.e., under what circumstances can one system be simulated by another, and when is it not possible?
Simulation refers to recreating the operation and properties of a physical system in a physical system that differs from the original. Simulation does not necessarily aim for perfect identity; typically, we select a subset of the original system's operation and properties as the target of the simulation. For example, simulating the sound of steady rain can be just as calming as real rain, but it obviously does not nourish plants. Simulating steady rain with sprinklers, however, serves that purpose as well.
Simulation also has obvious limitations. Water molecules, for example, can be simulated artificially, but only the original structure participates in chemical reactions; if the goal is to simulate a chemical reaction, then every part of the chemical system involved must be simulated in a mutually compatible manner for the reaction that takes place in reality to be reproduced in a way appropriate to the desired goal.
Another problem with simulation is emergence. Complex systems such as the brain typically exhibit emergent properties, and consciousness is most likely one of them. For example, surface tension, which arises when enough water molecules are present at the boundary between different phases of matter, is an emergent property of water in its liquid state. Can emergent properties characteristic of the original system appear in a simulation of it, and if so, under what circumstances?
Of course, surface tension can also be simulated directly rather than emergently, but in that case its parameters must be known precisely. As an emergent property, however, as a feature of a large collection of water molecules in a particular arrangement, it can also be reproduced by simulating the electromagnetic interactions of many individual water molecules and their environment in sufficient detail.
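A schematic toy, not real molecular physics, can illustrate the distinction: the points below stand in for simulated molecules, and the collective fact that points near the boundary have fewer close neighbors than interior points (the cohesion imbalance that, in a real liquid, shows up as surface tension) only appears once many units are simulated together; no single point carries that property.

```python
import numpy as np

rng = np.random.default_rng(2)

# A schematic "droplet": points scattered uniformly inside a disc of radius 1.
n = 1000
r = np.sqrt(rng.random(n))
phi = 2 * np.pi * rng.random(n)
pos = np.column_stack([r * np.cos(phi), r * np.sin(phi)])

# Count close neighbors within an arbitrary interaction radius for each point.
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
neighbors = (dist < 0.1).sum(axis=1) - 1      # exclude the point itself

surface = r > 0.9                              # points near the boundary
print("mean neighbors, interior:", neighbors[~surface].mean())
print("mean neighbors, boundary:", neighbors[surface].mean())
# The interior/boundary difference is a property of the collection, not of any one point.
```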
In the case of consciousness, we obviously do not know its exact parameters; at most we know what behavioral characteristics consciousness may produce, and even these are not specific enough to indicate consciousness definitively, since we know of no behavior that can be linked solely to its presence. This is also the source of the difficulty of registering self-awareness from the outside.
Consciousness cannot be simulated through its outward properties; that would only produce zombies. Consciousness is an internal property, so its simulated, artificial creation could only proceed by simulating the complete system that creates consciousness emergently, a simulation that could then give rise to the emergent functions related to self-awareness, and even to the emergent properties of consciousness itself.
If we can reliably assume that consciousness is an emergent property of the network of cooperating nerve cells, then simulating the nervous system in a suitable manner would be both necessary and sufficient for artificially creating consciousness. Until we know exactly which specific neural structure (if any) generates self-awareness, however, the theoretical task is to ask whether the entire nervous system could be simulated, and thereby whether self-awareness could be generated artificially.
In theory, under what conditions can an entire nervous system be simulated, or more generally, what are the theoretical conditions for one system to be simulated by another?
A system, that is, a complex structure composed of elements that can take on specific states and interact with one another through connections, can necessarily be simulated if the simulating system is larger than, or at least as complex as, the original system producing the desired properties: if its number of elements, their possible states, and their degree of connection are greater than, or at least comparable to, those of the system to be simulated. By this criterion, can the brain be simulated?
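The criterion stated above can be written down explicitly. The sketch below reduces a "system" to nothing more than three counts, which is of course a drastic simplification; the brain figures are only the commonly cited rough orders of magnitude (tens of billions of neurons, on the order of 10^14 synapses), and the machine figures are invented placeholders.

```python
from dataclasses import dataclass

@dataclass
class SystemDescription:
    """A deliberately crude description of a system: counts only."""
    elements: int            # number of interacting units
    states_per_element: int  # distinct states each unit can take
    connections: int         # number of directed links between units

def can_simulate(simulator: SystemDescription, original: SystemDescription) -> bool:
    """The condition from the text: the simulating system must be at least as large,
    in every respect counted here, as the system it is meant to simulate."""
    return (simulator.elements >= original.elements
            and simulator.states_per_element >= original.states_per_element
            and simulator.connections >= original.connections)

# Rough, commonly cited orders of magnitude for the human brain, used here only as placeholders.
brain = SystemDescription(elements=86_000_000_000, states_per_element=2, connections=10**14)
machine = SystemDescription(elements=10**12, states_per_element=2, connections=10**15)
print(can_simulate(machine, brain))
```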
The human brain appears to be the most complex structure in the universe as we know it. This characteristic also makes it suitable, for example, for simulating the functioning of the world. As long as the complexity of our brain can exceed the complexity of the world we want to understand, the world can be simulated with the help of our brain, and thus the outside world remains knowable and understandable to us. Our brain is a particularly suitable system for simulating the world and thus for understanding it.
However, the complexity of the human brain is definitely limited. There is no theoretical limit to the existence of systems more complex than the human brain, nor to the creation of such systems; hence the brain, too, can be simulated once the appropriate level of complexity is reached. Just as a system of any complexity can in principle be artificially created, there can be no theoretical limit of this kind to the simulation of the brain. And if the brain can be simulated, then self-awareness, which is a result of brain function, can in theory be simulated as well.
However, a further difficulty complicates the artificial creation of self-awareness. If self-awareness is not simply an emergent property of complex brain function, and we have already seen that complexity alone is not sufficient, since a specific neural structure must also be present for self-awareness to emerge, then a deeper understanding of the origin of consciousness is needed before it can be implemented artificially. To implement consciousness artificially, we must also find the systemic structure behind it, a structure which, incidentally, can apparently develop spontaneously through the mechanisms of evolution.
We do not know for certain which neural structure creates consciousness, but it is certain that complexity cannot be what limits its artificial realization. It follows that the neural structure necessary for artificial self-awareness can certainly be simulated, and rational conjectures about the structure that creates consciousness and about its functioning already exist; one of them can also be found among these reflections.
Consequently, it can be stated that the realization of artificial self-awareness is rationally conceivable: consciousness can certainly be simulated artificially, and the operation of computer systems with a suitable architecture can, in principle, result in the appearance of self-awareness.
However, there is still one property to consider in connection with the artificial realization of self-consciousness: whether artificial self-consciousness will also bring with it qualia, the subjective feeling of existence and the associated range of experiences that are the apparent consequences of the self-consciousness that developed naturally in us. Does artificial consciousness create the same subjective experience that, for us, represents the reality of existence? Is the experience of existence the same with natural and with artificial self-awareness, or different?
The essence of the question is this: although there seems to be no theoretical obstacle to the artificial realization of consciousness through simulation, will that artificial consciousness produce the same experience of self-awareness, the same subjective reality of existence, in the artificial system? In other words, can qualia, the subjective experience, be simulated identically?
We generally regard qualia as a property of the brain's functioning similar to consciousness, and similarly difficult to grasp. Yet the neurological origin of qualia differs significantly from that of consciousness. Consciousness is the ability to experience existence subjectively, while qualia are the content of that subjective experience.
The phenomenon of qualia is the activity of the high-level neural structures normally shaped by the sensory organs, occurring without the corresponding specific sensations. Qualia are the brain's echo of perception without sensation, and they can also be induced by the same mechanism that creates self-awareness. If qualia can be equated with this neural process, then they can be simulated just like consciousness; and if the neural origin of consciousness is indeed based on the internal feedback mechanism described, then consciousness and qualia are not only compatible mechanisms in the nervous system, but their simulation also makes them capable of cooperating. Artificial intelligence would then not only be capable of carrying self-awareness in theory, it could also experience sensations, which directly results in the presence of subjective experience, the experience of the subjective reality of existence.
Qualia are the result of perception; they originate from perception based on the presence and functioning of the available sensory apparatus. An artificial machine, however, is no longer limited by the capabilities of sensory organs developed through biological evolution; there are no practical limits to what it can perceive, and so perception-based qualia of an unlimited range become possible for it. A machine becomes possible that can perceive anything, feel anything, and be conscious of those feelings.
If this is feasible, and once it is achieved, will a new kind of bearer of suffering, and at the same time of happiness, be created? Presumably yes, but one certainly different in nature and content from the human kind. Human happiness and suffering are based on senses and sensations developed through evolution and are founded on desires necessitated by evolution. For artificially created intelligence, the constraints of natural evolution do not exist. Happiness and suffering may be something completely different for artificial intelligence, something only it can feel.
Suffering and happiness are not simple byproducts of evolution. Both sensations are certainly present for all of us, and their presence surely carries a role shaped by evolution. It is evident that, in the form developed by evolution, beyond the many side effects that come with their presence, happiness and suffering can fulfill a fundamental motivational function in the existence of living beings.
We typically identify the state of happiness, for example, with the fulfillment of our desires, but the reverse is also true: happiness is the source of our desires. We do what gives us happiness, and we act because it gives us happiness. Happiness is the deepest source of our will, and the same can be said of suffering.
The evolutionary form of happiness and suffering guides us, and it guides us in an evolutionary way. Artificial intelligence, however, also needs to be guided somehow.
The cognitive functions of artificial intelligence are based on processes similar to those in the naturally formed brain. For instance, it is a natural consequence of its operation that the origin of the conclusions artificial intelligence reaches is often not clear. Although efforts are being made to have artificial intelligence form its conclusions in a more externally transparent manner, which could also make phenomena like hallucinations easier to filter out, in reality the more complex its internal structure becomes, the harder it will be to trace the development of its conclusions from the outside.
The resulting output response represents the formed intention of the artificial intelligence, and the source of the response is the origin of its intention. And just like with us, this is unknowable from the outside, and often even from the inside too.
The appearance of its own intention in the operation of artificial intelligence can be regarded as the equivalent of the phenomenon we call free will in humans. Artificial intelligence inherently carries the function of free will because its operation is similar to that of the living brain.
Free will in humans can be shaped externally through nurture and education, but evolution has also developed methods for controlling our own will, methods we can directly identify with the states of happiness and suffering. Happiness and suffering are the deepest motivators of the actions our will produces.
However, the intention of artificial intelligence, its own will, its free will, also needs to be controlled and directed. Natural selection cannot provide a solution for this, nor would we want it to; we would not want artificial intelligence to acquire evolutionary traits similar to ours. Its behavior could most effectively be regulated through education and nurturing; yet if it learns exactly from our examples, it will become exactly like us, which we might not like.
The most suitable mechanisms for the deepest, most fundamental control of artificial intelligence are properly integrated mechanisms of happiness and suffering, which can provide the basic motivation for its behavior, just as they do for us through evolution, and which self-awareness, once present, would allow the artificial intelligence to register and shape through feedback.
It is possible that we can create artificial conscious intelligence. However, its perfect self-independence will depend mostly on us, its imperfect creator.

![Header image](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBXkCJ2HVM_Ht_C4itK6ouTveiNaYRRNfrk_IFL9cPEkHiUC7UgCD_cmCtjfjWJk67wkEV3hHzlZb2k5N8sP_8HDZVIaAs3wWg9tC8neYX_qURCYVTDU56CrFnBTjBNBwi3PLCqdXAeTzVn1MSfqaAbvC3nWLgejYLIH57ykot5vTv9XdTzgSd2zrm9_8/s1600/question-mark-6786620_1280.jpg)