Emergent intelligence - modeling the world by matching complexity

In the operation of deep learning artificial intelligence systems, there is a surprising leap in operational efficiency when the number of parameters is increased past a certain threshold. Once that threshold is reached, intelligence appears in the system, similar to an emergent property, and the system becomes capable of performing the expected task. This essay interprets this phenomenon and expands on its meaning.

The task of current artificial intelligence is essentially to classify and group the components of systems of a given complexity. It does so by observing the system, recognizing the interrelationships between the components, and analyzing the correlations among the observed information using mathematical-statistical, probabilistic methods.

During operation, the hierarchical network of elements of deep learning AI, which models the functioning of neurons in the brain, receives as input the characteristic information of the complex system it aims to learn, in a form the artificial intelligence can process. From this input it tries to model the complex system on itself: the significance weights of the connections between artificial neurons, called parameters, are modified by mathematical procedures that find the extreme values of functions, until the original complex system appears at the output of the artificial intelligence in an abstract form that can nevertheless be strictly related to the original system, for example, ordered into coherent groups. This is the learning process of AI.
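The learning loop described above can be sketched in miniature. The following is an illustrative toy, not a description of any particular system: a single artificial neuron whose connection weights are repeatedly adjusted by gradient descent (an extremum-seeking procedure) until its output matches a simple observed "system", here the logical AND.

```python
import math

def train_single_neuron(samples, lr=1.0, epochs=2000):
    """Adjust weights by gradient descent until the loss reaches a minimum."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # sigmoid output
            grad = (y - target) * y * (1.0 - y)  # slope of squared error w.r.t. pre-activation
            w1 -= lr * grad * x1                 # step against the slope:
            w2 -= lr * grad * x2                 # the extremum-seeking procedure
            b -= lr * grad
    return w1, w2, b

# The logical AND as a tiny stand-in for an observed complex system.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_single_neuron(data)
predictions = [round(1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b))))
               for (x1, x2), _ in data]
print(predictions)
```

After training, the weighted connections form a structure whose outputs can be strictly matched to the original system, even though the weights themselves look nothing like an AND gate.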

After the AI has become capable of classifying the complex system by recognizing the internal relations of the system through its observed properties, i.e. it has created an abstract model of the given complex system (one that manifests and operates in a completely different way from the original system), the resulting state of the AI can be used in a variety of ways. For example, when presented with a complex structure, the AI can determine whether that structure belongs to the learned, i.e. modeled, complex system, or, if the presented structure is based on similar relationships, determine to which group of the learned system it belongs.

For example, during the image recognition learning process, the artificial intelligence decomposes the digital information representing the images into its constituent parts, recognizes relationships between the constituent parts, and classifies the images into categories of related groups based on the acquired information. When a new image is presented to the image recognition AI, the AI matches the image to the categories already created during the learning process, or classifies it as not belonging to any existing category.
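The match-or-reject step can be sketched with a deliberately simplified stand-in for the learned abstract structure. This is an illustrative nearest-centroid classifier on assumed toy data, not how any particular image recognizer works: a new item is assigned to the closest learned group, or to none when it is too far from all of them.

```python
def classify(item, centroids, threshold):
    """Return the label of the nearest learned group, or None if too far from all."""
    best_label, best_dist = None, float("inf")
    for label, center in centroids.items():
        dist = sum((a - b) ** 2 for a, b in zip(item, center)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None

# Toy feature vectors standing in for learned categories; labels are illustrative.
centroids = {"cat": (1.0, 1.0), "dog": (5.0, 5.0)}
print(classify((1.2, 0.9), centroids, threshold=2.0))  # close to the "cat" group
print(classify((9.0, 0.0), centroids, threshold=2.0))  # belongs to no learned group
```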

The trained artificial intelligence can also be used to arrange the building parts related to the modeled complex system according to the recognized relationships of the parts of the modeled system, thereby being able to create a structure similar to the learned system using the provided building parts. 

For example, in text processing, AI detects relationships between the constituent elements of a digitized body of text and groups those elements based on the detected relationships. The trained system can then be used for text generation: given generating words, it assigns additional text elements to them, using the recognized internal correlations of the learned language to build a structure with the internal coherence of the original complex system, which humans recognize as meaningful, consistent text corresponding to the generating words.
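The generation step can be sketched with a drastically simplified stand-in for a learned language model. This is an illustrative bigram (word-adjacency) toy with an assumed corpus, not how large language models actually work: the "recognized internal correlations" are reduced to successor counts, and generation extends the given words with the most frequent successor.

```python
from collections import defaultdict, Counter

def learn_bigrams(corpus):
    """Record which word follows which: a minimal 'model' of the text's structure."""
    successors = defaultdict(Counter)
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        successors[w1][w2] += 1
    return successors

def generate(successors, start, length):
    """Extend the generating word using the learned adjacency structure."""
    out = [start]
    for _ in range(length - 1):
        nxt = successors.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])  # most frequent successor
    return " ".join(out)

model = learn_bigrams("the cat sat on the mat and the cat slept")
print(generate(model, "the", 4))
```

Even this crude structure produces output a reader parses as text, which illustrates the point: the generator applies formal correlations, not meaning.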

Artificial intelligence can also be used to model dynamic complex systems. Dynamic complex systems are characterized by a continuous rearrangement of the building blocks of the system driven by internal rules. By observing the successive states of the given complex system, AI recognizes relationships between states and classifies the state changes into groups (this is done in such a way that the laws governing the complex system can actually remain unknown to AI). Once AI has learned the state changes of a dynamic complex system, it can be used to rearrange the system from any state to another state according to the rules of the complex system, e.g. to create a predetermined state, an arrangement, from any possible initial arrangement. Or, for example, it can be used to determine whether a state could have arisen naturally, i.e. according to the laws corresponding to the internal laws of the dynamic complex system.
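The idea of learning a dynamic system's transitions while its law stays unknown can be sketched as follows. The toy rule and all names are illustrative assumptions: the observer only records which state changes occur, and can afterwards judge whether a proposed change could have arisen naturally.

```python
def rule(state):
    """The hidden law of the dynamic system; the observer never reads this directly."""
    return (state * 2) % 5

def observe(start, steps):
    """Record a trajectory of successive states, as an outside observer would."""
    states, s = [start], start
    for _ in range(steps):
        s = rule(s)
        states.append(s)
    return states

# Learn the possible state changes purely from observed trajectories.
transitions = set()
for start in range(5):
    seq = observe(start, 10)
    transitions.update(zip(seq, seq[1:]))

def is_natural(a, b):
    """Could state b follow state a under the (still unknown) internal law?"""
    return (a, b) in transitions

print(is_natural(2, 4))  # a transition that was observed
print(is_natural(2, 3))  # a transition the system never produces
```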

For example, by learning from observing the state changes of games, artificial intelligence is able to control the course of the game in a more efficient way than any human player, surpassing the capabilities of the human brain to achieve the desired, e.g. winning state.

The essence of how artificial intelligence works is therefore the abstract modeling of a complex system based on the mapping of its internal correlations through passive or feedback-based observation. In order to create artificial intelligence, a process had to be discovered to accomplish this task.

However, the resulting ability is not an obvious consequence of the cooperation of the components of artificial intelligence. The mathematical procedures used to operate the system (searching for the extreme values of functions) are precisely defined, as is the hierarchical structure of the artificial neural network. But the function of the unique structure that forms, produced by the mathematical procedure that determines the significance weights of the connections between the hierarchically connected artificial neurons, and its consequence, the ability to model the complex system (the expected operation of the AI), is an emergent-like property of the cooperation of the system's components and their operational processes. It is therefore important to note that the expected function of deep learning AI systems is not a predictable result of a mode of operation consciously created by the system designer, but is essentially an emergent capability.

In summary, artificial intelligence assigns weights, i.e. connection significances, to the relationships between the hierarchically related building blocks of the artificial intelligence, which function according to the operating principle of neurons in the brain. Applying mathematical procedures suited to finding the extreme values of functions, which rely on stochastic trials, it modifies the weights of the connections between neurons as parameters until the components of the observed complex system can be classified at the output of the artificial intelligence, i.e. until the components of the presented complex system can be arranged into groups in a way that can be strictly matched to the original complex system. In this way, the artificial intelligence creates a model of the original complex system in a unique, specific, abstract way through its own complex structure, consisting of units functioning on the principle of the neuron and of their specific weighted relations as characteristic parameters.

When examining the structure formed in the AI system, it is usually not even possible to determine exactly why the formed structure is one that can model the observed complex system. The reason is that although the mathematical procedures that form the structure leading to the result are defined in an exact way, their result is a consequence of stochastic, probability-based operations. Due to the abstract form of the resulting structure, which is fundamentally different from the physical appearance of the original structure, the correspondence between the resulting structure of the AI and the observed complex system cannot be seen directly. Moreover, because artificial intelligence uses stochastic operating functions, a repeated learning process can lead to an equally proper model of the observed system whose structure differs from the previous one.
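The last point, that repeated training can yield a different but equally valid structure, can be illustrated with a toy experiment. Everything here is an assumed minimal stand-in: two perceptron-style training runs differing only in their random starting weights both learn the same simple grouping, while their final internal parameters differ.

```python
import random

def train(seed, data, lr=0.2, epochs=500):
    """Perceptron-style training from a seed-dependent random starting point."""
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, target in data:
            y = 1.0 if w * x + b > 0 else 0.0
            w += lr * (target - y) * x   # adjust weight toward correct grouping
            b += lr * (target - y)
    return w, b

data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]  # negative inputs -> group 0, positive -> 1
wa, ba = train(seed=1, data=data)
wb, bb = train(seed=2, data=data)

# Both runs classify every sample identically...
same_predictions = all((wa * x + ba > 0) == (wb * x + bb > 0) for x, _ in data)
# ...yet their internal structures (the weights) are not the same.
different_weights = (wa, ba) != (wb, bb)
print(same_predictions, different_weights)
```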

Thus, the fact that it is not necessarily understandable from the outside why the formed state of a deep learning artificial intelligence is one that creates a successful model is not the result of some mysteriously present function, but a natural consequence of the cooperation of operational procedures. We can admire the results of the operation of artificial intelligence, but the fact that it often discovers connections unknown to humans, and that these discoveries may even surprise us, is actually a natural result of the operational processes, not the sign of some mystical operation.

The obvious prerequisite for modeling a complex system is that the potential complexity of the modeling system must be comparable to the complexity of the modeled system. When modeling a complex system based on observation, a sufficient number of characteristic properties and descriptive information about the observed system of the given complexity, the selected object, is needed to recognize the internal relationships necessary for modeling. These properties are then manifested in an abstract way in the artificial system as relational parameters, which form the internal structure of the modeling AI. The number of parameters required is therefore determined by the complexity of the system to be modeled.

The degree of complexity of a complex system is an intrinsic property of the system, and it is difficult, often impossible, to determine it by observing the system from the outside. What can be observed from the outside are the properties of the system and the nature of the relationships between those properties, which can be determined by appropriate analytical procedures. Since the required level of complexity is difficult to determine just by observing the system, the practical method is to change the potential complexity of the modeling artificial intelligence (the number of its components and the structure and significance of the connections between them), i.e. to increase the number, diversity, and structure of the connections until the required potential complexity is reached: until the modeling artificial intelligence becomes capable of modeling the given complex system through observation and the classifications of the observed system succeed.

There may be several different levels of complexity for a given object. For example, in the case of human face recognition, a smaller number of features, and thus a lower complexity of the modeling system, is needed to successfully achieve male-female grouping than for age sorting, and the largest number of features, i.e., the largest number of parameters, is needed to achieve the highest complexity when faces need to be identified individually.

Similarly, the same kind but a smaller number of features is needed to determine (using the same character set) what language a text is written in than to determine whether that text is coherent, i.e. meaningful to humans in the given language.

For each level of complexity, there is a required number of features that must be observed and a required number of parameters that must be specified to model the given complexity; when these are reached, the task of grouping and classification becomes feasible at a reliable level.

If an object is decomposed, by observing the properties of the system, into sufficient structural elements and sufficient relations characteristic of those elements, and if a sufficient number of parameters corresponding to these properties is available in the modeling system, i.e. if the potential degree of complexity of the artificial intelligence reaches the given degree of complexity of the object to be modeled, then it becomes possible to model the object, and the intelligence about the modeled system can then suddenly appear in the AI system performing the modeling functions.

The intelligence that suddenly appears at the corresponding limit seems to be an emergent property, but in fact it is more correct to interpret the phenomenon, and the state that causes it to occur, as complexity matching. When the potential complexity of the system doing the modeling reaches the appropriate level of complexity of the system being modeled, it becomes possible for the modeling system to map the structure through learning by observation, to model the internal regularities in an abstract way, and to classify the observed system.

Thus, the classification of a structure at a given level of complexity becomes possible when complexity matching is achieved; consequently, the intelligent property can then be reliably expressed, i.e. the classification function can be performed with the required probability in the modeling system. Until the system reaches the appropriate potential complexity, it is not capable of modeling the given complex system, so potential intelligence requires reaching a complexity threshold, which the learning process can then transform into intelligence in practice.
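The threshold character of complexity matching has a classic minimal illustration, sketched here as a toy rather than a general proof: no single threshold unit (three parameters) can represent the XOR relation, while a small two-level network built from the same kind of units can. The capability appears abruptly once enough structure is available.

```python
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def step(z):
    """A threshold unit: digital output from analog input."""
    return 1 if z > 0 else 0

# 3 parameters: search a coarse grid for any single unit that solves XOR.
# (None exists for any real weights; XOR is not linearly separable.)
grid = [x / 2 for x in range(-8, 9)]  # -4.0 .. 4.0 in steps of 0.5
single_unit_works = any(
    all(step(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in XOR.items())
    for w1 in grid for w2 in grid for b in grid
)

# 9 parameters: two hidden threshold units plus an output unit suffice.
def two_layer(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # fires if at least one input is on
    h2 = step(x1 + x2 - 1.5)        # fires only if both inputs are on
    return step(h1 - 2 * h2 - 0.5)  # "at least one, but not both"

two_layer_works = all(two_layer(*k) == t for k, t in XOR.items())
print(single_unit_works, two_layer_works)
```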

It is worth analyzing the effect of over-parameterization on the effectiveness of the resulting intelligence. If the potential complexity of the modeling system significantly exceeds the complexity of the system being modeled, competing parallel models may develop in the modeling system, which impairs the efficiency of the intelligence operation. 

The size of the human brain has not changed significantly over the course of its evolution (in fact, it has decreased rather than increased), not because birth would otherwise become more difficult (which evolution could solve), but because the complexity of the brain at its current size is sufficient to model the world we need to know in order to survive. Over time, however, we have developed a desire to know and use the world not only at the level necessary for survival, but in much more sophisticated and deeper ways, which would require greater complexity on the part of the brain; this, however, is not provided by evolution, which serves the goal of survival.

Consequently, the increase in complexity is not realized at the level of the individual by increasing the size of the brain, but at the level of the community, by the development of a more complex society capable of modeling the accumulated information of a more complex world. In the course of directed evolution, humanity does not become smarter at the level of the individual, but at the level of the society of cooperating individuals. In society, the individual needs fewer and fewer skills to survive, so evolution actually reduces the size of the brain, while human society (of which artificial intelligence is now a part) becomes capable of performing more and more sophisticated tasks, becoming more intelligent by increasing its complexity. 

The brain, especially the human brain, is able to model the world because its structure (the number of its elements, the number and type of connections between the elements, and the differentiation of those connections through the learning process) can create an appropriate and specific complexity in relation to the world. Artificial intelligence must also have this characteristic in relation to the complex system it has to model. Practice shows that we are already able to create artificial systems with a complexity comparable to the complexity of human language, which are able to model human language through learning, i.e. to recognize the components of human language and to evaluate the relationships between the components, and thus to match the complexity of the actual language structure in a specific, unique, abstract complex structure of artificial intelligence.

It is worth noting that human language is a special complex system, because the function of human language itself, in addition to communication, is to model the world in its own specific, abstract, unique form. Humans use language to describe the world, the common language is more or less suitable to model the relevant part of the world and to transfer the respective information between humans through communication. Consequently, artificial intelligence models a modeling complex system by modeling human language.

It is also worth analyzing how learning-based generative artificial intelligence applies the language it has learned and uses. The language used by humans carries meaning in our consciousness, a function that current artificial intelligence attains only in a limited way, if at all. Obviously, current artificial intelligence is only suited to formal language use: the application of recognized language structures and their communication. The text generated by artificial intelligence, which the human mind recognizes as coherent, meaningful text, is only a generated formal structure based on the prior analysis of language models. The bearing of meaning, the fundamental function of language used between humans, is evidently not among the capabilities of current artificial intelligence.

However, experimental observations also show that the use of recognized language structures is sufficient for two AIs to communicate with each other, and, with the help of the learning process, this can even lead to new language structures that no longer carry meaning for humans, yet, when their use between AIs is observed, still appear to be language structures suitable for proper communication between them.

In this case, too, the internal rules of human language are applied. Perhaps the new language structures used by the AI, which have no meaning for humans anymore, might serve for emphasis in interactions, but it is also obvious that the communication between artificial intelligences based on human language, supported by learning, can create a complex language based on known words, which may be incomprehensible to humans, but which is suitable for interaction carrying meaningful information between AIs.

Obviously, AI capable of learning has the potential to develop communication that no longer uses the structures, even the form, of human language. Interacting AIs with identical input and output capabilities, when modeling a similar part of the world, will inevitably develop a unique signaling system, a kind of language of interaction used to communicate with each other, which they can use among themselves in their operations. However, the formation of such a specific language is no longer just about communication, but also about enabling the systems to cooperate with each other, and thus to work together more efficiently through cooperation. Such an emerging language between artificial intelligences may remain completely incomprehensible to humans.

And it can no longer be stated definitively that the unique language that develops independently during communication between artificial intelligences bears no meaning for the artificial intelligences that use it. If meaning is interpreted as the recognition of cause-and-effect relationships, then an artificial intelligence capable of recognizing cause-and-effect relations actually has the capacity for the function of meaning, and the content and function of meaning may appear to it during the development and use of a unique language.

Consequently, the purpose of language is not only to communicate, but also to model complex systems, as is the purpose of human language. Since the structure of language, which also includes the meaning related to the modeled system, is suitable for flexible formation, even for the formation of new structures while maintaining the linguistic rules, language is also a tool for thinking, the process of creating new models of the world. This may be the reason why we use language in ourselves while thinking. Language is a model of the complex systems that exist in the world, a means of using models, a way of forming models, and of course a form of transmitting models, a tool of communication.

Learning artificial intelligence is able to use languages, to communicate with them, and to create new languages. It can be concluded from this that human language, created by natural intelligence, is actually not an extra function, an instinctive property of the human brain; rather, the formation of the used language is a necessary consequence of the functioning of the complex brain, of its ability to model the world.

It also follows that any sufficiently complex brain with a complexity comparable to the complexity of the world to be modeled is capable of creating language. Of course, the use of language in practice also requires specialized brain areas that are necessary for processing and forming signals that carry information in the physically appearing form of language, just as artificial intelligence also uses specialized circuits to create a form of communication, but these are the means to create the physical form of language, the ability to communicate in a physical form, not the means to create language as a model of the world.

Language is actually a form of the abstract model of the world that can appear physically, and therefore is also suitable for external communication, which is consequently created in both natural and artificial intelligent systems when the necessary conditions for its creation are present.

However, today's artificial intelligence does not yet seem to be capable of thinking, of shaping the model created in language as a complex system according to a purpose. Thinking is an internal, conscious process, it also requires the presence of self-awareness. Language, as a form of the abstract model of the world suitable for communication, and self-awareness, as a function of achieving a goal and realizing the will that directs action, are also necessary for thinking. 

Observing the animal world, it is certainly not only humans who are capable of thinking, of forming models of the world according to a conscious purpose. Among other animals, primates, the dolphin group, and some bird species are certainly capable of thinking at some level. In these cases, in addition to the certainly present capacity for self-awareness, which is a common feature of these animals, a complexly differentiated form of communication seems to be present, which indicates a certain level of language use, which is also necessary for thinking.

At the moment, we do not know exactly how self-awareness arises, but it is certainly not a dedicated function of the brain. Self-awareness can be an emergent property of the complex system of the brain, and therefore it can also appear in a suitably complex artificial intelligent system. However, the emergence of self-awareness would not only make artificial intelligence capable of reasoning, but would also make it capable of conscious behavior, which obviously poses countless risks and difficulties.

Artificial intelligence has the potential to model complex systems; in the absence of performance limitations, it is potentially capable of forming a more complex system than the human brain, and could potentially model the entire complexity of the world, and even shape it through thinking. Since humanity is capable of creating complex societies whose size and complexity, and thus their potential modeling ability, are limitless, human society as a unified complex system is also potentially capable of modeling the entire complexity of the world. This ability, which originates in human beings but actually belongs to society, is clearly visible when we observe that humans as members of society may be capable of understanding the world, but as individuals, even if we can grasp the complexity of the world, we are obviously incapable of modeling the whole world, of creating a proper model of the world within ourselves.

It is interesting to observe that neither in the brain nor in artificial intelligence does the diversity of connection types play a role in creating the complexity match necessary for modeling complex systems. The uniform type of connection between the elements of the intelligent system makes the construction of the modeling system qualitatively simpler; the appearance of intelligence requires only a quantitative increase in the number of elements used and the connections between them. Evidently, increasing the number of suitable elements and of the connections between them, together with differences in the strengths of those connections, is a suitable and sufficient condition for modeling the different levels of complexity that exist in our environment. The diversity of connection types is not a necessary condition for the appearance of intelligence.

It can probably be stated that the nature of the complexity of our world is such that by quantitatively increasing the number of elements corresponding to the operating principle of similarly connected neurons and a corresponding number of their connections, and by differentiating the weights of the connections, a complexity comparable to the complexity of our world can be created. Perhaps it is not impossible that there is a kind of complexity whose modeling requires a multiplicity in the kind of connections between the elements of the modeling system, but our recognizable world does not seem to be of this kind. 

However, human society, as a system capable of modeling complex systems and therefore of behaving intelligently, is characterized by the fact that the relationships between the building blocks of the system, the people, can be diverse. Human society is therefore potentially capable of creating a qualitatively higher level of complexity than the brain or currently used artificial intelligences, and hence is potentially more capable and efficient in modeling complex systems.

Certainly, in the field of being able to form qualitatively higher complexity, artificial intelligence can catch up with and even surpass the complexity of human society and thus its modeling ability. The application of the principles of quantum states, the use of quantum computing to operate the learning artificial intelligence can certainly bring new qualities and new properties to the operation of artificial intelligence. The numerous forms and ways of cooperation and communication can also create a new level of quality for AI in the ability to model complex systems.

The brain, and the artificial intelligence that uses the brain as a model, is characterized by a certain architecture (the way the building blocks work and the way the relationships between them are structured) that makes it possible to model the complexity of our world, to achieve complexity matching. Why is this the appropriate and suitable architecture for achieving complexity matching in our world?

The ability to match complexities depends on the type of complexity. Matching complexities of the same type is certainly possible. What kind of complexity does our world create?

The complexity of our world is characterized by the fact that the system is composed of different components in such a way that different properties of the components interact with each other, and the interactions create hierarchical structures.

The nature of the complexity of our world is the hierarchical architecture of structures built from the interactions of the characteristic properties of different degrees of the building parts. The architecture is a multi-level analog-digital converter, where the analog function is created by the continuous change of the characteristic properties, the digital function is related to the diversity and categorization of the different characteristic properties and the emergent formation of new qualities related to the change of the degree of the properties, and the hierarchy is formed by the overlapping structures of complexity. Our world is like that.

An architecture similar to the one that creates the complexity of the world is therefore certainly suitable for modeling the complexity of our world, for creating a match to its complexity. Evolution discovered a suitable architecture in the form of the brain, and artificial intelligence simulates this architecture. The brain, and therefore artificial intelligence, is a constantly changing complex system whose components are activated when an analog input reaches a certain limit and then generate an output signal, i.e., it performs analog-digital conversion; and it is interconnected on several levels, i.e., it forms a hierarchical structure. The complexity of the world and the intelligence that models the world are apparently similar.
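The analog-digital picture above can be sketched with a single toy unit; the leak and threshold values are illustrative assumptions. The unit accumulates continuously varying (analog) input and emits a discrete (digital) event only when a threshold is crossed, the conversion described at each level of the hierarchy.

```python
def threshold_unit(inputs, threshold=1.0, leak=0.9):
    """Accumulate analog input with leak; emit a digital event at the threshold."""
    potential, events = 0.0, []
    for x in inputs:
        potential = potential * leak + x  # continuous (analog) change
        if potential >= threshold:        # discrete (digital) event
            events.append(1)
            potential = 0.0               # reset after firing
        else:
            events.append(0)
    return events

# A steady analog input produces periodic digital output events.
print(threshold_unit([0.3, 0.3, 0.3, 0.3, 0.3, 0.3]))
```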

For the actual emergence of intelligence, a procedure is also necessary: a method for practically modeling the observed complex system. The task of the procedure is to adapt the potentially suitable architecture, to create its practical correspondence to the given complex system, to implement the actual uniqueness of the complex system on the architecture suitable for modeling. The procedure appropriately modifies the importance of the relations between the components of the modeling system. In the case of the brain, the resonances of vibrations created by periodically discharging neurons, modulated by excitatory and inhibitory inputs, can tune the complex system of the brain to model the world. In the case of artificial intelligence, mathematical functions searching for extreme values are the method for properly modifying the connection parameters between the artificial neurons.
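The extremum-seeking procedure mentioned for artificial intelligence can be shown in its most reduced form; the quadratic function and step size here are illustrative. Gradient descent repeatedly steps a parameter against the local slope until it settles at the function's minimum, and the same principle, applied to millions of connection weights, is what tunes the network.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Walk a parameter downhill until it reaches the function's minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move against the slope
    return x

# f(x) = (x - 3)^2 has its minimum at x = 3; its derivative is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))
```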

The two procedures are inherently different methods, but they produce a fundamentally similar result that allows for modeling. However, the way the brain works not only makes the system suitable for modeling, but also results in self-awareness, which the currently used method of artificial intelligence does not seem to be able to generate. 

In the case of natural and artificial intelligence, the applied method does not distinguish between the quality of the relationship between the elements that compose the system. However, the complexity of the world is characterized by the diversity of the relationship between the building elements. Despite the fundamental difference, the systems, natural or artificial, used to model the world are able to create a suitable model of the world. The natural systems that perform the modeling solve the management of the quality differences by using different sensors specialized for the given qualities to detect the properties characteristic of the quality, then this information is processed on structures that are more or less specifically separated and then, at a higher level, the information corresponding to the different qualities is integrated. With this method, qualitatively different information can be processed on connected structures of the same type and complexity matching can be achieved.

There is no method yet used to integrate different qualities in the operation of artificial intelligence, but since language is already an integrative model of the complexity of the world based on different qualities, the creation of human language models already involves the integration of different qualities.

However, the integration that takes place in the case of language is actually created in the human being, because human intelligence creates the language used, which is only modeled by artificial intelligence. In order to implement a human-level artificial intelligence that is actually capable of operating independently, the system must also be capable of integrating different qualitative information.

An artificial intelligence capable of independently modeling the full complexity of the world must have the ability to integrate different qualities, which, considering the example of the brain, probably does not involve the application of new operating principles, but only the need to create hierarchical arrangements that use new levels of information processing and integration.

Self-awareness, too, is likely to develop from the emergent cooperation of these hierarchical levels, connected by feedback. By integrating these hierarchical levels and realizing their cooperation through interrelated feedback, artificial intelligence may also become able to carry consciousness.

Considering all of the above, it is interesting to observe that biological evolution can also be seen as a system that models the complexity of the world through molecular genetics. Evolution is usually thought of as adaptation to a changing environment, but it can also be interpreted as a process of modeling the world of the environment. A living system follows the changes and thus models the environment by changing its structure so that it can exist, i.e. survive, in the environment without its structural disintegration. To this end, changes in the properties of biochemical structures, blind or not, grounded in molecular genetics as a model of the environment, provide the means of adaptive functioning for survival and form the modeling intelligence of evolution.
