Pursuing Artificial General Intelligence - the problem, the solution, and the risk

Artificial General Intelligence (AGI) is the ultimate tool we desire to make human life better - or worse, because of the risks it carries. Today's progress in Artificial Intelligence seems to have stalled because of the mounting difficulty of making deep learning systems universal.

An AGI system would be capable of acting intelligently in any environment without being specialized for those circumstances. Its internal working mechanism would be able to generate intelligent behavior regardless of the actual properties of the available datasets.

We already have systems we call Artificial Intelligence. AI models can exploit statistical patterns in datasets, but they do so without any internal judgment about whether a discovered correlation reflects a real connection or is just an unrelated coincidence. The likely reason for this shortcoming is that today's AI does not learn meaning in the flexible and generalizable way that humans do.

Today's AI has no clue what it is doing because it lacks the ability to extract meaning from the available information. A human judgment is still needed to decide whether its behavior is properly adjusted, because only humans possess meaning.

AI has no actual understanding because it lacks an internal model of the world and of how it and the objects within it function, as humans have. A possible solution must combine deep learning with a cognitive-model approach.

AI development is stuck at this hurdle because we do not know how to create cognitive models artificially; we do not understand how the brain generates meaning from input information. We do not even have an exact definition of what meaning is. Yet meaning cannot be just a fuzzy philosophical concept. The creation of meaning must be an information-processing method carried out by neurons, so, in principle, our digital computers must be able to simulate the information processing that somehow creates meaning.

You can try prize-winning chatbots like Mitsuku ( https://www.pandorabots.com/mitsuku/ ) or Rose ( http://ec2-54-215-197-164.us-west-1.compute.amazonaws.com/speech.php ) to see how (un)intelligent today's language-based cognitive systems are.

Today's successful AI systems, like DeepMind's AlphaGo, the system that beat the world's greatest players at one of the world's toughest games, are specialized setups capable of doing a specific task. Even if they do it very intelligently, they lack the flexibility to adjust on their own to a slightly different task without being retrained.

IBM's Watson, the AI system that won Jeopardy! against the best human opponents under regular conditions using human language, (most likely) did not understand anything about what the questions were or why the answers it gave were right. Watson only has a huge database and fast search algorithms for finding the right relations between pieces of meaningless information. Watson can help humans through its effectiveness, but it cannot replace a human in understanding what it is doing.

We ultimately need an artificially constructed process - the creation of meaning from abstract information - if we want to move Artificial Intelligence systems forward toward human-level reasoning.

In addition to the meaning problem, today's AI has no inspiration, nor the ability to gather abstract information for unspecified use across future learning domains. Until it does, we are far from having human-level intelligent machines. Today's AI (somewhat luckily for us) also does not possess a will of its own, a process of self-motivation, which, if it worked together with the ability to create meaning, could fundamentally mimic human intelligence.

To create Artificial General Intelligence, we must define a process that creates meaning from the available data representations.

What could meaning mean? It is difficult to define, but we may not make a big mistake if we define meaning - somewhat loosely - as a structured data representation of all the available information. Meaning must be based on all the available information and created by a process that organizes that information. In general, meaning is all the related information arranged in a structured, relational hierarchy.

Why is the concept of meaning so difficult if it looks like just data processing? Because our only example, the brain, processes information differently than digital computers do. This is why it was so difficult to develop deep learning methods on common digital computer architecture. The brain, even if it has similarities with digital operating methods, differs fundamentally in architecture from everyday digital computers. Giving digital computers deep learning capabilities means mimicking, on a different architecture, the process the brain uses to find correlations in unstructured datasets. Once that method was found, artificial learning systems gained capabilities superior to a human's in their learning field, because artificial hardware does not have biological limits. Deep learning has surpassed humans in its specific fields.

To make further progress in the field of Artificial Intelligence, we need to mimic, on digital computer architecture, the brain's data-processing method that creates meaning. We need to reproduce that method artificially by building classified, relational data representations from unclassified datasets, and thereby create meaning.

The fundamental problem is that we do not know how the brain processes and structures information; we do not even know exactly what the result of that data processing is, or what meaning really is. However, we can propose concepts and processes in the hope that realizing them will lead to the desired goal.

To get closer to what meaning means, let us examine, for example, what a chair means. I can recognize a chair from different viewpoints because I have a learned concept of what it is. A chair is an object, made from various materials, for the purpose of sitting on it or using it for other things that can be done with it. A chair means all the available information related to that subject, organized in a structured, relational hierarchy.

Let us suppose that meaning is - as the result of the above-mentioned data processing - the abstraction and generalization of the available datasets. In this concept, abstraction means creating a classification within a given data representation, and generalization means creating a dynamic, relational correlation between different data representations.

Today's computers mostly hold visual, audio, and a huge amount of abstract written information. In the computer's memory, these are just meaningless symbols. However, with proper information processing, these meaningless pieces of information can be connected, organized, and related through abstraction and generalization. The structure of correlations among what the data represent is a description of the outside world; it forms the abstract representation of meaning.
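
To make this idea concrete, here is a minimal sketch of meaning as a concept graph, using the chair example above. The class and relation names (Concept, relate, "is-a", "made-of") are illustrative assumptions, not an established design: abstraction appears as an is-a classification, and generalization appears as links between different data representations.

```python
# A minimal sketch of "meaning as a structured, relational hierarchy".
# All names here are illustrative assumptions, not an established design.

from collections import defaultdict


class Concept:
    """A node holding one data representation (an image, a word, a sound)."""

    def __init__(self, name):
        self.name = name
        self.relations = defaultdict(set)  # relation label -> related concepts

    def relate(self, label, other):
        """Generalization: a dynamic relational link to another representation."""
        self.relations[label].add(other)
        other.relations["inverse:" + label].add(self)


# Abstraction: classification inside the data representation (an is-a hierarchy).
furniture = Concept("furniture")
chair = Concept("chair")
chair.relate("is-a", furniture)

# Generalization: relations tying the chair to other representations,
# including other modalities (materials, purposes, stored images).
chair.relate("made-of", Concept("wood"))
chair.relate("used-for", Concept("sitting"))
chair.relate("looks-like", Concept("chair-image-0421"))

# "What a chair means" is then the structure reachable from the chair node.
for label, targets in chair.relations.items():
    print(label, "->", sorted(t.name for t in targets))
```

The point of the sketch is only that "what a chair means" becomes the structure reachable from the chair node, not any single stored definition.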

Classification and generalization are fundamentally dynamic because the world is dynamic and constantly changing.

What do we have now to achieve the function of meaning artificially? We do not know exactly how the brain performs this information processing - the data classification and generalization - not even by understanding how a neuron works, because the brain has a different architecture. We try to build models that match how the brain works; the earlier discussed resonance model (link) is one of them. However, until we figure that out, we can still try to mimic brain functions instead of modeling how the brain processes information.

We possess extensive unclassified datasets. We have sophisticated statistical methods to find patterns in them. Abstraction and generalization are based on pattern search over extensive unclassified datasets. We already have the tools for data abstraction and generalization; we already have the tools to create meaning.
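
As a rough illustration of this claim, off-the-shelf statistical tools already perform both steps on unclassified data: clustering imposes a classification (abstraction), and measuring similarity between the discovered classes builds a small relational structure over them (generalization). The random data, the parameters, and the use of scikit-learn's KMeans below are assumptions chosen only to keep the sketch short and runnable.

```python
# A rough illustration that existing statistical tools already cover a crude
# form of abstraction and generalization. The dataset is random and the
# pipeline is only a sketch, not a proposed AGI method.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unclassified dataset: 300 feature vectors (e.g. image or text embeddings).
data = rng.normal(size=(300, 16))

# Abstraction: pattern search that imposes a classification on the raw data.
kmeans = KMeans(n_clusters=5, random_state=0, n_init=10).fit(data)
centers = kmeans.cluster_centers_

# Generalization: relate the discovered classes to each other by similarity,
# producing a small relational structure over the abstractions.
norm = centers / np.linalg.norm(centers, axis=1, keepdims=True)
similarity = norm @ norm.T
for i in range(len(centers)):
    nearest = np.argsort(similarity[i])[-2]  # closest other class
    print(f"class {i} is most related to class {nearest}")
```

Nothing in this sketch claims such tools are sufficient for meaning; it only shows that the building blocks already exist.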

What do we need to do to have Artificial General Intelligence? We need a general information process of data classification by abstraction and generalization over all the available information. It must be a continuous background process, as sketched below. Every new piece of information must be classified and related to all the other information that is already available and processed. The brain does this constantly; an AGI must do it too.
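
A toy sketch of such a continuous background process might look like the following: every new observation is either folded into the closest existing concept or founds a new one. The streaming source, the distance threshold, and the update rule are all assumptions made for illustration only.

```python
# A toy sketch of the "continuous background process": each new piece of
# information is classified against what is already known and related to it.
# The threshold, update rule, and simulated stream are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
concepts = []                 # running list of concept prototypes (mean vectors)
NEW_CONCEPT_DISTANCE = 5.0    # beyond this distance, a new concept is created


def integrate(item):
    """Classify one new observation and relate it to the existing concepts."""
    if concepts:
        distances = [np.linalg.norm(item - c) for c in concepts]
        best = int(np.argmin(distances))
        if distances[best] < NEW_CONCEPT_DISTANCE:
            # Classification: fold the new information into a known concept.
            concepts[best] = 0.9 * concepts[best] + 0.1 * item
            return best
    # Nothing close enough exists: the item founds a new concept.
    concepts.append(item.copy())
    return len(concepts) - 1


# Simulated stream of incoming information (e.g. embeddings of new inputs).
for step in range(1000):
    observation = rng.normal(loc=rng.choice([0.0, 10.0]), size=8)
    integrate(observation)

print(f"{len(concepts)} concepts formed from the stream")
```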

The function of meaning does not depend on the amount of available information; only the quality of the meaning does. A meaning-based AGI can understand the world more deeply when more information is available, but possessing the function of creating meaning is a matter of having the proper data processing.

We are already doing this kind of data processing. Without it, we could not use search engines, or their response time would be unacceptable. Today's image-processing methods can classify objects reliably from different viewpoints. Voice processing can convert spoken words into writing. Possessing meaning is not a new quality but a global classification of all the available information by abstraction and generalization. Possessing meaning is a process over the globally available information.

If we ask a capable present-day AI system what a chair is, it simply recites the dictionary definition. That is mostly all the information we can obtain from these systems. Today's AI systems do not apply a global classification, by abstraction and generalization, of all the available information about the chair. They have the methods: they use deep learning to recognize relations, and in this way they are potentially capable of organizing all the available information in a hierarchical, relational way. They simply do not do it; the required computational resources are too large. Today's digital computer architectures, even if potentially capable of fulfilling this task, are not efficient systems for this kind of data processing. The human brain evolved specifically for this purpose and is a far more efficient system for the task.

Yet, to reach our desired goal of creating AGI, we must overcome this computational obstacle, and eventually we will. We can use traditional computer architecture with hardware and software optimization and enhancement, we can develop more capable computational architectures that resemble how the brain works, like the suggested resonance model, or we might try the completely unrelated technology of quantum computing, which seems to be more than adequate - a highly effective working mechanism for this particular task.

How would we recognize that an AI possesses meaning? Having meaning is not a new, separately created function that could be easily recognized. However, if an AI system possesses the ability to create meaning, it can do its job more intelligently, more flexibly, and more generally. Our main goal was originally to create Artificial General Intelligence, and our presumed obstacle was that today's AI systems cannot create and possess meaning. If we implement methods of data classification and generalization over all the available information in an AI system and observe that this makes the system function more generally, we may safely state that the system possesses the function of meaning.

How should we implement the functions of data classification and abstraction in today's AI architectures? They are already there; we just need to utilize and integrate them more deeply and more fundamentally into the system, applying them to the whole of the available datasets. The challenge is huge, but it can be built up gradually. The ability is potentially and fundamentally present; the quality of the function needs to be enhanced.

Beyond Artificial General Intelligence
To create an AGI comparable to human intelligence, we need to step even beyond possessing the property of meaning. We need to create motivation, will, and curiosity in AGI systems. The realization of these functions was described in earlier thoughts. However, achieving them carries the huge risk of creating competing intelligent systems, a competition that includes human intelligence too. Without limiting human-level artificial intelligent systems, we invite competition with an AGI potentially more capable than we are. Before we give these systems the ability to have a will, we need to develop driving mechanisms that limit their self-generated goals. The method for creating this limiting function is also described in an earlier thought.

Pursuing Artificial General Intelligence poses problems, suggests solutions, and predicts risks. We need to, and we will, handle the obstacles. We are intelligent machines.
