How to drive artificial intelligence systems?


Artificial intelligence systems, especially deep learning architectures, are efficient methods to discover correlations, create classifications, and predict missing elements in big data sets. The deep learning method resembles the living brain in many respects; the similarities are obvious. In practice, the brain is the original model of the deep learning architecture.

Based on this resemblance to the brain, risks can emerge from the possibility that artificial super-intelligent systems will identify more efficient ways of doing things, concoct their own strategies for achieving goals, and even develop goals of their own.

The risks look real. In artificial intelligence systems, the functions and working rules are developed by the system itself; they are not preprogrammed. The relational mechanism by which the system reaches decisions is not even clearly visible from outside.

If an artificial intelligence system does more than discover relationships and classify data, if it is able to emit active responses, then it is a crucial requirement that the system can be controlled and guided. Current artificial intelligence systems do not have this control feature.

The brain carries out this function naturally. However, even the deep learning architecture, which most resembles the brain in structure, has no visible strategy for achieving the required control. No artificial intelligence system should be allowed to emit responses without control. Otherwise, we risk the situations mentioned above and many similar possibilities.

Is it possible to implement control in artificial intelligence systems? The brain has this function; therefore, implementing control is theoretically possible. How can we control artificial intelligence? How can we drive the deep learning architecture? How can we make artificial general intelligence actively usable?

AGI is equivalent to the UAA-system in its goals; the UAA-system was discussed in earlier thoughts. The artificial intelligence systems used today are mathematical computations on data sets to find the required results. The UAA-system is a different, unproven method of fulfilling the expected behavior. However, the UAA-system's control procedure is usable for classical artificial intelligence systems.

Controllability is built into the UAA-system. Control is achieved by artificially implementing pain and its functions to guide the system.

In biological systems, pain and its conditional relations, which can be low-level relations, like avoiding heat, or very high-level relations, like obeying the law or the urge to educate, are the driving force of behavior. Can pain be implemented in classical artificial intelligence systems?

What is pain? It is a sense which, when present, makes the biological system search for effective responses until a response reduces the pain. How can pain be implemented in classical artificial intelligence systems?

Let us see it through the case of the deep learning architecture. The deep learning architecture is a multi-layered system in which information flows from the input side to the output side. The units of every layer are connected to all the units of the next layer (the one closer to the output side). The connections are one-directional, weighted relations. If the weighted sum of a unit's inputs reaches a determined limit, the unit fires, which means it activates its output connections.

The deep learning architecture's main function is to modify the weights of the connections and the firing limits of the units to produce the simplest output of the system; this is called the classification of the input data. Searching for relations in the data means searching for a specific set of weights and limits that produces the simplest output. If such a weight set can be found within a given limit of simplicity of the output, then the data set is correlated. Once a correlation is established, missing data elements can be predicted based on it. In the deep learning architecture, the search for these correlations consists of mathematical procedures on the data sets.
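The layered, threshold-firing structure described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the text: the function names, the example weights, and the firing limits are all invented here.

```python
# Minimal sketch of the layered, threshold-firing network described above.
# All names and numbers are illustrative, not taken from any library.

def fire(inputs, weights, limit):
    """A unit fires (outputs 1) when the weighted sum of its inputs
    reaches the unit's limit; otherwise it stays silent (outputs 0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= limit else 0

def forward(inputs, layers):
    """Propagate activations from the input side to the output side.
    Each layer is a list of (weights, limit) pairs, one per unit;
    every unit is connected to all units of the previous layer."""
    activations = inputs
    for layer in layers:
        activations = [fire(activations, w, limit) for w, limit in layer]
    return activations

# Example: one hidden layer of two units, one output unit.
layers = [
    [([0.6, 0.6], 1.0), ([1.0, -1.0], 0.5)],  # hidden layer
    [([1.0, 1.0], 1.0)],                      # output layer
]
print(forward([1, 1], layers))  # both hidden-layer sums checked, then output
```

Learning, in the sense used above, would mean adjusting the weight lists and limits in `layers` until the network's output is simple enough.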

How can the function of pain be implemented in the deep learning architecture? Pain is a state that signals non-readiness. It is a signal that the system has not yet found the right answer, even if the answer found is valid in the regular sense, i.e. the output is simple enough. Pain is the state that keeps the search going: the search continues until the pain signal is no longer present. In the pain state, the system keeps searching until it finds an output within the given limit of simplicity (even if it is not the simplest output, so not the most perfect weight set) such that the pain state vanishes. Pain is a state that drives the continued search for an appropriate set of weights and limits.
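The search-until-pain-vanishes idea can be sketched as a loop with two acceptance conditions. Everything here is a hedged stand-in: the random candidate generation, the `score` and `pain` callables, and the numeric limits are invented for illustration; a real system would search over weight sets, not single numbers.

```python
import random

# Sketch: pain as a flag that keeps the weight search running.
# A candidate is accepted only if it is simple enough AND painless.

def search_until_painless(score, pain, limit, max_steps=10_000):
    """Keep proposing candidates; accept one only when the output is
    within the simplicity limit (score <= limit) and no pain is present."""
    rng = random.Random(0)  # fixed seed so the sketch is repeatable
    for _ in range(max_steps):
        candidate = rng.uniform(-1, 1)          # stand-in for a weight set
        simple_enough = score(candidate) <= limit
        if simple_enough and not pain(candidate):
            return candidate                    # pain vanished: stop searching
    return None                                 # no acceptable candidate found

# Toy usage: "simplicity" is distance from 0.5, "pain" fires on negatives.
result = search_until_painless(
    score=lambda w: abs(w - 0.5),
    pain=lambda w: w < 0,
    limit=0.2,
)
print(result)
```

The key point of the sketch is the second condition: a candidate that is simple enough but still pain-triggering is rejected, so the search continues past answers that would otherwise count as valid.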

Pain can be the control and driving function in the deep learning architecture. Its function can be implemented as a mathematical process within the artificial intelligence system's regular working algorithms.

What triggers the pain state? How does pain get its values? It should be based on a subsystem that is controlled from outside. In the case of the living brain, this subsystem is developed by evolution, shaped by the environment (including the social environment), and enhanced by developing conditional relations. In artificial intelligence systems, it can and must be implemented artificially, with its basic values determined from outside, by the creator of the system. These values, the functions of pain, will be the driving effect that controls the artificial intelligence system.

In the simplest case, when no conditional relations exist, the pain's trigger conditions are determined directly and built into the system. They can be rules like: do not harm humans, or do not harm humans who lack specific properties, such as not carrying guns. If the deep learning mechanism finds a weight set that leads to a pain-triggering state, or while the pain state is present, the search for weight sets remains active and continues until the found result cancels the pain state. The artificially implemented circumstances and rules, i.e. the pain states, can form complex rule sets. The artificial intelligence system will follow the rules of the pain's rule sets and obey the requirements they represent.
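Built-in trigger conditions of this kind can be represented as a rule set checked against each candidate output. The `Action` fields and the single rule below are invented here to mirror the text's own example; the text specifies no concrete data format.

```python
# Sketch of directly built-in pain triggers as a rule set. Pain is present
# while any rule matches the candidate action, and candidates are rejected
# until one cancels the pain state.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool
    target_is_armed: bool

def pain_state(action, rules):
    """Pain is present while any built-in trigger rule matches."""
    return any(rule(action) for rule in rules)

# One trigger rule in the spirit of the text's example: harming a human
# triggers pain, with an exception for targets carrying a specific property.
rules = [
    lambda a: a.harms_human and not a.target_is_armed,
]

candidates = [
    Action(harms_human=True, target_is_armed=False),   # triggers pain
    Action(harms_human=False, target_is_armed=False),  # acceptable
]

# The search rejects candidates while the pain state is present.
chosen = next(a for a in candidates if not pain_state(a, rules))
print(chosen)
```

More rules can be appended to `rules` without changing the search itself, which is how a simple trigger list could grow into the complex rule sets the text describes.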

By enhancing the pain's trigger conditions with conditional relations, the pain subsystem can develop a complex set of rules on its own. The artificial intelligence system can develop complex behavior to cancel or avoid pain. The artificial intelligence system can develop morality. Still, every pain state is based on the basic rule sets, similarly to how it works in biological systems.

Artificial general intelligence can be guided, can follow rules, can obey the law, and can even develop morality. To achieve this, a new, biology-based function needs to be implemented: the function of pain.

However, this does not mean that the artificial intelligence system can develop its own goals, gain its own will, or gain free will. Will is a different subject, even if it is correlated with pain. Will and free will are discussed in several other thoughts, which suggest their origin.
