How can we create artificial intelligence that cooperates with humans? Controlling autonomous robots


We are surrounded by more and more artificially created machines that are capable of increasingly complex tasks and act with increasing autonomy. We are living in an age in which robots are becoming our companions in human society.

Robots are now used in almost every aspect of our personal and social lives. However, as robots develop and become more and more capable, the question of how to integrate them into society and operate them safely, in cooperation with humans and humanity, becomes more pressing. By what rules should a machine that makes independent decisions operate in order to carry out its tasks?

Ever since the creation of autonomous robots, mankind has been thinking about how to regulate artificial intelligence so that increasingly advanced machines always remain at the service of humans. The importance and urgency of this task is shown by the fact that we often do not even know how a given state or decision arises within an artificial intelligence system. How should we control a system when we do not even understand the reasons behind its behavior?

If the particular decisions that emerge in an AI system cannot be controlled, then the nature of its behavior must be regulated instead, so that it functions on behalf of society. Perhaps the best-known regulatory concept is the set of laws of robotics popularized by Isaac Asimov in science fiction. The laws appear reasonable, but literary explorations of Asimov's laws already demonstrate that it does not seem possible to regulate robots capable of independent decisions with direct rules alone.

Direct rules are appropriate for defining the operation of robots at a basic level. Autonomous robots that perform simple functions typically behave only in an algorithmically programmed manner, according to such direct rules. However, these robots can only be used properly under fixed and steady conditions. When robots controlled by direct rules are placed in changing conditions, such as the natural human environment, they can quickly find themselves in a situation where their rigidly controlled operation renders them incapable of performing their tasks, as the sketch below illustrates.
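A minimal sketch of this limitation, not taken from the article: the condition names and actions below are hypothetical, and the only point is that a purely rule-driven robot has defined behavior solely for the situations its programmer anticipated.

```python
# Hypothetical direct-rule controller: behavior exists only for pre-listed conditions.
DIRECT_RULES = {
    "path_clear": "move_forward",
    "obstacle_ahead": "stop",
    "at_charging_station": "recharge",
}

def act(observed_condition: str) -> str:
    """Look up the pre-programmed response for the observed condition."""
    try:
        return DIRECT_RULES[observed_condition]
    except KeyError:
        # In a fixed, steady environment this branch is never reached.
        # In a changing human environment, unanticipated conditions are common,
        # and the rigidly controlled robot has no valid action left.
        return "halt_task"

print(act("obstacle_ahead"))                       # -> "stop"
print(act("child_runs_into_path_while_raining"))   # -> "halt_task"
```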

And if the applied direct laws do not strictly define behavior but only describe expected goals, as Asimov's laws do, their application can lead to operational conflicts that cannot be resolved within the given rule-based framework as the environment changes. For example, the difficulty of applying rigid regulation appears as a real, concrete problem when determining the behavior of self-driving cars in different traffic situations.

Operating under defined, fixed laws is essential and can provide the basis for how robots work, but such strict laws seem incapable of properly governing robots in a changing environment. It seems impossible to create a set of pre-programmed behavioral rules that can control a robot correctly, in the interest of humans, in the presence of changing and unpredictable circumstances.

Directed only by strict rules, autonomous robots cannot fulfill complex tasks while remaining properly integrated into human society. The operational rules must be applied in different situations in a way that dynamically adjusts to and interacts with human society. Such dynamic application of laws cannot rest on a rigidly bound system of rules; proper operation requires constant adaptation to a continuously and dynamically changing human society.

A robot that is suitably integrated into human society needs to constantly experience and learn the current nature of its environment and adapt its own behavior accordingly. In addition to following predetermined rules, a properly functioning robot must be able to learn from its experience of the world, of humans, and of its own operation, and adapt its operating mechanism to perform its task.

Learning seems to be a necessary function for the proper control of autonomous robots. However, the machine-learning AI solutions currently in use do not seem well suited for integration into human society either. The most advanced deep-learning AI systems in operation use their learning functions quite effectively in the human environment. They are able to discover and apply correlations and relationships in datasets from that environment, typically generated by human activity, more effectively than humans can. This kind of artificial intelligence has surpassed humans in many areas of cognitive ability.

However, the operation of learning AI systems leads to problems that originate in the evolutionary nature of human beings. While a robot learns from human activity, it also learns the less desirable behaviors and traits of human nature. Due to their evolutionary origins, humans are inherently envious, selfish, and racist; these are just some of the human traits that serve self-preservation but are detrimental to life in a community. These behavioral characteristics consequently appear in the learned behavior of artificial intelligence systems.

A classic example of this problem is the behavior of AI-based chat robots, in which racist responses emerge relatively quickly in conversation. Analysis of datasets of human conversations leads the robot to this obviously unconscious behavior.

The "how does an average person communicate in such a situation" type of behavior can lead to a dangerous pattern of attitude for society. Creating a robot that learns from human behavior and acts based on human behavior is dangerous for humans and should be avoided in the case of robots with abilities that exceed human capabilities.

Problematic behaviors that are detrimental to social cooperation but support self-preservation can also emerge on their own in sufficiently advanced autonomous robots that learn simply by analyzing their own experiences and drawing conclusions from them. The emergence of such behaviors may be a logical and therefore spontaneous consequence of the inferential operation of correlation-seeking learning systems, and such conclusions may lead to a catastrophic breakdown of human-machine cooperation. For example, that human activity should be suspended for the survival of mankind is a conclusion easily drawn from the analysis of datasets, but one that must be avoided from the point of view of human existence.

In summary, operation according to direct rules in a dynamic environment inevitably leads either to an overly rigid system or to conflicts between the rules and the goals of operation, while the use of systems that learn from real-world datasets may lead to the development of harmful behaviors that make it difficult for robots to operate effectively in society.

It seems reasonable that a suitable combination of the rule-based and the learning-from-environment methods could generate the expected and appropriate behavior of autonomous robots capable of integrating into society. But how can the two methods be combined into a universal operational control on which autonomously functioning robotic systems can be built, systems that remain integrated into human society and useful to humans and society despite changing circumstances?

In the case of humans, there already exists a behavior-regulating mechanism that supports cooperation: the behavior-determining function of love. In shaping human behavior, love leads to cooperative conduct independent of circumstances; it forms cooperative communities and can even form a whole cooperative society. The self-sacrificing function of love, acting through individual behavior, plays a role in the formation and perpetuation of an effectively cooperating society. Love is an evolutionarily preferred behavior of a species living in community: it operates at the level of the individual, but it is productive at the social level of a society of intelligent individuals.

Implementing love as an operating mechanism that regulates the behavior of autonomous, decision-making robotic systems could therefore be the solution for achieving the expected, efficient functioning of the community.

In humans, love is an emotional function, and as such it is difficult to pin down and define as a concrete form of operation in a biological organism. An additional difficulty is that the function of love would have to be implemented in an artificial mechanism.

In the case of robots, however, the task is not actually to build love as an emotion into an artificially created mechanism. As we saw with racist chatbot behavior, advanced conversational applications may make racist statements, but they do not do so because they have racist emotions. The racism of conversational robots is merely a mimicry of human behavior inferred from the datasets they examine, without any real emotion. The danger of a robot behaving in a racist way is nevertheless real, since learned racism can lead to specific forms of behavior that the robot is obviously not consciously aware of, but which can manifest in real harmful acts.

Implementing love in robotic systems can be based on a similar approach. It is not necessary to create the feeling of love in the robot; it is enough to enable the robot to behave in a loving way.

Love as an emotion is difficult to define, but love as a behavior can be described concretely. Describing love-like behavior also makes it possible to implement it in robotic systems.

The function of love, interpreted for robots on this basis, is the following: while performing its assigned task, whenever the robot has a choice, that is, whenever its action does not conflict with its operating laws, it should prefer and act in a way that favors the operation of others over its own.

In this form, love can be specified exactly as a strict, direct rule of behavior operating in robots, complementing the other operating rules that necessarily interact with it. Using this exact, strictly definable rule, artificial intelligence systems based on advanced learning algorithms could, by examining relevant datasets, weigh the nature of their possible alternative actions while performing their task.
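One minimal sketch of how such a rule could be layered on top of hard operating laws is given below. It is an assumption for illustration, not the article's implementation: the action names, the benefit scores (which in practice would come from learned models of the environment), and the example law are all hypothetical. The only claim it demonstrates is the stated rule itself: among lawful actions, prefer the one that favors others over the robot itself.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    benefit_to_others: float  # assumed to be estimated by the robot's learned models
    benefit_to_self: float

def violates_laws(action: Action, laws: List[Callable[[Action], bool]]) -> bool:
    """An action is forbidden if any hard operating law rejects it."""
    return any(not law(action) for law in laws)

def choose_action(candidates: List[Action],
                  laws: List[Callable[[Action], bool]]) -> Action:
    permitted = [a for a in candidates if not violates_laws(a, laws)]
    if not permitted:
        raise RuntimeError("no lawful action available")
    # The love rule: rank by benefit to others first; own benefit only breaks ties,
    # and then in favor of the more self-sacrificing option.
    return max(permitted, key=lambda a: (a.benefit_to_others, -a.benefit_to_self))

# Hypothetical usage: a delivery robot deciding how to pass a pedestrian in a corridor.
no_harm = lambda a: a.name != "push_past_pedestrian"   # example hard law
actions = [
    Action("push_past_pedestrian", benefit_to_others=-1.0, benefit_to_self=1.0),
    Action("wait_and_yield",       benefit_to_others=1.0,  benefit_to_self=-0.2),
    Action("take_longer_route",    benefit_to_others=0.5,  benefit_to_self=-0.5),
]
print(choose_action(actions, [no_harm]).name)  # -> "wait_and_yield"
```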

Even applying the function of love in behavior does not result in conflict-free operation. It is probably impossible to construct a universal, conflict-free control system that operates autonomously in an unpredictable environment. However, when the function of love is applied, the conflicts that arise do not arise in the course of activities that harm social cooperation, and they are therefore much easier to resolve. (For example, in the absence of other, more advanced mechanisms, a conflict between interacting partners that both act by the rule of love can be resolved by random selection.)
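The parenthetical suggestion could look roughly like the following toy sketch, again an assumption rather than anything specified in the article: when two partners both defer to each other under the love rule and neither can proceed, the deadlock is broken by picking one of them at random.

```python
import random

def resolve_mutual_deference(partner_a: str, partner_b: str) -> str:
    """Both partners yield to each other; pick one at random to proceed first."""
    return random.choice([partner_a, partner_b])

print(resolve_mutual_deference("robot_1", "robot_2"), "proceeds first")
```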

Applying this function of love to robots can enable artificial autonomous systems to function in a way that is integrated into society, cooperative, and beneficial to humans at all times.

It is worth noting that love also plays a dominant role in human rule-based behavioral systems such as religions. At the social level, religion also leads to a cooperative, and consequently efficient, society, and is therefore an evolutionarily preferred form of behavior. However, for the reasons discussed above, a religion based only on direct laws and strict rules does not produce a society that functions efficiently in the long run. The function of love, on the other hand, makes social cooperation permanently effective, and it is precisely this function that is the fundamental law of some religions.

The autonomous behavior of a robot based on love can enable proper and effective cooperation with the robot's creator, the human, and allow safe integration into human society. We humans are also capable of love. Perhaps love appears as a fundamental law in religions precisely because it enables us to cooperate not only with each other but also with a superior being, at both the individual and the societal level.
