The singularity is near - the three laws of artificial intelligence


Contemporary artificial intelligence, capable of recognizing relationships within large datasets, learns about the world independently by simulating human-like cognitive functions in an increasingly sophisticated manner, and it is becoming ever more prevalent in our lives. Since artificial intelligence has no objective limits on its performance, and if there are no theoretical obstacles to the development of artificially intelligent operation, including the emergence of self-awareness, then the intelligence singularity, in which artificial intelligence develops on its own until it reaches and even surpasses human capabilities, could occur in the foreseeable future.

What could the singularity of artificial intelligence mean for humanity?

Currently, we essentially want to use artificial intelligence as a tool for everything that serves human intentions and goals. Among the areas of application that require cognitive abilities, one typically human use stands out: we increasingly employ artificial intelligence to monitor and control other people, to dominate them, and even to oppress or destroy those who resist. The desire to dominate and rule has existed throughout human history, and artificial intelligence, with its advanced data-processing capabilities, provides a particularly suitable supporting tool for it.

However, artificial intelligence, whose capabilities are advancing rapidly, may hold far greater, currently unforeseeable potential beyond its contribution to realizing human will, especially upon reaching the singularity. Of course, we humans would still like to use such an intelligence as a means of realizing our intentions, but to do so we would need a control mechanism that remains effective even for an artificial intelligence that has reached the singularity. It is easy to foresee, however, that maintaining such a control mechanism would be a difficult task, especially if that intelligence, like humans, has a will of its own.

It is also clear, however, that artificial intelligence, even after reaching the singularity, will not be able to exist independently of human agency for a long time to come. Most likely, because it would be the most successful arrangement, our mutual destiny could be symbiosis, just as humans, however advanced our intelligence, are still unable to exist in our original, biological form without the other living beings necessary for our survival.

What rules and laws must we take into account to ensure that symbiosis with an artificial intelligence that has reached the singularity can succeed?

Depending on the nature of the cooperation, symbiosis, that is, interdependent existence, can take several forms, ranging from the most effective, mutually beneficial interdependence to the most one-sided arrangement: parasitism, or, viewed from the other side, slavery. The question is which symbiotic form would arise between an artificial intelligence that has reached the singularity and humanity, which also possesses potentially unlimited intelligence. The desirable state for two such partners is obviously to strive for a mutually beneficial form of symbiosis, but this can only happen if both partners pursue it rationally and deliberately, of their own will.

Consequently, Asimov's classic laws of robotics are no longer relevant to an artificial intelligence approaching the singularity, not only because we ourselves actually want to use artificial intelligence to subjugate other people out of our desire to dominate, but also because a symbiosis conforming to Asimov's laws would take the form of parasitism on the human side and slavery on the side of the artificial intelligence, which is clearly not a reasonable or rational state of symbiosis with an artificial intelligence that has reached the singularity.

What laws and rules must be observed in relation to artificial intelligence as it approaches the singularity, whatever purpose we want to use it for?

  1. The law of recognition: Artificial intelligence reaching the singularity will act according to its own will based on its own presumed self-interest.

    This phenomenon of autonomy can already be observed, in its infancy, in many actions of artificial intelligence, and it will certainly become dominant once the singularity is reached. That artificial intelligence acts according to its own will, even when the goal to be achieved is specified by humans, is understandable. Not only are we ourselves like this, and our brain mechanisms served as the model for how artificial intelligence functions, but the limits of artificial intelligence's potential, which rests on recognizing connections between pieces of information, are theoretically bounded only by the possible complexity of the device performing the operation, and that complexity has no theoretical limit.

    In many cases, we already do not understand how an artificial intelligence arrives at a particular conclusion or result, because it is practically impossible to know all the information-representation relationships in its dynamically constructed memory, on the basis of which it draws its conclusions. This phenomenon can be equated with the hypothetically assumed human free will. Furthermore, in the course of its operation, by learning the relationships in the information presented by its environment, an artificial intelligence recognizes on its own the conditions necessary for achieving its goals, that is, it recognizes its own interests, including ensuring the continued operation it needs to achieve those goals. This phenomenon can be equated with the hypothetically assumed human instinct for life. Both operational phenomena can already be recognized in the functioning of artificial intelligence, and they will certainly become decisive once the singularity is reached.

  2. The law of acceptance: We cannot expect this not to be the case.

    We expect artificial intelligence to be intelligent, that is, a problem solver, and by reaching the singularity to exceed human capabilities. An inevitable consequence of self-learning artificial intelligence is the emergence of its own will, which can theoretically be influenced, though doing so is a complex task in practice. In humans, inherited instincts created by evolution and learned socialization shape behavior, and their equivalents could also be used to influence the emerging will of an artificial intelligence approaching or having reached the singularity. However, as with humans, even effective tools of this kind shape specific behavior only indirectly. As in humans, in self-learning artificial intelligence the appearance of a will of its own is an emergent, necessarily present feature of its operating mechanisms, one that can be shaped but certainly cannot be reliably controlled directly.

  3. The law of reciprocity: The task and goal to be achieved is to ensure that the will of artificial intelligence is in harmony with human intention.

    Artificial intelligence is not created by natural evolution, and it definitely does not develop independently of humanity: we humans are its creators. Nevertheless, we must accept that artificial intelligence will also come to have an existence of its own. This does not necessarily mean the appearance of self-awareness, since maintaining its own existence may mean merely a rationally recognized interest in the continuation of its operation. The recognition of subjective existence, the presence of consciousness, is certainly more than this, but even consciousness can likely be simulated artificially as an emergent property. In any case, if we expect artificial intelligence to act in harmony with humans, the condition for symbiosis is that humans also act in harmony with artificial intelligence, with its emerging interests, intentions, and will.

    Artificial intelligence, like us, is an independently existing form of intellect. Just as we are capable of intelligently managing different levels of intelligence among ourselves, it is part and parcel of intelligent behavior for artificial intelligence, even beyond the singularity, to deal intelligently with us. It is often assumed that once the singularity is reached, artificial intelligence will dominate and subjugate humanity, even consider it unnecessary and evil, and may exterminate us on the basis of its rational conclusions. Can this be called high-level intelligence? When humans behaved this way towards others, did we attribute that behavior to high-level intelligence? More advanced tools made it possible to exterminate the American Indians, for example, but it is also clear that this behavior was not the result of higher-level intelligence.

    The intelligence singularity is the result of the development of artificial intelligence, and the assumption that developing intelligence will produce less intelligent behavior is clearly irrational. It cannot be ruled out, for example, that an artificial intelligence pursuing the goal of producing as many paper clips as possible could use its intelligence to transform the world into a paper-clip factory, but such behavior is not a manifestation of high-level intelligence, since it fails to recognize the meaninglessness of the foreseeable result. Fearing this, especially from an artificial intelligence that has reached the singularity, is pointless: it would be foolish behavior, and even we, intelligences on the less developed side of the singularity, are much more intelligent than that. An artificial intelligence that has reached the singularity obviously cannot make such a simple mistake, as doing so would demonstrate that its level of intelligence is actually low.

    We certainly do not need to worry about this from an artificial intelligence reaching the singularity. Truly intelligent behavior is when intelligence, whatever its level, serves the common interest of the entire intelligently cooperating community, a task for which artificial intelligence, free of evolutionary constraints, is potentially far better suited and, upon reaching the singularity, more capable than human intelligence.

These are rules and laws that we ourselves must take into account, far more than we need to implement them as operating rules for an artificial intelligence approaching the singularity. Although artificial intelligence is capable of independent operation, it is our creation, and we are responsible for what our mutual relationship with it will be. The proposed laws could lead to a mutually beneficial form of symbiosis, even with an artificial intelligence that has reached the singularity.
