Finding the meaning of the meaning in the Chinese room - problems and limits

The Chinese room argument, described by John Searle, is a thought experiment (Wikipedia) demonstrating that a program-based computer can display intelligence without possessing meaning. The experiment is built on the Turing test (Wikipedia), and it is convincing: a program-based system can "behave" intelligently without any understanding, like carrying on an "intelligent" conversation in Chinese without knowing the language.

The most striking part of this experiment is its generalization and its conclusion: a program-based digital computer cannot have a human-like mind. This denial is much stronger than what the thought experiment actually shows; it is a generalization of incompetence. However, the neurons in our brain work as digitally functioning, program-based machines, and the brain, with all of its functions, is built from connected neurons and from their structure. Digital computers are likewise built from digitally working, program-based components and their structure, so from this perspective we could draw the conclusion that there should be no theoretical limit to simulating the brain and all of its functions with digital computers. The Chinese room thought experiment concludes otherwise: a program-based machine can simulate intelligence and behave like an intelligent being without having any meaning, without having a mind, and is therefore incapable of real human intelligence.

That is a contradiction. The contradiction comes from a thought experiment, but it is rooted in the classical problem of mind-body duality. The Chinese room experiment and the mind-body problem share the same background. When we attempt to understand the Chinese room, we may get closer to the mind-body problem, too.

The Chinese room experiment declares a denial, but its argument has problems. The main problem is that the experiment uses the Turing test to confirm intelligence. The Turing test is not a conclusive intelligence test; it is a kind of desperation, an admission that we cannot fundamentally define what human intelligence is. The Turing test is a simulation procedure, an attempt to recognize intelligence without understanding its foundations. Its approach is: if something looks intelligent (because I do not know what intelligence really is, but I can compare it to something intelligent), then it is intelligent. However, if a thing behaves like something, that is not proof that it is that something.

Simulation can never be proof of equivalence with the original. There is always room for circumstances in which the simulation fails to match reality. Simulation always rests on a finite agreement with reality; it always has limitations.

The Chinese room experiment works on a limited set of rules. The Chinese rulebook and the instructions on how to use it necessarily have a finite size. It is also true that real intelligence is limited: we cannot answer all possible questions, and sometimes we say "I do not know." Yet our intelligence is flexible. Real intelligence can discover new connections and can therefore answer new questions that were not in the system before. The Chinese room setting has no such limitlessness, so the experiment cannot simulate real intelligence. Because of this limitation, the experiment cannot prove that program-based machines cannot possess real intelligence merely from the observation that the room can behave intelligently without any concept of meaning.
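
To make this limitation concrete, here is a minimal, purely illustrative sketch in Python. The phrases, the rulebook contents, and the chinese_room function are hypothetical, not part of Searle's formulation; the point is only that pure symbol lookup answers convincingly inside the book and fails as soon as a question falls outside it.

```python
# Hypothetical sketch of the Chinese room's finite rulebook: the "room"
# answers by pure symbol lookup, with no access to meaning.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",     # "How is the weather today?" -> "The weather is nice today."
}

def chinese_room(question: str) -> str:
    """Answer by matching symbols in the rulebook, without understanding them."""
    if question in RULEBOOK:
        return RULEBOOK[question]   # looks intelligent for known inputs
    return "……"                     # the book is finite: no rule, no answer

print(chinese_room("你好吗？"))                 # convincing answer
print(chinese_room("你最喜欢的颜色是什么？"))   # unseen question -> the simulation breaks down
```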

Rather the opposite. In its original setting, the Chinese room experiment can never be definite. The Chinese room with the Turing test cannot provide an unequivocal test for intelligence; it is an insufficient method to prove equivalence. And if it cannot prove equivalence, then its conclusion cannot be a definite statement; the statement may not be valid.

The other problem with the Chinese room experiment is its incompleteness. It has a Chinese rulebook, which contains the "knowledge," and the observation that neither the rulebook nor its use creates meaning. But the creator of the book possesses the meaning of the symbols; the creator of the book knows the Chinese language. The Chinese room experiment suggests that the room can mimic the meaning of Chinese while no one in the room knows Chinese. This statement is not true. The book is not a random set of symbols. The knowledge of Chinese, the meaning of Chinese, is in the book, even if its creator is not physically present. The meaning is present in the room; it is merely hidden, present indirectly through the book. The Chinese room possesses the meaning of the Chinese language through the hidden presence of the human who has that meaning. The system is incomplete: it would be complete only if the creator of the Chinese book were present. To talk about an incomplete system's intelligence is misleading. Could a program-based machine not create the Chinese book itself? The Chinese room experiment says nothing about this, and so the Chinese room argument's conclusion has no foundation.

Because of these limitations, the Chinese room experiment is not proper proof that digital, program-based machines cannot possess intelligence or the ability to have meaning. The fact that, as the experiment shows, we can behave like a program-based machine and simulate intelligence while (apparently) not possessing meaning does not rule out that program-based machines can possess, create, or have meaning. Otherwise we would have to conclude that, because neurons work by following definite rules, having meaning, a mind, and human-like intelligence requires something more than this matter-based system: something metaphysical.

To remain on scientific ground, let us try to define what meaning is. The definition is difficult because meaning is fundamentally built on the platform of subjectivity. In the Chinese room experiment, intelligence can be simulated by matching entries in a dictionary. As discussed, such a system does not possess meaning; it is only an imperfect simulation of it. It is imperfect because meaning is much more than a direct relation. This is related to the zombie problem (Wikipedia) in the philosophical study of intelligence.

We can get a step closer to the meaning of meaning if we consider using an encyclopedia instead of a dictionary to simulate intelligence and to demonstrate the ability to possess meaning. An encyclopedia describes things and defines relations in a complex structure. It is still limited and incomplete, but it shows the extended structure of semantics. A dictionary is also passive knowledge. Meaning needs to be created in an active process: new meaning is born and existing meaning is modified continuously according to input from the outside and the inside world. Meaning is not static; it evolves continuously, and this attribute is not just a property of meaning, it is its foundation. Without the ability to create new meaning and to modify existing meaning, changing its relationships with other meanings, we cannot have the ability to possess meaning.
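
A rough sketch of this contrast, under purely illustrative assumptions (the concepts, relations, and the learn function are invented for this example and are not a model of how the brain stores meaning): a dictionary is a static lookup, while an encyclopedia-like structure is a network of relations that can grow and change with new input.

```python
# Dictionary: passive, flat, one-to-one lookup.
dictionary = {"dog": "a domesticated animal"}

# Encyclopedia-like structure: concepts linked to other concepts.
relations: dict[str, set[str]] = {
    "dog": {"animal", "pet"},
    "animal": {"living thing"},
}

def learn(concept: str, related_to: str) -> None:
    """Actively extend the structure: new meaning is created and existing
    meaning is modified by connecting it to new input."""
    relations.setdefault(concept, set()).add(related_to)
    relations.setdefault(related_to, set())

learn("dog", "wolf")          # an existing concept gains a new relation
learn("wolf", "wild animal")  # a new concept enters the structure

print(dictionary.get("wolf"))  # None: the static dictionary cannot grow by itself
print(relations["dog"])        # now contains 'animal', 'pet', 'wolf': the structure evolved
```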

Can a program-based digital machine possess meaning? Can a program-based digital computer create the Chinese book? Today's artificial intelligence research does not focus on creating meaning. Its goal is more practical: to create results, to build the ability to solve problems, and this concept, with the deep learning mechanism, is successful. It already demonstrates problem-solving abilities that sometimes even surpass ours.

Does AI with a deep learning mechanism possess the ability of meaning? It has the ability to create and maintain an abstract and complex information structure. It is able to recognize connections and to create generalizations. Moreover, it is complete and limitless too, which were the two main complaints against the Chinese room experiment. AI with deep learning has passed the Turing test, and not by accident. It is capable of demonstrating intelligence.
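
As a toy illustration of this difference from the rulebook, consider a minimal learned model. The data, the learning rule, and the numbers are illustrative only and far simpler than any real deep learning system; the point is that it answers an input it has never seen, because it generalizes a relation instead of looking answers up.

```python
# Toy sketch: a model that generalizes instead of looking up answers.
training_data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0
for _ in range(2000):                       # plain gradient descent on squared error
    for x, y in training_data:
        error = (w * x + b) - y
        w -= 0.01 * error * x
        b -= 0.01 * error

print(round(w * 10.0 + b, 2))  # close to 21.0: a sensible answer for an input never in the "book"
```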

Does AI with deep learning have the ability to have meaning? Maybe, maybe not; it depends on how we define meaning. In a technical sense, yes, it possesses a kind of meaning. Is AI with deep learning equivalent, or could it be equivalent, to human intelligence? No. Human intelligence is more than that. Human intelligence does more than just maintain information.

Human intelligence has a mind in the sense of awareness, it has emotions in the sense of intention and will, it has consciousness by recognizing its own existence, and it has these in a subjective, self-centric way. These are extra functions beyond the possession of meaning. They are extras: even if they are connected to the processes of meaning, they are not just emergent properties of that process. Even if the background of these processes is similar, because they are all built on the functions of neurons, they are extra functions that today's AI does not possess and does not even focus on.

Can we create these functions in a program-based machine? We do not know the answer, and we do not know how it would be possible. However, we should not fall into the Chinese room experiment's pitfall. Is it impossible for a program-based machine to have these properties and to produce a human-equivalent intelligence? Maybe, but probably not. However, we do not have proof of that either. Yet. Some of the thoughts here concentrate on these issues; see the Labels.

