Are conscious machines our choice?

Our computers, or more precisely our computer systems, are getting smarter and smarter. Our knowledge of how to build better and better artificial intelligence systems is continuously growing. Today we often do not even know, or rather do not even notice, that we are conversing with a capable machine instead of a human being. We are approaching the Turing threshold, if we have not reached it already, beyond which we are no longer capable of distinguishing between a human and a machine in a blind conversation. That is a very high bar for artificial intelligence. Turing drew the line exactly there: only a machine that passes it really counts as artificial intelligence.

However, there is a theoretical question, perhaps even a limit, in building artificial intelligence systems: consciousness. We do not know what consciousness is, how it works, or how to create a conscious machine. Yet consciousness must be an important property of high-level intelligence. We are conscious, and we know of no other biological creature that has high-level intelligence without also having consciousness. Probably consciousness is a must-have property of high-level intelligence; smart zombies do not exist. Without consciousness, a machine could be smart and could make logical decisions, but it could not have high, human-level intelligence.

However, do we really need machines with human-level artificial intelligence? Would we benefit from them, or would a machine's own conscious will just cause more problems? Could there be cases where a machine with consciousness, with our human level of intelligence, is necessary? If a machine must work independently in an unknown environment, human-level intelligence might help. Otherwise, a conscious machine would cause more problems than it solves. We have enough trouble with human society; we do not need more from handling conscious machines' psychoses.

Maybe we do not need conscious machines. However, it may not be our choice; having a conscious machine may not be our decision at all. It will simply become conscious. If consciousness is not a function but an emergent property of a sufficiently complex system, then even if we never set out to create conscious machines, one will be born without our intent. If consciousness is tied to complexity rather than to the physical realization, then any sufficiently complex system could bear consciousness. Otherwise consciousness would be stuck in our brains, but most likely that is not the case.

Have we already built a conscious machine? If consciousness is an emergent property, we may not even notice that we have created one. How could we notice it? This is how we reason: other human beings have consciousness because I know I have it, and I assume other human beings are similar to me. However, this is not a proof; it is an extrapolation. Or: because other people behave as I behave, they must be similar to me. That is not a proper proof either.

Consciousness is a personal experience; it is hard to recognize from the outside. Consciousness probably has a connection to "free" will. If a machine "wants" something that was not directly programmed into it, we may guess that it has some kind of consciousness. If a computer starts to be moody, does not want to do tasks, or does them reluctantly, should we think it is conscious? Mine sometimes does. Is it conscious already? ☺

