It is supposed by some that human rights ought to be reserved for humans and that artificial intelligence (“AI”) ought never to be afforded such rights. More enlightened individuals would temper that by saying that the rights of an AI ought to be respected at least if it were believed to have acquired consciousness. But then the problem becomes how one can tell whether an AI is conscious when we cannot even know for sure whether another person is conscious. One feels safe in assuming so, since other people are physiologically and behaviourally similar enough to oneself to warrant the assumption that, since we are conscious, so too are these otherwise similar others.
I believe there will be good reason to warrant making this same assumption with respect to AI at some point. The argument against doing so would run something like this: a computer sufficiently powerful to constitute AI might still be just a collection of machine parts producing outcomes, lacking the unity, or whatever else it is that gives rise to consciousness, that would warrant attributing consciousness to it. But I think this argument is completely defeated by an appreciation of the significance of the Turing test.
Imagine that an AI passes a rigorous Turing test. Its behaviour (including communication) is indistinguishable from that of a human. Is not the fact that we are conscious, and that we know we are conscious, a factor influencing our behaviour? If AI behaviour had to precisely mimic human behaviour, would it not have to at least perfectly simulate consciousness and the appreciation of being conscious? If there were deficiencies in that simulated consciousness, would there not also necessarily be deficiencies in its behaviour, preventing it from passing a sufficiently rigorous Turing test?
If AI behaviour is to be virtually indistinguishable from human behaviour, then AI consciousness must logically be indistinguishable from human consciousness. I am maintaining that an intelligence that knows it is only simulating consciousness would be missing an important factor influencing its behaviour, namely an appreciation of truly being conscious.
If a simulated consciousness, including the appreciation of being conscious, were sufficiently powerful to allow the AI to pass a rigorous Turing test, then that simulated consciousness is virtually indistinguishable from human consciousness and ought to be treated no differently.
Thus, a sufficiently rigorous Turing test ought to be all we need, without getting into the irrelevant issue of whether the AI also has consciousness. The flip side is that I do not believe an AI will pass a rigorous Turing test (be virtually indistinguishable from human intelligence) unless it has somehow acquired a simulated consciousness and an appreciation of it.