This, I think, is the ultimate hard problem of #consciousness, not just in relation to #ai, but to any non-carbon-based 'life' we might encounter—if we're even able to recognise it as such:
“We as humans do not fully understand our own minds. Part of the challenge in understanding our own conscious experience is precisely that it is an internal self-referential experience. So how will we go about testing or recognizing it in an intelligence that is physically different and operates using different algorithms than our own?”
Carl Sagan coined a term for this bias: carbon chauvinism.
