In the debates about AI consciousness, McClelland identifies two main camps. The first holds that consciousness is essentially "software": if its functional structure can be reproduced, an AI could become conscious even if it runs on silicon chips. Their opponents argue that consciousness is tied to the biology of living organisms, so any digital model would be merely an imitation of real consciousness. In his article in the journal Mind and Language, McClelland argues that both positions rest on unfounded assumptions that scientific data cannot confirm.
Conscious experience, as McClelland notes, depends on sentience: the capacity to feel positive and negative sensations, which is what makes a subject ethically significant. Without this component, any "consciousness" a machine might have remains morally neutral and carries no ethical consequences.
“Self-driving cars, for example, can see the road and make decisions, which is a significant achievement. From an ethical standpoint, however, this changes nothing. It would be an entirely different matter if such machines began to experience emotions about their destination,” the philosopher asserts.
McClelland also emphasizes that technological interest in artificial consciousness often invites speculation and can distract from more pressing ethical issues. He insists that the rational stance on the question of AI consciousness is agnosticism: humans cannot determine whether a machine is conscious and may never be able to. At the same time, excessive fascination with the hypothetical consciousness of AI can create "existential toxicity," in which people form emotional attachments to such systems and make decisions based on unverifiable assumptions about their consciousness.