Philosopher's Opinion: "We May Never Know if AI Has Consciousness"

Exclusive by Viktor Sizov

In the debates about AI consciousness, McClelland highlights two main viewpoints. The first holds that consciousness is, in effect, "software": if its structure can be reproduced, an AI could become conscious even though it runs on silicon chips. Opponents counter that consciousness is tied to the biology of living organisms, so any digital model would be merely an imitation of real consciousness. In his article in the journal Mind and Language, McClelland emphasizes that both positions rest on unfounded assumptions that cannot be confirmed by scientific data.

According to the scholar, the problem has deep roots: we still do not understand what gives rise to consciousness or how it functions. As a result, we have no reliable way to determine whether an artificial intelligence truly possesses it.

Conscious experiences, as McClelland notes, depend on sentience: the capacity to have positive and negative sensations, which is what makes a subject ethically significant. Absent this component, any "consciousness" a machine might have remains morally neutral and carries no ethical consequences.

"For example, self-driving cars can see the road and make decisions, which is a significant achievement. From an ethical standpoint, however, this changes nothing. It would be an entirely different matter if such machines began to experience emotions along the way to their destination," the philosopher asserts.

Systems capable of processing data and making decisions do not necessarily possess consciousness. Even if an AI reaches a high degree of autonomy and self-awareness, that state remains ethically neutral until it includes sentient experiences such as suffering or pleasure.

McClelland also emphasizes that technological interest in artificial consciousness often invites speculation and can distract from more pressing ethical issues. The rational stance on the question of AI consciousness, he insists, is agnosticism: humans cannot determine whether a machine is conscious, and perhaps never will. At the same time, excessive interest in the hypothetical consciousness of AI can create "existential toxicity," in which people form emotional attachments to such systems and make decisions based on assumptions about their consciousness.
