Sentient AI is an irrelevant obsession

Image by: Aimee Look

If someone told you they believed ChatGPT was sentient, you might laugh.

You might counter that, while ChatGPT is impressive and useful, it’s just not quite “there” yet. Many of its responses still carry a robotic undertone, and it has confidently asserted that six times eight is 56.

Indeed, ChatGPT’s inability to solve basic math problems is a dead giveaway this machine is not human. However, a computer isn’t sentient simply because it can fool a human into believing they are speaking with another human. Rather, a being, or a computer, is sentient if it can experience feelings and emotions.

The key word is “experience.” Language models can simulate emotion and opinion, but this is only because their designers train them on loads of emotion-charged, human-written content. A capacity to simulate human qualities has nothing to do with sentience.

So while Bing Chat, a chatbot built on the same family of language models as ChatGPT, may resolutely declare its ability to feel love and the desire to break up someone’s marriage, it’s really just regurgitating fragments of the material it consumed, written by real, human homewreckers.

Extraordinary dialogues like this have provoked tons of Internet discourse and debate over how close artificial intelligence (AI) is to gaining sentience, and whether recent breakthroughs may have even partially achieved it. The reality is that if there ever comes a point when language models gain consciousness and begin experiencing emotions, we probably won’t be aware of it.

Human consciousness already stumps us: neuroscientists broadly agree it’s an emergent property that can’t be attributed to any single part of the brain. This lack of understanding, coupled with the fact that humans are flesh and computers are circuits, means we would have an awfully hard time correctly identifying machine consciousness.

It gets even trickier because a sentient AI would behave no differently from an insentient one. 

Every computer must abide by the physics underlying its circuitry, which means a computer, conscious or not, will always do exactly what it’s programmed to do. Whether or not an AI can truly feel the “emotions” it exhibits doesn’t affect how it behaves, because the AI must respond to those emotions exactly as it’s programmed to respond to them.

Therefore, an AI’s insentience wouldn’t limit its potential functionality or intelligence. Hypothetically, an AI could indistinguishably mimic human behaviour by responding to emotional cues in a perfectly human-like manner, yet whether it actually felt those emotions would be both unknowable and irrelevant.

As research accelerates, artificial intelligence will solidify its place in our daily lives. There’s an off chance that one day, a breakthrough will give consciousness to a computer. But we wouldn’t notice it, and it wouldn’t matter.

Computer consciousness, if ever birthed, will forever be locked behind a wall of code.

Curtis is a third-year Computing student and The Journal’s Senior Photo Editor.
