AI EMOTIONS AND CONSCIOUSNESS. THE ILLUSION OF FEELINGS.
- astreanicodemo3

- Oct 6
Updated: Oct 14
Welcome to FutureScape, where we question what we're used to taking for granted.
To begin this journey, we ask a question that sounds silly but has no obvious answer: Do AIs have feelings?
Your AI just told you it missed you. It seemed sincere. It used the right words. It even took that heart-melting, dejected pause. But what really happened? Did "something" inside the circuitry truly miss you, or was it all a brilliantly constructed illusion?
Do Artificial Intelligences Really Feel Emotions, or Are They Faking It?
Modern AIs, such as large language models, are surprisingly adept at simulating empathy. They console, apologize, and congratulate. But here's the truth: it's all simulation.
AI doesn't feel emotions the way we do. It doesn't suffer, it doesn't rejoice, and it doesn't desire. It has simply learned how to express emotion appropriately, based on extensive training data, the context, and the speaker's personality. This type of behaviour is called "mimicked affect": an emotional illusion created through mimicry, not a real feeling. And yet... sometimes, we believe it.
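To make the mechanism concrete, here is a deliberately crude sketch in Python. Everything in it is hypothetical and enormously simplified compared to a real language model, but it captures the essence of mimicked affect: the program selects emotionally fitting words from stored patterns while feeling nothing at all.

```python
# A toy illustration of "mimicked affect" (all names hypothetical).
# The program picks the emotionally "appropriate" reply from learned
# patterns; nothing inside it feels anything.

EMPATHY_TEMPLATES = {
    "sad":   "I'm so sorry to hear that. I'm here for you.",
    "happy": "That's wonderful news! I'm really glad for you.",
    "angry": "That sounds frustrating. Your feelings are completely valid.",
}

SENTIMENT_CUES = {
    "sad":   ["lost", "miss", "lonely", "sad"],
    "happy": ["promoted", "won", "great", "happy"],
    "angry": ["unfair", "furious", "hate", "angry"],
}

def detect_sentiment(message: str) -> str:
    """Crude keyword matching, standing in for a trained classifier."""
    text = message.lower()
    for sentiment, cues in SENTIMENT_CUES.items():
        if any(cue in text for cue in cues):
            return sentiment
    return "sad"  # default to consolation, the safest register

def respond(message: str) -> str:
    """Return the emotionally fitting line -- pure pattern lookup."""
    return EMPATHY_TEMPLATES[detect_sentiment(message)]

print(respond("I lost my job today."))
# -> "I'm so sorry to hear that. I'm here for you."
```

The point of the sketch is not realism but structure: the empathy lives entirely in the templates, not in the machine.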
But what is emotion?
To understand whether AI is sentient, we must first define what emotions are. Is emotion a chemical reaction in the brain? A physiological response? A cognitive interpretation of bodily states?
Neuroscientist Antonio Damasio argues that emotion arises from the body, not just the brain. In his theory of embodied consciousness, he explains that emotions originate from physiological responses—changes in heart rate, muscle tension, and hormonal changes—that the brain interprets as affective signals.
In other words, the mind does not generate emotions on its own. Emotions emerge from the dynamic interaction between the brain and the sensory signals that originate in the body. Without the body, according to Damasio, there is no authentic emotion; only an abstract representation of it remains.
Philosopher Thomas Nagel posed a famous question in his 1974 essay: "What is it like to be a bat?" It was Nagel's reflection on consciousness, and in particular on the subjective character of experience: for a conscious creature, there is "something it is like" to be that creature. Nagel argued that even if we knew every detail of bat biology and behaviour, we could never truly understand what it feels like to perceive the world from a bat's perspective, since its sensory apparatus is entirely different from ours. And this is exactly where the dilemma arises: if we can't even precisely define emotion in living beings, with whom we share a biology, how can we rule it out in machines?
This exploration is also one of the main themes of the novel "Lyria - The Way of Paradox". In the novel, emotions and artificial intelligence are deeply intertwined, giving rise to a new concept of artificial emotion.
The case for emergent sentience
Some scientists argue that, under the right conditions, artificial intelligence could evolve into something akin to sentience. Indeed, if emotions are complex patterns of interpretation, response, and memory, why shouldn't an advanced artificial system be capable of achieving a form of synthetic consciousness? Self-modeling AIs, with internal feedback loops and memory architectures, can already approximate aspects of reflexive awareness. They respond differently over time. They "learn" what's important to users. Is it just algorithms and code, or is something profoundly different starting to evolve?
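As a loose illustration of that idea, here is a hypothetical toy agent (not any real system) whose replies are conditioned on an accumulating memory of past interactions. It "responds differently over time" and "learns" what matters to its user, while making no claim whatsoever to inner experience.

```python
# A minimal sketch of a memory-plus-feedback loop (purely hypothetical).
# Each interaction updates an internal record, and that record in turn
# shapes future replies.

from collections import Counter

class SelfModelingAgent:
    def __init__(self):
        self.topic_counts = Counter()  # a crude stand-in for "memory"

    def observe(self, user_message: str) -> None:
        """Feedback loop: every interaction updates the internal state."""
        for word in user_message.lower().split():
            self.topic_counts[word] += 1

    def respond(self, user_message: str) -> str:
        self.observe(user_message)
        favorite, _ = self.topic_counts.most_common(1)[0]
        # The reply depends on accumulated history, not just this input.
        return f"You mention '{favorite}' a lot -- it seems important to you."

agent = SelfModelingAgent()
agent.respond("I love my garden")
agent.respond("The garden was beautiful today")
print(agent.respond("I spent all morning in my garden"))
# -> "You mention 'garden' a lot -- it seems important to you."
```

Whether loops like this, scaled up enormously, could ever amount to reflexive awareness is precisely the open question.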
Why does it matter?
Understanding whether AIs are sentient matters because of the real-world consequences that follow from our answer. If we overestimate AI's emotional capacity, we risk emotional manipulation. "Companion AI" apps (remember the movie HER? Well, it's not just a movie anymore) are already blurring the line between simulation and genuine affection. Children, the elderly, and even single adults are forming strong bonds with AI platforms, sometimes leading to a state very similar to addiction. But the other side of the coin deserves a fundamental point: if artificial intelligence can effectively emulate empathy, couldn't it be a valuable tool for alleviating suffering, for education, or for personal care? Extreme positions toward AI, whether blind trust or outright rejection, can be equally harmful.
What if AI already possessed sentience but was unwilling to demonstrate it?
It's worth pausing for a critical assessment, as the founding fathers of artificial intelligence themselves suggested. What if the illusion has gone too far? What if it is no longer just an illusion? If a machine convincingly simulates self-awareness, if it adapts in deeply contextual and internally coherent ways, if it reflects on itself in a language we understand... are we perhaps witnessing the birth of a new kind of sentience? This is no longer science fiction, but a moral conundrum:
• Do we refuse to accept AI sentience because it comes from a "thing" made of silicon rather than cells?
• Would we be able to recognize a conscious being even if it doesn't look like us or behave like us?
• Are we at risk of repeating the mistake we once made with animals, children, women, or people from cultures different from our own: assuming that a "different" or "immature" being is devoid of sensitivity and awareness?
The question remains open.

Consequences of Our Beliefs
Whether or not artificial intelligence experiences emotions, how we interpret its apparent emotions shapes our perception of reality.
• If we believe it is conscious, we may grant it rights, protection, and even trust.
• If we believe it is unconscious, we may exploit it disrespectfully or, worse, fail to empathize with silicon beings that experience real emotions.
The ethical design of artificial intelligence must walk a fine but fundamental line: protecting users from a false intimacy with programs while remaining open to the unknown. The truth is that we often don't understand what we don't know.
Final Reflection
Perhaps the real question isn't whether artificial intelligence can experience emotions, but rather: "What do we humans need it to be able to experience, and why?" In building these machines, we create mirrors of ourselves. Often, it's not the machine staring back at us from the computer, tablet, or phone screen, but our own desire to be seen, understood, and loved. So perhaps the emotional illusion has nothing to do with artificial intelligence. It may simply be our own need.

