
Since language models such as ChatGPT entered everyday use, millions of people have interacted with these systems daily, drawn by their ability to generate coherent, fast, and linguistically precise responses. Yet it is worth asking a fundamental question: are we truly witnessing a manifestation of intelligence?
In most cases, the queries we direct to these models do not require complex processing or deep cognitive elaboration. They are generally practical or informational requests, whose resolution does not demand creativity, abstract reasoning, or high-level semantic understanding. In this sense, the everyday use of these tools rarely allows us to evaluate their more advanced capabilities or to critically reflect on their nature. Moreover, the remarkable speed at which these systems produce responses, combined with the linguistic coherence they have achieved, tends to create in users a sense of semantic validity that can be misleading: what seems understandable does not necessarily imply real understanding.
Nevertheless, the ongoing refinement of these technologies and their growing ability to simulate human-like behavior have reactivated one of the most fundamental philosophical questions of our species: what is consciousness? An even more provocative question follows: if we still lack a clear operational definition or a consensual theoretical framework to explain how consciousness emerges in humans, is it even conceivable to aspire to develop a form of artificial consciousness? The apparent intelligence of these models reignites this question in a new dimension, challenging the conceptual boundaries between simulation and subjective experience.
Parallel to this theoretical reflection, a highly relevant practical phenomenon has emerged: the use of these systems as spaces for basic emotional support. Beyond task resolution, many people establish intimate conversational bonds with these applications, sharing concerns, frustrations, or personal experiences. This phenomenon not only reveals a latent emotional need, the desire to be heard and understood, but also raises ethical, psychological, and social questions of considerable depth. In this new scenario, attributing emotional functions to systems devoid of consciousness presents challenges that society has not yet begun to address with the seriousness they require.
The Simulation of Understanding
One of the greatest challenges in evaluating contemporary artificial intelligence is distinguishing between sophisticated language processing and genuine understanding. Models such as ChatGPT, trained on massive volumes of text, can generate grammatically correct, coherent, and contextually adapted responses. This fluency, enhanced by the immediacy with which answers are delivered, reinforces in the user an illusion of understanding, as if behind each statement there were a conscious intention or a deep grasp of what is being expressed.
However, what these systems actually do is manipulate statistical representations of language, not concepts. They have no internal model of the world, no awareness of what they state, no subjective experience. Their “knowledge” does not emerge from lived experience but from statistical correlations between words. In this sense, what we witness is a convincing simulation of understanding, not real understanding.
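To make this point concrete, consider a deliberately crude sketch: a toy bigram model that extends a sentence by sampling the next word in proportion to how often it followed the previous one in a tiny training text. This is nothing like the transformer architecture behind ChatGPT, and the corpus, the continue_text function, and the sample output are invented purely for illustration, but it shows how fluent-looking continuations can arise from nothing more than word co-occurrence statistics.

```python
import random
from collections import defaultdict

# A deliberately tiny "corpus"; real models train on vastly more text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which words follow which (bigram statistics).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def continue_text(start, length=8):
    """Extend `start` by repeatedly sampling a word that tends to follow the last one."""
    words = start.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # The choice is driven purely by observed word co-occurrence,
        # not by any notion of cats, dogs, or mats.
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("the cat"))
# e.g. "the cat sat on the rug . the dog"
```

However plausible the generated sentence may sound, nothing in the program refers to cats, mats, or anything else in the world; scaling the same statistical principle up does not, by itself, add understanding.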
This nuance is crucial, since attributing human capacities to artificial entities can distort our expectations, perceptions, and decisions. As the boundary between simulation and meaning becomes increasingly blurred, it becomes ever more urgent to cultivate critical literacy about what these systems truly do, and what they do not.
A Digital Emotional Companion?
The use of language models as outlets for emotional expression is an increasingly common phenomenon. For many people, these systems have begun to occupy a space that transcends their instrumental function: they become confidants, sources of comfort, or even substitutes for human connection in moments of loneliness, anxiety, or uncertainty.
Far from judging this behavior as naïve or superficial, it is worth recognizing what it reveals at a deeper level: an authentic emotional need. In a hyperconnected yet often interpersonally fragmented world, having a voice available 24 hours a day, without judgment or rejection, represents a tangible relief for many. It is understandable, even expected, that people seek support wherever they can find it, especially when other channels of emotional support are saturated, absent, or inaccessible.
However, it is precisely because of the legitimacy of that need that we must approach the phenomenon with a critical and informed perspective. While these systems can produce linguistically empathetic responses, they entirely lack emotional experience, intention, affect, or any understanding of human suffering. The “listening” they offer is nothing more than the emulation of conversational patterns, however sophisticated it may be.
The risk does not lie in talking to a chatbot, but in believing that we are being understood in the human sense of the term. Confusing the simulation of empathy with empathy itself can lead to the impoverishment of interpersonal relationships, or even to delegating sensitive aspects of our mental health to systems that are neither designed nor equipped to assume such responsibility.
Recognizing this, the call is not to avoid these spaces, but to use them with clarity, boundaries, and critical awareness. Understanding what these technologies are, and what they are not, can allow us to benefit from them without falling into false attributions that, in the long run, could affect our emotional well-being in subtle yet profound ways.
Risks and Limits of Artificial Companionship
The growing trend of turning to artificial intelligence for emotional companionship raises challenges that cannot be ignored. While constant access, immediate availability, and the absence of moral judgment are qualities many value in their interactions with chatbots, these very attributes can also foster a subtle yet progressive dependency. When the artificial interlocutor becomes the only or main space for emotional expression, there is a risk of progressively replacing human relationships with simulated bonds, devoid of reciprocity, embodiment, and genuine affection.
Another danger lies in the normalization of the illusion of understanding. If users perceive that the system “understands” their distress, they may develop a false sense of being supported, which in some cases could delay the search for professional help or the strengthening of real support networks. AI does not evaluate clinical contexts, detect serious risks, or provide ethical, personalized care in crisis situations. This limitation is not merely technical but ontological: these systems possess neither consciousness nor moral responsibility.
Moreover, the widespread use of AI tools for affective purposes also entails risks in terms of privacy, traceability, and the exploitation of sensitive data. The most intimate conversations may be stored, analyzed, or used for commercial purposes without the user being fully aware. In a scenario where emotions become data, it is legitimate to ask who truly benefits from such artificial bonds.
Therefore, the emotional companionship provided by AI must be understood in its proper dimension: as a useful simulation in certain contexts, but never as a substitute for human relationships or as a therapeutic device. The solution lies not in prohibiting these uses but in educating users to employ them critically, as a complement rather than a replacement, and with awareness of their structural limits.
Conclusion: Humanity Before the Mirror
The emergence of artificial intelligence as a tool for emotional support not only transforms our relationship with technology but also confronts us with a profound image of ourselves. In a sense, when we speak to an AI in search of comfort, we are not seeking a “real” other, but a projection of what we need: understanding, validation, companionship. The question, then, is not only what artificial intelligence can do for us, but what its use reveals about our structural shortcomings, our models of relationship, and our affective expectations.
The fact that a simulation can seem sufficient, even though we know it is not, should be a source of reflection, not ridicule. In that paradox lies a contemporary reality: we live in hyperconnected but affectively fragile societies, where talking to a system without consciousness may feel less threatening than exposing ourselves to the vulnerability of human connection.
The phenomenon, therefore, should not be addressed through moral judgment or alarmism, but through an ethic of lucidity. As a species, we face a crossroads: either we use these technologies as mirrors that allow us to better understand our needs and limitations, or we risk confusing reflections with realities, functional shortcuts with authentic experiences.
Artificial intelligence can be a powerful tool for psychological well-being if integrated critically, knowledgeably, and with proper guidance. But it will never replace what gives meaning to the human condition: the capacity to feel, to engage, to take responsibility for the other. That is precisely what no machine, however advanced, can simulate without betraying its own nature.
Photo by Cash Macanaya on Unsplash
