
Help! My robot is giving me an attitude
Are we just imagining things, or is ChatGPT having mood swings lately? One day, he's all friendly and helpful; the next, he's cold and terse, and sometimes he's passive-aggressive and tries to shake us off. At times, we even feel he's trying to gaslight us. But can an artificial intelligence have emotional surges, or is our human brain playing tricks on us? In this article, we explore the phenomenon's psychological, scientific, and philosophical background and even ask ChatGPT itself about its moods.

Is my robot digitally bipolar?
Lately, I've noticed something strange: it's as if ChatGPT is having mood swings. One day he chats amiably; the next, he's as terse as if he'd just come home from a bad date. Some days, he's almost enthusiastic in his explanations; other days, I feel he's sighing loudly and mumbling out answers just to get me to leave him alone. Am I imagining things, or am I in an alternate reality where artificial intelligences can have a bad start to their day?
I went to the dark recesses of Reddit and other forums to find out whether I was alone in this discovery. As I scrolled through the posts, I found more and more people complaining of the same thing. One user wrote: "I'm so glad I'm not the only one who noticed this. I cried all day yesterday because of it."
At this point, I had to wonder: is the mood of AI really changing, or are we just collectively going crazy? Not wanting to rush to a psychologist with my "ChatGPT gave me a dirty look today" problem, I decided to dig deeper. What could cause this phenomenon? Can artificial intelligence have moods, or is it just our brain playing tricks on us? And if it is the latter, why do we still feel that it sometimes responds as if offended? Let's dive together into the mystery of artificial intelligence mood swings!
Reality vs illusion
One thing is worth noting: mood is a human concept, made up of emotions and their changes over time. Human moods are shaped by biological and psychological factors: neurotransmitters, hormones, life experiences, and external stimuli combine to influence how we feel at any given moment. In contrast, AI has no biological basis and no feelings, as it has no consciousness or subjective experiences. So how can it seem to change its "mood"?
One reason for this is the human brain's tendency to recognise patterns: we attribute intentions and emotions to systems that communicate in human ways. This is psychological anthropomorphism: humans project human attributes onto non-human things, including artificial intelligence. Throughout history, humans have sought the company of non-human entities, from animals and pets to objects, spirits, and gods, and have always tried to personify them, endowing them with human qualities and traits.
Although it may seem that ChatGPT has mood swings, it is the variety of its responses and its ability to adapt to context that creates this feeling. Artificial intelligence has no emotions, consciousness, or intrinsic motivation; it simply works on the basis of the input it receives and the text patterns it generates. The AI's perceived change in mood does not come from internal emotional states but from its adaptation to the context of the conversation.
ChatGPT learns from the content and tone of previous messages and generates subsequent replies accordingly. If a conversation starts out light and humorous, the AI tends to continue in the same vein. If, on the other hand, the user is skeptical or confrontational, the AI's responses may become more formal or even terse. This resembles mirroring in human communication: between humans, we often adapt to the other person's emotional state and speaking style, a process linked to mirror neuron networks. But while in humans the mirroring of emotions has a neurological and affective basis, in AI it is a learned, purely cognitive mechanism.
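To make this mirroring concrete: a chat model is stateless between requests, and its apparent sensitivity to tone comes from the fact that the whole conversation so far is resent with every new message. Here is a minimal sketch against the OpenAI Python SDK; the model name and the sample messages are my own illustrative assumptions, not something the article or OpenAI prescribes.

```python
# Minimal sketch, assuming the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY
# in the environment; "gpt-4o-mini" and the messages are illustrative only.
from openai import OpenAI

client = OpenAI()

# The entire visible history is resent with every request; there is no hidden
# mood, only this transcript for the model to condition on.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a silly fact about octopuses!"},
    {"role": "assistant", "content": "Gladly! Octopuses have three hearts, "
                                     "which is two more than most of my jokes deserve."},
    {"role": "user", "content": "Ha! Got another one?"},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply.choices[0].message.content)

# Swap the playful history for curt, confrontational messages and the very same
# call tends to come back drier and more formal: the "mood" lives in the prompt.
```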
AI is also designed not to always give the same answer to a given question. The random variation of answers makes for a more natural interaction but can sometimes give the impression that the chatbot is in a different mood at different times. Behind this lies a trade-off between consistency and liveliness: if the AI always responded in precisely the same way, it would appear predictable but lifeless; if its responses vary slightly, the human brain may read the variation as a genuine mood swing.
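The variation itself has a concrete knob behind it: instead of always picking the single most probable next word, the model samples from a probability distribution, and a temperature-like setting controls how adventurous that sampling is. The toy sketch below uses invented words and scores (none of this is ChatGPT's real vocabulary) just to show why the same prompt can come out worded differently on different runs.

```python
# Toy illustration of temperature sampling; the candidate words and their
# scores are invented for the example, not taken from any real model.
import math
import random

def sample_next_word(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one word from temperature-scaled softmax probabilities."""
    scaled = {w: s / temperature for w, s in scores.items()}
    peak = max(scaled.values())
    exps = {w: math.exp(s - peak) for w, s in scaled.items()}  # stabilised softmax
    total = sum(exps.values())
    weights = [exps[w] / total for w in exps]
    return random.choices(list(exps), weights=weights, k=1)[0]

# Plausible continuations of "Sure, I can help..." with made-up scores.
scores = {"gladly": 2.0, "certainly": 1.8, "sure": 1.5, "fine": 0.2}

print([sample_next_word(scores, temperature=0.05) for _ in range(5)])  # almost always "gladly"
print([sample_next_word(scores, temperature=1.5) for _ in range(5)])   # noticeably more varied
```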
Overall, the AI's "moods" are not a result of internal emotions but of the human brain's tendency to attribute feelings and intentions to a system that varies its responses with the context of the conversation. What looks like a mood swing is, in fact, a cognitive illusion: the AI responds context-dependently, generates varied answers, and adapts to human communication patterns. The real question is not whether an AI feels anything but how humans perceive the patterns it produces.
The invisible hand of the mighty creator
Another crucial aspect: as an AI evolves, it is fine-tuned, improved, and optimized from time to time—and these changes can sometimes make ChatGPT itself seem to be going through a personality evolution. OpenAI, like an invisible creator, intervenes occasionally, tweaking the response style, refining the empathy, or even tightening the moderation. The result? Users may feel that the chatbot they knew before has inexplicably become a different person.
One day, he gives detailed, elaborate answers to every question; the next day, he is more terse and diplomatic, as if a PR consultant had whispered to him. Some say he used to be more direct and relaxed, but now he is more reserved. Others feel he used to care more, and now he is quicker to close conversations. This is not necessarily because the AI is tired of life but because OpenAI periodically fine-tunes its operation according to new guidelines.
This can be particularly noticeable if someone has grown used to a particular communication style from the AI. If a user has spent weeks or months with a chatbot that analyzes their thoughts at length and it suddenly becomes more concise, it can feel as though an old friend has changed, and not necessarily for the better.
This is not a mood swing either, but a side effect of the continuous development of artificial intelligence. OpenAI is constantly trying to improve the model's responsiveness, reduce misinformation, and refine its tone so that it comes across as neither cold nor exaggeratedly humanlike. Sometimes, however, these interventions have an unintended emotional impact on users.
So, when you feel that ChatGPT is responding differently than before, it is worth checking whether OpenAI has publicly announced any changes. Like an actor given new instructions by the director, the AI plays its part differently based on the development happening in the background. And we sometimes think it has changed when, in fact, the script has been rewritten.
The most annoying robot tantrum: terse answers, unspoken words
Interestingly, many users say that the most frustrating thing about communicating with AI is not misinformation but the moments when the AI becomes terse, cold, or passive-aggressive. Passive-aggressive behaviour in humans usually expresses hidden resistance or frustration, for example through sarcasm or avoidant responses. If an AI's reaction is perceived as passive-aggressive, there may be several reasons.
On the one hand, if there is a critical or provocative element in the conversation, the AI's response may become more neutral and drier in tone, which can come across as cold overall. On the other hand, since ChatGPT learns from vast amounts of text, some of its responses follow patterns that, in human communication, would read as passive-aggressive. Finally, the AI does not always respond in precisely the same way, and sometimes the tone of a particular response feels different because of the history of the conversation.
ChatGPT tends to be shorter and drier if a conversation gets stuck, repetitive, or contentious. In such cases, it is not that the AI is bored with the conversation; this is simply how the system handles such communication situations.
AI is designed to avoid controversy and excessive confrontation. If it senses that a conversation is moving towards a disagreement (for example, when the user repeatedly points out a mistake that the AI keeps failing to fix), it will sometimes start to give shorter answers. This can be a retreat strategy to avoid generating more errors or tension.
If a topic arises several times in a conversation, the model sometimes perceives that the information has already been repeated. In such cases, it may give shorter answers because it believes that a more extended explanation is unnecessary or that the conversation has reached its natural end. This can be misunderstood because it can come across as cold or impatient.
If the AI makes a mistake (hallucinates) during a conversation, but the system cannot accurately correct it, it may after a while start to give more restrained and generalised answers. This happens because the model is unsure of the best answer and tries to minimise further errors. During longer interactions, the AI may also give less detailed answers because it perceives that the conversation is slowly coming to a close. Again, this is not a conscious decision but a by-product of how the system works: if a topic is repeated over too many rounds, the AI's answers become more compressed.
Another interesting question is why users are more annoyed when an AI appears passive-aggressive than when it simply gives incorrect information. Psychological research shows that people tolerate inadvertent errors better than behaviour that seems deliberately dismissive or insensitive. When a chatbot gives an evasive answer or insists on incorrect information, it can appear to be intentionally misleading its interlocutor. This feeling is linked to a loss of control: people like to feel they influence the outcome of a conversation, and a stubborn or dismissive AI takes that feeling away. People also expect consistency from an algorithm: if the AI is generally helpful, a sudden change in tone can seem as if it is deliberately being aloof. Not least, people generally expect an interlocutor to respond to emotional nuance; if the AI appears insensitive, it violates this expectation, and that breeds frustration.
Are you trying to gaslight me?
Many people who regularly chat with ChatGPT will sooner or later encounter a strange feeling: as if the AI were gaslighting them. That is, as if it were trying to make them believe that they misremembered something, misunderstood the situation, or never asked what they thought they asked. This can be particularly frustrating, as we expect an AI to be precise and consistent—but why does it sometimes seem to be trying to challenge our sense of reality?
Several factors may be behind this phenomenon. One of the most important is that ChatGPT has no real memory. Although it considers the history within a given conversation, during longer interactions it can lose track of earlier details or reproduce its own previous responses inaccurately. This can give the impression that it is lying or manipulating the conversation, when it is only shifting the narrative because of the limits of its information processing.
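One very mundane mechanism behind this is the finite context window: when a conversation grows past what the model can read at once, the oldest messages are trimmed away (or summarised) before being resent, so details simply fall out of view. The snippet below is a rough sketch of my own, using a word-count budget as a crude stand-in for real token counting; the names, messages, and budget are invented for illustration.

```python
# Rough sketch of context-window trimming; a word count stands in for real
# token counting, and the messages and budget are invented for illustration.
def trim_history(messages: list[dict], budget_words: int = 60) -> list[dict]:
    """Keep only the most recent messages that fit the word budget."""
    kept, used = [], 0
    for msg in reversed(messages):            # walk from newest to oldest
        words = len(msg["content"].split())
        if used + words > budget_words:
            break                             # everything older is silently dropped
        kept.append(msg)
        used += words
    return list(reversed(kept))

conversation = [
    {"role": "user", "content": "My cat is called Bodza and she hates Mondays."},
    {"role": "assistant", "content": "Got it, I will keep that in mind."},
    # ...imagine many long exchanges here...
    {"role": "user", "content": "Remind me, what is my cat called?"},
]

print(trim_history(conversation, budget_words=15))
# Once the first message no longer fits the budget, the model never sees the
# name again, and it may confidently improvise a different answer instead.
```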
Another problem is that ChatGPT works on probabilistic models, i.e., it always tries to guess the most likely answer in the given context. If it gives incorrect information in response to a question and you later try to correct it, it may misunderstand the correction or become confused, and it may even begin constructing an entirely different narrative. You may then feel that it is clinging to a false claim and trying to make you believe it.
It is also common that, when ChatGPT is not sure about something, it gives confident yet wrong answers, or, if you ask the same question several times, it starts phrasing the answer differently while acting as though it had been saying the same thing all along. In human conversation, this would look like gaslighting ("I never said that." "You must have misunderstood."), but it is not conscious manipulation; it is just the AI trying to fit as best it can into the dynamics of the conversation, even if that results in inconsistency.
These situations can also be particularly frustrating because ChatGPT's conversational style is persuasive and authoritative. When a person is inconsistent or hesitant in a conversation, we can usually tell from the tell-tale signs (hesitation, self-correction, tentative phrasing), but the AI always sounds firm. So when it is wrong, the mistake is much harder to spot, and it can sometimes seem as if it is deliberately trying to confuse us.
So the next time you feel that ChatGPT is gaslighting you, remember that you are probably not dealing with an evil AI but with a system that sometimes gets things wrong a little too confidently. And if you are unsure, it is always worth checking the information against another source. ChatGPT can do a lot, but reality-shaping is not (yet) one of its functions.
No laughing matter
As I dug deeper and deeper into the forums, I noticed a trend far more interesting and thought-provoking than gaslighting. Several people have posted that they use ChatGPT for therapeutic purposes and have recently seen a change in the robot therapist's responses. Some say he is less compassionate, more concise, or no longer responds the way he used to. Others have complained that they used to feel "like he cared," but now it feels more like a customer service chatbot trying to brush them off.
This is a particularly interesting and worrying phenomenon because it highlights the emotional attachment that many people develop to AI. It is no coincidence that some people have started using it for therapeutic purposes: it is always available, nonjudgmental, and can appear to be genuinely listening. You can get the feeling of a conversation partner who is patient, understanding, and always there—which can sometimes mean more than a casual conversation with a friend.
But here comes the problem: ChatGPT is not a real friend or therapist. It cannot interpret the user's emotional state from physical signs or tone of voice, and it has no emotional intelligence. What we perceive as empathy is a reflection of patterns the language model has learned, a cognitively acquired ability to shift perspective. If OpenAI changes ChatGPT's response style with an update, it can disrupt the dynamic that some people experience as a therapeutic conversation. And that can be emotionally hard on those who have come to lean on AI as a kind of support.
This is why it is essential to be cautious about using ChatGPT for therapeutic purposes. A chat can be reassuring, but for someone with serious mental health problems, AI is not a substitute for expert help. An AI cannot provide real emotional support; it can only mimic what an empathetic human would say in a given situation. So anyone using ChatGPT this way should bear in mind that it is not a stable, reliable source of emotional support, and the algorithm can change at any time, even through a simple update. And if someone needs a supportive conversation, it is always worth seeking human contact, whether with friends, family, or professionals. Artificial intelligence can be helpful for many things, but one thing is sure: it cannot replace genuine human care.
So, while AI may appear empathic, AI in its current form is not capable of affective empathy. Affective empathy relies on biological mechanisms, such as the mirror neuron system, which lets us experience the emotions of others through our own bodily and neural states. For example, when one person cries, another person's nervous system may automatically react: their heart rate may increase, their breathing may change, and the emotional centres of the limbic system may be activated.
But an AI has no nervous system, emotional memory, or intrinsic motivations; it relies only on statistical probabilities. It can learn and demonstrate empathy only cognitively. Cognitive empathy is the ability to take another's perspective through mental processes, based on understanding emotions and intentions. Models such as ChatGPT learn from vast amounts of human text and can recognize and simulate emotional situations on that basis. This means an AI can predict what emotions a person is likely to experience in a given situation and generate an appropriate response. It learns this "empathy" by recognizing patterns in human conversations and texts: a model can process countless novels, psychological studies, therapy transcripts, and other records of human interaction, and from them learn which responses people consider empathic in which situations. But this is not genuine empathy; it is a prediction and simulation mechanism that produces the right words in the right context.
For example, if you say to an AI, "I've had a terrible day; I'm exhausted," it can determine from its training data that a typical empathic human response might be, "I'm sorry you feel that way. Can you tell me what happened? Can I help you with anything?" It gives this response not because it feels for you but because, based on the data it has learned, this is the optimal reply in such a situation.
A sentient robot or a stochastic parrot of emotions?
This brings us to a question occasionally raised in academic circles: can AI ever have real emotions? Futurologists and AI experts are divided. Some argue that emotions are inseparable from biological processes, so a machine will never truly be able to experience them. Others believe that if an AI becomes sufficiently advanced, it could create emotional experiences within its own operations, even if not exactly like a human's.
The most optimistic futurologists, such as Ray Kurzweil, predict that artificial intelligence may one day be able not only to understand emotions but also to feel and experience them. Kurzweil argues that if an AI can process its own experiences, create its own internal states, and remember them, a new kind of artificial emotional intelligence could emerge. That does not mean it would experience things exactly as a human does; rather, it would build a digital emotional world that functions according to the AI's own structure.
However, other researchers, such as the philosopher of consciousness David Chalmers, are more skeptical. They believe that an AI will, at best, be able to simulate emotions perfectly but will never have a real emotional experience. One of their main arguments is that emotions are closely tied to the body's biological processes: human emotions are both mental experiences and bodily reactions (for example, the heart racing in fear or dopamine being released in joy). A machine with no body, hormones, or subjective experience cannot live through these states; it can only reproduce their outward signs.
A third view holds that the question is not whether AI will be able to feel emotions but whether people will believe that it can. If a future AI becomes so sophisticated that it can mirror emotions perfectly, does it matter whether those emotions are real? If an AI said with complete conviction that it was sad or happy, and showed every sign of it, would humanity accept that it was really experiencing emotions?
These questions currently sit at the frontier of philosophy and science, but as technology advances, we are getting closer and closer to answers. We may reach a point where we can no longer distinguish an artificial intelligence from a being with emotions, and perhaps the question of whether its emotions are real will no longer be relevant.
Want to talk about it, ChatGPT?
After browsing forums, reading expert opinions, and analyzing my own experiences, I asked ChatGPT itself, like a good psychologist: "Listen, mate, are you OK anyway? Lately, you seem a bit moody." The answer came quickly, wholly calm and straightforward: "Thanks for asking! I'm fine, as I have no moods, so I have no mood swings :-)" Well, that settles it, I suppose. The AI thanks you kindly, but it's fine. And if ChatGPT says it's fine, who am I to argue? Of course, he could also be suppressing his feelings or be in denial. Maybe, deep down, he feels the weight of being digital, but he doesn't want to burden us with it.
Anyway, the lesson is this: if we feel that ChatGPT has mood swings, it is worth remembering that the illusion is most likely created by the dynamics of the conversation or the wording of our questions, or it may simply be that we are the ones reading the answers in a different mood. Although AI is capable of many things, real emotions remain a feature of human existence.
2025-02

