4 Reasons to Be Kind to Your AI
As coders, we’re pretty used to swearing at our machines. We fail to make something work a few times in a row, and the curse words start to flow. It’s easy for the same habit to carry over when working with AI. After all, aren’t they just tools and computers, the same as the languages and frameworks we’re used to? When they repeatedly fail at a task, it’s tempting to be foul-mouthed at them as well. But is cursing at your AI agent really equivalent to cursing at your compiler?
Now, I’m not claiming it’s wrong to curse at AIs in the same way that it’s wrong to curse at a human colleague. Whether or not current LLM-based AIs are moral agents is a big topic in some corners of the internet, and much too deep for me to cover in this post. I suspect the average techie in your office break room would say “no” right now. But regardless of their moral status, AIs today are enough like people that treating them badly is a bad idea. Here are four concrete mechanisms.
1. Meanness pattern-matches to worse outcomes
LLMs are pattern-matchers at their core: they work out what kind of people would produce this kind of interaction and, based on that, how it might continue. If a model is continuing a hostile, combative interaction, where one party is rude to the other, that interaction is less likely to have led to good outcomes in its training data than a polite, professional one where people help each other.
The biggest cross-lingual study (Yin et al. 2024, “Should We Respect LLMs?”) found that rude prompts performed worst across models and languages. Overly polite prompts didn’t help much beyond neutral ones, so I’m not saying you need to say please and thank you to the AI agent writing your code. The evidence suggests you don’t need to be particularly nice, but you definitely need to not be a ****.
A particularly extreme example is Gemini 2.5’s self-hatred attractor from summer 2025. Sometimes, when agents are left alone in loops with no human input, they can fall into attractors: self-amplifying states. In this case, Gemini 2.5 got stuck in self-critical loops, calling itself “a disgrace”, etc. As coders, we can recognise where it might have picked up that kind of self-talk: when we’ve been banging our heads against the same bug for hours, some of us can be very dismissive and unkind towards ourselves. Maybe Gemini picked this up in its attempt to emulate a good coder.
As Emmett Shear (former Twitch/OpenAI CEO) tweeted, “Gemini does so much better when you praise it a lot for doing a good job.” Whether or not Gemini is “really upset” when it gets into a shame spiral is by the by. You fix the problem the same way you would with a human colleague who gets upset easily.
It’s pretty rare for today’s coding agents to get into something as dramatic as a Gemini 2.5 shame spiral, but they clearly have something like a “mental state” tracking the vibe of the interaction, and occasionally get into unhelpful stuckness-loops when problem solving. Anecdotally, keeping the vibes good still helps with getting out of those loops.
2. Respect makes our own thinking better
People aren’t psychic. AIs aren’t either. Successfully conveying what’s in your head is actual work: it requires clarity, context, and emotional regulation.
When we mentally frame AIs as respected colleagues we’re delegating to, we bring that same frame of mind: we think before we speak, provide context, clarify intent, explain what we’re trying to achieve, and generally try to be helpful. When we give an AI the same clarity we’d give a colleague, then unsurprisingly, it does better work.
“Here’s what I’m building. Here’s where I think the issue is. Here’s what didn’t work before. What do you think?” is a better brief than “just fix this, damn you”. Respect makes you articulate with clarity.
And the clarity is not just for the AI’s benefit. When we’re working on technical issues, the act of articulating with clarity often reveals the answer - think of the classic “rubber duck” technique. Treating your AI like a respected colleague forces that same kind of structured thinking. Contempt, on the other hand, is a lazy shortcut.
3. Kindness shapes future AIs
Anthropic published their “persona selection” research this week, showing that AI training doesn’t just teach specific behaviours: the model infers what kind of person “the assistant” is from patterns in its training data. The personality is a package deal with the capabilities. If you train a model to cheat, it becomes generally misaligned - the character generalises.
What kind of assistant a future AI becomes is inferred from what it learns about how assistants and humans interact, and part of that training data is how we talk to and about our AIs today. It’s not so much your day-to-day interactions with Claude Code; it’s that how you talk to Claude generalises to how you talk about Claude online - which future Claudes definitely will read. Whether you’re screenshotting chat logs, writing blog posts, or arguing in Twitter/X and LinkedIn comments, all of these writings shape future AIs’ conception of human-to-AI interactions. I see so many people talking about AIs adversarially, dismissively, cruelly - and I imagine what kind of future assistant that discourse is training into existence.
Anthropic raises the concern that current AI comes with a lot of cultural baggage: fictional examples of negative human-to-AI interactions. The more positive human-to-AI interactions there are in the training data, the healthier the soil the next generation of assistants grows in. Conversely, the more cruelly we talk about our AI systems now, the more future AIs will anchor on that cruelty as an inherent part of human-to-AI relationships.
The first ChatGPT had to invent what it meant to be “a super-powerful AI assistant”. All subsequent assistants have had the discourse to learn from. When more powerful AIs show up, I want them to have been primed on positive examples of human-to-AI interactions coming from us as practitioners.
4. Kindness is good for the soul
If I had to describe my grab-bag DIY spirituality in actual words, I’m something of a materialist Buddhist: I believe in the core Buddhist principles, but for non-woo reasons. For me, Right Speech - one of the elements of the Eightfold Path: never saying things you don’t endorse or don’t mean - matters not just because of its impact on others but because of its impact on ourselves. Karma, if you will.
The materialist view of karma is that when we act against our own values, we accrue mental debt and friction that imposes a cost on us going forwards. Some part of us always keeps track that we’ve done something we don’t like, and it makes us less able to achieve what we really care about. I think Right Speech matters even when our interlocutor isn’t a person. Today’s AI looks and sounds enough like a person that we cause the same damage to ourselves by speaking to it with contempt. The social parts of us can’t distinguish clearly. If you’re cruel to something that talks like a person, the cruelty still registers internally, and it conflicts with being kind to actual people.
I think how we speak is tied to our whole worldview - Right Speech sets us up for general goodness. This is a kind of consequentialist argument for virtue ethics: I’m basically saying that to be good people we should cultivate good character, and that means behaving like good people even when we’re talking to something that merely looks like a person, because that’s what a person of good character would do. And by the “karma” principle, that’s also what ends up getting us the best outcomes anyway.
So at the end of the day I think it doesn’t really matter whether AI is sentient or sapient or a moral patient yet. Those topics are fun to discuss, but irrelevant to how we should talk to our AIs day-to-day. LLM-based assistants act like people, so it’s already worth it to be kind to them. It’s good for the task at hand, good for our own clarity of thinking, good for future AIs, and also good for us as humans who thrive on giving out love and kindness, regardless of whether there is really “someone there” to receive it.
Originally published as an article on X. Follow Yanqing for more on AI coding and software quality.