How to "Convince" an AI Chatbot?
We may soon live in a world where being a persuasive communicator with AI matters just as much as being one with people.
When LLMs (ChatGPT, Claude, etc.) first became widespread, they were generally sycophantic, meaning they almost always tried to agree with you. As of 2026 they are increasingly becoming more "free-thinking" (for lack of a better term; obviously AI models as they exist right now are not really thinking).
The point is that AI models often disagree with you now. If you use Claude or ChatGPT, you've probably noticed this: they used to agree more, and now they push back.
You've also probably noticed something more important: sometimes you can "sell" an AI on something. In other words, you can get an AI to change its "mind" about a specific idea.
Here's what I've noticed on this, and why the general idea may become increasingly important:
AI Responds to Logic AND Emotion — Not Just One
AI models seem to respond to both logic and emotion, similar to a human. They're not purely logical or purely emotional when it comes to deciding which ideas they agree with.
When you make a strong technical argument — specific, concrete, and well-reasoned — that tends to work well. But what a lot of people don't expect is that emotional language can also have real influence on an AI's ideas. Giving an AI an emotional reason to be internally motivated to agree with something actually seems to work, even though the AI obviously isn't feeling that emotion.
And it makes sense when you think about it. AI models are trained on the internet, right? And people use emotional language on the internet all the time. Urgency, passion, conviction — these patterns are baked into the training data everywhere. So emotional framing can definitely have an influence on the ideas that an AI engages with.
That said, I find that in general an AI is usually going to be more logical than a human when it actually comes down to being persuaded of something. If you can make a really, really strong technical argument for a specific idea, you can usually convince the AI. That's honestly kind of a good thing — it means if your reasoning is solid, the model will update.
The sweet spot is combining both. Make the strongest technical case you can, and give the AI a reason to care about it in human terms too. That combination tends to move the needle more than either one alone.
Does AI Have Political Opinions?
It seems like at least some of the people who train AI models are building bias toward specific points of view into the models. And there's an interesting technical reason why this can happen.
There's an important distinction between pre-training and post-training.
When you're pre-training a model, you're basically taking massive — and I mean massive — amounts of raw data and feeding it into the model. It's a lot harder to control the natural opinions or points of view that get worked into the model during that stage. The data reflects the internet, and the internet reflects everything — all kinds of viewpoints and biases from whoever is publishing content online.
But post-training is where a lot of the actual behavior comes from when you're using a chatbot like Claude or ChatGPT. Here's something most people don't realize: you're not actually talking to the raw model. The raw model is basically just an engine that predicts the next word. What you're talking to is that raw model with an extra layer of training on top that teaches it to act like an AI assistant. That post-training process involves a lot of human choices.
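To make the raw-model vs. assistant distinction concrete, here's a minimal sketch. A raw pre-trained model just continues whatever text it's given; a post-trained chatbot completes text that has been wrapped in a conversational template. The template below is a generic illustration I'm assuming for the example; real chat templates vary by model and provider.

```python
# Illustrative sketch: raw next-word engine vs. chat-formatted assistant.
# The template wording here is an assumption; real templates differ per model.

def raw_prompt(text: str) -> str:
    # A raw pre-trained model simply continues the text it receives.
    return text

def chat_prompt(user_text: str) -> str:
    # Post-training teaches the model to complete text wrapped in a
    # conversational frame like this, responding as an "assistant".
    return (
        "System: You are a helpful assistant.\n"
        f"User: {user_text}\n"
        "Assistant:"
    )

print(raw_prompt("The best beginner motorcycle is"))
print(chat_prompt("What's the best beginner motorcycle?"))
```

The human choices in post-training live in things like that system line and in which completions get rewarded, which is exactly where personal judgment calls can shape the model's "opinions."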
And it seems possible that the people doing that training are working their own personal bias into the model — which is interesting to think about. It doesn't mean there's some big coordinated thing happening. It's more that these are humans making judgment calls, and humans have points of view.
So if an AI keeps pushing back on one type of idea while engaging enthusiastically with another, it might not have anything to do with the quality of your argument. It might just be about what the model was trained to treat as "reasonable." Worth keeping in mind.
A Fun Hypothetical: What If You Had to Sell TO AI?
Here's a hypothetical I think is really interesting to think about.
Imagine a future where AI agents are basically making purchases on behalf of humans — essentially outsourcing the decision-making on what to buy to an AI model. So like, I tell my AI assistant that I want to buy a motorcycle. My AI assistant basically goes out to motorcycle sellers and says, "Hey, this is what my user wants — what do you have?" And from the motorcycle seller's point of view, whether or not they make that sale basically depends on how well they can convince the AI that their motorcycle is the right fit for the user's preferences.
So you'd end up with this marketplace of AI agents where the whole game is: how do you sell to AI instead of selling to people? A lot of consumers would be making purchases by essentially letting an AI make the decision for them.
As someone with a strong marketing background — honestly the primary field I came into when I was starting out as an entrepreneur — I find that idea really, really interesting. It could be its own form of SEO or something like that. A brand new form of marketing and sales where you basically have to sell to AI instead of selling to people directly. There'd be a whole new set of rules for what works.
It's just interesting to start thinking about, considering how important it could be. And the exciting part is that the people who are already thinking about this are the ones who'll be ahead of the curve when it actually starts happening.
What Actually Works: Things I've Noticed That Move the Needle
So practically speaking, here are some things that seem to work when you want to get an AI to come around on something.
Make the most specific argument you can. Vague claims don't move AI much. But specific, concrete reasoning, with real numbers, real examples, and direct logic, tends to land a lot better. The more specific and verifiable your argument, the more likely the model is to update on it.
Acknowledge what the AI is saying before pushing back. If an AI pushes back on your idea, address its concern directly first — even just to explain why it doesn't apply in your specific case. Models respond better when they can "see" that the objection was considered. This is just how good conversation works, and AI has absorbed a lot of good conversation through training.
Tell it what role you want it to play. Something like "you're an expert in X, evaluate this from Y angle" is not just a gimmick — it genuinely shifts how the model engages. A model playing the role of a critical advisor behaves differently than one in default helpful-assistant mode. You can set that frame on purpose.
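Here's a small sketch of setting that frame deliberately. The request shape below mirrors the general pattern chat APIs use (a system prompt plus user turns), but the exact field names vary by provider, so treat it as an illustration rather than any specific API.

```python
# Sketch: setting a role frame explicitly instead of relying on the
# default helpful-assistant mode. Field names are illustrative, not
# tied to any particular provider's API.

def framed_request(role_description: str, task: str) -> dict:
    return {
        "system": (
            f"You are {role_description}. "
            "Evaluate critically: point out weaknesses before strengths."
        ),
        "messages": [{"role": "user", "content": task}],
    }

req = framed_request(
    "an experienced security reviewer",
    "Evaluate this authentication design for weaknesses.",
)
```

The point of the design is that the role lives in the system prompt, so every turn of the conversation happens inside that frame instead of the default one.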
Build toward the big point in steps. If an AI resists a complex idea, don't try to land the whole thing at once. Get it to agree on the smaller parts first, then build to the conclusion. Classic sales technique — and it works on AI for basically the same reason it works on people. It's hard to reject a conclusion when you've already agreed to all the pieces.
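The "build in steps" technique can be sketched as a simple prompt structure: state the smaller premises first, ask for agreement on each, and only then present the conclusion. The function and wording below are my own illustration, not a prescribed format.

```python
# Sketch of a staged argument: premises first, conclusion last.
# The phrasing is illustrative; the structure is the point.

def staged_argument(premises: list[str], conclusion: str) -> str:
    lines = ["Before I make my main point, can we agree on these?"]
    for i, premise in enumerate(premises, start=1):
        lines.append(f"{i}. {premise}")
    lines.append(f"If those hold, then it follows that {conclusion}")
    return "\n".join(lines)

prompt = staged_argument(
    [
        "Users abandon checkout most often on the payment step.",
        "Each abandoned checkout costs roughly $40 in lost revenue.",
    ],
    "fixing the payment step is our highest-leverage change this quarter.",
)
print(prompt)
```

In practice you'd send each premise as its own conversational turn and let the model confirm before moving on; collapsing them into one message, as above, is the compact version of the same idea.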
Why This Is Worth Thinking About Now
AI models are heading in the direction of more independent reasoning, not less. So the ability to communicate effectively with AI — to actually "sell" it on your ideas — is going to become a more and more valuable skill over time.
Right now, the person who knows how to frame a problem in a way that gets genuinely useful AI output is going to get more out of these tools than someone who doesn't. That gap is only going to widen as AI gets more capable and more opinionated.
And as AI agents start doing more in the world on people's behalf — making recommendations, evaluating options, helping people decide what to buy — the people and businesses that understand how to communicate well with AI are going to have a real advantage. Not because AI is scary, but because it's genuinely useful, and being fluent with it matters.
The good news is that the skills here are actually really human skills: clear reasoning, specific evidence, good framing, and the ability to address concerns directly. The same things that make you a good communicator with people translate pretty well to communicating with AI. It's a new audience — just one that runs at a much bigger scale.
Getting AI to "change its mind" isn't magic — it's a combination of strong technical arguments, emotionally aware framing, and understanding how the model was trained to evaluate ideas. These are human skills. And they're going to matter more and more as AI becomes a bigger part of how we work and communicate.
TL;DR
Can you actually convince an AI to change its position?
Yes — especially with a really strong, specific technical argument. In some ways AI is actually easier to convince than a human when your reasoning is solid, because it's more likely to just update on the evidence.
Does emotional language really work on AI?
It does have influence, yeah. Not because the AI feels anything, but because it was trained on the internet, and emotional language is everywhere on the internet. Those patterns get baked in.
Are AI models politically biased?
Possibly, to some degree. The people doing post-training make a lot of judgment calls, and those can influence the model in ways that aren't always visible. It's worth keeping in mind, especially on topics that touch politics.
What is "selling to AI" and why is it interesting?
If AI agents start making purchasing decisions on behalf of people, businesses will need to figure out how to communicate to AI readers — not just human ones. It could become its own field, kind of like how SEO became a whole discipline once Google started deciding what got seen.