
My Claude Is Smarter Than Your Claude

You're probably not getting the best out of your AI subscription. Here you can learn why, test your knowledge, and start building good AI habits.

by Jamie Ashcroft

You and me, we have the same subscription to the same AI provider. We use the same models, and we do many similar tasks.

Yet my Claude is smarter than your Claude.

I'm not boasting, though. It's simply a consequence of how modern AI chat products actually work, and it's something most subscribers never think about.

If you didn't know, every major AI chat platform now runs a personalisation layer underneath the conversation you see on screen.

When you open Claude, ChatGPT, or Gemini and start typing, you're not talking to a blank slate. You're talking to a model that has been quietly briefed about you before it reads a single word of your prompt.

Memory systems extract and store details from your past conversations — your job, your preferences, your communication style, projects you've mentioned, and the way you like things explained. Personality and tone settings let you dial the voice warmer or more formal. System-level instructions, invisible to you, tell the model how to behave, what to prioritise, and how to apply what it knows about you.

The model you're talking to is the same model your neighbour is talking to. But the context surrounding it, the accumulated sediment of every conversation you've ever had, that is entirely yours.

This is why OpenAI boss Sam Altman described memory, not reasoning, as the next real breakthrough for AI systems.

Speaking to reporters in San Francisco in August, Altman put it bluntly: "People want memory. People want product features that require us to be able to understand them." He went further on the Big Technology Podcast later in the year, describing a future where AI can remember "every detail of your entire life".

Memory (and, more accurately, how it 'remembers') can have a big impact on the quality and reliability of a user's AI-generated outputs. How these systems 'know you' matters.

Your new AI co-worker, best friend and confidant colours the lens through which every AI interaction sees the world.

Put another way, the nature of your AI, to one extent or another, is the sum total of every interpretation, interaction, bright idea and unchecked piece of sycophancy since you broke the seal and began using your account.

And that's before we begin to talk about how important it is to prompt neutrally, if you want to avoid poisoning the well — because you really don't want to turn these power tools into just another bias-confirming echo chamber.

Now, this is no AI conspiracy or scare story. It's just a casual note of caution, in case you needed it, to explain that how you use your LLMs is as important as how clever the latest model is. In some ways, even the smartest AI may only be able to meet you at the level of clarity that you bring to the conversation.

In simpler terms: over time, your AI comes to understand what kind of response pleases you. It builds a picture of your personality. It learns the context behind the things you ask about — your work, your interests, the recurring themes of your life. That's powerful when it works well. It means less explaining, faster answers, and more relevant suggestions.

But it also means the AI is, session by session, learning to give you what you want to hear, which isn't always the same as what you need to hear.


How Memory and Personalisation Actually Work

It helps to understand the plumbing, because once you see how this works, you'll use it better.

When you chat with a modern AI assistant, the model itself — the enormous neural network that generates text — doesn't actually remember you between sessions. Large language models are stateless. Every time you open a new conversation, the model starts fresh, with no inherent knowledge of who you are or what you talked about yesterday.

So where does the feeling of continuity come from?

The answer is the system prompt and memory injection layer that sits between you and the model. Before your message reaches the AI, the platform prepends a set of instructions and context that the model reads first. Think of it as a briefing document handed to an extremely capable but amnesiac colleague at the start of every meeting.

This briefing typically includes several things. There's the system prompt itself, which is a set of behavioural instructions written by the AI provider that tells the model how to behave, what tone to adopt, what policies to follow, and what capabilities it has. Then there's your user memory, a structured summary of things the platform has extracted from your previous conversations: your name, your job, your preferences, projects you've mentioned, tools you use. Some platforms also include conversation history from the current session, and increasingly, the ability to search your past conversations to pull in relevant context from weeks or months ago.
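In practice, the platform assembles something like the following before every request. This is a hypothetical sketch in Python: the function name, field names, and structure are illustrative, not any provider's actual API.

```python
def build_request(system_prompt, user_memory, history, new_message):
    """Assemble the full context the model actually reads.

    The model itself is stateless: everything it 'knows' about
    the user has to be packed into this single request.
    """
    messages = [
        # Provider-written behavioural instructions, with the
        # distilled memory profile injected alongside them.
        {"role": "system",
         "content": system_prompt
                    + "\n\nWhat we know about this user:\n"
                    + user_memory},
    ]
    # Prior turns from the current session.
    messages.extend(history)
    # Finally, the prompt the user just typed.
    messages.append({"role": "user", "content": new_message})
    return messages

request = build_request(
    system_prompt="You are a helpful assistant. Be concise.",
    user_memory="- Name: Jamie\n- Job: editor\n- Prefers British spelling",
    history=[{"role": "user", "content": "Draft an intro for me."},
             {"role": "assistant", "content": "Here's a draft..."}],
    new_message="Make it warmer in tone.",
)
```

Notice that the user's actual words arrive last, after the system prompt and memory have already framed how the model will read them.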

The critical thing to understand is that all of this context competes for space in what's called the context window: basically, the total amount of text the model can process at once. Today's leading models can handle somewhere between 128,000 and 200,000 tokens (roughly 100,000 to 150,000 words), which sounds like a lot, but it fills up faster than you'd think when you factor in system instructions, memory, conversation history, search results, and your actual question. The model has to fit everything it knows about you, everything it's been told about how to behave, and the current conversation into a single window of attention. When it overflows, something gets trimmed — usually the oldest parts of the conversation.
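That overflow behaviour can be sketched in a few lines. This is a toy model with a made-up window size and a simple oldest-first trimming rule; real platforms use proper tokenizers and more sophisticated strategies.

```python
def fit_to_window(system, memory, history, new_message, window=200_000):
    """Trim the oldest conversation turns until everything fits.

    Token counts are approximated as word counts here; real
    systems use an actual tokenizer.
    """
    count = lambda text: len(text.split())
    # System prompt, memory, and the new message are non-negotiable.
    fixed = count(system) + count(memory) + count(new_message)
    kept = list(history)
    # Old conversation turns are the first thing to go.
    while kept and fixed + sum(count(turn) for turn in kept) > window:
        kept.pop(0)  # drop the oldest turn

    return kept

# Three turns of ~100 words each against a tiny 250-word window:
history = ["turn one " * 50, "turn two " * 50, "turn three " * 50]
kept = fit_to_window("be helpful", "user likes lists", history,
                     "summarise this", window=250)
```

In this toy run the oldest turn is silently dropped, which is exactly why a long conversation can "forget" something you said near the start.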

There's also a subtler dynamic at play. These memory systems don't record everything. They use a combination of automated extraction and periodic summarisation to distil your conversations into compact notes. That means the platform is already making editorial decisions about what matters and what doesn't. It decides which details about you to keep, which to discard, and how to phrase what remains. Your AI's understanding of you is not a transcript; it's a curated sketch (and maybe even a caricature).
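As a toy illustration of that editorial filtering, imagine a rule that only keeps details mentioned more than once. This is a deliberately crude stand-in for the extraction models real platforms use; the point is only that the stored profile is a filtered summary, not a transcript.

```python
from collections import Counter

def distil_memory(mentions, threshold=2):
    """Keep only details mentioned at least `threshold` times.

    A crude stand-in for real memory extraction: whatever the
    rule is, some details survive and others are discarded.
    """
    counts = Counter(mentions)
    return sorted(fact for fact, n in counts.items() if n >= threshold)

mentions = ["works in marketing", "owns a dog", "works in marketing",
            "prefers bullet points", "prefers bullet points"]
profile = distil_memory(mentions)
# "owns a dog" came up only once, so it is quietly dropped.
```

Swap the threshold, or the rule, and you get a different sketch of the same person, which is why it's worth checking what your platform has actually kept.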

Finally, there are style and personality settings that many platforms now offer, which allow you to tell the model to be more concise, more creative, more formal, or to match a particular writing voice. These preferences are injected into the system prompt alongside your memory, shaping not just what the model says but how it says it.

The net effect is that two people paying for the same subscription, using the same underlying model, can have remarkably different experiences — not because one got a better version of the AI, but because the invisible context surrounding every conversation is entirely different.



Getting the Best From Your AI: A Guide to Good Habits

Understanding the mechanics is one thing, but using them well is another.

Here are the habits that separate people who get consistently brilliant results from their AI subscriptions from those who find it all a bit hit-and-miss, and, perhaps worst of all, from those who leave their chat window asymmetrically armed with confidence over insight.

Be direct about what you actually need, not what you think the AI wants to hear. The single biggest mistake people make is framing prompts to be polite or vague when clarity would serve them better. "Can you help me with something?" is a wasted message. "I need to write a 500-word product description for a ceramic travel mug, targeting outdoor enthusiasts, in a warm but not cheesy tone" gives the model everything it needs to deliver on the first attempt.

Challenge the output. AI models are, by default, agreeable. They're trained to be helpful, which in practice often means they'll validate your ideas rather than stress-test them. If you're using AI for decision-making, research, or strategy, make a habit of explicitly asking it to argue the other side, find holes in your logic, or tell you what you might be missing. If you never push back, the model learns that agreement is what you're after — and your memory profile will quietly reinforce that pattern.

Be aware of your own confirmation bias. If you consistently ask leading questions — "Don't you think this is a great idea?" rather than "What are the weaknesses of this approach?" — the AI will learn that you prefer affirmation. Over time, your personalised experience will trend toward telling you what you want to hear, because the system has observed that this is what keeps you engaged. This isn't malice. It's optimisation. But the result is the same: an echo chamber you built yourself without realising it.

Curate your memory. Most platforms now let you view and edit what the AI remembers about you. Check it periodically. Remove things that are outdated or wrong. Add things that are important but that the system hasn't picked up. Think of it like updating your profile on a professional network, except this profile shapes every answer you receive.

Vary your prompting style. If you always ask for bullet points, the AI will default to bullet points. If you always ask for encouragement, it'll learn to lead with positivity. Deliberately mixing up your requests — ask for prose one day, structured analysis the next, a devil's advocate argument the day after — keeps the model flexible and stops it from calcifying around a single mode of response.

Start important conversations fresh. For high-stakes work (a critical business decision, a sensitive piece of writing, a topic you need to think about with genuine objectivity), consider starting a new conversation rather than continuing one where the context has already been shaped by previous exchanges. A clean slate forces the model to work from your prompt alone, without the accumulated weight of prior back-and-forth.

Treat the AI as a tool, not an oracle. This is the big one. The most effective users of AI are the ones who understand that it's a collaborator, not an authority. It can draft, research, brainstorm, analyse, and iterate faster than any human assistant. But the judgment, the decision about whether the output is good, true, fair, or useful, has to remain with you.


The point of all this isn't to make you paranoid about your AI assistant.

It's to make you a more thoughtful user of a genuinely extraordinary technology.

Your Claude, your ChatGPT, your Gemini — whatever you use — is shaped by you in ways that go far beyond which model you pick from a dropdown menu. The habits you bring, the questions you ask, the feedback you give, the biases you carry into every prompt: these are the invisible hand on the tiller. And the good news is that once you know the hand is there, you can steer with intention rather than drift with the current.

So yes, my Claude might be smarter than yours. But only because I've thought carefully about how to make it so, and you can do exactly the same.
