The number one reason people find AI tools disappointing is that they're using them wrong.
Not wrong in a technical sense — wrong in a conceptual one. They're treating a thinking partner like a search engine, and wondering why the results feel shallow.
Here's the behaviour: you have a problem. You type a short question. You get an answer. The answer is fine — accurate-ish, generic, not particularly useful for your specific situation. You decide AI is overhyped.
What actually happened is you gave a smart collaborator no context, asked a vague question, and evaluated the output against a specific need it had no way of knowing about.
What a search engine actually does
A search engine matches your query to existing content. The shorter and more precise the query, the better the match. Context is noise. The goal is to give Google as little as possible and get back the specific thing you're looking for.
This approach works well because Google has already indexed the thing you need. Your job is to help it find it.
AI doesn't work this way. AI synthesises. It doesn't retrieve — it generates. The output isn't a pre-existing document matched to your query. It's a response built from your input. Which means the quality of your input directly determines the quality of the output. Every time.
Short query in = generic output. Specific, contextual input = useful, specific output.
The context problem
Here's a real example of the same request handled two ways.
Search engine style:
"How do I handle a difficult client?"
Collaborative style:
"I have a client who agreed to a website project six weeks ago. Scope is locked, but they're now requesting significant changes that weren't in the brief and getting frustrated when I push back. I have a good relationship with them and want to keep it. I'm about to send an email asking for a scope clarification call. Help me think through how to frame this so they feel heard but we don't let scope creep go unchecked."
The first prompt gets generic advice about communication. The second gets something you can actually use.
The difference isn't that the second prompt is more clever. It's that it contains the actual situation. The actual relationship context. The actual goal. The actual action being taken. Claude can now give you advice that fits your situation rather than advice that fits every situation — which means it fits none of them particularly well.
The rule: brief Claude like you'd brief a person
If you hired a smart consultant for an hour, you wouldn't open with "how do I handle a difficult client?" You'd give them the background: who the client is, what the relationship looks like, what's gone wrong, what you've already tried, and what you're trying to achieve.
Do the same with Claude.
Before you type your request, spend thirty seconds asking yourself:
- What's the specific situation? (Not the general category of problem, but the actual situation)
- What do I already know, and what have I already tried? (This prevents generic advice and lets the response go straight to the specific gap)
- What am I actually trying to produce? (A decision? A draft? A list of options? A plan?)
- What constraints matter? (Tone, length, relationship, technical level, timeline)
You don't need to write a paragraph on each — you need to give enough for a smart person to understand the situation without asking twenty clarifying questions.
The conversation, not the query
The other major shift: AI works better as a conversation than as a single transaction.
People who get the most out of Claude iterate. They get a first draft, tell Claude what's not quite right, give more context about what they're actually going for, ask it to try a different angle. Three or four exchanges in, the output is often dramatically better than what came back on the first turn.
This isn't a failure of the first response. It's how collaborative work actually functions. A good consultant's first take is a starting point, not a finished product. You'd push back, add context, redirect. Do the same.
Common iteration prompts:
- "That's too formal — the client relationship is casual. Rewrite it in a more direct, conversational tone."
- "The second point is the one that actually matters. Can you expand that and trim the rest?"
- "I like the structure but the opening doesn't land. Try three different openings and I'll pick one."
- "You've assumed X but actually Y is the constraint. How does that change the recommendation?"
Each of these takes five seconds to type and meaningfully improves the output. Treating the first response as the final answer is leaving most of the value on the table.
When short queries are fine
This isn't an argument that every AI interaction needs to be a long briefing. Some questions are simple and the context is obvious.
"What's the formula for compound interest?" needs no context.
"Summarise this article in three bullet points" — paste the article, you're done.
"What does this error message mean?" — paste the error, minimal setup required.
The short query works when the output doesn't depend on your specific situation. Generic questions with objective answers don't need context because there's no specific situation to accommodate.
The problem is that people apply search-engine thinking to questions that aren't like that. "How should I price my services?" is not a factual question. It depends on your market, your positioning, your clients, your cost structure, your confidence level. Asking it without that context gets you a generic article about pricing frameworks, not pricing advice.
The practical test
Before you type your next AI prompt, ask: could a smart consultant act on this request with only what I've written?
If yes: send it.
If no: add the context they'd need to ask for. You'll get better output the first time and spend less time iterating.
The people who find AI transformative aren't better at prompting in some technical sense. They're just treating the tool like a thinking partner instead of a search box. That shift is available to anyone.