Most clients already know enough. Your job is not to educate them.
Most of my clients show up knowing what they want. They have an outcome in mind, maybe a vague idea of what AI could do, and they want someone to build it without making them sit through a lecture.
That means the real explanation work I do is almost never "let me introduce you to artificial intelligence." It's knowing when to explain at all, what words to actually use, and how to tell someone their idea, while genuinely interesting, is going to create more problems than it solves.
That last part is what most AI-related content glosses over. So let's start there.
When a client brings up AI, the gap is almost always in one of two places: they think it'll be cheaper and more hands-off than it actually is, or they want to automate something that shouldn't be automated. Each calls for a different kind of conversation.
The question I'm actually answering on most calls isn't "what is AI?" It's "will this specific thing actually work the way you're imagining it will?" Those are very different conversations.
The cost myth nobody wants to say out loud
Clients come in expecting AI to be the budget option. Often it is. Sometimes it isn't.
When you factor in the setup fee, the monthly software subscriptions, and the token costs for whatever model is running underneath, you can end up spending about the same as keeping a human on the task, sometimes more. And that's before you account for the time required to maintain the system.
I break this down explicitly when it comes up. I'll map out the actual numbers side by side. Sometimes the AI option wins. Sometimes it doesn't. Either way, the client deserves the real picture. This is one of the most useful things you can do as a consultant: be the person who does the math out loud, even when the math isn't what the client was hoping for.
| | AI-powered solution | Human doing the task |
|---|---|---|
| Setup cost | One-time build fee | Hiring and onboarding |
| Monthly cost | Software subscriptions + tokens | Salary or contractor rate |
| Maintenance | Monitoring, updates, fixes | Management time |
| Risk | Breaks silently, needs checkpoints | Calls in sick, turnover |
| Best for | High-volume, repetitive, low-stakes | Complex, relational, variable |
This table lives in my back pocket (metaphorically). I pull it out when I sense a client has already decided AI is the answer without doing the math. It's not there to talk them out of anything. It's there to make sure they're making an informed call.
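Doing the math out loud can be as simple as a few lines of arithmetic. Here's a minimal sketch of that side-by-side comparison; every number and function name is an illustrative placeholder, not a real quote or a standard formula.

```python
# Hypothetical side-by-side cost comparison: AI automation vs. a human on the task.
# All figures are made-up placeholders for illustration only.

def monthly_cost_ai(setup_fee, months, subscriptions, token_spend,
                    maintenance_hours, hourly_rate):
    """Average monthly cost of the AI option over the evaluation window,
    with the one-time build fee amortized across that window."""
    amortized_setup = setup_fee / months
    return amortized_setup + subscriptions + token_spend + maintenance_hours * hourly_rate

def monthly_cost_human(hourly_rate, hours_per_month, management_hours, manager_rate):
    """Monthly cost of keeping a person on the task, including management time."""
    return hourly_rate * hours_per_month + management_hours * manager_rate

ai = monthly_cost_ai(setup_fee=3000, months=12, subscriptions=150,
                     token_spend=80, maintenance_hours=2, hourly_rate=100)
human = monthly_cost_human(hourly_rate=25, hours_per_month=40,
                           management_hours=2, manager_rate=60)

print(f"AI option:    ${ai:,.2f}/month")     # AI option:    $680.00/month
print(f"Human option: ${human:,.2f}/month")  # Human option: $1,120.00/month
```

With these particular placeholder numbers the AI option wins; shorten the amortization window or raise the token spend and it can flip. That's the point of running it per client rather than assuming.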
"Can it be done?" is the wrong question
Sometimes a client's idea is technically possible. It's just not a good idea.
I've had clients want to automate things that would quietly erode the experience for their own clients: an AI receptionist handling sensitive questions for a med spa, a fully automated social content system with no human review, a chatbot replacing the first touchpoint in a high-ticket coaching sales process.
All of those can be built. None of them should be, at least not the way they're imagining. When I need to explain why, I go to one of two angles depending on what's driving the concern.
| | Question 1: Can it be done? | Question 2: Should it be done? |
|---|---|---|
| What you're checking | Is there a tool, API, or workflow that can do this? | Will this save real time and money, or just shift work around? |
| How to answer it | Map the current workflow. Identify what AI replaces. | Run the cost comparison. Walk through the client experience. |
| When it's yes | Proceed to the second question before scoping anything. | Build it. You have technical and strategic sign-off. |
| When it's no | Explain the limitation simply. Offer an alternative. | Name the reason specifically. Propose what to do instead. |
"Can it be done?" and "should it be done?" are not the same question. Part of my job is knowing which one we're actually talking about, and being willing to say so.
How I adapt the conversation for different clients
I don't fundamentally change how I explain AI depending on who I'm talking to. My clients are smart. They don't need a different framework; they need an example that lives in their world.
So for a med spa owner thinking about automation, I might bring up an AI receptionist: something handling appointment confirmations, post-treatment follow-ups, or FAQs at 2am when nobody's at the front desk.
For a marketing agency owner, that same conversation becomes an AI setter: something that qualifies inbound leads or handles the first few messages in an outreach sequence before a human takes over.
Same underlying concept. Different context. That's usually all it takes.
The other thing that helps: I almost always have a visual ready. That might be a Miro board I built for a similar use case, a process diagram, or a live screen share of an actual system I've built. For most of my clients, seeing the thing is worth 500 words of explanation.
My rule: only explain if asked
Here's what most consultants in this space get wrong. They explain too much.
They use technical language their clients don't need: LLMs, APIs, connectors, databases, token limits. They explain how things work under the hood when the client is sitting there thinking, "I just want to know if this will save me three hours a week."
My default is to not explain unless I'm asked. If a client has a clear outcome in mind and wants me to build it, I build it. I don't take them through a technology primer first. That would waste their time and mine.
I only go into explanation mode when:
- The client asks how something works
- A technical limitation needs explaining
- I'm recommending against something
That last one matters most. If I'm telling a client "I don't think this is the right move," I owe them a clear reason, not jargon. I translate: here's what would actually happen, here's what it would cost you in time or in client trust, here's the alternative I'd suggest.
Explaining AI isn't a skill you use at the start of every engagement. It's a skill you reach for at specific moments, when clarity is missing or expectations need adjusting. Know the difference.
Frequently asked questions
Do I need to explain AI to every client?
No. Most clients who hire an AI systems consultant already have a basic understanding of what AI can do. The more useful skill is knowing when to explain at all, which is usually when something can't be done, shouldn't be done, or costs more than they're expecting.
What's the best way to explain AI to a non-technical client?
Skip the definitions and go straight to examples from their world. A med spa owner understands "AI receptionist that handles follow-ups after hours." A coach understands "automated intake process that writes the first draft of your onboarding emails." Start there.
How do I tell a client their AI idea is a bad idea?
Two angles: if it won't save time or money, show them the numbers side by side. If it'll hurt their client experience, walk them through exactly what their clients would encounter. Specific and real lands. Vague concerns don't.
How much do AI automations actually cost?
It depends heavily on the use case. Between setup fees, monthly software subscriptions, and token costs, some AI solutions cost about the same as keeping a human on the task. A good consultant runs the comparison honestly before recommending either direction.
What AI jargon should I avoid when talking to clients?
LLM, API, tokens, connectors, training data, model, inference, and anything that requires a software background to parse. If you have to use a technical term, follow it immediately with an analogy that makes sense in their industry.