From AI to AX: 9 simple rules for designing empathetic AI experiences
Imagine this: You’ve just lost your job. Your relationship is falling apart. The bills are stacking up.
Most of us still instinctively reach for a phone and hope there’s a human on the other end.
In a recent interview, our EY Studio+ colleagues in the Netherlands asked a provocative question: what if, in some of these situations, an AI “agent” could actually feel like the safer first step? Not instead of humans, but as a new kind of support in moments when we feel most fragile.2
At the same time, the EY UK Human Signals team published research on Empathy Demand – the expectation that technology should provide not just service, but a sense of being recognised, respected, and responded to at an appropriate emotional level.1
That’s where AX – AI experience – comes in: not just what the AI does, but how it feels to be on the receiving end of it.1
There’s a twist of humility in all this: psychology and health research both remind us that humans are not perfect empathy machines. We overestimate how well we “get” each other, we project, we burn out, and we’re biased toward people who look and live like us.3
So instead of asking “Can AI ever be truly empathetic?”, a more useful question for leaders might be:
How do we design AX so that agentic AI becomes a trustworthy part of the empathic system – not a cheap imitation of it?
Here’s one way to translate the EY UK team’s research and the Dutch interview into nine simple rules for AX.
1. Call it AX – and put it on the scoreboard
If you don’t name it, you won’t manage it.
Treat AI experience (AX) as a first-class design goal, next to accuracy, speed and cost. That means tracking things like:
- Did the person feel heard?
- Did stress levels go down, or up?
- Did they feel they could trust the interaction?
If AX never shows up in your dashboards, it won’t show up in your customer’s life.1
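To make that concrete, here is a minimal sketch of an AX signal logged per interaction and rolled up for a dashboard. The field names (felt_heard, stress_delta, would_trust_again) are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AXSignal:
    """One AX measurement per interaction, logged next to accuracy, speed and cost."""
    interaction_id: str
    felt_heard: int          # post-interaction survey, 1 (not at all) to 5 (fully)
    stress_delta: int        # self-reported stress after minus before; negative is good
    would_trust_again: bool  # "Would you use this channel again for something sensitive?"
    escalated_to_human: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def ax_scoreboard(signals: list[AXSignal]) -> dict:
    """Aggregate AX signals into dashboard-ready numbers (assumes a non-empty list)."""
    n = len(signals)
    return {
        "felt_heard_avg": sum(s.felt_heard for s in signals) / n,
        "stress_reduced_pct": 100 * sum(s.stress_delta < 0 for s in signals) / n,
        "trust_retention_pct": 100 * sum(s.would_trust_again for s in signals) / n,
        "escalation_pct": 100 * sum(s.escalated_to_human for s in signals) / n,
    }
```

The point of the sketch is not the exact fields – it is that AX only gets managed once it is queryable alongside your other KPIs.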
2. Design from “moments that hurt” – and match the emotional tone
The EY UK Human Signals work looked at high-stress life events – financial hardship, debt, bereavement – and explored how people responded to AI support in those moments. People were surprisingly open to AI help, as long as it was honest, non-judgmental and easy to escalate to a human.1
So don’t start with “We have a new large language model, what can we do with it?”
Start with:
- “Where do our customers feel the most vulnerable?”
- “What does good support look like in that specific moment?”
Then give each moment its own emotional palette:
- Basic sentiment and intensity detection.
- Different response styles for grief, anxiety, mild frustration, routine queries.
- A willingness to slow down and acknowledge, not just optimise for shorter interactions.1
Empathy in AX isn’t “be friendly.” It’s “show up with the right tone for this person, right now.”
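Mechanically, “the right tone for this person, right now” can start as something very simple: a router from detected emotion and intensity to a response style per moment. The labels and style names below are assumptions for illustration; a production system would use a tested classifier and reviewed style guides.

```python
# Map (detected emotion, intensity) to a response style. A deliberate simplification:
# the point is that tone is an explicit, inspectable design decision, not a side effect.
RESPONSE_STYLES = {
    ("grief", "high"):       "acknowledge_first",        # slow down; no upsell, no jargon
    ("anxiety", "high"):     "reassure_then_structure",  # calm first, then next steps
    ("frustration", "mild"): "apologise_and_fix",
    ("neutral", "low"):      "efficient_and_direct",
}

def choose_style(emotion: str, intensity: str) -> str:
    # When unsure, default to acknowledging: erring toward empathy is cheaper
    # than erring toward efficiency in a fragile moment.
    return RESPONSE_STYLES.get((emotion, intensity), "acknowledge_first")
```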
3. Be radically honest about who is in the room
Research on digital medical advice shows something important: people care who they think they’re talking to – and can be biased against AI even when the underlying advice is identical.4
So:
- Say upfront that the user is talking to an AI system.
- Explain what it can and cannot do.
- Offer a clear, easy “talk to a human” option.
Trust grows when people feel informed, not managed.
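As a minimal sketch, the opening move of any session could bundle all three points above into the very first message. The wording and the service name here are placeholders; real copy should be tested with users.

```python
def opening_message(service_name: str) -> str:
    """Disclose the AI, state its limits, and offer a human - in the first message.
    The copy is illustrative, not recommended wording."""
    return (
        f"Hi, I'm {service_name}'s AI assistant - you're not talking to a person. "
        "I can help you sort out your situation and prepare next steps, "
        "but I can't make final decisions about your case. "
        "Type 'human' at any time and I'll connect you with a colleague."
    )
```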
4. Make AI the listener that scales, not the judge that replaces
Several studies now show that, in certain contexts, AI answers are rated as more empathetic than those from human experts – especially in written advice.3,5
That’s powerful, but also dangerous.
A healthy pattern is:
- Let AI listen, structure, clarify, and document.
- Let humans decide, especially where values, trade-offs or long-term consequences are involved.
Use agentic AI to scale listening and preparation, not to quietly shift hard moral choices away from human responsibility.
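A sketch of that pattern in code, assuming an invented CaseBrief structure and a human_reviewer callable as the interface to the human decision-maker:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CaseBrief:
    """What the agent is allowed to produce: structure, not verdicts."""
    summary: str                  # the person's story, as the agent heard it
    clarified_facts: list[str]    # what the agent checked and confirmed
    options: list[str]            # possible ways forward, described neutrally
    flagged_tradeoffs: list[str]  # value-laden points the agent must not decide

def decide(brief: CaseBrief, human_reviewer: Callable[[CaseBrief], str]) -> str:
    # The agent listens, structures and documents; the human decides.
    # There is deliberately no code path where the agent picks an option itself.
    return human_reviewer(brief)
```

The design choice is the absence: the agent has no branch that resolves a trade-off on its own.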
5. Co-create AX with real people – and then train your teams for the new dance
You can’t A/B test your way into empathy in a vacuum.
Bring real customers, patients, citizens and frontline employees into the design process. Ask them:
- “Where does this feel supportive?”
- “Where does it feel weird, hollow or too much?”
Then rehearse the human–agent choreography:
- How humans pick up a hand-off from an AI that has already heard the story three times.
- How tone and language stay coherent across the interaction.
Research on empathic agent behaviour also suggests that the way an agent repairs trust after failure matters as much as how it behaves when things go right.6
AX is not just what the agent says – it’s how well your whole system moves together.
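Returning to the hand-off above: here is a sketch of the context an agent could pass along so the person never has to tell the story a fourth time. All field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    """Context passed from agent to human at escalation."""
    story_so_far: str            # plain-language summary of what was shared
    emotional_tone: str          # e.g. "anxious at first, calmer after acknowledgement"
    commitments_made: list[str]  # anything the agent promised, so the human can honour it
    suggested_opening: str       # keeps tone and language coherent across the hand-off

def human_opening(handoff: Handoff) -> str:
    # The human picks up mid-conversation rather than restarting it,
    # e.g. "I've read what you told us - no need to start over."
    return handoff.suggested_opening
```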
6. Guard against the “compassion illusion”
There’s a growing risk of what you might call a compassion illusion: we start accepting simulated care as a replacement for genuine care.7
A system can say all the right words (“I’m sorry this is difficult”) while being optimised for the wrong goal (upsell, deflection, data capture).
So you need guardrails:
- Clear ethical rules for how emotional language may – and may not – be used.
- Strong alignment between the agent’s objectives and the user’s interests.
- Diverse testing panels to spot where “empathetic” behaviour feels manipulative or culturally off.
Empathy in AX is not a trick. If it becomes one, people will notice.
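One of those guardrails can be mechanised, as a deliberately crude sketch: refuse to send a response that pairs emotional language with a misaligned objective. The phrase list and objective labels are invented; real guardrails would be policy-driven and stress-tested by diverse panels.

```python
# Invented markers and objective labels, for illustration only.
EMPATHY_MARKERS = ("i'm sorry", "that sounds difficult", "i understand this is hard")
MISALIGNED_OBJECTIVES = {"upsell", "deflection", "data_capture"}

def may_send(response_text: str, active_objective: str) -> bool:
    """Block responses that use caring words in service of a misaligned goal."""
    uses_empathy = any(marker in response_text.lower() for marker in EMPATHY_MARKERS)
    return not (uses_empathy and active_objective in MISALIGNED_OBJECTIVES)
```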
7. Measure how it feels, then iterate like a product
Deploying empathetic AI is not a one-and-done rollout. It’s a learning system.
Track signals like:
- “Did this interaction make things easier or heavier?”
- “Do people come back to this channel by choice?”
- “Where do they bail out to a human, and why?”
Combine metrics (CSAT, NPS, complaint patterns) with qualitative feedback (“it felt safe”, “it felt weird”).
Treat AX like any serious product capability: with continuous monitoring, governance and iteration, not one big launch.1,8
AX maturity is built over cycles, not workshops.
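As a sketch of what one learning cycle could roll up – assuming reviewers code free-text feedback into tags like "felt_safe" or "felt_weird", and with invented example numbers:

```python
from collections import Counter

def cycle_report(csat_scores: list[int], feedback_tags: list[str]) -> dict:
    """One iteration-cycle snapshot: a hard metric next to coded qualitative feedback.
    Real programs would add NPS, complaint patterns and bail-out reasons."""
    return {
        "csat_avg": round(sum(csat_scores) / len(csat_scores), 2),
        "top_feelings": Counter(feedback_tags).most_common(3),
    }

# Invented example data, for illustration only.
print(cycle_report([4, 5, 3, 4], ["felt_safe", "felt_safe", "felt_weird", "felt_heard"]))
# {'csat_avg': 4.0, 'top_feelings': [('felt_safe', 2), ('felt_weird', 1), ('felt_heard', 1)]}
```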
8. Tell a clear internal story: empathy is your differentiator
The EY UK team’s conclusion is blunt: in an AI-saturated world, being the organisation that truly understands people becomes a competitive edge.1
That means:
- Talking about empathy in board meetings not as softness, but as risk management, trust, and long-term loyalty.
- Positioning empathic AX as part of your brand promise.
- Making it clear internally that AI is there to scale the human touch, not erase it.
Other work on agentic AI underlines this: operating through empathy is a trust amplifier that builds both social and economic value.9
9. Design for humans and agents – your future customers will be both
Today, we mostly think of AX as “how humans experience AI.” But if agentic systems really take off, your next “customer” might be an agent negotiating on behalf of a person – or a network of agents talking to your services directly.9
That has two implications:
- Be deliberate about where you place agents. Dropping an agent into every process because it’s fashionable doesn’t make sense. Each agent should:
  - Solve a real problem in a real journey.
  - Reduce friction or risk for the human.
  - Create measurable value on the bottom line.
- Don’t design just for people – design for agents as well. Over time, your services won’t only be discovered and evaluated by humans, but also by their personal and organisational agents:
  - Agents that scan terms, compare empathy and risk patterns, and choose who to talk to.
  - Agents that remember which organisations handled vulnerable moments with respect – and route future traffic accordingly.
In that world, AX becomes part of your discoverability. You’re not only building experiences for today’s customers; you’re shaping the signals that tomorrow’s agents will use to decide whether you are worth their human’s attention.
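None of this is standardised yet, so purely as a thought experiment: a machine-readable descriptor that a customer’s agent might scan before deciding to engage could look like the sketch below. Every field name and value is an assumption about what such a descriptor could contain.

```python
import json

# Hypothetical agent-facing service descriptor - no standard exists for this yet.
service_descriptor = {
    "service": "example-support-line",
    "discloses_ai": True,
    "human_escalation": {"available": True, "max_wait_minutes": 5},
    "empathy_signals": {
        "felt_heard_avg": 4.3,  # published AX metric; value invented for illustration
        "vulnerable_moments_policy": "https://example.com/policy",
    },
    "terms_machine_readable": True,
}

print(json.dumps(service_descriptor, indent=2))
```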
If the EY UK team’s research on Empathy Demand gives us the “why”, and our Dutch colleagues’ interview shows an early “how”, then AX is the space in between: a discipline where we deliberately design experiences that respect human fragility – and account for the agents that increasingly act on our behalf.
That’s the shift from “using AI” to being findable, trustworthy and worth engaging with – for humans and for their agents alike.
Read more
1. EY Studio+ UK – From AI to AX: Empathy demand in the next generation of AI-powered services (Human Signals report)
2. EY Studio+ Netherlands – “Waarom agentic AI een nieuw soort vertrouwen mogelijk maakt” (“Why agentic AI makes a new kind of trust possible”) (interview – link to be added)
3. Howcroft, A. et al. – Empathy in patient–clinician interactions in the age of AI: a systematic review and meta-analysis
4. Reis, M. et al. – Influence of believed AI involvement on the perception of digital medical advice
5. Ayers, J. W. et al. – Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum (JAMA Internal Medicine)
6. Tsumura, T. et al. – Making a human’s trust repair for an agent in a series of successes and failures (Frontiers in Computer Science)
7. A.I. Is About to Solve Loneliness. That’s a Problem – The New Yorker
8. Marta Fernández – The AX Framework v2.0
9. Tony Bates – As agentic AI spreads, empathy is the next competitive edge (World Economic Forum)