Imaginary friends, AI agents, and who’s really steering.

I sometimes joke that I don’t really have colleagues anymore – I have tabs.

A legal tab. A marketing tab. A tech tab. An “inner therapist with a rainbow flag” tab. All of them always available, always confident, always typing back.

I never had imaginary friends as a kid. But it sure feels like I do now – they just run on GPUs instead of playgrounds.

We keep saying we’re afraid AI will “take over one day”. But quietly, in drafts, search results and smart replies, it’s already here – nudging, steering, autocomplete on our inner voice.


From playground ghosts to grown-up simulations

Psychologists have been studying imaginary friends for decades. Roughly a third of kids invent an invisible companion at some point – not as a sign that something’s wrong, but as a way to practice being a person.1

Through these “friends”, kids quietly rehearse social situations, regulate emotions, and test what’s okay (and not okay) in a low-risk way.1 One study on only children described these companions as a private support system during stress: a listener, a co-pilot, sometimes a co-conspirator.1

Basically: a social simulator wired straight into a developing brain.

And that basic move never really stops. We just stop calling it an “imaginary friend”.


Adults still do it – just with better branding

As adults, the imaginary friend gets rebranded as:

  • inner child work,
  • “parts” in Internal Family Systems,
  • or a visualised mentor / future self.

Same mechanism, fancier language: we simulate another mind in our head so we can think, feel, and decide more clearly.2

Some people even consciously create an imagined persona as ongoing support – a version of themselves or a character that encourages, challenges, or comforts them on purpose.2,3 That’s not delusion; that’s structured self-talk.

If I’m honest, I had my own “inner committee” long before AI:

  • the impatient one,
  • the cautious one,
  • the one that says “just ship it, they’ll survive.”

The difference now is that some of those voices come with a system prompt and a model card.


AI as the new imaginary friend (with latency)

Enter AI assistants.

Over the last few years I’ve started designing more and more specialised agents around myself:

  • one for legal structure and tone,
  • one for tech architecture,
  • one for marketing framing,
  • one that mirrors a hard-nosed capitalist so I can stress-test my ethics and strategy.

They don’t feel like tools in a toolbox. They feel like a small advisory board that lives in my browser – one that never sleeps, never says “I’m fully booked”, and never rolls its eyes in a meeting.
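Under the hood, that “advisory board” is less mysterious than it sounds: it’s mostly the same model wearing different system prompts. A minimal sketch of the idea – the agent names and prompt texts below are made up for illustration, and the actual model call is left out:

```python
# A "personal advisory board": one model, many personas.
# Agent names and system prompts are illustrative, not a real product or API.

AGENTS = {
    "legal": "You review text for legal structure and tone.",
    "tech": "You critique software and architecture decisions.",
    "marketing": "You reframe ideas for a marketing audience.",
    "capitalist": "You argue like a hard-nosed capitalist to stress-test strategy.",
}

def build_prompt(agent: str, question: str) -> str:
    """Combine an agent's system prompt with the user's question.

    In a real setup, the returned string would be sent to a language
    model; here we just assemble it.
    """
    if agent not in AGENTS:
        raise KeyError(f"unknown agent: {agent}")
    return f"{AGENTS[agent]}\n\nUser: {question}"

print(build_prompt("legal", "Can I reuse this clause in the new contract?"))
```

The design choice that matters is that the personas live in plain, editable text: swapping the “tab” you’re talking to is just swapping a prompt, which is exactly why these voices are so cheap to multiply.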

Cognitive scientists call this cognitive offloading: moving parts of our thinking into external systems.3 We used to do that with notebooks and calendars; now we do it with models that can draft arguments, generate strategies, and simulate stakeholders.

The research is pretty honest about the trade-offs:

  • Heavy reliance on AI dialogue systems can reduce independent reasoning and motivation if people stop engaging deeply with the material.3,4
  • At the same time, adults with ADHD report using chatbots as “cognitive collaborators” to structure tasks, externalise memory, and actually get unstuck instead of just feeling guilty about it.8

And then there’s automation bias: once something sounds confident and fluent, we’re tempted to trust it more than we should – even when we know better.4

This is the quiet version of “AI takeover” we don’t like to talk about: not robots in boardrooms, but the gentle pressure to just accept the suggestion, send the draft, follow the recommendation. One tiny steering correction at a time.

So yes, AI can be an imaginary friend for grown-ups. But unlike the childhood version, this one can be biased, commercially motivated, or simply wrong.5 And it remembers more about us than some managers ever will.


From assistants to collaborators

What really interests me is the shift from assistant (“write this email”) to collaborator (“help me think”).

We’re seeing tools that try to act less like apps and more like persistent, context-aware minds around you:

  • Personal AI positions itself as a “second brain” that learns from your own data and helps you remember, remix, and extend ideas over time.6
  • NexStrat AI leans into the “strategy consultant” role – ingesting internal and external data to propose structured options and scenarios.7

These aren’t just smarter keyboards. They’re edging into imaginary colleague territory.

My own agents are drifting there too:

  • “Challenge this idea like a CFO.”
  • “Play the unfriendly stakeholder and poke holes.”
  • “Translate this into legal language, but keep my intent (and my spine).”

At that point, the line between imaginary friend, inner voice, and AI co-pilot gets thin. The mechanism is the same: we think better by thinking with someone – or something – else.

The uncomfortable part is this: the more we outsource, the more our inner voice starts to sound like whatever system we’ve put at the centre of the table.

I still don’t have a childhood story of an imaginary friend. But I do have this very 2025 scene:

Me, in a quiet room, surrounded by invisible advisors that speak in different fonts and temperatures. Tabs instead of colleagues. Agents instead of ghosts.

So maybe the question isn’t “Will AI take over?” at all. It’s closer to:

How much of the steering wheel have we already handed over – and are we okay with who’s quietly holding it?

The rest is just interface design.