Recently, I had one of those conversations that refuses to leave your head. I was talking with Ben Hoyle about AI, and somewhere between beach volleyball anecdotes and cosmology detours we ended up on the practical question every leader pretends to have a clear answer for, but usually does not: how do we make AI actually useful for people, not just interesting for engineers and panic-inducing for legal teams?
The short version is this: people will use the tools they trust, and trust is a fragile, easily broken thing. The slightly longer version is the one worth paying attention to.
Trust is the quiet bottleneck. Companies that started using public ChatGPT-style services early on discovered the hard way that “public” is not private. After a few data-leak stories went viral, many organizations slammed the brakes. Ben put it bluntly: companies read about those breaches and now think “this information will leak out somewhere.” That loss of trust led to blanket bans, secret workarounds, and the delightful mess of a company-wide black market for unapproved tools.
Fixing this starts with two moves, one from the top and one from the bottom. You need board-level buy-in that says it’s acceptable to explore and adopt AI, and you need grassroots training and showcases so people can see relevant examples. As Ben says, show them someone in a similar role using a chatbot against their own requirements docs, and suddenly the idea goes from abstract to practical. My takeaway: CEO approval makes AI legitimate, and real users make it tangible.
There was a time when every puzzle looked like a nail if you had a sledgehammer. Two years ago, throwing an Excel file at a large language model to generate a deterministic report was a bad idea: Excel already does deterministic aggregations, and its functions are well tested. Ben’s advice then, still relevant now, was simple: don’t replace a well-understood, reliable tool with a black box just because it’s shiny.
That said, the tech has changed. Today we can ask a model, “is this problem a good fit for you?” and get a meaningful answer. Models can call tools, run Python, and be embedded within existing software like Excel or Teams. So instead of replacing the deterministic parts, use AI as an orchestrator: the brain that decides when to call a deterministic function and when to apply creativity. It’s a practical division of labor, and it makes sense for real workflows.
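To make that division of labor concrete, here is a minimal sketch. It assumes a hypothetical `ask_model` helper standing in for whatever LLM client you actually use; the point is the routing, not the plumbing. The arithmetic stays in deterministic, testable Python, and only the narrative framing is delegated to the model.

```python
# Minimal orchestrator sketch: deterministic work stays in well-tested code,
# and the model is consulted only for the open-ended, creative part.
from statistics import mean

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; swap in your provider's client.
    return f"(model response to: {prompt!r})"

def quarterly_report(revenues: list[float]) -> str:
    # Deterministic aggregation: reproducible, no model involved.
    total, average = sum(revenues), mean(revenues)
    # Creative work delegated to the model: narrative framing of known numbers.
    summary = ask_model(
        f"Write a two-sentence summary of a quarter with total revenue "
        f"{total:.2f} and average monthly revenue {average:.2f}."
    )
    return f"Total: {total:.2f}\nAverage: {average:.2f}\n{summary}"

print(quarterly_report([120_000.0, 135_500.0, 128_250.0]))
```

The design choice matters: if the numbers ever disagree with Excel, you debug ordinary code, not a prompt.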
The biggest gains won’t come from isolated demos; they’ll come from integration. Think of Copilot embedded in Teams, not as a novelty, but as an interface to everything your company already knows. Once your documents, knowledge bases, and processes are accessible through your standard workflow, AI becomes a productivity multiplier rather than a curiosity.
Ben used a striking image: Alexa with the brains of a large language model, performing your entire travel routine without you touching a screen. It’s not sci-fi; it’s an integration problem. Give the assistant the context it needs, the rights it’s granted, and the access to act on your behalf, and watch routine friction disappear. The hard part isn’t the compute; it’s making that experience trustworthy and seamless.
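One way to picture the “rights it’s granted” part is an explicit grant list: the assistant can invoke only the actions it was given, and everything outside that boundary is refused by construction. A toy sketch, with all action names purely illustrative:

```python
# Toy sketch of scoped permissions: the grant list is the trust boundary.
from typing import Callable

def book_flight(destination: str) -> str:
    return f"Flight booked to {destination}."

def book_hotel(city: str) -> str:
    return f"Hotel booked in {city}."

def wire_money(amount: float) -> str:
    return f"Wired {amount:.2f}."

class Assistant:
    def __init__(self, granted: dict[str, Callable[..., str]]):
        # Only actions in this dict are ever callable by the assistant.
        self.granted = granted

    def act(self, action: str, *args) -> str:
        if action not in self.granted:
            return f"Refused: {action!r} was not granted to this assistant."
        return self.granted[action](*args)

# The travel routine gets travel rights only; payments stay out of scope.
travel = Assistant({"book_flight": book_flight, "book_hotel": book_hotel})
print(travel.act("book_flight", "Lisbon"))
print(travel.act("wire_money", 5000.0))  # refused by design
```

That refusal-by-default posture is most of what “trustworthy” means here.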
Changing a company’s mindset on AI is a marathon disguised as a sprint. Ben outlined a practical approach we’ve both seen work, combining top-down and bottom-up strategies. You need executive champions using the tech in public so employees know it’s sanctioned; then you need training, showcases, and “internal datathons” where interdisciplinary teams actually solve problems together.
Those datathons are gold. They move the conversation from “what can AI do?” to “what can we do with our data today?” They create traction, which is contagious. Engineers and domain experts collaborate, build prototypes, and then the organization decides which ideas scale. The result is not just a set of tools; it’s a culture that thinks in terms of problems and solutions, not tools and memes.
There are people who tried earlier versions of AI and concluded “it doesn’t work,” and others who are dizzy from the hype and assume everything is solved. Neither stance helps. What we need is curiosity, the kind that starts with a small, safe experiment. As Ben put it, people often stop exploring because earlier experiences were poor, and you rebuild that trust by letting them try again in a controlled, useful environment.
This is where good examples win, not fearmongering or corporate edicts. Show someone a tailored use case in their world, and curiosity will do the rest. If they see relevance, they will invest the time to learn the new interface, which is how adoption scales.
A final point that stuck with me, and one I agree with, is that humans will keep valuing human-made things. Even if AI can replicate or replace many functions, we will still pay for craftsmanship and human interaction, whether it’s a handcrafted table or a deeply human consultancy experience. That means people will have a choice: automate the routine tasks, and spend more time on work that requires empathy, taste, or oddly specific judgment.
So here’s the practical balance to aim for, not as a manifesto, just a working rule. Automate what can be reliably automated, integrate AI where it reduces friction, train and showcase until curiosity spreads, and protect the human spaces that matter most. If you do those four things, you get a future where AI does the repetitive work and people get to do the meaningful work they actually enjoy. That sounds worth building toward.