Recently I had a long conversation with Ben Hoyle about a topic that keeps CEOs up at night, wakes compliance teams up sweating, and makes curious engineers grin: trust in AI. The headline problem is simple, and messy. Trust gets broken faster than anyone can write a policy. Older companies carry scars from data leaks and unclear practices, while newer companies often treat AI like a shiny toy. Both reactions are understandable, but neither helps anyone build something that actually works.
Ben put it succinctly: “how do you build trust in a system?” That question is more than philosophical; it’s operational. Established companies remember incidents where sensitive data ended up in public training sets, and that memory has hardened into skepticism. Ben described the situation as companies knowing “it’s happening” but not being able to acknowledge it, creating what he called a “black market” use of tools. People were copying and pasting sensitive content into public chatbots because they needed to get work done, and legal, security, and leadership were left playing catch-up.
Contrast that with a startup that springs up today. They look at the available enterprise-grade plans and the data protection promised by large providers, and see no obvious reason not to use generative models. They lack the history of breaches, so their risk perception is different. For legacy firms, the challenge becomes getting back to a state where employees, and leaders, feel safe using AI in productive ways.
There’s no silver bullet, but there are practical steps that actually move the needle. Ben and I talked through what worked at a large, 45,000-person company, and the recipe is simple in theory, messy in practice.
Early in the hype cycle, people shoved everything at large models. Ben was blunt: some tasks were simply better solved with existing, mature tools. If you can perform a deterministic calculation in Excel with well-tested functions, don’t reinvent that with a language model. “Don’t use it to solve problems which you can already solve with existing technologies,” Ben advised. Today the lines are blurrier because models can orchestrate tools, call Python functions, and integrate with existing stacks, but the principle remains: choose the right tool for the job.
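To make that principle concrete, here is a minimal sketch of what “the model orchestrates, the tool calculates” can look like. Everything here is illustrative, not from any particular framework: the `compound_interest` function, the `TOOLS` registry, and the JSON tool-call shape are all hypothetical stand-ins for whatever your stack uses. The point is that the arithmetic stays in deterministic, testable code, and the model only decides which tool to invoke with which arguments.

```python
import json

# Hypothetical deterministic "tool": a plain function doing the math a
# spreadsheet would do, instead of asking a language model to compute it.
def compound_interest(principal: float, rate: float, years: int) -> float:
    """Return the final balance after compounding annually."""
    return principal * (1 + rate) ** years

# Minimal registry mapping tool names to functions the model may call.
TOOLS = {"compound_interest": compound_interest}

def dispatch(tool_call_json: str) -> float:
    """Route a model-emitted tool call (JSON with 'name' and 'args')
    to the matching deterministic function."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["args"])

# A tool call as a model might emit it (the exact shape is illustrative).
result = dispatch(
    '{"name": "compound_interest",'
    ' "args": {"principal": 1000, "rate": 0.05, "years": 10}}'
)
print(round(result, 2))  # 1628.89
```

The design choice worth noticing: the model never touches the numbers. Whether the answer is right depends on a ten-line function you can unit-test, which is exactly the kind of trust boundary Ben is arguing for.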
One of the most interesting behavioral problems isn’t risk aversion; it’s the opposite, FOMO. People who tried early versions and got garbage outputs often dismissed the whole field. Then analysts and vendors shout about breakthroughs, and those same people write it off as marketing noise. How do you get them curious again? Not scared, not panicked, but curious enough to try the updated tools. Ben framed this as regaining trust, not convincing people to follow hype. “Maybe we need to invest a bit of time thinking, how do we spark curiosity in people?” That resonated. Curiosity is a gentler lever than fear of missing out.
If your company can combine top-down signals with bottom-up experimentation, create safe internal channels for use, and actually show employees useful wins that map to real tasks, trust will grow. It takes patience and iteration, because trust is not a policy; it’s a habit. Ben compared it to cars: people regained trust in new car technology over decades. We don’t have decades to wait, but we do have playbooks and infrastructure teams and, if used right, a fast path to responsible adoption.
So if you’re running the show, don’t try to solve this with one decree. Build some guardrails, fund a few small experiments, invite the skeptics to watch the demos, and most importantly, invest in curiosity. The future where AI helps people do more meaningful work is already within reach, and it starts with a company willing to replace whispered hacks with transparent, sanctioned, and useful practice.