Trust is not binary; it's fragile, and it breaks faster than any rollout plan. One misplaced document in a public chatbot, and suddenly everyone in IT is rushing to block websites, while half the company quietly keeps using those same tools, because, well, work still needs doing. I've seen that exact pattern play out, and it comes down to one ugly truth: people will use the tools they think help them, whether IT approves or not. So we need a better plan than blocking websites and hoping the problem goes away.
When trust gets broken
A few years back, headlines about companies accidentally exposing secrets to public LLMs changed the conversation overnight. As Ben put it, "that trust was broken, when these data leaks occurred." For companies that had already experimented with chatbots, the fallout meant two things: board-level panic and a grassroots scramble. People who were already using these systems were suddenly told they couldn't, and those who had been skeptical felt justified in staying skeptical. The result was messy; in some companies it looked like "a black market kind of" usage, where nobody openly acknowledged the tools but everyone relied on them.
So how do you rebuild trust? Or better, how do you avoid burning it down in the first place and still let people actually get work done?
Start with real examples, not hype
You can show people slides about possibilities until the cows come home, but it rarely moves the needle. What works are tangible, relatable showcases: an employee, someone with a job similar to theirs, using an AI tool to solve a problem they actually have. We began making short case videos and internal showcases that let people see peers use AI on real tasks. It is simple human psychology: if someone on my team is using this and it's helping them, it's suddenly not theoretical anymore.
Pair that with practical training. People don't need a PhD in transformers to get value; they need guided hands-on experience and sensible guardrails that answer one question: what can I safely put into this system? So we built internal training sessions and a recommender inside the learning platform that nudges relevant employees toward appropriate sessions. That combination, inspiration plus training, starts to shrink the gap between curiosity and actual adoption.
Top-level sanction matters
Culture changes in two ways: from the top down and from the grassroots up. Both help, but nothing accelerates adoption like visible senior leadership using a tool. As Ben noted, if "the boss of the boss of the boss is using this technology, then maybe it's okay if I use it too." That demonstration does something policies rarely do: it legitimizes. When execs publicly use a controlled, compliant tool, it signals permission, and permission is a surprisingly scarce resource in big companies.
Build the right internal plumbing
For big organizations, the answer isn't to ban the web. The smarter move is to offer a safe, internal alternative and make it easy to use. We put together cross-functional teams, including information security, cloud, legal, and product, to design an internal GenAI platform that meets compliance needs while still being useful. When people are forced to improvise, copying proprietary docs into a free public chatbot, you get leaks. Give them a secure, well-documented option, and most will take it.
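To make "internal plumbing" concrete, here is a minimal sketch of what the safe alternative can look like at the code level: one approved endpoint, one audited entry point. The endpoint URL, request schema, and log fields are assumptions for illustration, not a description of any real platform.

```python
# Sketch of a company GenAI gateway client: every prompt goes through one
# approved, audited function instead of a random public chatbot.
# INTERNAL_LLM_URL and the request schema are hypothetical placeholders.
import json
import logging
import urllib.request

INTERNAL_LLM_URL = "https://genai.internal.example.com/v1/complete"
audit = logging.getLogger("genai.audit")

def complete(prompt: str, user_id: str) -> str:
    """Forward a prompt to the internal endpoint and leave an audit trail."""
    audit.info("user=%s prompt_chars=%d", user_id, len(prompt))
    request = urllib.request.Request(
        INTERNAL_LLM_URL,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["text"]
```

The point is less the code than the shape: a single sanctioned path that is easier to use than the workaround, so compliance becomes the default rather than a fight.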
Make guidelines practical, not philosophical
Guidelines shouldn't read like a legal novel. People want short, practical rules that answer the two biggest questions: what can I put into the tool, and how should I use it to get decent results? We paired data classifications with examples, and followed that with "how-to" mini-sessions on crafting prompts and checking outputs. Yes, prompt engineering is less mystical than it used to be, but explaining the common pitfalls still saves time and trust.
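As a taste of how short those rules can be, here is a sketch that encodes a four-level classification as code. The labels and the policy are illustrative assumptions, not an official standard:

```python
# "Guidelines as code": answer "what can I put into the tool?" before
# anything leaves the laptop. Classification labels are illustrative.
PUBLIC_TOOL_POLICY = {
    "public": True,         # press releases, published docs
    "internal": False,      # org charts, meeting notes
    "confidential": False,  # contracts, source code
    "restricted": False,    # personal data, trade secrets
}

def may_paste(classification: str, tool_is_internal: bool) -> bool:
    """Internal, compliant tools take everything except 'restricted';
    public tools take only 'public' material."""
    if tool_is_internal:
        return classification != "restricted"
    return PUBLIC_TOOL_POLICY.get(classification, False)

assert may_paste("internal", tool_is_internal=True)
assert not may_paste("confidential", tool_is_internal=False)
```

If your whole policy fits in a table like that, people will actually remember it.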
Regaining trust after bad experiences
Some people tried early versions of the tech and came away burned. They said it was useless and never tried again. Convincing those people to re-engage is tough, because their prior experience feels like proof they were right. So you need a patient approach: small wins, and a few new success stories that show the tech has evolved. Transparency helps too; explain what changed, why results are better now, and which safeguards are in place.
Determinism, reproducibility, and the user experience
One of the recurring worries is the perceived randomness of LLMs. Ben explained it bluntly: "they are deterministic because it is just a computer algorithm running on a computer." The catch, for users, is that unless you control the random seed, the model version, and the exact context, outputs can vary. For practical adoption, that means two things: when you need deterministic results, call deterministic tools; when you want flexible, idea-generating work, let LLMs roam a bit. A good pattern is to let the model orchestrate which tool to use, and hand off to deterministic processes when consistency matters.
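A minimal sketch of that routing pattern is below. Here, call_llm is a hypothetical stand-in for whatever chat completion client you use, the vat tool is a made-up example, and argument handling is deliberately simplified:

```python
# The model picks the tool; the tool itself is ordinary, fully
# reproducible Python. call_llm is a hypothetical provider client.
def vat(amount: float, rate: float = 0.19) -> float:
    """Deterministic: same input, same output, every single run."""
    return round(amount * rate, 2)

TOOLS = {"vat": vat}

def call_llm(prompt: str, temperature: float) -> str:
    raise NotImplementedError("plug in your provider's client here")

def answer(question: str, amount: float) -> str:
    # Temperature 0 (plus a pinned model version and, where the API
    # supports it, a fixed seed) narrows variance in the routing step.
    tool_name = call_llm(
        f"Pick one tool from {sorted(TOOLS)} for: {question}. Reply with the name only.",
        temperature=0.0,
    ).strip()
    if tool_name in TOOLS:
        # Hand off to deterministic code when consistency matters.
        return str(TOOLS[tool_name](amount))
    # Open-ended questions: let the model roam a bit.
    return call_llm(question, temperature=0.7)
```

The division of labor is the point: the model does the fuzzy matching, the plain function guarantees the same answer every time.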
The assistant promise, minus the privacy nightmare
Think of AI as an assistant, but an assistant needs rights to act. You have to decide how much access to give it, which systems it can touch, and what financial or operational authority it holds. As Ben put it, "you need to give the assistant the same rights that you have or the rights that you're willing to give an assistant to delegate certain tasks." That is the blunt reality: if you want the assistant to actually save time, you must build secure mechanisms for it to act on your behalf.
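One way to make that delegation explicit is to treat rights as an allow-list the assistant carries with it, checked before every action. A minimal sketch, with made-up scope names:

```python
# Explicit delegation: the assistant holds only the scopes you granted,
# and every action is checked against them. Scope names are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Assistant:
    owner: str
    scopes: set[str] = field(default_factory=set)

    def act(self, scope: str, action: Callable, *args):
        """Refuse anything outside the explicitly delegated scopes."""
        if scope not in self.scopes:
            raise PermissionError(f"{self.owner} never delegated '{scope}'")
        return action(*args)

# Grant narrow rights; keep payments and outbound mail to yourself.
bot = Assistant(owner="anna", scopes={"calendar.read", "calendar.write"})
bot.act("calendar.write", print, "booked: team sync, Friday 10:00")
# bot.act("payments.execute", pay_invoice, 5000)  # -> PermissionError
```

Starting narrow and widening scopes as trust grows is a lot easier to defend than handing over the keys on day one.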
Curiosity beats FOMO every time
There are three reactions among leaders: full throttle, blanket ban, and a terrified freeze somewhere in between. All are fear in different costumes. My preferred lever is curiosity, not urgency driven by FOMO. Spark curiosity, show real examples, give people safe tools, and then let them play. Real adoption follows from repeated useful interactions, not from memos that shout about transformation.
Where this gets exciting is simple integration: we already have the tech to embed AI into the tools people use daily. When Teams, IDEs, and CRM systems all have safe, context-aware assistants built in, adoption becomes invisible. People get better work done without moral panic about data.
If you lead a team, focus on three things: make it safe, make it useful, and make it visible. Get execs to model the behavior, let curious teams experiment, and keep the door open for people who tried and failed earlier. Building trust in AI is not about convincing skeptics that magic exists; it's about showing that the machine can be a useful, responsible coworker. That is worth doing, and worth doing carefully.