If your plan for scaling AI inside a company starts and ends with spinning up more GPUs, we need to talk. Scaling AI is half infrastructure, half anthropology, and the second half is the part most teams forget. You can make a system infinitely fast, but if people do not trust it, do not know how to use it, or do not see how it fits their day, nobody will use it. That is where the real work lives.
Early on in my chat with Ben Hoyle, he put it plainly, “what are your tasks? I don’t know what your tasks are.” That is the whole point. You cannot design adoption from an ivory tower. You need to bring the use cases to the people, and the people to the use cases.
Ben’s team tackled adoption by showcasing real, relatable wins. Instead of lecturing, they recorded employees solving their own problems, and published those as internal showcases. Seeing a peer in your department use a chatbot on engineering specs or accelerate a report is far more persuasive than a glossy vendor deck. Ben framed it well, “I’m the right person to show them what generative AI can do and then to open the door for them to think for themselves.”
This matters because inspiration opens the ideation funnel. Once someone sees a nearby colleague get more done, their mental toolbox expands. They start asking, “Could this help me?” That’s where you want them.
Large companies face a unique tension: you need control for privacy and compliance, but you also need to enable creativity. Ben described the common broken state, where everyone is using public tools under the table, which is both unsafe and corrosive for culture. The answer was a cross-functional effort spanning legal, security, cloud, and product to offer an internal, compliant alternative. Block risky external APIs, give employees a supported internal option, and document clear rules of engagement.
Concrete pieces that help:

- A supported internal platform employees can actually reach
- Blocked or proxied access to risky external APIs
- Documented, easy-to-find rules of engagement
- A standing cross-functional group (legal, security, cloud, product) to keep those rules current

Those sound boring, but they are the scaffolding that turns a handful of enthusiasts into a companywide practice.
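To make the "block external, route internal" idea concrete, here is a minimal sketch of an allowlist check a gateway might apply before forwarding AI requests. The endpoint names are hypothetical, and a real deployment would enforce this at the network or proxy layer rather than in application code:

```python
# Sketch: allow only the supported internal AI endpoint; refuse everything else.
# All hostnames here are hypothetical placeholders.
from urllib.parse import urlparse

APPROVED_HOSTS = {"llm.internal.example.com"}  # the supported internal option

def route_request(url: str) -> str:
    """Return the URL unchanged if its host is approved, else raise."""
    host = urlparse(url).hostname
    if host not in APPROVED_HOSTS:
        raise PermissionError(
            f"{host} is not an approved AI endpoint; use the internal service"
        )
    return url

# An approved internal call passes through; a public API call raises.
route_request("https://llm.internal.example.com/v1/chat")
```

The point of the sketch is the shape of the policy, not the mechanism: one documented allowlist, enforced in one place, beats a hundred informal warnings.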
Culture change is not binary; it's multilayered. Ben recommended a two-pronged approach: grassroots communities plus board-level sponsorship. The fastest lever was getting senior leaders visibly using the tools. Ben observed, "if the boss of the boss of the boss is using this technology, then maybe it's okay if I use it too." That legitimacy matters. Pair it with community-led hackathons, and you get both permission and execution.
That explains why internal datathons are such a powerful pattern. They are a way to convert business curiosities into testable projects, bring together data people and domain experts, and create a path from prototype to scalable product.
Training needs to be role-specific, continuous, and practical. Surprisingly, the prompt engineering craze inflated expectations and then faded. The real win is teaching people how to get reliable outputs and how to combine LLMs with deterministic tools. Ben captured this evolution perfectly: use LLMs as the brains to orchestrate deterministic functions, not as a sledgehammer for everything.
So train people to:

- Write prompts that produce reliable, verifiable outputs
- Recognize when an LLM is the wrong tool and a deterministic function is the right one
- Combine the two, letting the LLM orchestrate while deterministic code does the work
A practical example is embedding Copilot inside Office, so employees get the power of LLMs inside the familiar flow, rather than forcing them to learn a new app.
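The "LLM as brains, deterministic functions as hands" pattern can be sketched in a few lines. This is a toy illustration, not any particular vendor's API: the model's output is hard-coded here so the example stands alone, and `lookup_order_status` is a hypothetical business function:

```python
# Sketch: the LLM proposes a tool call as JSON; deterministic code executes it.
import json

def lookup_order_status(order_id: str) -> str:
    # Deterministic business function (hypothetical data)
    return {"A123": "shipped"}.get(order_id, "unknown")

TOOLS = {"lookup_order_status": lookup_order_status}

def dispatch(model_output: str) -> str:
    """Parse the model's JSON tool call and run the matching function."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]  # fail loudly on unknown tools
    return fn(**call["arguments"])

# In practice this JSON would come from the model; here it is hard-coded.
result = dispatch('{"tool": "lookup_order_status", "arguments": {"order_id": "A123"}}')
```

The division of labor is the lesson: the model decides *which* tool to call, and auditable deterministic code decides *what actually happens*.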
Trust is the invisible currency in AI adoption. Ben asked the essential question, "how do you build trust in a system?" Many teams are still recovering from data leaks and disappointing first-generation chatbots. The fix is not spin, it is rigor. Demonstrate secure architectures, show reproducible results, and provide easy ways for people to experiment without risking sensitive data.
He used an apt analogy, cars got reliable over decades, and trust got rebuilt. We do not have decades, so we need to accelerate trust regeneration through governance, transparent testing, and repeated small wins.
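One concrete way to accelerate that trust regeneration is transparent, repeatable testing. Here is a minimal sketch of a golden-answer regression check; the questions, answers, and `model_answer` stand-in are all hypothetical placeholders for a real model call:

```python
# Sketch: compare model answers against an approved "golden" set,
# so every release produces a transparent, repeatable report.
GOLDEN = {
    # Hypothetical approved question/answer pairs
    "What year did the company adopt SSO?": "2019",
}

def model_answer(question: str) -> str:
    return "2019"  # stand-in; a real system would call the model here

def regression_report(golden: dict) -> dict:
    """Map each question to True/False: did the model match the golden answer?"""
    return {q: model_answer(q) == expected for q, expected in golden.items()}
```

Publishing a report like this after every change is exactly the kind of repeated small win that rebuilds trust faster than any announcement.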
Yes, you must solve the infrastructure problem. Make models available, autoscale, manage costs, and integrate with identity and data stores. But the harder, slower, human problem is adoption. Ben put it neatly: automating servers can be five clicks. Getting people to actually use the tool is a cultural process.
Practical checklist for balanced scaling:

- Make models available, autoscale, and manage costs
- Integrate with identity and data stores
- Publish internal showcases of real employee wins
- Secure visible executive sponsorship alongside grassroots communities
- Run hackathons and datathons that pair data people with domain experts
- Deliver role-specific, continuous, practical training
- Build trust through governance, transparent testing, and repeated small wins
Ben made an interesting point about assistants, and I agree. For these systems to be useful we must be willing to delegate authority, not just queries. The assistant needs rights, and we need to design safe delegation frameworks. That is both technical and ethical design.
If you focus on the people, the processes, and the trust, infrastructure follows. Build the platform, yes, but spend equal time on the translation between tech and teams. Otherwise you will have the fastest, most brilliant engine in history sitting in a garage, doing nothing.