The central question
If your plan for scaling AI inside a company starts and ends with spinning up more GPUs, we need to talk. Scaling AI is half infrastructure, half anthropology, and the second half is the part most teams forget. You can make a system infinitely fast, but if people do not trust it, do not know how to use it, or do not see how it fits their day, nobody will use it. That is where the real work lives.
Use cases start with real work
Ben put it plainly: “I don’t know what your tasks are.” That is the whole point. You cannot design adoption from an ivory tower. You need to bring the use cases to the people, and the people to the use cases.
Start with real examples, not hype
Ben’s team tackled adoption by showcasing real, relatable wins. Instead of lecturing, they recorded employees solving their own problems and published those recordings as internal showcases. Seeing a peer in your department use a chatbot on engineering specs or accelerate a report is far more persuasive than a glossy vendor deck. Ben framed it well: “I’m the right person to show them what generative AI can do and then to open the door for them to think for themselves.”
Why examples work better than hype
This matters because inspiration opens the ideation funnel. Once someone sees a nearby colleague get more done, their mental toolbox expands. They start asking, “Could this help me?” That’s where you want them.
Governance, yes, but user-friendly
Large companies face a unique tension: you need control for privacy and compliance, but you also need to enable creativity. Ben described the common broken state, where everyone is using public tools under the table, which is both unsafe and toxic for culture. The answer was a cross-functional effort spanning legal, security, cloud, and product to offer an internal, compliant alternative: block risky external APIs, give employees a supported internal option, and document clear rules of engagement.
What makes governance usable
Concrete pieces that help:
- Clear guidelines on what kind of data can be shared with which tools.
- Training pathways that are recommended based on role, not a generic checklist.
- Internal catalogs of showcases and tools so people see relevant examples.
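The first building block, rules about which data may go to which tool, is the one that most benefits from being enforced in code rather than in a PDF. A minimal sketch of such a guard is below; the classification levels and tool names are invented for illustration, not taken from Ben's setup.

```python
# Hypothetical sketch: enforcing "what data can go to which tool" in code.
# Classification levels and tool names are invented for illustration.

ALLOWED = {
    "public": {"external_chatbot", "internal_assistant"},
    "internal": {"internal_assistant"},
    "confidential": set(),  # never leaves governed systems
}

def may_share(classification: str, tool: str) -> bool:
    """Return True if data of this classification may be sent to the tool."""
    return tool in ALLOWED.get(classification, set())

print(may_share("public", "external_chatbot"))        # True
print(may_share("confidential", "internal_assistant"))  # False
```

A lookup table like this is deliberately boring: the point is that the rule is explicit, testable, and lives where requests actually flow, not only in a guidelines document.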
Scaffolding makes adoption repeatable
Those sound boring, but they are the scaffolding that turns a handful of enthusiasts into a companywide practice.
Top down and bottom up, both at once
Culture change is not binary, it is multilayered. Ben recommended a two-pronged approach: grassroots communities plus board-level sponsorship. The fastest lever was getting senior leaders visibly using the tools. Ben observed, “if the boss of the boss of the boss is using this technology, then maybe it’s okay if I use it too.” That legitimacy matters. Pair it with community-led hackathons, and you get both permission and execution.
Why datathons help
That explains why internal datathons are such a powerful pattern. They are a way to convert business curiosities into testable projects, bring together data people and domain experts, and create a path from prototype to scalable product.
Training is not optional, it is strategic
Training needs to be role specific, continuous, and practical. The prompt engineering craze inflated expectations and then faded. The real win is teaching people how to get reliable outputs, and how to combine LLMs with deterministic tools. Ben captured this evolution perfectly: use LLMs as the brains to orchestrate deterministic functions, not as a sledgehammer for everything.
What training should cover
So train people to:
- Define the task clearly, inputs and outputs.
- Use LLMs to generate code or orchestrate tools, while relying on deterministic systems for final, audited results.
- Validate outputs, and understand where the model is likely to hallucinate.
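The second bullet, using the LLM to orchestrate while deterministic code produces the audited result, can be sketched in a few lines. Everything here is illustrative: `fake_llm_plan` stands in for a real model call that would emit a structured tool invocation.

```python
# Hypothetical sketch: the LLM proposes *which* deterministic function to run;
# the audited answer comes from plain code, never from model free text.

def net_price(gross: float, vat_rate: float) -> float:
    """Deterministic, auditable business logic."""
    return round(gross / (1 + vat_rate), 2)

TOOLS = {"net_price": net_price}

def fake_llm_plan(question: str) -> dict:
    # A real system would prompt the model to emit this structured call.
    return {"tool": "net_price", "args": {"gross": 119.0, "vat_rate": 0.19}}

plan = fake_llm_plan("What is the net price of a 119 EUR invoice at 19% VAT?")
result = TOOLS[plan["tool"]](**plan["args"])
print(result)  # 100.0
```

The design choice worth teaching is the split itself: the model may choose the tool and the arguments, but the number that lands in a report is computed by code you can test and audit.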
Bring the tools into existing work
A practical example is embedding Copilot inside Office, so employees get the power of LLMs inside the familiar flow, rather than forcing them to learn a new app.
Rebuilding trust, slowly and deliberately
Trust is the invisible currency in AI adoption. Ben asked the essential question: “how do you build trust in a system?” Many teams are still recovering from earlier data leaks and poor early chatbots. The fix is not spin, it is rigor. Demonstrate secure architectures, show reproducible results, and provide easy ways for people to experiment without risking sensitive data.
How trust gets rebuilt
He used an apt analogy: cars became reliable over decades, and trust was rebuilt along the way. We do not have decades, so we need to accelerate that rebuilding through governance, transparent testing, and repeated small wins.
Technical scaling and human scaling are different projects
Yes, you must solve the infrastructure problem: make models available, autoscale, manage costs, and integrate with identity and data stores. But the harder, slower, human problem is adoption. Ben put it neatly: automating servers can be five clicks; getting people to actually use the tool is a cultural process.
A balanced scaling checklist
- Build the secure, scalable platform with clear integration points.
- Create and publish role-based guidelines and trainings.
- Run internal showcases and datathons to surface relevant use cases.
- Empower grassroots communities and secure executive champions.
- Integrate tools into existing workflows, so people do not need to switch context.
- Treat LLMs as orchestrators that call deterministic services for critical operations.
- Measure adoption, but measure meaningful impact, not just clicks.
The subtle final piece, delegation
Ben made an interesting point about assistants, and I agree. For these systems to be useful we must be willing to delegate authority, not just queries. The assistant needs rights, and we need to design safe delegation frameworks. That is both technical and ethical design.
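One way to make “delegate authority, not just queries” concrete is to give the assistant an explicit, limited grant rather than the user's full permissions. The sketch below is a hypothetical illustration of that idea; the names, actions, and spend limit are all invented.

```python
# Hypothetical sketch of scoped delegation: the assistant acts under an
# explicit, limited grant instead of the user's full authority.

from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    assistant_id: str
    allowed_actions: frozenset
    spend_limit: float  # hard cap per action, in EUR

def authorize(grant: Delegation, action: str, cost: float = 0.0) -> bool:
    """Allow only actions inside the grant, and only within the spend limit."""
    return action in grant.allowed_actions and cost <= grant.spend_limit

grant = Delegation("travel-assistant",
                   frozenset({"book_flight", "book_hotel"}), 500.0)

print(authorize(grant, "book_flight", 320.0))    # True
print(authorize(grant, "transfer_funds", 10.0))  # False: outside the grant
```

The technical half of the design is the guard above; the ethical half is deciding which actions ever belong in a grant at all, and that decision stays with people.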
The practical point
If you focus on the people, the processes, and the trust, infrastructure follows. Build the platform, yes, but spend equal time on the translation between tech and teams. Otherwise you will have the fastest, most brilliant engine in history sitting in a garage, doing nothing.
