Navigating the Cultural Shift: Integrating Generative AI in Large Established Companies

Recently I had one of those conversations that pulls you out of the tech bubble and smacks you with a very simple question: who exactly are we trying to help with generative AI? That question kept coming up again and again, because the tech is flashy and fast, but companies are big, slow, and full of people who did not grow up thinking in model layers and prompt tokens. If you want to integrate generative AI into a large established company, the technical stuff is the easy part; the human stuff is where you win or lose.

Break the bubble, show real examples

Everybody knows ChatGPT, but most people do not care about model architecture. They care about the task in front of them. As Ben put it, the first question he asks when asked "how can we use generative AI to help us?" is, "what are your tasks? I don't know what your tasks are." That sums it up, painfully and perfectly. You cannot hand a general-purpose model to someone and expect them to translate it into real impact. You have to show them, with people like them, solving problems like theirs.

So make showcases, not lectures. Take an employee who actually needed help, record a short demo or a small video, and publish it internally. People will say, oh, that person in my department is using a chatbot on requirements docs, maybe I can too. That internal translation of problems into concrete solutions is what sparks adoption. It is inspiration, not evangelism.

Train, but start from the problem

Training matters, yes. But training without context is like teaching someone to drive in a parking lot and expecting them to race in the Grand Prix. Offer training that is tied to the use case, not abstract theory. Ben's team integrated training into the company learning platform and nudged employees with recommendations, so someone working on requirements could see a relevant course and then try a hands-on demo. Give people both the "how" and the "why", and let them test it on their own work.

Top-down and bottom-up, both are required

Cultural change does not happen from one direction. You need a board that says this is important, and you need grassroots communities where developers and back office staff get to experiment. Ben described this as a two-pronged approach, with the added trick of getting senior leaders visibly using the tools. If "the boss of the boss of the boss is using this technology, then maybe it's okay if I use it too," and policy shifts from forbidden fantasy into sanctioned practice. That kind of legitimization matters more than memos.

Building trust, not just hype

Trust is the quiet problem. Companies remember the times data leaked into public models, and people who tried early chatbots and got bad answers remember being burned. “How do you have them regain trust in a technology they’ve used and maybe got bad results from?” Ben asked. It is a real question. Trust breaks fast and rebuilds slowly, and you have to do something concrete to rebuild it.

Practical steps that work include:

  • Create cross-functional teams with legal, infosec, cloud, and product, so you are not building in a vacuum.
  • Build internal, compliant instances or vetted vendor solutions so people can stop using external tools under the table.
  • Pair guidelines about what data is allowed with quick, focused trainings on how to use the tool well.

If access to a sanctioned platform is paired with clear rules and examples of good outcomes, people are more likely to try the tech again in a controlled way. Curiosity is what you want, not FOMO. Curiosity produces experiments, experiments produce learnings, learnings produce confidence.

Sometimes don’t use GenAI, and that is fine

Generative AI is not the answer for every problem. Two years ago it was wildly common to throw a big Excel file at a language model and ask it to summarize columns, while a perfectly reliable Excel formula would do the job with absolute determinism. Ben said that if you can achieve the result with an existing, tested tool, you should use that tool. Today the lines are murkier because models can call tools and write deterministic functions, but the principle remains: if you need consistent, tested, deterministic output, choose reliability over novelty. Use generative AI as a copilot, not as a sledgehammer for every nail.
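The tool-over-model principle can be sketched in a few lines of code. Everything below is illustrative, not any real product's API: the idea is simply that a task with an exact answer, like summing a spreadsheet column, gets routed to tested, deterministic code, and only genuinely open-ended work would ever reach a model.

```python
# Minimal sketch: prefer a deterministic, tested function over an LLM call
# whenever the task has an exact answer. All names here are hypothetical.
import csv
import io


def column_total(csv_text: str, column: str) -> float:
    """Deterministic equivalent of an Excel SUM over one column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row[column]) for row in reader)


def answer(task: str, csv_text: str, column: str = "") -> str:
    # Route exact, rule-based tasks to tested code; an open-ended task
    # (e.g. "summarize this free text") is where a model would come in.
    if task == "sum_column":
        return f"{column_total(csv_text, column):.2f}"
    raise NotImplementedError("open-ended tasks would go to the model here")


data = "item,amount\na,10\nb,2.5\nc,7.5\n"
print(answer("sum_column", data, column="amount"))  # prints 20.00
```

The output is the same every single run, which is exactly the property you give up when you ask a model to do arithmetic over a spreadsheet.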

Scale technology, scale adoption

Scaling infrastructure is an ops problem. Scaling adoption is a people problem. You can spin up 500 servers with a click; you cannot make 5,000 employees adopt a new workflow with a click. The path looks like this, in practice:

  1. Build or select secure, scalable infrastructure.
  2. Create easy-to-use interfaces embedded in existing workflows, like Teams or Excel.
  3. Publish internal guidelines and quick trainings that reduce the fear of getting it wrong.
  4. Showcase solved problems, then offer pathways from prototype to product with clear support and funding.

Integration into existing tools is the multiplier. When knowledge retrieval and document chat live inside the systems people already use, adoption explodes. That is not magic, it is UX plus trust.

Getting comfortable with the future, without panicking

If you wait for perfect models or perfect regulation you will miss the gains available now through pragmatic integration. If you rush without governance you will create a shadow economy inside your company where everyone uses unvetted tools in secret. Neither extreme works.

So my stance is simple, pragmatic, and a bit stubborn. Do not treat generative AI as an optional shiny thing or a forbidden fruit; treat it as an engineering and cultural program. Bring the board on board, create real internal use cases, build safe infrastructure, give people clear rules and practical training, and show them peers using it successfully. That combination builds curiosity and trust, and once you have those, adoption will follow.

At the end of the day, the question is not "can generative AI do it?" but "should we let it do it for us, and how do we make that decision responsibly?" If your company can answer that with a clear problem, a safe path, and a real human on the other end to validate outcomes, you are already ahead of most organizations.

Check the full podcast


