Beyond the Hype: How to Identify When and How Generative AI Should Be Used in Business Processes

There is a lot of noise about generative AI right now, so much that CEOs either panic, worship it, or spend their calendar flip-flopping between the two. I’ve been thinking about a more useful question, one that actually matters for running a business: when should you use generative AI, and how should you bring it into real-world workflows without turning your company into a black market of copied-and-pasted prompts?

Start with the problem, not the model

One of the simplest but most overlooked pieces of advice is to ask the obvious, painful question first: what are your tasks? As Ben put it, when teams ask how they can use generative AI, his first question is, “what are your tasks? I don’t know what your tasks are.” That cuts straight to the heart of the matter. Too many people carry a hammer and then look for nails. The smarter move is to start in the problem space, dig into the real pain points, and then ask whether AI is the right tool.

If a problem can be solved reliably with an existing deterministic tool, don’t replace it just because AI is trendy. Ben’s example is perfect: “if you can do a task in Excel using Excel functions, then jumping to generative AI to solve the same task for you is actually a bad use of that technology.” Use the right tool for the job, not the loudest one.

Show, don’t just tell, then teach

Culture matters as much as capability. Large organizations need both top-down endorsement and grassroots adoption. Ben describes a two-pronged approach: get the board aligned, and simultaneously empower developers, analysts, and back-office staff to experiment. He’s found a fast track is to get an influential leader to use the technology publicly, because “if the boss of the boss of the boss is using this technology, then maybe it’s okay if I use it too.” That’s not manipulation; it’s practical onboarding.

But don’t stop at demos. People need hands-on exposure and training. Build showcases that map solutions to concrete problems, and publish them internally so coworkers can say, “oh, someone in my department used it for that, I could try it too.” Training should cover two things: how to use the tools well, and what you are allowed to put into them, because governance without clarity just invites a shadow economy of risky usage.

Make governance practical, not scary

Companies often swing between laissez-faire and a blanket ban when trying to manage generative AI. Neither stance works. The sweet spot is a secure internal platform combined with clear guidelines and training on data handling. Ben’s team had to create a cross-functional effort covering information security, cloud infrastructure, legal, and compliance to give employees safe access to generative AI. That’s the operational baseline, not a luxury.

If your teams are using public chatbots to discuss sensitive IP, you have a cultural problem and a compliance problem. The fix is not policing alone; it’s providing an approved alternative and training people to use it. That levels the playing field and deprives the black market of its supply.

Use LLMs as copilots and orchestrators

Generative models are not magic oracles; they are powerful copilots. A modern best practice is to let the model orchestrate when to call deterministic tools, APIs, or custom functions. Ben calls this using LLMs as the “brains at decision nodes,” orchestrating which tool to call and when. That approach combines the strengths of language models with the reliability of deterministic systems.
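To make the “brains at decision nodes” idea concrete, here is a minimal sketch of the pattern: the model only decides which deterministic tool to run, and the tools themselves remain plain, testable code. The function and tool names are illustrative, and `choose_tool` is a keyword stub standing in for a real model call (for example, an LLM with function calling) so the example runs on its own.

```python
# Sketch of LLM-as-orchestrator: the model picks a tool, deterministic
# code does the actual work. Names here are illustrative assumptions.

def sum_column(rows, key):
    """Deterministic tool: plain arithmetic, no model involved."""
    return sum(row[key] for row in rows)

def count_rows(rows):
    """Deterministic tool: row count."""
    return len(rows)

TOOLS = {"sum_column": sum_column, "count_rows": count_rows}

def choose_tool(request: str) -> str:
    """Stand-in for the LLM decision node. A real system would send the
    request plus tool descriptions to the model and parse its choice."""
    if "total" in request or "sum" in request:
        return "sum_column"
    return "count_rows"

def orchestrate(request, rows, key=None):
    name = choose_tool(request)
    if name == "sum_column":
        return name, sum_column(rows, key)
    return name, count_rows(rows)

rows = [{"amount": 10}, {"amount": 32}]
print(orchestrate("What is the total amount?", rows, key="amount"))
# → ("sum_column", 42)
```

The design point is the division of labor: the model handles fuzzy intent, while anything that must be correct every time (arithmetic, lookups, API calls) lives in ordinary code the model merely selects.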

A good rule of thumb, adapted from Ben’s advice, is to ask the model itself whether it’s a fit for the task: describe the problem, then ask it to propose a solution and an implementation path. Often it will suggest a hybrid approach, such as writing a robust Python function or leveraging Excel with Copilot. If the model recommends Excel or a straightforward deterministic approach, take the hint.
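One way to operationalize that rule of thumb is a reusable “fit check” prompt template. The wording below is an assumption on my part, not a fixed recipe; the point is simply to ask the model, up front, whether a deterministic tool would do the job better.

```python
# Illustrative "fit check" prompt template. The exact wording is an
# assumption; adapt it to your organization's tasks and constraints.
FIT_CHECK_PROMPT = """\
Problem: {problem}
Constraints: {constraints}

Propose a solution and an implementation path. If a deterministic
tool (e.g. an Excel formula or a short Python function) would solve
this more reliably than a language model, say so and recommend it."""

print(FIT_CHECK_PROMPT.format(
    problem="Reconcile two monthly expense reports",
    constraints="Data stays on-premises; results must be auditable",
))
```

Sending a filled-in version of this to your approved internal model is a cheap first step before anyone commits to building an AI-shaped solution.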

Build a pipeline from hack to scale

Start small, show value, then scale. Internal data science hackathons, or datathons, are a low-friction way to surface problems, prototype solutions, and build momentum. Ben’s team uses these events to pair business owners, data architects, developers, and cloud engineers to rapidly test feasibility. If the idea works and people want it, move it into a controlled, scalable environment.

Scaling has two parts: infrastructure and adoption. Spinning up more servers is trivial; getting hundreds of employees to actually use a tool responsibly is not. That requires guidelines, continuous training, and showcasing early wins. Think of infrastructure as the plumbing and adoption as the water. You need both.

Regaining trust, not forcing it

Trust breaks fast and rebuilds slowly. If people have tried early chatbots and left thinking “this was useless,” you cannot force them to try again. Ben frames a useful mental model: how do you rebuild trust in a system that once failed you? The answer isn’t a memo; it’s patient exposure, improved experiences, and relevance. Spark curiosity rather than fear. Show people a specific task they actually care about, and then make it work.

A practical checklist for leaders

– Start with the problem, not the tech.
– Map solutions to specific employee tasks; create internal showcases.
– Provide secure, approved platforms and clear data guidelines.
– Build hands-on training and lightweight governance together.
– Encourage execs to demonstrate responsible use.
– Use LLMs as orchestrators; call deterministic tools where reliability matters.
– Run internal datathons to prototype and build momentum.
– Measure value, then scale infrastructure and adoption in parallel.

We are past “is it useful?” and into “how do we do it sensibly?” The answer is less about chasing every flashy use case and more about translating real problems into practical experiments, securing the right infrastructure, and creating a culture that lets people experiment without fear. Do that, and the hype quietly becomes productivity. Or at least less annoying.

Check out the full podcast.
