I love shiny new tools, but I also love things that actually solve problems. Lately that tension keeps surfacing in conversations about generative AI, because everyone assumes the answer to every problem is to throw an LLM at it. Spoiler: it is not always the answer. The real question I keep asking, the one I pushed on during a long chat with Ben Hoyle, is simple, blunt, and oddly underused: "what are your tasks?" If you cannot answer that, nothing else matters.
This is where most people trip up. We saw teams paste giant Excel sheets into chat windows and ask for analysis. Ben put it bluntly, “If you can do a task in Excel using Excel functions, then jumping to generative AI to solve the same task for you is actually a bad use of that technology.” He was exactly right.
Generative AI is not a sledgehammer for every problem. Sometimes it is the right hammer for the nail in front of you, and sometimes the job calls for a precision screwdriver.
There are clear, practical cases where GenAI wins, and wins hard.
Generative AI moves the needle most when it reduces friction between the human question and the answer, especially when the answer requires synthesizing many sources or translating between vocabularies.
Here is the litmus test I now use when someone asks whether we should use GenAI for something: could a deterministic tool, like Excel or a short script, do the task reliably? If yes, use that. If the task requires synthesizing many sources or translating between vocabularies, GenAI earns its place.
These are not theoretical exercises. Ben and his teams ran internal data-hackathons to surface real problems and then pick the best technical approach. That sequence works, because it starts with the problem, not the tool.
Adoption is political engineering as much as technology. Two things matter more than clever models.
First, top-level sanction. If a CEO or a senior leader demonstrates an internal GenAI tool, people stop treating it as forbidden, secret, or a black-market hack. Ben's advice was direct: get the highest-ranking member to use the system, show a live demo, and the rest will feel legitimate doing the same.
Second, grassroots access and training. Showcase internal examples where someone in a similar role used the tool successfully. That closes the translation gap between “knowledge retrieval” as a concept and “this is how it helps me write a spec.”
And yes, you need guidelines, not just hype. Employees want to know what data they can safely put into a model and how to phrase requests to get useful outputs. Prompt craft used to be everything; it matters less now, but teaching people to use models as sparring partners is essential.
A frequent objection is randomness, and for most users it is valid. Models appear non-deterministic because of changing seeds, different servers, evolving models, and context drift. Ben pointed out that you can host a model locally, fix the seed, and get deterministic behavior. For most businesses, that means determinism requires infrastructure and governance.
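The seed point is worth making concrete. A toy sketch of the principle, using a hand-rolled sampler rather than any real model (the token weights and the sampler are invented for illustration): with the same seed and the same inputs, even temperature-based sampling is fully reproducible.

```python
import random

def sample_tokens(weights_per_step, temperature, seed):
    """Toy sampler over per-step token weights.

    Illustrates the determinism point: the same seed plus the same
    inputs yields an identical output sequence, even with temperature
    sampling in the loop.
    """
    rng = random.Random(seed)  # local RNG, isolated from global state
    tokens = []
    for weights in weights_per_step:
        # Weighted choice over candidate tokens, sharpened by temperature
        scaled = [w ** (1.0 / temperature) for w in weights.values()]
        tokens.append(rng.choices(list(weights), weights=scaled, k=1)[0])
    return tokens

# Three decoding steps, each with made-up candidate tokens and weights
steps = [{"the": 5.0, "a": 3.0},
         {"cat": 4.0, "dog": 4.0},
         {"sat": 6.0, "ran": 2.0}]
run1 = sample_tokens(steps, temperature=0.8, seed=42)
run2 = sample_tokens(steps, temperature=0.8, seed=42)
assert run1 == run2  # fixed seed makes the sampling reproducible
```

Real serving stacks add complications the toy ignores, such as model updates and hardware differences across servers, which is exactly why hosted APIs drift even when you pin a seed.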
A practical compromise is to design hybrid workflows, where the LLM does the creative or synthesizing work, and deterministic code handles exact calculations and compliance.
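A minimal sketch of that split, where `llm_summarize` is a hypothetical stand-in for a real model call: the LLM turns free text into structured line items, and plain, auditable code does the arithmetic.

```python
def llm_summarize(invoice_text):
    """Hypothetical stand-in for an LLM call.

    In a real workflow this would prompt a model to extract structured
    line items from free text; here we return a canned result so the
    deterministic half of the pipeline is the part under test.
    """
    return [
        {"item": "consulting", "hours": 10, "rate": 150.0},
        {"item": "travel", "hours": 2, "rate": 80.0},
    ]

def compute_total(line_items):
    # Deterministic, auditable arithmetic stays in ordinary code,
    # never inside the model.
    return sum(item["hours"] * item["rate"] for item in line_items)

items = llm_summarize("...free-text invoice...")
total = compute_total(items)  # 10*150 + 2*80 = 1660.0
```

The design choice is that anything you would need to defend in an audit, totals, compliance checks, exact figures, runs as code you can test, while the model only handles the fuzzy translation step.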
Generative AI is incredibly capable, but capability is not a mandate for universal use. The right move is less about choosing AI over traditional tools in absolute terms, and more about choosing the right combination for the job. Start from the problem, not the sparkly model. Use GenAI where it reduces friction, enhances synthesis, or acts as an orchestrator. Use Excel and tried and tested functions where you need precision and auditability. Build trust from the top, give people safe ways to explore, and let the technology slot into workflows, not replace them by decree.
If you do that, you get the upside without the chaos. And honestly, that sounds like a good day at work.