Recently, I kept returning to one simple thought: AI won’t just change what we do at work, it will change what we want to do. That might sound grandiose, but after a long conversation with someone deep in the trenches of enterprise AI, it’s the clearest thread running through every practical example and every cultural hurdle I heard about. The future of work will be about reassigning the boring, repetitive parts to machines, then deciding what humans will spend their freed-up time doing, and what new markets will spring up because humans prefer human-made, human-led experiences.
You can throw money at cloud infrastructure and scale servers in five clicks, but you can’t scale curiosity, or policy, or trust with the same ease. Big companies already have the pieces: security, legal, cloud teams. What they often lack is a way to bridge the gap between engineers and the rest of the workforce.
As I heard it put bluntly, “first of all, you need to convince your board of directors that this is a technology that you need to be ready for and we need to prepare the workforce for.” That top-down signal matters, but it’s only half the job. You also need the grassroots: the people on the floor experimenting and sharing success stories, so the message spreads sideways and actually lands. The sweet spot is both, board-level down and grassroots up, plus a few visible champions using the tech to normalize it. Once “if this person can use the tech, then for sure you can as well” becomes common thinking, adoption moves from secretive to sanctioned, and the company culture starts to adapt.
I kept hearing the phrase, “we’re in this bubble of, let’s say a little technology bubble,” which is useful because it reminds us that most people are not living inside the loop. They might have tried something years ago, gotten garbage results, and decided it’s useless. But the tech moves fast, and humans forget how quickly things can improve. We need to sow curiosity again, not push fear or FOMO. That’s where training, showcases, and internal success stories come in, not fear-driven mandates.
There’s a practical rule that still applies, despite the hype: “sometimes you shouldn’t do it.” A few years back, feeding a large Excel sheet to an LLM and expecting a perfect audit was a bad idea. Today, the lines are fuzzier because models can call deterministic tools, run Python functions, and integrate into existing workflows. So now the right approach is to ask the model itself, “can you solve this task for me?”, and let it guide whether the solution should be a simple Excel formula, a Python script, or a full-blown chatbot.
More interestingly, we’re moving from using LLMs as black box scribes to using them as orchestrators, “you’re using large language models as an orchestrator or as the brains of an orchestration software.” That shift is huge. Instead of letting the model do everything, you let it choose the right tool at the right time and hand off deterministic parts to systems that are predictable and auditable. It’s the best of human judgment, paired with machine scale.
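To make the orchestrator idea concrete, here is a minimal sketch in plain Python. The model's role is reduced to one decision, "which tool fits this task?", while the actual work runs in deterministic, auditable code. The `pick_tool` stub stands in for a real model call, and the tool names are illustrative assumptions, not any particular vendor's API.

```python
# LLM-as-orchestrator, sketched: the "brain" only routes, deterministic
# tools do the work. pick_tool is a stub for an actual model call.

def sum_column(rows, key):
    """Deterministic and auditable: plain Python does the arithmetic."""
    return sum(row[key] for row in rows)

def draft_email(summary):
    """A generative step a real system would hand back to the model; stubbed."""
    return f"Hi team, quick update: {summary}"

TOOLS = {"sum_column": sum_column, "draft_email": draft_email}

def pick_tool(task: str) -> str:
    # Stand-in for asking the model which tool fits; a real orchestrator
    # would send the task plus tool descriptions to an LLM here.
    return "sum_column" if "total" in task.lower() else "draft_email"

rows = [{"amount": 120}, {"amount": 80}]
tool = pick_tool("What is the total amount?")
print(tool, TOOLS[tool](rows, "amount"))  # the arithmetic is exact, not generated
```

The design point is the hand-off: anything that must be correct and explainable (sums, lookups, compliance checks) lives in ordinary code, and the model only decides when to call it.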
The biggest short term gains don’t come from a new algorithm, they come from making AI part of the tools people already use. When Copilot is embedded in Teams or Excel and connected to your company knowledge base, people don’t have to learn a separate workflow, they just get better faster. The future where you don’t need to pull your phone out to book a trip, buy a train ticket, check into a hotel, and grab a sandwich, is not about magical new tech, it’s about integration. We already have most of the pieces, what’s missing is the plumbing and an agent to handle the messy bits.
That leads to one crucial point, “the assistant needs to be able to act on your behalf.” For AI to replace rote work it needs delegated rights, secure access, and clear guardrails. Companies that figure out safe delegation models will get disproportionate value, because they’ll free human hours for things that matter.
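What a "safe delegation model" might look like can be sketched in a few lines: the assistant may only perform actions on an explicit allowlist, within a spend cap, and anything outside those bounds is refused or escalated to a human. The action names and the cap are hypothetical, purely for illustration.

```python
# Guardrails for an agent acting on your behalf: an explicit allowlist
# plus a spend cap. ALLOWED and the action names are illustrative.

ALLOWED = {"book_train", "check_in_hotel"}
SPEND_CAP_EUR = 100.0

def delegate(action: str, cost_eur: float) -> str:
    if action not in ALLOWED:
        return f"refused: {action} is not delegated"
    if cost_eur > SPEND_CAP_EUR:
        return f"escalate: {cost_eur} EUR exceeds the cap, ask a human"
    return f"approved: {action} for {cost_eur} EUR"

print(delegate("book_train", 42.0))   # within the allowlist and cap
print(delegate("buy_laptop", 900.0))  # never delegated, so refused
```

The value is less in the code than in the contract it encodes: delegation is opt-in, bounded, and every decision leaves an auditable trail.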
Here’s the part people miss when they panic about jobs: humans value human-made, human-led experiences. Even if an AI can write code, some people will value code written by a human, with stories attached. Even if AI can cook, a lot of us will pay to be served by a human chef in a room that feels curated. As my conversation nudged me to see, this is not about fewer markets, it’s about different markets. Craftsmanship, bespoke services, human-centered experiences, these will all gain value because humans assign meaning to human effort.
Freeing people from tedious tasks could nudge society toward work that is more aligned with what people actually enjoy, more creative, more social. That’s not guaranteed, but the economic incentives will create space for new businesses, and yes, for niche markets that trade on the human story. It’s not dystopia, it’s reallocation.
If you run a team or a company, start by becoming a translator. Don’t ask whether AI should be used; ask how we should use it, and who can help us figure that out. Build showcases, run internal datathons, and make it safe to experiment. And for the love of sanity, don’t let usage remain a “black market” in your org. Either ban it and get stuck, or embrace it with rules that protect data and empower staff.
If you’re an individual, think about what you’d still want to do even if a machine could do it for you. That’s where your future craft or niche will likely be.
We’re not waiting for a miracle algorithm to hand us a new future, we’re assembling it one integration, one translated problem, one inspired employee at a time. Keep curious, keep building, and don’t confuse noise with progress.