There's something fundamentally misaligned in how we're talking about AI and the workforce. The conversation keeps circling around job displacement - and I understand why. But the real question isn't whether AI replaces jobs. To me, it's about how we integrate it into work that already exists.
AI development is slowing down in ways that matter. Around 95% of AI experts surveyed don't believe we currently have the technology needed for AGI. The performance gap between leading models has collapsed from 4.9% to 0.7% over the past year. Even Sam Altman admitted recently that current models have "saturated the chat use case." These aren't signs of exponential takeoff - they're signs of a maturing technology hitting practical limits.
At the same time, the infrastructure costs are getting harder to ignore. One AI lab (you know which one) reportedly secured 40% of global DRAM output for training runs. Memory prices have more than doubled. DeepSeek managed to train a frontier model for around $6 million instead of hundreds of millions, but that efficiency gain came from rethinking architecture, not from throwing more compute at the problem.
Token costs have already dropped from $20 to $0.07 per million tokens - a roughly 285-fold reduction in just two years. The constraint has shifted from intelligence to integration.
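The arithmetic behind that drop is worth making concrete. A quick sketch, using only the prices above (the workload size is an illustrative assumption, not a real figure):

```python
# Price per million tokens, per the figures above.
old_price = 20.00   # USD, two years ago
new_price = 0.07    # USD, today

fold_reduction = old_price / new_price
print(f"{fold_reduction:.0f}x cheaper")  # ~286x

# What a hypothetical workload costs at each price point:
tokens = 500_000_000  # e.g. a month of heavy internal automation (assumed)
millions = tokens / 1_000_000
print(f"then: ${millions * old_price:,.0f}, now: ${millions * new_price:,.2f}")
```

At these prices, a workload that would have cost five figures two years ago now costs less than a dinner out - which is exactly why the bottleneck is no longer the model bill.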
In my own work with automations, the failures aren't about the models anymore - they're about organizational readiness, workflow design, and cultural fit. The barriers are operational, not technical. This tracks with the widely reported figure that around 95% of AI pilot programs fail to reach production.
I've been particularly interested in AI tooling lately, and two tools stood out to me. The first was OpenClaw, a terminal-based agent that reflects on its own actions and iteratively adjusts its approach. Watching it work can be unsettling - not because it sometimes fails, but because it succeeds in ways that make me uncomfortable about how much control I'm personally willing to hand over. The second was chatjimmy.ai, which can process around 17,000 tokens per second by optimizing for speed over accuracy. It made me wonder whether intelligence is the only constraint we should be aiming to solve. Both examples tell me that the bottleneck isn't capability anymore - it's figuring out where and how to deploy these tools.
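The reflect-and-adjust behavior that makes OpenClaw unsettling boils down to a simple loop: act, critique your own output, try again with that critique in hand. This is my own simplified mental model, not OpenClaw's actual implementation - `act` and `critique` here are deterministic stubs standing in for model calls:

```python
# A minimal reflect-act loop. In a real agent, act() would run a
# command or model call and critique() would be the model judging
# its own output; both are stubbed here for illustration.

def act(task: str, feedback: str) -> str:
    return f"attempt({task}, informed_by={feedback})"

def critique(result: str) -> str:
    # Pretend the first attempt is always flawed and the retry succeeds.
    return "ok" if "informed_by=retry" in result else "retry"

def run(task: str, max_steps: int = 3) -> tuple[str, int]:
    feedback = "none"
    result = ""
    for step in range(max_steps):
        result = act(task, feedback)
        feedback = critique(result)
        if feedback == "ok":
            return result, step + 1
    return result, max_steps

result, steps = run("rename files")
print(steps)  # converges on the second pass with these stubs
```

The unsettling part isn't the loop itself - it's that the critique step, in a real agent, is the model grading its own homework, and you're trusting it to know when to stop.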
I've been working more with terminal-based agents recently, and there's something oddly liberating about the constraints. You can scope exactly what the system has access to. You can limit it to small, specific tasks. You're not trying to hand over judgment calls - you're trying to automate the repetitive parts that don't need judgment. The division of labor becomes clearer and, I believe, more efficient when the boundaries are explicit.
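One concrete way to make those boundaries explicit - hypothetical names, but a real pattern - is an allowlist of commands plus a single scoped working directory, so the agent can only do what you've enumerated:

```python
import shlex
import subprocess
from pathlib import Path

# A sketch of "explicit boundaries": the agent may only run
# allowlisted commands, and only inside one working directory.
ALLOWED = {"ls", "grep", "wc", "cat"}

def run_scoped(command: str, workdir: str = ".") -> str:
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"command not in allowlist: {argv[:1]}")
    # No shell=True, so no pipes, redirects, or command chaining.
    result = subprocess.run(
        argv, cwd=Path(workdir), capture_output=True, text=True, timeout=10
    )
    return result.stdout

# Repetitive, judgment-free tasks go through; everything else is refused.
print(run_scoped("ls"))
```

Anything outside the allowlist fails loudly instead of silently doing something you didn't intend - which is exactly the division of labor the paragraph above is after.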
Whenever I see discussion on this topic, I can't help but reflect on the analogy to the computer. The prediction was mass unemployment for secretaries, clerks, and middle managers. But instead of eliminating office work, it transformed it. Spreadsheets replaced ledgers, but someone still needed to build the models, interpret the outputs, and make decisions. The technology absorbed certain tasks while creating demand for new ones. I suspect AI follows a similar pattern - less about replacing roles, more about reshaping what those roles involve. In five years we might struggle to remember how Word used to open to a blank page instead of a context-aware, pre-filled suggestion.
But then again, AI feels more like something that operates alongside you - or instead of you. That distinction matters psychologically, even if the economic pattern ends up being similar.
Maybe the actual work ahead isn't building AI that replaces us. Maybe it's figuring out how to be better guides for systems that are powerful but narrow. The projected productivity gain from full AI integration across enterprise workflows is around 15% - meaningful, but not the wholesale transformation the AI lab CEOs want us to fear. That feels about right for a technology that makes existing work faster, not one that eliminates the need for the work itself.
Google I/O is coming up in a couple of months. My bet is that Google has the upper hand right now - not just because of model quality, but because of distribution and integration across existing workflows. The next battleground isn't better models - it's who can weave AI into existing workflows most invisibly. It'll be interesting to see whether that thesis holds or something else entirely surprises us.