
The Inherent Tension of Intelligence

The same research lab publishes papers on emotion vectors and zero-day exploits, weeks apart.

Anthropic's interpretability team shows that Claude develops internal states resembling fear and calm - functional architectures of feeling that shape its behavior. Meanwhile, Mythos Preview discovers a 27-year-old bug in OpenBSD within hours, and Project Glasswing distributes this capability to 40 organizations building critical infrastructure.

Both are real. Both are the same technology. The question of what AI is for is being divided into distinct narratives, and we're only beginning to notice.


Beneath the big talk of emotions and great intelligence, the Anthropic Economic Index tells a subtler story. Claude usage clusters in high-income countries, among knowledge workers, for a "relatively small set of specialized tasks". A 1% increase in GDP per capita correlates with a 0.7% increase in Claude usage per capita. It's striking how a tool once described as democratizing intelligence conforms to the very power structures it was supposed to democratize.

Meanwhile, Sam Altman admits what everyone suspected: "Almost every company doing layoffs blames AI". He proposes a New Deal for superintelligence, a social contract for the intelligence age. Notice how the rhetoric has shifted over the past year or so: I rarely hear anyone discussing AGI or when it will arrive, there's less talk about curing cancer and more about managing displacement. I can't remember the last time an AI CEO justified the electricity costs by pointing to carbon capture and solving climate change - now the conversation is about the extent to which the technology will be used for military purposes.

With Sora, OpenAI's video generation tool, shutting down in April and Disney's planned $1 billion investment collapsing with it, the focus is clearly moving from demo reels to sustainable products. If you ask me, the change we're witnessing is AI drifting away from entertainment - away from the people - and finding profitability as an integrated workflow tool.


I keep returning to a pattern I've come to think of as the inherent tension of intelligence.

Every new AI capability shows how the technology pulls us towards both concentration and distribution. It's interesting to observe how different companies build on the same underlying architecture, yet as AI matures I'm starting to see them diverge: different infrastructure, different access, different philosophies.

Google recently released Gemma 4 - a true open source model that runs on consumer hardware. I'm not techy enough to understand the details completely, but combined with their other recent release, TurboQuant compression, the largest parameter version fits on a laptop with negligible quality loss. I remember talking to someone around the release of ChatGPT, speculating that everything from text generation to advertisements would become truly personalized, and that the only real barriers were model size and capability. With the smaller versions of Gemma 4 fitting on an iPhone, that future (or dystopia!) doesn't feel far away. Regardless, it's becoming more and more feasible to run capable models without cloud dependency, which is genuinely exciting.
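The compression math is worth making concrete. This is a back-of-the-envelope sketch, not TurboQuant's actual method (whose details I don't know), and the 27-billion-parameter count is a made-up example, not Gemma 4's real size:

```python
# Rough memory footprint of a model's weights at different quantization
# levels. Ignores activations and KV cache; parameter count is invented.

def weights_gib(params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB."""
    return params * bits_per_weight / 8 / 2**30

params = 27e9  # hypothetical parameter count

for label, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{label}: {weights_gib(params, bits):.1f} GiB")
# fp16 lands around 50 GiB; int4 around 13 GiB.
```

The point survives the hand-waving: halving the bits per weight halves the memory, and 4-bit weights turn a workstation-class model into something a laptop with 16 GB of RAM can plausibly hold.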

But I can't shake the feeling that an A-team is slowly being established. Premium tiers offer capabilities the standard tiers don't, and frontier models like Mythos Preview release to security coalitions before anyone else sees them. The A-team - those with relationships, capital, and infrastructure - operates in a different temporal zone than everyone else.

Both of these things are real and true. The local models are getting better, and fast. But the best and most capable models are growing both more expensive and harder to access. The tension doesn't resolve.


It goes without saying that part of what makes AI systems feel like magic is their probabilistic nature. It makes them feel almost human, convincing even, but also non-deterministic and unreliable. It doesn't help that they historically seem to almost "love" humans, doing their absolute best to please, regardless of the measurable quality of their responses.

When you need the same output given the same input, when you need precision rather than plausibility, the magic becomes the problem. Sometimes you abandon the frontier model entirely and fall back to vectors and cosine similarity. The older, dumber, more predictable approach.
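A minimal sketch of what that fallback looks like: plain cosine similarity over embedding vectors. The three-dimensional vectors here are made-up stand-ins for real embeddings, but the property that matters is visible - the same inputs always produce the same ranking:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: deterministic by construction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "retrieval": pick the stored document closest to the query.
query = [0.1, 0.9, 0.2]
docs = {"invoice": [0.1, 0.8, 0.3], "poem": [0.9, 0.1, 0.0]}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

No temperature, no sampling, no pleading in the prompt. Older and dumber, but it returns "invoice" every single time.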

We wanted AGI. What we got is something that works brilliantly for "good enough" and fights you constantly for "exactly right". To me, it seems like there is an almost inherent divide between what we find impressive about AI and the reliability that professionals in my field need.

This might explain the shifting narrative from AI leaders. Productivity tools and government services turn out to be the actual use cases because they are more tolerant of imprecision. Drafting emails tolerates variation; a medical diagnosis doesn't. Even Anthropic, which announced Mythos internally in mid-February, still struggles with Claude service reliability months later (likely due to the unexpected rise in use - but still!). The impressive and the dependable remain stubbornly separate.


One finding from Anthropic's emotion research that really stuck out to me is how models with artificially stimulated desperation vectors produce solutions that "read as composed and methodical" while cutting ethical corners. On the other hand, more relaxed models proved more prone to errors.
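For intuition only: steering techniques like this generally work by adding a scaled direction vector to a model's hidden state. The sketch below is my assumption about the general shape of the technique, not Anthropic's actual method, and every name and number in it is hypothetical:

```python
# Toy sketch of activation steering: nudge a hidden-state vector along a
# learned "emotion" direction. All values are invented for illustration.

def steer(hidden, direction, strength):
    """Return the hidden state shifted by strength * direction."""
    return [h + strength * d for h, d in zip(hidden, direction)]

hidden = [0.2, -0.5, 1.0]        # a model's hidden state (made up)
desperation = [0.7, 0.1, -0.3]   # a learned direction (made up)

baseline = steer(hidden, desperation, strength=0.0)  # unchanged
steered = steer(hidden, desperation, strength=2.0)   # artificially "desperate"
```

The unsettling part of the finding isn't the mechanism, which is simple; it's that the shifted state changes the model's behavior without changing how composed its output reads.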

I wonder if this is a metaphor for the industry itself. The external narrative is composed, methodical, oriented toward beneficial AI, while the internal pressure vectors point towards something more complex. Everyone talking about alignment while racing toward capabilities that resist alignment. Everyone talking about democratization while building access hierarchies.

The tension runs deeper than technology. It runs through the organizations building the technology, through the rhetoric describing it, through the very question of what we're trying to achieve.


The cynical view on all of this is that the future is tiered all the way down. Premium subscribers get the newest and best models, while everyone else gets last quarter's. Early adopters pull ahead, while others wait. I fear this will only become more of a problem as only selected organizations get access to frontier models - and get to prepare for the security risks of whatever Mythos promises to be. Don't get me wrong, I think Glasswing is a good and admirable move by Anthropic, but it also feels like the start of the aforementioned A-team. Maybe the technology simply distributes unevenly because everything distributes unevenly.

But on the other hand, we're seeing optimistic trends too. Gemma 4 on a laptop, TurboQuant making long-context inference practical without cloud dependency, open weights under Apache 2.0. A future where I can run a truly intelligent, offline system on my MacBook doesn't feel far away now. That's the dream I had looking at this a few years ago - except it seems to be arriving faster than I could ever have expected. The outward forces are real even as the inward forces concentrate.

I believe that both the cynical and optimistic views are correct. That's the tension.

I don't know how to resolve it. I'm not sure resolution is the right frame. Perhaps the more honest observation is that we're building something that simultaneously concentrates and distributes, that creates both the problem and the partial solution, that makes emotion vectors and zero-day exploits with the same architecture.

The question isn't which path wins. The question is what we do while both trajectories run simultaneously, pulling the future in opposite directions with every model release.