A Pause for Thought
After a few years of breathtaking breakthroughs in AI, it feels like the exponential curve is temporarily catching its breath.
This isn’t stagnation - it’s an opportunity.
Across the landscape, generative AI has become 'good enough' at a baseline. We’re watching the early signs of commoditisation. Open-source contenders are increasingly matching or exceeding proprietary offerings in specific use cases. GPT, Claude, Gemini, etc. - they’re all starting to converge on roughly comparable capabilities. Long context windows are the current hot feature. Useful? Absolutely. Transformational? Not quite.
Now, I’m biased - my work pushes at the fault lines of this technology. But even outside the bubble, it’s hard to ignore the signs. GPT-4.5 seems to mark a current limit of pretraining scaling laws. Reasoning models were a big deal, but it’s not clear that the same simple scaling laws apply to test-time compute.
But under the surface, deeper issues still simmer. Three core limitations continue to stalk the transformer architecture like ghosts in the machine:
🔵 The inability to form or express truly powerful abstractions.
🔵 Brittle reasoning in out-of-distribution scenarios.
🔵 The persistent problem of 'hallucinations' - models either making things up or leaving things out.
These problems are entangled. Training on reasoning from verifiable domains may help crack them - or they might be features of the foundation: consequences of the autoregressive, token-by-token nature of how these models think. Either way, my sense is that, for a moment at least, progress has slowed.
So yes, it feels like we’ve hit a pause. But for organisations, that pause is an important moment. Here’s the real story: the tech is already dangerously useful. Put a human in the loop, give it domain knowledge, and today’s LLMs can do serious work. But they need scaffolding. Structure. Guardrails. That’s where knowledge graphs come in.
We can use language models to rough out the scaffolding of understanding - to propose entities, sketch relationships, map terms. But (at least as of now) only humans can refine the abstractions. Only humans can curate the meaning.
Once that solid, curated scaffolding exists, it can be fed back into the LLM - making it sharper, more grounded, and far less likely to hallucinate. But let’s be clear: there’s no free lunch. Today’s models can help sketch the map, but they can’t build the territory. Without humans in the loop, the foundation stays brittle. Anyone who tells you otherwise is selling snake oil.
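The loop described above can be sketched in a few lines of Python. This is purely illustrative: `propose_triples` is a hypothetical stand-in for a real LLM extraction call, and the triples themselves are made up. The point is the shape of the workflow: the model proposes candidates, only a human promotes them into the curated graph, and the curated graph is serialised back into grounding context for the model's prompt.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str


def propose_triples(text: str) -> list[Triple]:
    # Hypothetical stand-in for an LLM call: in practice this would prompt
    # a model to extract (subject, predicate, object) candidates from `text`.
    # The candidates below are illustrative, including one bogus proposal.
    return [
        Triple("knowledge graph", "grounds", "LLM output"),
        Triple("curated scaffolding", "reduces", "hallucination"),
        Triple("unicorn", "approved_by", "nobody"),  # a hallucinated candidate
    ]


def curate(candidates: list[Triple], approved_by_human) -> list[Triple]:
    # Only a human reviewer promotes a candidate into the curated graph.
    return [t for t in candidates if approved_by_human(t)]


def as_grounding_context(graph: list[Triple]) -> str:
    # Serialise the curated graph into text that can be fed back into the
    # LLM's prompt to constrain its answers.
    lines = [f"{t.subject} --{t.predicate}--> {t.obj}" for t in graph]
    return "Known facts:\n" + "\n".join(lines)


candidates = propose_triples("...source documents...")
graph = curate(candidates, lambda t: t.subject != "unicorn")
print(as_grounding_context(graph))
```

In a real system the curation step is the expensive part - a review UI, domain experts, versioned sign-off - and the grounding context would be injected via retrieval rather than pasted wholesale. But the division of labour is exactly the one argued for above: the model sketches, the human curates, the curated graph grounds the model.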
Make no mistake: the next wave is coming. We don’t have long to prepare, and there’s a lot of work to do. So in this quiet moment, while the hype machine spins and the frontier models mature behind closed doors, the strategic advantage is clear:
⚡ Get your data in order. Build structure. Build meaning. Build your graph! ⚡
This isn’t a pause in progress. It’s a handoff. The models have done their bit. Now it’s our turn.