Context Rot: Why Bigger Context Windows Aren’t the Answer for Retrieval
Bigger context windows don’t automatically improve retrieval. Real gains come from reasoned, precise context—structured and guided by ontologies and knowledge graphs.
From Transduction to Abduction: Building Disciplined Reasoning in AI
Large language models excel at transduction — drawing analogies across cases — and hint at induction, learning patterns from data. But true reasoning demands abduction: generating structured explanations. By pairing LLMs with ontologies and symbolic logic, organisations can move beyond fuzzy resemblance toward grounded, conceptual intelligence.
The Reasoning Wall Will Fall
OpenAI’s o3 model has set new highs on several significant benchmarks, and that’s a game-changer for all of us. If AI can reason, code, and excel at maths and science, it’s only a matter of time before it starts reshaping tasks critical to nearly every business. Let’s dive into how o3 performed on those key benchmarks: