← Blog

Back to the Drawing Board

April 23, 2026

Looking back at the last 10 years of progress in artificial intelligence is fascinating. Go, chess, Atari, even complex strategy games like DotA — one by one, they fell to machines. Then came text translation, text generation, image generation, even video. It seems like you can throw anything at AI and it will figure it out. Self-driving cars, coding agents — it feels like AGI is around the corner.

Yet there are some gaps and asterisks:
Most game environments rely on heavily engineered states or wrappers. The full game of DotA wasn’t exactly solved, and the same goes for other strategy games. Poker was only partially solved, often with more “traditional” methods rather than modern AI. Outside of games, the cracks are even more visible. LLMs hallucinate. Coding agents can fail at simple tasks. Robotics mostly lives in labs. Self-driving is still “almost there” — but not quite, and not truly end-to-end.

The current state of AI tells a story in which patchwork brilliance is the way to go — as if performing extremely well in some cases and terribly in others were somehow good enough.

We don’t think it is. These gaps aren’t just edge cases that need some polishing — they point to something more fundamental. They suggest we’re still missing important pieces, likely at a fairly low level.

So we aren’t after AGI (at least not right now). We want to go back to the drawing board and revisit problems that were left behind — games and environments where current approaches didn’t quite work, and where the field simply moved on. We want to solve them properly, without shortcuts, and understand what it actually takes to build systems that are robust, general, and reliable. Only then do we scale up — toward real-world problems, with the goal of solving them fully, not just “well enough”.

Our starting point is to build systems that learn their environment, develop robust world models, and can plan and make decisions. Systems that don’t rely on hacks like hand-engineered states and heuristics, or on memorizing context instead of building meaningful representations. Only then can they receive strong feedback signals from the real world, adapt their representations, evolve their behavior, and achieve meaningful goals.