Abstraction is the only thing that has ever scaled
Abstraction is the only thing that has ever scaled. Cells abstracted chemistry into life. Operating systems abstracted circuits into something you could click. Every time, the same arc: explosion, extinction, convergence on what actually works. AI is mid-explosion — and most of what's being built won't survive it. Build tools for the world after the extinction event.
The Pattern
Every system that scales follows the same arc.
It starts simple. Chemistry. Electrical current. A neuron. Something fundamental gets abstracted into something more capable, and complexity compounds. Layer upon layer, until the system reaches a tipping point — a Cambrian Explosion. A burst of diversity that tests every possible shape the abstraction could take.
Most of those shapes fail. An extinction event follows. What survives is what abstracted best — not the most complex, not the most numerous, but the most fit. And then: convergence. The surviving forms stabilize into something that works so well it becomes invisible. You stop noticing it. You just use it. It exists.
This has happened at every scale that matters. Language, writing, mathematics, computing — each one is layered abstraction that became invisible the moment it worked. We're watching it happen again.
Life as Abstraction
Four billion years ago, single cells abstracted chemistry into something we call life: naturally occurring chemical reactions and molecular transformations, bundled into a system that sustains itself. Complexity compounded. Eukaryotes put cells within cells; a couple of billion years later, multicellular organisms put cells within bodies. Organisms formed ecosystems. Layer upon layer.
Then came the Cambrian Explosion — a burst of speciation that tried every possible shape life could take. Countless forms of abstraction, not all of them successful. The end-Permian extinction later wiped out most of what the Cambrian had created, roughly 96% of marine species by most estimates. But what survived did so for a reason. Those creatures were better adapted, more fit. Their abstractions were better than those of the species that died off.
You sit reading this today atop four billion years of that filter. An abstraction of biological processes so deep that researchers dedicate entire careers to mapping a single layer. Our brains are a map we still cannot fully read. Life as we know it begins with a cell's self-sustaining chemistry, and everything between there and here is abstraction.
Computing
Computation began with counting boards and clay tablets — maybe in ancient Mesopotamia, maybe in China. It depends on what you consider computing.
Modern computing began with something simple: electrical current. Vacuum tubes abstracted it into logical switches. Transistors abstracted vacuum tubes into something smaller, cheaper, and a million times more reliable.
Computers became more complex, just as life did, and the abstractions created a more powerful, fit system — one that functioned better than its predecessor. The compiler, assembly, the command line, keyboards, mice, the screen, the graphical user interface, the operating system. Each of these abstractions got us one step closer to what we have today.
Then computing's Cambrian Explosion: the dot-com era. We remember it as the Dot Bomb — the extinction. But out of it came some of the most important advances in human history.
Cell phones are the clearest example of explosion, extinction, convergence I can think of. In the early 2000s, every possible mobile interface was tried. Flip phones, sliding keyboards, styluses, scroll wheels: all of them answers to the same question. How do you abstract the capability of a computing device into something portable, flexible, and useful? An extinction occurred. Now every phone on the planet looks the same: touch interface, speaker, microphone, camera, internet. Convergence.
The Agentic Age
Neural networks first appeared in the 1940s and 50s — the chemical soup. Researchers found that perceptrons might be a useful way to solve complex tasks, but in 1969, Minsky and Papert's book Perceptrons helped plunge the whole approach into what we now call the first AI Winter. The time wasn't right: computing power wasn't sufficient to train a complex neural network.
In the late 1980s, research on backpropagation brought neural networks back. The soup was becoming more complex — multicellular. But it wasn't until the 2010s that deep learning experienced its landmark breakthroughs. CNNs, RNNs, the Transformer architecture, and BERT — all born from the added computing power of modern hardware. A Cambrian Explosion of deep learning architectures.
We converged on Transformers. OpenAI uncovered their true power: feed them enormous data and enormous compute, and capabilities like coding, reasoning, and creative writing emerge. Deep learning architectures became Generative Pre-trained Transformers. GPTs became ChatGPT.
The agentic era did not feel real until ChatGPT landed. That was the moment the chemical soup of neural networks got abstracted into something complex and accessible, distinct from its simple origins. ChatGPT's arrival marked the start of the Agentic Era's Cambrian Explosion.
We sit in the midst of it now. AI technologies are getting more and more abstracted from the model weights and biases running them. These technologies feel like magic. AI is an abstraction of intelligence itself.
But the Cambrian Explosion was followed by the Permian mass extinction. There is a coming extinction-level event, and many companies will not survive. Many technologies that are tried will fail. Nobody pretends that every AI company will succeed, and I can't promise you that this one will either.
Toto is building for what comes after the extinction event — the abstraction of the world model. The interaction layer.
The World Model
People and agents both take actions. People act mostly in the physical world and partly in the digital. Agents act entirely in the digital (for now). The world model is the rapidly emerging layer where those actions become legible, persistent and relevant to each other.
How can my agent know if I don't share? How can my agent be useful if it forgets?
So far, human interaction with the shared world model has happened through chat interfaces. I type something small, and the agent responds with something long. The ratio of content creation is dramatically in the agent's favor. As the agent learns more context about your work, the world model it creates dramatically outpaces your ability to consume it, engage with it, and add to it in a meaningful way. You will never type as fast as an agent. You will never read all that an agent has written.
The interaction layer needs to change.
You wake up. Your agent worked overnight — the board shows what it did, what it found, and what it needs from you. You didn't miss anything. The world model kept moving.
I have thoughts, ideas, contacts, plans. I write them on my phone, on sheets of paper, on notepads. The agent knows none of them. The world model only contains the simple things I'm able to tell it when interacting in its preferred context.
Today's interfaces force a choice. Chat is agent-native: text in, text out. Humans drown in it. As agents write to knowledge bases, the bigger the knowledge base grows, the less accessible it becomes to the person. Every current tool optimizes for one side. The interaction layer that wins the convergence will be the one that's natively both: where a human adding a sticky note and an agent completing a task are the same operation on the same surface.
The world model tools are mid-explosion. Obsidian, Notion, Linear, Apple Notes: every possible shape for externalizing what you know and what you need to do. What's missing is the interaction layer, the thing that makes the world model between you and your agent legible, useful, powerful.
When we interact with our world model, we need to do so in a way that is both human-native and agent-native.
Toto
An action is the smallest unit of shared intent. When you write "buy groceries" on a scrap of paper, you've declared an action in the physical world. When your agent commits code or drafts a report, it's declared an action in the digital world. The world model is the living surface where both sides declare what's true, what's planned, and what's done.
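The "same operation on the same surface" idea can be made concrete with a toy data model. Here is a minimal sketch in Python; the names (`Action`, `Board`, `declare`) are hypothetical, chosen for illustration, and are not Toto's actual API:

```python
from dataclasses import dataclass, field
from typing import List
from enum import Enum

class Actor(str, Enum):
    HUMAN = "human"
    AGENT = "agent"

class Status(str, Enum):
    PLANNED = "planned"
    DONE = "done"

@dataclass
class Action:
    """The smallest unit of shared intent."""
    text: str
    actor: Actor
    status: Status = Status.PLANNED

@dataclass
class Board:
    """One shared surface: humans and agents use the same operations."""
    actions: List[Action] = field(default_factory=list)

    def declare(self, text: str, actor: Actor) -> Action:
        # Declaring an action is identical whether the caller
        # is a person or an agent.
        action = Action(text=text, actor=actor)
        self.actions.append(action)
        return action

    def complete(self, action: Action) -> None:
        action.status = Status.DONE

board = Board()
note = board.declare("buy groceries", Actor.HUMAN)    # a sticky note
task = board.declare("draft Q3 report", Actor.AGENT)  # an agent task
board.complete(task)
```

The design choice the sketch illustrates: there is no "human side" and "agent side" with a sync layer between them. Both parties read and write the same records through the same verbs, which is what makes the world model legible to each.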
Software. When I want to contribute to the world model, doing so should be easy, intuitive, and straightforward. When I want to engage with plans, actions, ideas — that should feel good. Toto's software provides the interaction layer to engage with shared world models in a digital capacity. A shared board — a shared space that my agent and I both interact with.
Physical products. People don't just act through screens. They write grocery lists on scraps of paper, stick presentation ideas on the wall, carry notebooks. The hardware space has not been disrupted in any meaningful capacity for decades. Printers still suck. Scanners still suck. These aren't peripheral accessories — they're the physical-world read/write interface to the world model. Speed them up. Simplify. Build them as agent-native technology.
Toto's software is the digital surface. Toto's physical products bring the world model into the space where humans actually live. Same system, different surfaces.
It is the next abstraction. And the best abstractions always look simple.
toto.tech — the interaction layer for your world model.
- Alex Funk, alex@toto.tech