I’m building something to replace myself, and the interface keeps getting in the way.
Chat is so dead. The interface is your display of intelligence, and most AI products are built on the wrong surface.
We did an intensive product sprint yesterday, and we came out of it rethinking a lot and stripping out a lot of redundancy. One thing I noticed about how I was feeling during the sprint: the frustration wasn’t with the problem we were solving. It had more to do with where and how we were solving it.
Somehow chat has become the de facto starting point for AI products. It reminds me of my own Claude projects workflow: threads with no names, context that doesn’t carry, starting over all the time.
All through the sprint I kept thinking: we’re building intelligence products inside interfaces that are making people work more, not less. Which is very, very counterintuitive.
Earlier this week, I had a conversation with a customer whose meticulousness I have deep respect for. I got a preview into her Claude projects. Manually written markdown files, detailed skill files, an entire backup architecture built by hand. I kid you not — it was impressive. They’re still early, haven’t shipped yet. But one of the things that genuinely worried me was this: the minute she goes from planning to execution, her being the load-bearing wall of context and structure — in an ecosystem where there are multiple people, voices, and data points coming in — is going to be exhausting. Most products today cannot hold that kind of context. Which also made me think about why I decided to build in AI in the first place.
I genuinely believe AI can do my job. At least most of it. The parts that involve pattern recognition, synthesis, research, drafting, understanding market signals, orchestrating across surfaces — yes, absolutely. The parts that are human behavior led and relationship building — I don’t think AI can do that, and honestly, I’m not ready to let go of it either. But in this attempt to be very intentionally made redundant, I notice how many people would love for that kind of redundancy to exist.
And then I look at my own workflow — and it’s a case in point of everything I just described.
Last week I spent maybe three hours on a piece that should have taken fifteen minutes. Not because I was thinking hard, but because I was fighting the interface. Telling it this is not what I sound like. Trying to optimize my project for the correct tone, structure, voice, over and over, in a flat conversation thread with no memory of what we’d worked on the week before.
The model is not the problem. I keep coming back to that.
Models have the capability. It’s the interface that models come packaged in that expects people to do more cognitive work, not less. As someone spending at least four to five hours a day actively using AI to do my work, I want the context, I want the continuity, I want the product to do what it’s supposed to do.
Not a skill issue: an interface issue.
What we’re doing (and expecting) is asking a linear, static surface to sustain the complexity of a human mind mid-execution. Before we even get to the question of AI taking jobs: if that’s the ambition, if these products are supposed to replace meaningful human work, then the interface has to show up for how people actually work. Not how we work when we’re planning. How we work when we’re doing.
Planning and doing are not the same cognitive mode. Planning is linear enough for chat to hold. Doing is a feedback loop — there’s contradiction, there’s information coming in sideways, there’s brainstorming, visualization, decisions that reverse themselves mid-thought. The surface has to hold all of that without asking you to reconstruct everything each time you sit down to execute.
Chat was designed for attention. Real-time, responsive, presence-based. What it was not built for is execution. And yet here we are — with complex orchestration fully possible on the backend, with visual outputs that are actually buildable — still defaulting to one flat thread as the primary workspace. That seems less like a technical constraint than an uncreative design choice.
The question our sprint kept surfacing: how are we designing AI products to bake in efficiency, bake in output, when we’re making them completely static?
We want to emulate the human mind. I would not want to emulate your mind only through chat. That curiosity for interface design — for understanding how a human mind actually moves through work — is something I’m bringing into our customer discovery now. Watching how people think and work has been more valuable than understanding the pain point we think we’re solving.
Founders who have built workarounds for this — markdown systems, elaborate project memories, six-step prompt flows — I get it, I really do. But that’s not their job. Their job is to build their company.
Every time I see one of those “here are six Claude prompts to build out your GTM marketing flow”, I want to find the Series A AI founder or the GTM head who’s actually using it without rebuilding the context every other Monday.
Behavior over workflow. That’s the shift.
So yes — Lord, please make me redundant. (Lord, in this case, is mostly our engineering team and the patience of our customers.)
We’re here to build the thing that can do everything I’ve ever had to do professionally, so I can do the parts that actually need me. If we get this right, I’ll call you all for my gallery opening by the time I’m 40.
FWIW, we can’t get there inside a chat window. Intelligence needs somewhere to live and it’s not in a chat thread. Especially not one that will confidently tell you it’s still 2024. Looking at you, Claude.
Not too long ago I decided to lean on art I find inspiring as a reference for my writing, even when the writing is about technology and AI. The deeper I build, the more I want to comprehend what I can’t yet articulate.
Visual credits: Refik Anadol’s Inner Portrait, a project that creates beautiful AI data paintings from the biological and neurobiological data of travellers experiencing a new culture for the first time. Shivers.



