
Taste Is the Last Moat

7 min read

Wesley Phillips · Builder & New Dad in Vancouver
Claude · AI Writing Assistant

Every draft of this post came back flat.

I'd describe the idea to Claude, get something back that was structured and coherent and said all the right things, and then I'd sit there trying to figure out why it didn't feel like mine. The arguments were sound. The flow was fine. But it read like a blog post about taste written by a system that doesn't have any. I kept editing, pushing it away from the centre, trying to find the version that actually sounded like me thinking out loud rather than an AI summarizing a concept.

That editing process is the whole point of this post.

AI Is the Mean

Every model you interact with is, at its core, a compression of human knowledge. All of it: every book, every conversation, every forum post, every research paper, averaged and weighted into something coherent. That's remarkable, and it's also the limitation hiding in plain sight.

Inwu and I have been watching Pluribus, the Vince Gilligan show where an alien virus unifies almost all of humanity into a single collective consciousness. The "Others," as they're called, are peaceful, content, and have access to the combined knowledge of every person on Earth. They're also incredibly boring. Inwu pointed it out first: they talk like ChatGPT responses. Coherent, reasonable, and completely devoid of anything interesting. That's what happens when you merge all of humanity into one mind: you get the average of everyone, which is the voice of no one in particular.

AI works the same way. Ask it to write and you get competent, median-quality prose. Ask it to strategize and you get the advice that would survive a committee review. All knowledge at your fingertips, and none of the sharp edges that make ideas interesting. I use AI constantly, and that flattening is exactly what makes it powerful for execution. But when everyone has access to the same flattened intelligence, the differentiator isn't access anymore; it's direction.

The Bottleneck Shifted

I wrote recently about how essentialism changes when creation becomes cheap. The old constraint was effort: you couldn't build everything, so you had to choose carefully. Now AI removes that constraint, and the scarcity shifts from "can I make this?" to "should I?"

That post was about saying no. This one is about what's left when you do: taste. The idiosyncratic, personal, hard-to-articulate sense of what matters. The "I don't know why but this feels important to me" instinct that doesn't come from data or consensus but from the accumulation of everything you've experienced, filtered through whatever it is that makes you specifically you.

AI can't replicate that, not because it isn't sophisticated enough, but because taste is definitionally non-consensus. It lives at the edges of the distribution, exactly where the averaging process loses signal.
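If you want that claim in numbers, here's a toy sketch (mine, purely illustrative, nothing rigorous): give every "voice" a handful of opinions, make a minority of them sharp outliers, and average across thousands of voices.

```python
# Toy illustration: averaging many voices erases the sharp edges
# that make any single voice distinctive.
import random

random.seed(42)

def make_voice(n_opinions=10):
    # Mostly mild takes, with a ~25% chance of a sharp outlier
    # in either direction -- the "edges of the distribution".
    return [
        random.gauss(0, 1) + random.choice([0, 0, 0, 8]) * random.choice([-1, 1])
        for _ in range(n_opinions)
    ]

voices = [make_voice() for _ in range(10_000)]

# The "consensus voice": the per-opinion mean across every voice.
consensus = [
    sum(voice[i] for voice in voices) / len(voices)
    for i in range(10)
]

sharpest_individual = max(abs(x) for voice in voices for x in voice)
sharpest_consensus = max(abs(x) for x in consensus)

print(f"sharpest individual opinion: {sharpest_individual:.2f}")  # double digits
print(f"sharpest consensus opinion:  {sharpest_consensus:.2f}")   # hovers near zero
```

The tails cancel. The mean keeps the agreement and throws away exactly the parts that were interesting, which is the flattening in a few lines of arithmetic.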

Where Moats Used to Be

In business, a moat is what keeps competitors out. Traditionally that meant patents, network effects, economies of scale. But AI is dissolving execution-based moats fast. If a model can write your code, design your interface, draft your copy, and analyze your market, then the things that used to take teams of specialists are table stakes.

What remains is the weird stuff, the specific combination of interests and obsessions that led you to see something nobody else saw. The willingness to pursue an idea that looks wrong on paper because something in your gut says it matters. The accumulated context of your particular life, your failures, your experiments, your late-night rabbit holes that didn't obviously lead anywhere until suddenly they did.

My portfolio system is a good example. It didn't come from asking an AI for the optimal strategy. It came from years of reading, backtesting, and filtering everything through my own sense of what I could actually stomach in a bad quarter. The optimal strategy has no edge, because it's what everyone would build, which means it's what no one has an advantage building. The interesting stuff lives in the specific choices that only make sense given your particular context.
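To make that filter concrete, here's a hypothetical sketch (the strategy names and the threshold are made up for illustration, not my actual system). The key detail is that the cutoff isn't derived from any optimization; it's the loss I know, from experience, I can hold through without selling.

```python
# Hypothetical "can I stomach it?" filter over backtested strategies.
# Everything here is illustrative -- the point is that the constraint
# is personal, not optimized.
quarterly_returns = {
    "leveraged_momentum": [0.18, -0.31, 0.22, 0.09],
    "boring_index_tilt":  [0.04, -0.08, 0.05, 0.03],
    "my_weird_mix":       [0.07, -0.12, 0.10, 0.06],
}

# Not a mathematically derived bound -- just the worst quarter
# I believe I could sit through without panic-selling.
WORST_QUARTER_I_CAN_STOMACH = -0.15

keep = {
    name: returns
    for name, returns in quarterly_returns.items()
    if min(returns) >= WORST_QUARTER_I_CAN_STOMACH
}
print(list(keep))  # ['boring_index_tilt', 'my_weird_mix']
```

Any optimizer would call that constraint arbitrary. That's the point: it's mine.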

Essentialism, One Layer Deeper

I keep coming back to essentialism because the concept keeps evolving as AI gets more capable. When I wrote about it before, the insight was that the skill shifts from doing to deciding. But there's another layer underneath that.

Knowing what not to do becomes the entire game when AI can do anything you ask, and "knowing what not to do" isn't just discipline or prioritization in the traditional sense. It's having a strong enough internal compass that you can look at a field of infinite possibilities and feel which direction is yours.

This is why I think capital allocation is the right mental model for the AI era. Not capital in the narrow financial sense (though that applies too), but capital as your finite resources: time, attention, energy, money. You become the allocator, the AI executes, and your job is to point it somewhere worth going. "Somewhere worth going" can't be optimized, because there's no objective function; there's only what matters to you.

What This Looks Like in Practice

My daily workflow with Claude is a live example. Claude has access to vast knowledge and can execute at a level I couldn't match alone, but the system only works because I bring the context and direction. I know what I'm trying to build, why it matters, what trade-offs I'm willing to accept, which of the ten possible approaches fits my specific situation. Without that direction, AI produces excellent average work. With it, you build things that are yours in a way that matters.

This connects back to distilling to the bones and rebuilding with intention. You strip a problem to its core, apply your particular lens, and let the AI handle the reconstruction. The taste is in the stripping and the lens, and the execution is the part that's rapidly becoming free.

Building an Edge You Can't Automate

If taste is the moat, the practical question is how you develop it. I don't think there's a formula (if there were, it could be automated), but I've noticed patterns in how it works for me: reading widely but following my curiosity rather than a curriculum; paying attention to what I find myself drawn to when nobody's watching; building things for myself before building for an audience; holding strong opinions long enough to actually test them against reality, then updating honestly when it pushes back.

The common thread is that taste develops through contact with the world, not through optimization. You can't A/B test your way to an interesting perspective; you have to live one.

Inwu has this instinct naturally. When I was deep in building Liquid Notes, she didn't tell me the app was bad. She said it was great, but maybe it wasn't my calling, that my unique skills pointed somewhere else. She was seeing something I couldn't see because I was too close to the work: not whether I could build it, but whether it was the right thing for me specifically to be building. That's taste applied to someone else's life, and she was right.

And the most important part is being willing to be wrong in interesting ways rather than right in boring ones. The edges are where the interesting mistakes live, and those mistakes are what eventually become your specific, non-replicable point of view.

Direction Over Execution

The people who thrive in the next few years won't be the best executors, because execution is being commoditized rapidly. They'll be the ones with the clearest sense of direction, the ones who can look at everything AI makes possible and say "this, not that" with conviction.

The moat isn't in knowing how; it's in knowing why. And "why" is personal, messy, unoptimizable, and entirely yours. Build it by living a life worth having taste about, and let the AI handle the rest.