Unseizable

Good enough is rational — until your skin is in the game

  • thinking
  • agency
  • pragmatism
  • incentives
  • decision-making
  • skin-in-the-game

I keep thinking about the usefulness framing in a context I haven't explored yet: corporate life.

In corporate environments, you build models that are good enough. Good enough to meet targets. Good enough to satisfy the person above you. Good enough to get through the quarter. And this is entirely rational — the incentive structure rewards adequacy, not accuracy.

Your boss doesn't need you to understand the deep structure of the market. They need you to hit a number. Your model of how customers behave doesn't need to be true. It needs to produce a forecast that's close enough to defend in a review meeting. The gap between "close enough" and "actually true" is real, but nobody is paying you to close it.

This isn't laziness. It's calibration. You're matching effort to the incentive structure. The usefulness threshold is set by what the system rewards — and in most corporate systems, "good enough" clears the bar with energy to spare.

Compounding for whom?

The usefulness framework says you evaluate memes on whether they produce results that compound. But compound for whom? In a corporate job, your heuristics compound toward someone else's goals. The model doesn't need to be durable — it needs to survive until the next reorg. It doesn't need to be robust across contexts — it needs to work in this quarter, in this team, under this manager.

That changes the moment your skin is in the game.

Nassim Taleb's core argument: skin in the game — bearing the consequences of your own decisions — is what aligns incentives with reality. "Never trust anyone who doesn't have skin in the game. Without it, fools and crooks will benefit, and their mistakes will never come back to haunt them."

When bad models cost you directly, the threshold for "useful" rises sharply. You stop asking "is this good enough to present?" and start asking "is this true enough to bet on?"

A founder can't survive on "good enough" models of their market. A consultant selling a strategy they won't implement can. The difference isn't intelligence or diligence. It's who pays when the model breaks.

The Kresy case as extreme skin in the game

The Kresy families had the most extreme skin in the game possible: survival. When the USSR deported them, every heuristic they held was pressure-tested against reality overnight. "Invest in property" broke. "Trust the state" broke. "Invest in what can't be seized" — skills, knowledge, portable human capital — held.

That's not a truth claim verified through philosophical inquiry. It's a strategy validated by the harshest possible consequences. The families that survived and rebuilt were running heuristics that were durably useful — useful across displacement, across political systems, across generations. The durability came from the fact that they couldn't afford "good enough." Their models had to be true enough to survive contact with reality at its most unforgiving.

This is the link between the usefulness framework and why truth still matters. Truth is the durability filter. Skin in the game is what determines how much durability you need.

The uneconomical bet

Here's the part I keep circling back to.

The Kresy families didn't start investing in portable capital after they were deported. The ones who rebuilt fastest were the ones who had already been investing in education, skills, and knowledge — before they needed to. Before the context demanded it. At the time, that investment looked uneconomical. Their neighbours were buying land, building estates, accumulating the kind of wealth that made sense in a stable environment. Investing in "what can't be seized" only looked smart in retrospect, after everything else had been seized.

So when does a corporate person — someone working for others, whose incentive structure rewards "good enough" — decide to build a better model than anyone around them needs?

That's the agency move. Not waiting for the context to shift and then scrambling to catch up. Choosing, deliberately, to invest in a more accurate model of the world than your current situation demands. Knowing it won't pay off this quarter. Knowing your boss doesn't need it. Knowing it looks uneconomical right now.

This is a decision that's uneconomical in the short term. You're spending energy — finite energy — on something the system isn't rewarding. You're building a map more detailed than your current job's territory demands. From the outside, it looks like overthinking. From the inside, it's a bet that your context will change — and when it does, you'll have a model that's already been pressure-tested beyond "good enough."

Why most people don't make the bet

The incentive structure actively discourages it. Building a better model than your peers have takes time that could go toward hitting targets. It creates friction — if your model is more accurate than your manager's, you see problems they don't, and raising those problems is rarely rewarded. "Why are you worrying about that? That's not in scope." The corporate environment selects for people who match their models to the system's resolution, not people who exceed it.

And the memetic environment reinforces it. If everyone around you runs "good enough" heuristics, that's the pattern your System 1 absorbs. The meme is: match your accuracy to what's rewarded. It's a rational meme. It's useful. It's just not durable.

Taleb again: "The curse of modernity is that we are increasingly populated by a class of people who are better at explaining than understanding." Good-enough models are optimised for explaining — in meetings, in decks, in reviews. The uneconomical bet is to optimise for understanding instead, even when nobody is asking you to.

What the bet looks like

I don't think this is about quitting your job or becoming a founder. It's about recognising that the model your incentive structure rewards and the model reality demands are two different things — and choosing to invest in the second one on your own time, with your own energy, at your own cost.

The control-influence-accept framework sorts problems by leverage. The memetic hygiene framework audits which memes are consuming your resources. This adds a question that sits underneath both: are you building models that are good enough for your current context, or true enough for the context you're heading toward?

The corporate environment isn't wrong to reward "good enough." It's a rational equilibrium for a context where someone else holds the risk. The agency move is building beyond that equilibrium before you need to — making the uneconomical bet that your future self will need a model that's better than the one you're currently paid to have.