Useful, not true — a better way to evaluate memes
Something has been bothering me about the way the parasitic meme test is framed.
The test asks whether a meme is symbiotic or parasitic. Useful distinction. But underneath it, the whole framework still treats truth as the primary axis. We ask "is this meme true?" first, then "is it serving me?" second. The FAQ even addresses the edge case: "Can a meme be simultaneously true and parasitic? — Yes." As if truth is the default and parasitism is the surprising exception.
But look at what the agency hypothesis actually claims. The families in the Hungarian study and the Kresy case aren't transmitting truth claims. They're transmitting strategies. "Invest in what can't be seized" isn't true or false. It's a heuristic. It works or it doesn't. It compounds or it doesn't.
The right axis isn't true/false. It's useful/not useful.
What changes
This isn't a small tweak. It dissolves several problems at once.
The parasitic meme paradox disappears. Under the truth framing, you need a special explanation for how a "true" idea can harm you. Under a usefulness framing, the question is simpler: does this meme direct energy toward things I control? If not, it's not useful. The truth content is irrelevant. A factually correct belief about geopolitics that consumes your attention without producing any action in your control bucket is just... not useful. No paradox needed.
The thick/thin desire test gets cleaner. Thin desires aren't "false" — that was never the claim. But under a truth framing, there's a persistent confusion about what's wrong with them. Under a usefulness framing, it's obvious: thin desires are someone else's heuristic running on your hardware. They're not useful to you. They consume cycles without producing results that compound in your life.
Memetic hygiene shifts from fact-checking to systems administration. The truth framing makes you a detective: which of my beliefs are wrong? The usefulness framing makes you a sysadmin: which processes deserve CPU time? You're not looking for lies. You're looking for programs that consume resources without producing output. That's a much more tractable problem.
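If you want the sysadmin metaphor made literal, here's a toy sketch. Nothing in it comes from the original argument — the `Meme` structure, the field names, and the example numbers are all invented for illustration. It just encodes the one question the paragraph asks: is this process consuming cycles without producing output in my control bucket?

```python
# Toy sketch of the sysadmin framing. All names and numbers are invented.
# Each "meme" has an attention cost and a count of actions it actually
# produced in your control bucket over some period.

from dataclasses import dataclass

@dataclass
class Meme:
    name: str
    attention_hours: float   # cycles consumed
    actions_produced: int    # outputs you control

def audit(memes):
    """Flag memes that consume attention without producing controllable output."""
    return [m.name for m in memes if m.attention_hours > 0 and m.actions_produced == 0]

feed = [
    Meme("geopolitics doomscrolling", attention_hours=40, actions_produced=0),
    Meme("invest in portable skills", attention_hours=5, actions_produced=12),
]
print(audit(feed))  # → ['geopolitics doomscrolling']
```

Note what the audit never checks: whether the doomscrolling beliefs are factually correct. Truth value isn't a field in the struct, which is the whole point.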
The agency thesis fits better. The whole project argues that what transmits across generations isn't knowledge or beliefs — it's dispositions, orientations, decision heuristics. None of those are truth claims. They're operating strategies. Evaluating them on truth is a category error. You evaluate a strategy on whether it works.
The pragmatist connection
William James argued that truth is usefulness — "the true is the name of whatever proves itself to be good in the way of belief." You don't even need to go that far. You can sidestep the epistemology entirely.
The Kresy families didn't sit around debating whether "invest in portable human capital" was philosophically defensible. They lived it because it worked. Displacement taught them that material wealth could be seized overnight, but skills and knowledge couldn't. That's not a truth claim verified against reality. It's a strategy validated by results.
John Dewey made a similar move: beliefs are instruments for action, not mirrors of reality. You don't ask whether a hammer is "true." You ask whether it drives nails. The memes running your System 1 are hammers. The question is whether they drive nails.
What this means for the tests
The four parasitic meme tests mostly survive this reframe — they were already closer to usefulness than truth. But the framing around them shifts:
- Energy accounting test — already usefulness-native. No change needed. This was always the strongest test.
- Thick desire test — reframe from "is this desire authentically mine?" to "does this desire produce results that compound in my life?" Authenticity is a truth question. Compounding is a usefulness question.
- Immune defence test — reframe from "is this idea protecting itself from scrutiny?" to "is this meme preventing me from evaluating its usefulness?" Same test, better reason to care.
- Decomposition test — reframe from "which components are true?" to "which components produce results?" A memeplex bundles useful and useless memes together. Decompose and test each on output, not on truth value.
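The decomposition test is the most mechanical of the four, so here's a minimal sketch of what "decompose and test each on output" could look like. The memeplex, its components, and the scores are made up for illustration; the only real content is the sorting rule — keep what produces results, drop what doesn't.

```python
# Hypothetical sketch of the decomposition test: split a memeplex into
# component memes and score each on output, not truth value.
# The components and numbers below are invented examples.

memeplex = {
    "work hard at your craft": {"results_per_month": 3},
    "success means a bigger house than your neighbour's": {"results_per_month": 0},
    "never show weakness": {"results_per_month": 0},
}

keep = {name for name, stats in memeplex.items() if stats["results_per_month"] > 0}
drop = set(memeplex) - keep

print("keep:", sorted(keep))  # → keep: ['work hard at your craft']
print("drop:", sorted(drop))
```

The bundling is the trap: the useless components ride along on the credibility of the useful one. Decomposition breaks the bundle so each component has to justify its own CPU time.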
The audit question changes
Old question: Are my beliefs correct?
New question: Are the memes running my System 1 producing results that compound in the direction I want?
The second question is more actionable because it has observable outputs. You can look at what your daily defaults actually produce over six months. You can't easily verify whether your beliefs correspond to reality — that's philosophy. But you can check whether your heuristics are generating the outcomes you care about. That's measurement.
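To show why the second question is measurable where the first isn't, here's a toy arithmetic sketch of "results that compound." The heuristics and numbers are invented; the point is only the shape of the curves — a default whose output builds on last month's diverges from a flat one within six months, and that divergence is observable.

```python
# Toy version of the new audit question: do my defaults produce outputs
# that compound over six months? All numbers are made up for illustration.

def compounded(output_per_month, growth_rate, months=6):
    """Total output when each month's result builds on the last (geometric series)."""
    return sum(output_per_month * (1 + growth_rate) ** m for m in range(months))

# A heuristic that compounds (skills build on skills) vs. one that doesn't.
skill_practice = compounded(output_per_month=10, growth_rate=0.10)
news_refreshing = compounded(output_per_month=10, growth_rate=0.0)

print(round(skill_practice, 1))   # → 77.2
print(round(news_refreshing, 1))  # → 60.0
```

Same monthly effort, different six-month totals — and the gap only widens with the horizon. That gap is something you can check against your own calendar; "do my beliefs correspond to reality" is not.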
This is what the Kresy families were doing, whether they knew it or not. Not asking "is this true?" but "does this work?" And the answer compounded across three generations.