
AI & The Myth Of The Rational Reader
If readers hate friction more than they value true insight, what's going to happen to the business of journalism as AI brings friction towards zero?
Here’s this essay in a nutshell: in a theoretical world in which readers overwhelmingly prefer content that merely playacts as informative, how would we expect journalists and nonfic writers to respond? How many would pass up the traffic of the many to write for the few? How many publishers would even allow them to try?
Now imagine that AI were to make pseudo-informative content vastly cheaper and faster to produce. Would newspapers not be at least tempted? What exactly would their business case be for opting for slower and more expensive supply if their customers not only didn’t value it much but actively preferred the other stuff?
My argument: this is more or less our world, and the mismatch between the staggering implications of this dynamic and the level of attention we actually give it explains why we’re still going backwards on solving the credibility crisis. And AI, despite its nearly magical properties, is going to make this much trickier.
Note: I mean nothing but deep respect to my colleagues working on AI. Their products are of great future importance to writers, and are already of real utility both to some forms of commercial writing and to a dozen other fields not in scope here. My concern is purely how these products are used, and with what expectations.
Also, a brief update to my last post about Ukraine: I did go, for two weeks. 100% of your donations were put to fabulous charitable use. I look forward to sharing more details about that—along with some other local stories/interviews—over coming weeks.
Setting the scene
One simplistic story of the history of economic thought is that it was long premised on a very mistaken assumption: that consumers approached their decisions rationally. And then Tversky and Kahneman et al came along and pointed out “lmao no they absolutely do not”, after which we began cataloging and accommodating the lengthy list of biases that actually drive median consumer behaviour. Which is to say that we began allowing humans to be exactly what we are: distracted, status-thirsty, inclined to lazy pattern-matching, largely rubbish at math, and far, far more susceptible to manipulation than we’d like to believe.

Now, the rub: what if we’ve been applying a rhyming assumption to readers? What if most readers don’t approach most texts with a rational lens at all? The implications—for journalists, for writers, and for the rise of AI—would be hard to overstate.
While we’ll cover each of those three contexts here, my main interest is looking at how the last one interacts with the first two. AI models like GPT-4, despite their utility in other areas, mostly offer just one thing to publishers today: significant reductions in the time and cost of producing writing that’s clear, compelling, and not at all aligned with human progress.
Of course, even if true, this would be a largely neutral fact if most readers simply discounted this kind of writing. But what if they don’t? And what if they can’t?
Progress-useful writing
As a note on terminology, in my first draft I suggested that all writing fell into one of two buckets: (1) “useful”, or (2) everything else. But this was unfair in suggesting that most forms of writing are useless in some universal sense, which they surely aren’t. So I’ve swapped to the uglier but more precise “progress-useful”, which we might define by inverting Harry Frankfurt’s famous description of bullshit: ie. writing is progress-useful when it’s “germane to the enterprise of describing reality”; when it sheds the clarifying light that makes progress more likely, or at least possible.
Now, obviously lots of writing isn't progress-useful in this sense. Take undergrad essays. They’re a necessary step in producing progress-useful writers, but don’t in themselves often add anything useful to the stock of human understanding. Texts generated by current AI models are largely in this phase now. They’re designed to be plausible and gist-y, not progress-useful. One day perhaps they will rise to the latter, and I applaud those devoting their brilliance to that end. But what they offer today is largely just the form—really the illusion—of competence.
Rationality and writing
My general theory of the world goes something like:
The three main dynamics that inhibit human progress are (a) the hard limits of scientific possibility, (b) human frailty, (c) the poverty of our explanations.
A good explanation is just a correct and artful summing of the best understanding of some position or subject.
As we increase the richness and power of our explanations, we increase what humanity is capable of doing.
Society thus improves in some rough proportion to the creation—and consumption—of particularly powerful (ie. progress-useful) explanations.
Societies that produce/consume mostly bad explanations are a bit fucked.
Societies whose readers can’t tell a good explanation from a bad one will have a very hard time getting un-fucked.
If we accept this model as true enough, the obvious inference is that writers and readers share an overwhelming incentive to produce and consume progress-useful explanations. And in even a mostly rational world there’s a strong feedback loop helping here: when someone puts out a bad explanation, rational readers will discount both it and them. And if future output doesn’t improve, eventually their work will just be ignored, with the rewards shifting to those producing better explanations.
But what happens when most readers have no interest in judging whether writing is progress-useful? Or when they begin preferring writing that isn’t? With less reader attention comes less money. And with less money, how do you improve output?
Journalism’s story
There’s a conventional story that journalism used to be better because of the power of bundling. You could almost give away each newspaper by padding it with cheap filler like editorials, classifieds, TV timetables, comic strips, and so on. While this extra stuff was rarely progress-useful, it’s what drew the eyeballs and the ad dollars, which is what funded the real journalism. By packaging it all together, newspapers could produce civic good for engaged readers while still appealing to those who didn’t particularly value the informational stuff. While a bit of a sad compromise perhaps, it was still an enlightened one. So long as the editorials didn’t undermine the news, and so long as the news was well told, it was still net beneficial.
The reason I call this a conventional story though is that this was never quite true. Newspapers, like any product, have always been sensitive to scale. Very sensitive. Given that content is far more expensive than marginal distribution, there’s a powerful incentive for publishers to optimize content for the largest audience possible. In a free market, the spoils go to those who make the most readers happy, even if what makes them happy is the informational equivalent of a Krispy Kreme donut. And if you don’t offer it, your competitor will. And they will crush you.
The career of Edward R. Murrow, patron saint of journalists, illustrates what I mean here. Just four years after he used his platform to rally America against McCarthyism, he gave his most famous address, which was really a cultural concession speech. His news program, once appointment viewing, had been smashed out of rotation by the mass appeal of quiz shows and the desires of scale-conscious advertisers. As he put it:
I am frightened by the imbalance, the constant striving to reach the largest possible audience for everything; by the absence of a sustained study of the state of the nation. […] But let us not shoot the wrong piano player. Do not be deluded into believing that the titular heads of the networks control what appears on their networks. They all have better taste. All are responsible to stockholders, and in my experience all are honorable men. But they must schedule what they can sell in the public market.
Which is to say that the problem in his day, as now, was consumers. They wanted what they wanted, not what was good for them. And getting what they wanted—and only what they wanted—has gotten easier ever since. Indeed, where Murrow and his peers failed most was in not quite following this line of thinking to its natural conclusion.
When readers can’t read
An upsetting but foundational fact about the world: the general reading public is only literate in quite a narrow sense of the word.
The data on global literacy (and numeracy) is so extreme that it’s a bit hard to take seriously in its full implications. While this story is partially told via stats—like that roughly half of American adults can’t understand a book written at an 8th-grade level—the deeper issue is where literacy stops as you go up the ladder:
...only 2 percent of adults [across OECD countries] performed at Level 5 on many of the variables in the literacy and numeracy scales
Or, to put this in chart form for literacy specifically:

[Chart: distribution of adult literacy scores by proficiency level]

Or if you want to see it globally (looking at the grey column here, though the blue column measured roughly the same thing):

[Chart: global distribution of adult literacy scores by level]
Indeed, so few people score at Level 5 that reporting now generally just collapses the top two levels together. But even when you combine them you still end up with a substantially smaller group than those stuck at Level 1 or below—ie. those who have little hope of meaningfully grappling with written text at all.
So before we even get into rationality as a matter of reader engagement, there’s just the capacity itself. Not that readers aren’t smart enough in some general sense; it’s more a communal failure to develop specific, supremely important muscles. But the upshot is that if you want to write for audiences that can deeply reason with your text, you’re already excluding a supermajority of the public. While the masses might accept or reject your text as a whole, what they won’t do with it is discern and swallow only the progress-useful bits. Because by and large they can’t. If that winnowing is important, it has to be solved for on the supply side.
The people perish
The hard sciences and the technologies they make possible all progress somewhat mechanically via the stacking of good explanations—which have now brought us to heights fully unimaginable to our ancestors. This upward scaffolding has gotten us to the literal moon, and to within decades of wonders that even we in our far more privileged position can’t quite fully imagine ourselves.
This was “easy” for the hard sciences for a simple reason: bad science doesn’t work, and it’s very expensive to pretend otherwise. iPhones either work or they don’t, and if none of them do then none of Apple’s marketing prowess will help much. Likewise, if your base explanation is that the earth is flat, no amount of further explanation-stacking will lead to you developing GPS, satellite internet, or spaceflight. It’s a self-reinforcing loop: good science > technological progress > $$$ > more good science. Every alternative is more or less a cargo cult, which can only carry on for so long before people realize the planes simply aren’t coming.
But step outside the hard sciences and…oh boy. Take, say, political science. I’m sympathetic to Fukuyama’s stance that we’ve already gotten pretty close to the pinnacle of that particular explanation stack with the refinement of liberal-capitalist democracy. While we can quibble about eg. the appropriate scope of state welfare, even that’s become sort of a last-mile problem in serious discussions—still real and tricky no doubt, but long situated within a general agreement about what works and what doesn’t that’s much wider and stabler than most pundits want to let on.
Even so, if you look at what actual politicians are saying—and winning elections by saying—you’d think the whole explanation stack is up for debate. This isn’t because of some new explosion of genuinely competing thought. It’s because politicians are fighting for mandates from a public that’s too illiterate to really judge the debate.
This is the consequence of giving people what they choose when they choose badly. Democracy depends on informed engagement—on firing up people with holy zeal to consume and wrestle with progress-useful content. But doing so is exhausting, and it’s much easier to turn the reins of our information-processing over to influencers of one stripe or another, whom we for some reason trust to do work we won’t do ourselves.
Journalism’s dilemma
In a world in which speed is god, assigning a breaking story to a subject-expert journalist and giving them enough time and resources to reach true depth means missing the majority attention window. And, well, oops, most newspapers have no idea how to monetize in that dead zone.
The natural result: the degree to which journalism is tied to tight distribution windows is roughly the degree to which it will propagate bullshit. The output that comes with speed just isn’t germane to the enterprise of improving our understanding of reality. This journalism will get some facts right, but many wrong, with reporters and editors being either largely or entirely reliant on Google and available "experts" to affirm the accuracy and relevance of the arguments they advance. Which, lol.

That this leads to bad outcomes should be no shock. There’s just a fundamental mismatch between what readers want and what the system can supply at the speed desired. But bad journalism persists anyway because most readers aren’t approaching their news consumption as rational consumers. They rarely have the subject-specific literacy to approach texts critically even if they wanted to. They just know they want it fast, with a certain bent, and with a very low maximum of cognitive friction.
This leaves journalism in quite the pickle! What’s the point of writing to the strict standards of progress-usefulness if only a tiny share of potential readers can even tell the difference? Why take the high road when there’s less traffic up that way?
The role of expert writing
As an aside, let’s consider where AI gets its training data from, and by extension what the max quality of its inputs is. Lots is pulled from newspapers, yes. But lots also ultimately comes from corporate publications. Are those reliable?
My day job is writing for tech companies. And I’m lucky, in that I’m able to be super picky in who I take on as a client. I get to filter for companies that explicitly recruit me to produce progress-useful writing. But even so, sometimes, eek.
An example: an executive at a prominent tech company saw something I’d written and had his recruiters bring me on to produce a series about some important and poorly-understood bit of infrastructure. I agreed, and invested weeks of deep study to ensure that the output would be progress-useful. (While I’m being light on specifics here, what’s important is that virtually anything you can find on this subject via obvious Google searches is wrong in a half-dozen important ways, not having been written by people curious enough to understand the subject deeply themselves. The reason I took this offer was that I really believed the client wanted to buck that trend.)
Anyway:
None of the half-dozen or so people at the company who edited my work understood the subject well either. Which I suppose would have been fine, except that their edits mostly pushed against the output being progress-useful.
It turns out that they wanted something closer to a ticked-off box than something that would enable their customers—who were building atop the infrastructure in question—to gain a progress-useful understanding of the subject.
We parted ways, and I asked for my name to be taken off the final publication.
Crucially, this isn’t an isolated thing. While I’m privileged enough to rarely run into this myself, my understanding from writing peers is that this is the norm. Most corporate writing rooms—and, I sense, newsrooms—are not run by people to whom the progress-useful concept is a top consideration.
Of course, there’s a way in which these editors are the rational ones: they know that most of their readers aren’t interested in investing in real understanding (no matter how artfully or whimsically presented). But in deciding to optimize for the masses, they’re producing content that’s (at best) a little less useful to the minority that does want to understand, and at worst leaves all readers misinformed to some degree.
Sit with that for a minute. If true, this is a deeply, deeply consequential thing. Unless you’re unusually equipped and motivated to find the best content, what you'll find is more likely than not to give you false confidence that you understand a thing.

AI, or prompt-your-own-Wikipedia
Why have AI models like GPT proven far more popular than their creators expected?
Let’s start with speed and breadth. AI models are trained on massive libraries of human writing—some of it good; lots of it trash—and then very cleverly guess at how to best reconstruct those inputs to address the user’s prompt. And they do this shockingly fast, (almost) no matter how obscure the subject. For people who want to scratch a curiosity itch, it’s like instant heroin.
But let’s consider the constraints here. AI models remix the words and forms of their libraries into rough composites, with minimal ability to judge the quality of those inputs, much less to mediate between them in the way required to unlock human progress. While AI can return answers that perhaps seven in ten readers would judge to be a product of human intelligence, all this tells us is that Turing’s Imitation Game may have confused which party’s reasoning powers were more likely to be measured.
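To see that remixing dynamic in miniature, here’s a deliberately toy sketch of my own (an illustration, not how any production model actually works): a bigram model that learns word-to-word continuations from whatever text it’s fed, with no notion of whether that text is true.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which,
# straight from the training text, true and false alike.
def train(corpus: str) -> dict:
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, seed: str, length: int = 10) -> str:
    out = [seed]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # fluent-sounding, quality-blind
    return " ".join(out)

# Feed it a mix of good and bad "explanations" and it happily remixes both.
corpus = "the earth is round . the earth is flat . the moon is far away ."
print(generate(train(corpus), "the"))
```

Real models are incomparably more capable, obviously. But fluency and judgment are different properties, and only the first falls out of remixing.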
Put another way, AI is Wikipedia on demand, without any human input to manage all the nuances, contradictions, and disputes inherent in any summary of knowledge. Is this kind of product educational? It depends on what you imagine the goal of education to be. If just “instilling a general sense of a subject”, then I suppose so. But is that actually useful to progress? If the reader’s reaction is “oh wow there’s way more to this than I thought; I now have a taste, but also a sense of how much I don’t know”, then sure! But lol this is not usually how that goes. Drop into any Reddit thread sometime for a taste. Our world has little need for yet another hubrist newly and vaguely aware of the three competing theories of X; but much need for the person who’s actually wrestled with those theories, and been humbled by them. And, sure, AI could help produce the latter as well as the former. But it won’t.
This isn’t a criticism of AI, mind you. We just need to be extraordinarily mindful of its limitations, and “drink deep, or taste not the Pierian spring”. But if journalists routinely ignore this advice when leaning on Wikipedia and Google, why wouldn’t they lean even harder on something that’s faster and more versatile?
So, why is AI so popular with journalists and writers? It varies. Some are fighting imposter syndrome (or what they like to call writer’s block); some are trying to learn just enough about a subject to meet a deadline; some are trying to 4-Hour Workweek their lives. And, hey, that’s all understandable to a point. But can the use of AI actually help them produce a meaningfully higher quantity of progress-useful writing? No, not really. And anyone who says otherwise is indeed selling you something.
Objections considered
There’s a popular argument that AI is mostly helpful in blitzing early drafts, which smart writers can then edit into some progress-useful product. They’re not delegating thinking to AI, they say, “just some of the menial work”. As with many bits of AI-generated text, this sounds fully plausible right up until you really think about it.
With some allowances for producing abstracts and unavoidable boilerplate, either:
You understand the subject better than AI. If so, just write the more useful and informative thing! If there’s some part of your explanation that’s unimportant enough to delegate to AI, why is it important enough to include?
You and AI understand it the same way. This implies that it’s probably common enough knowledge to just skip by anyway. AI is strongest at regurgitating concepts where it has plenty of consistent training data. If enough of it already exists, don’t you have something more original to be writing?
You understand it less than AI. If so, just walk away from the piece. Any writing you produce here can’t be progress-useful anyway, just by definition.
But it’s easy to choose to see this in other terms. Many like to position it as an arms race pitting John Henry and his hammer against the team with the steam-powered drill. But progress-useful writing isn’t really something that can be sped up. It can only come downstream from useful thought, and must build artfully on that which came before it. There are hard speed limits here.
Now, sure, there are categories of writing for which this isn’t quite true—eg. form emails, some sales copy, bad children’s books, screenplays for Avatar sequels, etc. And maybe future AI models will gain the reasoning powers to do more? It sounds lovely, and I’m certainly cheering for a world with that kind of cognitive abundance! But we're not there today, and installing AI into Google Docs or whatever won't—and can’t—lead to better nonfic writing or journalism. It will just speed things up.

But while we sometimes imagine that faster is always better and that removing friction is a virtue, it’s hard to see this as true with writing—or at least for the type of writing that might enable deeper understanding. Much of the friction in our way there has value, and to skip by it is to (often) confuse movement with progress.
The final boss: audience capture
The hardest thing for any creator is staying true enough to some higher principle to reject the immense temptation to kneel at the altar of product-market fit. What harm can there be in fast if fast sells so well? Indeed, as Murrow found, to hew to our north stars alone is to cede audience share to more alluring competition. But that’s ok! Or at least it’s ok to those who measure their worth by different means—who are ok with audiences coming and going (and whose employers share this generous spirit).
If there’s one thing I know about writing, it’s that nearly every writer I’ve ever met has admitted that their most meaningful work has been among their least popular. While there are delightful exceptions to the rule, for most writers the exceptions come only a handful of times in a career. To sustain an audience, then, we must be relentlessly topical. And to be topical, we must be fast. And to be fast, we must limit how useful and right we can be. There are speed limits all the way down.

Most readers aren’t reading to understand the world. Some are, and god bless them. But most are just solving for boredom or confirmation hunger or a thin type of curiosity. They’re not looking to wrestle with their priors en route to a clearer view of the world; they’re looking to stare at knowledge’s ass as it walks by.
We can write for the general reader’s pleasure, or we can write for the interested reader’s betterment. The latter doesn’t pay quite as well, true. But it can pay pretty alright, especially in the currencies that last.
(Note: Paid subscriptions have been paused since late 2022, and won’t be restarted for a while. I’m trying to get out high-effort pieces here and there without feeling the stress of meeting the expectations implied with subscriber fees.)

Caveating because I haven’t gone deep into the literature. My underinformed sense is that behavioural economics has proven of greater interest to marketers and policy planners than aggregate-data economists. But it’s a complex topic, and my understanding is pretty thin.
Stolen from David Deutsch, though with the addition of human frailty as its own item. I suppose his version is more correct in a meta sense, as better explanations about human frailty should in theory reduce human frailty to whatever minimum is consistent with the laws of physics etc. But I find it more useful to consider it separately.
Taken from the same gloss linked in the prior paragraph, which itself references the 2012 results of the OECD’s Program for the International Assessment of Adult Competencies. For a granular look at what scoring a 4 or 5 on this test entailed, see here. While we can quibble with the study design and how it biased not just against low literacy but against low technical literacy, none of the data I’ve seen from similar studies has told a meaningfully different story. On a 5-level pyramid, those on levels 4 and 5 combined seem to be < 15% even in advanced economies.
Some bright eagle out there will say “aha, but subscriptions solve this!” Except they absolutely do not. For any sufficiently broad publication, the dynamics of “let’s get max eyeballs for ad revenue” and “let’s get max eyeballs for subscription conversion” become very hard to distinguish. Reporters will say “but I don’t even get numbers for my pieces”. And, sure, that’s probably true in some places. But that doesn’t mean those higher up are unaware of the numbers, much less indifferent to them! If you’re regularly coming in late to the attention window and pulling minimal traffic, your days of continuing this practice are going to be very, very numbered at most publications. Saying “ah but my pieces are more accurate because of the delay” will be met with some sympathy but also likely reassignment.
A classic asymmetry is that the “experts” most available—and most interested in being profiled—often have personal agendas, slanted professional interests, and so on. Their motivation(s) tends to be in building their personal brand, defending their team, advancing some thesis, and/or doing favors for their college buddies. The person you really want to talk to, as Michael Lewis put it, is the expert six levels down. But they’re nearly impossible to find on deadline unless you’re writing on a beat you know awfully well.
Increasingly their best bet is probably Substack etc. My sense is that the most progress-useful content is now in personal newsletters. And I rejoice that said writers now have wondrous monetization opportunities! We’re much richer for that being true. But even if their work can reach all their ideal readers, this doesn’t solve for the flood of progress-harmful content being pushed by more accessible outlets (where it can also be indexed by AI). If you wanted to understand, say, stock buybacks, you will have a very, very bad time trying to learn anything useful about them from outlets without hefty paywalls. Or worse you’ll pick the wrong paywalled “expert” and end up even worse off informationally at much higher cost. And there’s precious little guidance to help most readers tell one from another.
I saw Avatar 2 in Kyiv, without English subtitles. Crucially, I don’t understand a word of Ukrainian. Usually this would be…more of an obstacle? But I actually enjoyed freeing my brain to pay attention to the visual and technical artistry, which were magic.
My go-to example here is Money Stuff’s Matt Levine. While his traffic likely soars when he stays topical, he’s a rare unicorn who can do topical well. But this requires a pretty enormous base of existing knowledge, and a well-cultivated network of peer experts to tap very quickly. Precious few writers or journalists have that talent and those advantages.
First, I agree very strongly with the first part, but it's interesting to think of historical examples where this has gone better. It seems like journalism really did improve between Hearst's day and the ’90s NYT, whatever its flaws. How, and why, if not the bundling theory favored by Yglesias et al?

Another case study would be the transition from Abbe Migne's publishing machine (https://en.wikipedia.org/wiki/Jacques_Paul_Migne), which supplied new academic libraries all over the world with subscriptions to horrifically edited and sometimes outright forged texts, to the modern university presses, which, whatever their flaws, seem to do much better. Migne was already a bundler, so that can't be the issue there.

In these two cases, is it that literacy just genuinely went up, perhaps because of public education in the former case and accreditation of tertiary education in the latter? If so, that seems really important to achieve!

Second, I think you're missing some important bits of the AI story.

1. Go look at Ethan Mollick's recent experiments (https://www.oneusefulthing.org/p/it-is-starting-to-get-strange). Lots of niche or local journalism could benefit from more quantitative analysis in a way that isn't financially feasible if you have to hire a data scientist. Lots of journalism could also benefit from better display of its quantitative results (Tufte, etc). Seems like AI is ready to do both, more or less today!

2. Progress-utility isn't only about the leading edge; it's also (at least as much) about catch-up growth: poor countries becoming middle-income, boring industries adopting computers, software verticals moving to the cloud, etc. The audience for work aiming at this is, nearly by definition, not at the upper end of the literacy curve. As you note, such lower-literacy audiences are not very good at pulling in context from other work, transferring knowledge between domains, etc. AI is really good at tasks like (literal) language translation, providing context, and transferring commonplaces into the argot of a different industry or place. These are also extremely progress-useful things to do!