A Recommendation
Daniel Ptashny—a friend, foreign policy analyst, and serial nomad—has just written his first standalone piece for his revamped substack, Obscurity Complex. As I suspect readers here—and especially any who subscribed via our mutual pal Habib Fanny—will find Daniel a good follow, indulge me a short pitch.
His debut article grew out of a debate we had in early 2022. I’d grown fairly confident that Putin was going to order a full invasion of Ukraine. Daniel was less convinced.
In the months after the war broke out, he posed a question to me: why don’t we go there? Surely if there was something for us to learn about our blind spots, the best thing to do was just go and ask and see.
And so we did.
During our second reporting trip there, this past December, I floated the idea to him of writing a retrospective. What had he gotten wrong in 2022? What had he gotten right? What had he learned?
But lest this come across as point-scoring by an old man, let me make a few things clear: even at just 23, Daniel has travelled to more countries than me (he’s approaching 60 of them!), speaks more languages than I could ever hope to learn, and has become my personal ChatGPT for understanding the complex political histories of the places I visit. His breadth of knowledge frequently astounds me.
He was a bit wrong once, yes. And he will be again. But he’ll be less wrong and for less time than almost anyone else I know. He also believes that analysts should learn deeply from voices on the ground. He doesn’t just read about places. He goes to them—including the dusty little towns that others fly over. And he talks to people. And he listens. And he has a knack for knowing just how much to update his priors after each conversation. It’s not a gift, or at least that’s not all it is. It’s a cultivated habit.
If you enjoy learning about the world in its complexity, and gaining a rich first-person view of places and perspectives you otherwise might not hear much about, I think your curiosity will be well-rewarded.
A link to his debut: I Was Wrong About Ukraine.
PS - Daniel and I also have one more joint piece from our reporting trip still to publish in the coming weeks: an oral account of the occupation of Bucha, and of the psychological fallout over the three years since.
An Update
Last September, I announced grand plans of working through a list of faster and less ambitious pieces. But by the time I’d exited Ukraine and could return to that list, the world had changed. All my feeds were dominated by two things: AI and DOGE.[1] And the more I dove into both—and the more I asked how my friends and colleagues were parsing the news they’d read—the more I felt compelled to flip my focus back for a bit.
I ended up drafting half a dozen pieces, but kept running into the same problem: the news cycle was moving so fast that by the time I’d done my due diligence I was three stories behind. On a deeper level I also felt that focusing on the immediate happenings wasn’t likely to really help anyone appreciate the larger and more important developing picture. So I decided to pull back my lens a bit. Instead of indexing on just recent events and coverage, I switched to working backwards from what current trendlines suggest the world will look like in the near future—particularly by the time the next US elections roll around in November 2026.
Unfortunately what grew out of that exercise was some 25,000 words, which even for me is a grossly unpublishable length. So I’ve been working to chop it down into a few lean standalone pieces, covering:
The argument against AI progress years ago was that a plateau was coming. Not only has this not happened, but cost curves and overall abilities are improving faster than ever. Can these trends hold? And what will the world look like in 18 months if they keep even close to the current pace?
Technologists—and in particular those who run the major AI labs—are gaining enormous power over our political systems. Is this a blip, or a new era? If the latter, how is this likely to shift what candidates focus on, how voters approach the ballot box, and what governments actually do?
More narrowly, what’s going on with DOGE? Is it really just a frenetic cost-cutting campaign, or is there a second playbook there that most still aren’t seeing or taking seriously? And if it does achieve its ambitions, what might the aftermath look like (for better and worse)?
Democracies require strong social knowledge systems. The problem isn’t that strong democracies die in darkness. It’s that they die in divergence. Almost all that’s happening now is happening in plain view. There’s just a growing gap between reality and our common understandings of it. Or more accurately, there are gaps. We’re drifting as much from reality as from each other—not just into broad right vs. left political camps, but into increasingly discrete infospheres. How will AI affect this? Can it help us? Can we help it?
While I’m still iffy on how many pieces I’ll break this into, the first should drop next week. In the interim I’m going to pause all paid subscriptions for a bit. While I hope that what I’ve been working on will be worth the wait, I always want to be fair about long inbox breaks. I’ll give notice before unpausing them again.
An Essay
As a sort of preface for this coming series, I published a new essay elsewhere a few days ago. It’s a spiritual sequel to a past piece I published here in May 2023 on AI and journalism. This one is about AI and creative work in a larger sense.
The gist: demand is what informs supply. If we prefer a certain bent in what we consume (whether that be novels or journalism or influencer content), we’ll get a lot more of it. And given that attention is finite, the more we get of that, the less we get of anything else. While there’s been much public handwringing in recent years about AI safety in terms of avoiding some Skynet takeover, far less has been written about how powerful AI tools will favor creators who give people precisely what they want.
What do we want? Not in terms of the values we might list given enough time and thought, but in terms of revealed preferences in what we actually consume? My sense is that we’re all (including me!) far more ok with slop[2] than we like to admit, and that the slop tsunami is still cresting—which will further shift demand away from the sort of content that our better angels might prefer. This seems important.
Anyone interested can read the essay here: The Tower of Babel Revisited. (It’s hosted on my other substack, To Hear God Laugh. I post occasional free essays there that don’t really fit the mission here. This particular one was 50-50, so I figured better to mention it in this larger package versus pushing it directly to y’all.)
More to come soon.
Jeremy
[1] Though obviously also Trump and his non-DOGE stuff. But there’s ample reporting covering all that from both sides, and it’s not obvious to me that I have anything to add to those voices. While a lot is being hotly debated, I feel comparatively little is misunderstood. There are a few small edge stories I’ll work into the longer series, but it’s mostly about AI and DOGE.
[2] This word means different things to different people. Though it’s now used most often as a derivative of e.g. pig slop to mean something like “that which isn’t fit for beings of higher taste and consciousness”, I like to think it still keeps some of its historical relationship with “sloppy”—i.e. it’s unfit for us, in the sense of a healthy diet, because it was produced with a careless indifference to any mental or spiritual nutrition. A bit of slop, like a bit of junk food, is ok. It has its place. What matters is that we approach it with collective awareness.