7 Comments
author

A few things that didn't fit:

===

Investigative journalism is somewhat its own category, since the “new” thing being reported actually happened some time ago and what’s new is that the journalist has just uncovered it. If they’re the only one who has, and they don’t expect to be scooped, they often have the time they need to be thorough and to consult/incorporate real expertise. This is one reason even journalism critics acknowledge that a lot of investigative journalism is outstanding.

===

I think Clubhouse is a very clever app, and that they certainly mean to be democratic, and that gating access was a legitimate growth strategy. But as the gates come down they'll face the classic Eternal September problem. I'm optimistic that they'll come up with creative solutions there. That said, we need a solution that incorporates everything great about Clubhouse but goes beyond it. Clubhouse makes conversations low-latency. How do we make good explanations low-latency?

===

Things that should have good public explanations of the type described above, prepared in advance as general public education and then linked to / worked into current news stories as needed:

- Short-selling (why it exists, how it works, the debate around how it gets abused and what sensible reform might look like, etc)

- High-frequency trading (why it exists, what it does / doesn't do, what guardrails we have in place, whether they're good enough, etc)


It's a great post, and a laudable idea. While I fully agree with the necessity of a strong explanatory Wikipedia (which Vox tried to build too), I have a contrary view on your latency point.

The issue I find is that our action feedback loops are much faster now for a whole variety of reasons. Whether it's fake news spreading on Twitter or a meme stock spreading on Reddit or even VCs FOMO-ing over the next great startup, they all fall prey to the low latency issue.

While it might seem that the ideal counter-narrative is to also have a "good" speedy feedback loop to counter the "bad" one, I'm not sure that's feasible. For one thing, the existing narratives are simplistic and moralistic, which helps them plug into predefined categories in people's minds (Wall St vs. the little guy). For another, at a sufficient level of abstraction, explanations become a question of which expert to trust, and that runs into mood affiliation. So while there might be excellent explanations of the phenomenon by Matt Levine at Bloomberg, that's not the authority people are trusting when they take the action.

I think the answer has to include some canonical sources we can believe, which is itself hard, plus introducing a bit more friction into our frictionless technology world so we're not compelled to act without following the thought chain from information -> analysis -> action. Our views on tech have gotten warped: we prize easy user experience over all other priorities, and so we make unilateral action, egged on by false info, extremely easy. Our System 2 brains, the ones Kahneman described, don't get a chance to engage.

As you can see, it's a topic that's been interesting me for a while. I've explored it with respect to the difficulties hierarchical organisations have in dealing with incoming information, which you might find interesting - https://www.strangeloopcanon.com/p/hierarchical-growth-trade-offs

And lastly, very happy to take you up on the offer and be helpful wherever I can.

author

Agree with lots of this. But basically my thinking is:

- The only explanatory institution with a decent feedback loop is Wikipedia (and even there it isn't great -- UI/UX isn't inviting, lots of friction, low incentives to edit)

- Feedback loops are what drive trust on long timelines, as good ones make sure the best input gets factored in

- If you can get "wow this meme take is so obviously wrong" explanations out there fast enough (i.e., before people have doubled down / bought in / turned memes into mobs), you can both prevent some mob-formation and train some people to look for that explanation in that place next time (both for the reward of being right and to avoid the cost of being wrong)

- Ergo the friction we increase isn't technical, but that little internal sense of "maybe I should check x before I join in here"

- If you get the right people to increase that friction, you can induce a non-trivial amount of cultural change, which in turn reinforces the quality of the feedback loop (more people committed to contributing, and doing so at a fast clip)

- So while truth is still higher-latency than misinformation, the gap gets a bit narrower, and you maybe induce people to voluntarily add just enough mental friction not to go full mob too fast

Now this may not work! Lots of potential break-points etc. But if a super-powered version of Snopes existed for each brewing mob belief, and did so fast enough, and was backed by a comparable (or better) level of trust, it isn't *crazy* to think we could move the needle a bit. And at the end of the day the upside of any progress here is worth orders of magnitude more than the labor required to try. So why not give it a go.


Always in favour of smart folks trying to tackle the hard problems, so I'll be rooting for this! It's easy to destroy an idea, especially early on, so the fragility is very much noted too! You don't know if you're Don Quixote or Lancelot until you swing the sword...

What comes to mind (influenced heavily by what I'd like) is a sort of Wikipedia, but where the events' causal connections are actually shown/linked, so you can follow a chain of logic visually from argument to analysis to data from other arguments, and so on. Wikipedia is wonderful for facts, but less so for context, as that's buried in a thousand blue links.
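To make the shape of that concrete, here's a minimal sketch of such a claim graph in Python. The node contents and the "supported by" relation are purely illustrative assumptions (borrowing the short-selling example from earlier in the thread), not anything Wikipedia actually exposes:

```python
# A tiny typed graph: walk a chain of logic from argument -> analysis -> data
# instead of chasing blue links.
from collections import defaultdict

# Each node is labelled with what kind of thing it is (content is illustrative).
nodes = {
    "Short-selling needs reform": "argument",
    "Abusive shorting distorts prices": "analysis",
    "Public fail-to-deliver filings": "data",
}

# Directed "is supported by" edges.
supported_by = defaultdict(list)
supported_by["Short-selling needs reform"].append("Abusive shorting distorts prices")
supported_by["Abusive shorting distorts prices"].append("Public fail-to-deliver filings")

def walk(claim, depth=0):
    """Print a claim and everything supporting it, depth-first."""
    print("  " * depth + f"[{nodes[claim]}] {claim}")
    for source in supported_by[claim]:
        walk(source, depth + 1)

walk("Short-selling needs reform")
```

A real version would obviously need sourcing, versioning and dispute-handling; the point is just that "context" can be a first-class, navigable structure rather than prose.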


Nice idea.


Isn't this Vox and the card-stack model they moved away from? I guess if the goal is to shame the NYT, supplement them, or out-eyeball them rather than take their lunch money, the investment and hence the investor expectations can be correspondingly lower, and hence more sustainable? Anyway, I'm all for the idea, but I think it would be good to think hard about why Vox's efforts at this didn't work out.


I'm loving the idea.

I'm not sure how easy it would be to recruit people to work on this. Of course, Wikipedia does it. But how would you start such a thing? And how would you keep people motivated to put their time and effort into it, especially since, in the beginning, you'll most likely be ignored?

I'm thinking it might be easier if it was a combination of tech + people. Meaning: a bot that automatically aggregates sources on the same story. It builds a timeline of when each publication picks up the story. And it might even be able to identify people, places and dates. Based on that information, you'll have an up-to-date dashboard for any big story.

Pulling up headlines, mapping the spread of a story, outlining differences, pulling up related tweets... these are all things a computer can do. And once that grunt work is done, human contributors can do something with it. I guess it would be kinda inviting to have all that data presented to you. You'd almost feel stupid if you didn't dig in. That's at least how I'm imagining it :-)
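Just to make that concrete, here's a rough sketch of the grunt-work layer. Everything in it is illustrative: the feed items are hard-coded rather than pulled from real feeds, the headlines are made up, and the people/places/dates step assumes spaCy's small English model is installed:

```python
# Sketch of the "grunt work" bot: group coverage of one story, build a
# pickup timeline per outlet, and extract named entities from headlines.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
from collections import defaultdict
from datetime import date

import spacy

nlp = spacy.load("en_core_web_sm")

# In reality these would come from RSS feeds or a news API; hard-coded here.
items = [
    {"outlet": "Outlet A", "published": date(2021, 1, 27),
     "headline": "Reddit traders squeeze short sellers in GameStop rally"},
    {"outlet": "Outlet B", "published": date(2021, 1, 28),
     "headline": "Brokers restrict GameStop trading as squeeze continues"},
]

# Timeline: which outlets picked the story up on which day.
timeline = defaultdict(list)
for item in items:
    timeline[item["published"]].append(item["outlet"])

for day in sorted(timeline):
    print(day, "->", ", ".join(timeline[day]))

# Named entities (people, places, orgs, dates) mentioned across the headlines.
entities = defaultdict(set)
for item in items:
    for ent in nlp(item["headline"]).ents:
        entities[ent.label_].add(ent.text)

print({label: sorted(texts) for label, texts in entities.items()})
```

The hard part this skips is the clustering step: deciding which items are actually "the same story". But even this much would give human contributors a shared scaffold to work from.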
