When and How to Trust the Experts

On credentialism, conflicts of interest, and commercial aviation

Know someone who might like Capital Gains? Use the referral program to gain access to my database of book reviews (1), an invite to the Capital Gains Discord (2), stickers (10), and a mug (25). Scroll to the bottom of the email version of this edition or subscribe to get your referral link!

A few weeks ago, The Verge posted an understated, funny headline: "Google says a typical AI text prompt only uses 5 drops of water — experts say that's misleading." Like most journalistic dishonesty, it's technically true. Google said one thing, on its official site, and The Verge called up some experts on that topic who pointed out that Google was shading the truth. But here's a more strictly factual version of the same headline: "Experts at Google say a typical AI text prompt only uses 5 drops of water - other people with a bit less expertise disagree." Now it's a clunky, confusing headline. You can't even tell which side is supposed to be right!

And that's a big improvement. You shouldn't be dead certain about some issue just because a blog post highlighted one side's economic conflicts and the other side's expertise on some factual matter. Instead, you need to weigh some combination of the underlying facts and, since you probably won't be the world's leading expert on every single topic you ever read about, a model of who's more likely to lie to you, and how.

And part of that is seeing the writing that you're reading as the result of a process that did not start with the question "How do we make you, the reader, maximally well-informed?" That's obviously part of the motivation, but it synergizes with other motivations like: "How do I, the person running The Verge, maximize shareholder value or at least keep this business going?" or "How do I, a journalist, make rent this month?" That model works even if you phrase the goals nobly—asking about the maximum good you can do for public discourse while not starving. And it even works if you phrase them cynically! The output is just as good if the underlying motivation is a desire to humiliate those losers at TechCrunch and Business Insider who think they have any business telling people what to think about tech.1

As it turns out, being reasonably truthful is actually a decent way to accomplish these goals, but different people will have different ideas about where to draw the line between editorializing and giving readers needed context. This isn't easy; there are details one person considers extraneous that another thinks reframe the entire conversation.

There's a paradox of journalism where 1) journalists absolutely love reading and writing about the field of journalism, but 2) for the purpose of any given straight news story, the idea of journalists as fallible human beings who may themselves have complicated incentives and biases fades into the background. Gell-Mann amnesia is real, and it's a sanity-preserving feature of human cognition. If you're reading a news story about a topic you're semi-interested in, but not an expert on, you simply can't function if the process involves five minutes of reading the content and five hours of investigating the writer's credentials. You just have to take some of it on faith.

Debates about whether or not to trust experts are really a meta-debate over who gets to decide who counts as an expert. There are, of course, plenty of sloppy people who will put scare quotes around "expert" when they decide to treat an illness by chugging apple cider vinegar instead of taking a pill that fixes it, but you really should put the word "expert" in quotation marks in these debates, to emphasize that the question is not so much whether expertise exists as who has it. That question only gets more important as the world gets more complicated. The pro-expertise faction is correct to say that you're Trusting the Experts every time you take a flight, go to the doctor, consult a lawyer, etc. But when you read the news, you're trusting individual journalists to have the right taste in who knows the most, and in who's incentivized to be honest.

In the Verge piece on water usage, the first expert they cite is a professor at UC Riverside who publishes a lot on the specific topic of AI energy consumption. The other is a PhD candidate who's been writing about the negative externalities of tech, including the energy consumption of crypto, for a long time. Google's paper has a dozen authors, including David Patterson, the guy who literally coined the term RISC; Jeff Dean, who got into deep neural networks a decade and a half ago and has made huge contributions across the stack; and James Manyika (Oxford! JPL (briefly)! McKinsey! Reporting directly to Sundar Pichai with a mandate to oversee both research and the impact Google has on the wider world!). The Google paper is absolutely stacked with experts.

And, of course, every one of those experts is heavily compensated in Google stock, probably identifies with Google's mission, wants to feel like they're being helpful to the world, etc. They're also going to be inclined to take a Google's-eye-view of things, and to focus on what they can actually measure rather than what they'd have to estimate.

So, of course, while these people are quite smart, and while they can either run a query or send an email to find out exactly how much water Google used and exactly how many Gemini queries that water supported, they have natural, unavoidable biases. But so do the other experts! An academic whose specialty is the understated negative externalities of tech is in a pretty bad position, career-wise, when those externalities go away. Someone who's been blogging about datacenter energy use for a decade will have to find something else to talk about if datacenters get too much more efficient. In some ways, the outside experts have more of a conflict, because the skills that put you in a position to write a paper like the one Google produced are more transferable.

The academic experts are also going to tend to focus on actual or perceived downsides to what people in industry are doing. What would it even mean to be an expert in the nonexistence or insignificance of some phenomenon? There's a market for books like Not the End of the World or Everything Bad is Good for You, but that market has to exist as a complement to a larger market for the opposite point of view (Reason is actually very good on this beat, if you want to know that cops don't overdose on fentanyl from briefly touching it, or that microplastics aren't that big a deal, or whatever). You wouldn't read a book called Liechtenstein Does Not Have a Master Plan to Conquer the World and Enslave Us All, or Making a Pun Won't Give You Ebola, because you had zero reason to worry about those things in the first place.

It would be very tidy if, in the end, the Verge experts were annoying alarmists and Team Google was right after all. But, at least based on my read, the experts did in fact wield their expertise well: the Google paper provides a technically correct answer to the question of how much water is used in a datacenter, but doesn't include the water use required to produce the energy used in the query. Given that AI use will be an increasingly material contributor to overall US energy consumption unless something really wild happens on the supply or demand side, that does understate the impact, though it's accurate about the part of the impact that Google can actually control. This paper, cited by Google's own researchers, estimates that 87% of the water used to train GPT-3 was off-site rather than in the datacenter itself. Excluding that off-site water is pretty defensible, just from the standpoint of usefully answering a question—if the number you know is smaller than the margin of error on the number you don't, adding the two together is not especially useful. But the paper does estimate energy use and the attendant carbon emissions. Again, all of this is fair play by Google, and explicitly laid out in the paper. They could have been more direct about it in the blog post, but that would be tacking a big question mark onto the result.
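To make the scale of that omission concrete, here's a back-of-the-envelope sketch. The big assumption, flagged loudly: the ~87% off-site share was estimated for GPT-3 training, and no equivalent split has been published for a single Gemini prompt, so carrying it over to inference is purely illustrative.

```python
# Back-of-the-envelope only. Assumption: the ~87% off-site water share
# estimated for GPT-3 *training* also holds for a single inference query;
# the true split for Gemini prompts is not public.
onsite_drops = 5        # Google's headline figure: on-site water per text prompt
offsite_share = 0.87    # off-site share of total water (training-time estimate)

# If 5 drops is only the on-site 13%, the implied all-in figure is much larger.
implied_total = onsite_drops / (1 - offsite_share)
print(f"Implied total water per prompt: ~{implied_total:.0f} drops")  # ~38 drops
```

If that assumption held even roughly, the headline number would be understated by a factor of seven or so, which is why the outside experts called it misleading rather than wrong.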

Experts show their work, and they tend to be familiar with the relevant academic literature. If you're worried about the death of expertise, and you know that everyone who's well-informed has conflicts and incentives, you'll have to favor the kinds of experts who lay out the details of their argument.

The Diff is implicitly writing about expertise all the time, but sometimes it's more explicit.

Share Capital Gains

Subscribed readers can participate in our referral program! If you're not already subscribed, click the button below and we'll email you your link; if you are already subscribed, you can find your referral link in the email version of this edition.

1  For transparency, Capital Gains/The Diff is about 70% motivated by the desire to make money, 20% curiosity/fun, 10% resentment at the quality of coverage at certain large publications, whose influence on the general discourse and public policy I’d like to reduce. So it's mostly a money-grab. But since I intend to be grabbing money over a long period, and to do so increasingly through getting onto the right cap tables as early as possible, the dollar-maximizing move is to be as helpful and informative as possible so there's a big consumer surplus I can monetize over time. To readers who stick to low-priced top-of-funnel offerings like the newsletter: you're welcome! And to the ones who subsidize them through Anomaly, Diff Jobs, etc.: thanks!
