The Poverty Line Trap

It's useful to measure poverty, but its connotation is more stable than its denotation


This piece about updating the poverty line for 2025 spending has been getting some attention. Mike Green, the author, has made some clever points in the past (during the great inverse-vol bubble of 2018, he outlined what would happen in advance). He's also been early to thinking through some of the subtle impacts of index fund investing, though it's always hard to model how other market participants will handle that.

So, smart guy, with lots of variance from one take to another.

This take is that the poverty line was first calculated in 1963, and if you use the same methodology today, you'd get a poverty line of $140k. The basic argument circa 1963 was: we have good data on food prices but not on other products. We know that the average family spends one third of their money on food. And the USDA had put together a report on the minimum cost to feed people of various ages. So, assume that one third of the budget on food is normal, calculate the income that would make that minimum food budget equal to one third of income, and you know the line at which someone is poor enough that they're literally reducing their calorie consumption in order to pay for other necessities.
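The 1963 mechanics are simple enough to sketch in code. The food-budget figures below are hypothetical placeholders, not the USDA's actual numbers; the point is the multiplier:

```python
# Orshansky-style poverty line: assume food is a fixed share of a family's
# budget, then scale the minimum adequate food budget up by that share.
def poverty_line(min_food_budget: float, food_share: float) -> float:
    """Income at which the minimum food budget equals the assumed food share."""
    return min_food_budget / food_share

# With the 1963 assumption that food is a third of spending, the line is
# simply three times the (hypothetical) minimum food budget:
assert round(poverty_line(1_000, 1 / 3)) == 3_000

# Rerun the same formula with today's ~4.9% grocery share and the
# multiplier jumps from 3x to ~20x, which is how the updated line gets so
# large (the food budget here is again a placeholder):
assert round(poverty_line(7_000, 0.049)) == 142_857
```

The formula hasn't changed; the food share has, which is why mechanically re-running it produces a number near $140k.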

As the essay notes, this was something we did at a time of information scarcity, when we just didn't have enough information about overall expenditures. It is, in one sense, a pretty strict definition: even back then, there were plenty of people who'd consider themselves poor even if they weren't going to be hungry for financial reasons.

But now, grocery spending is about 4.9% of household expenditures. Food out of home, which was a quarter of grocery spending in 1963, is actually higher than grocery spending, at 5.5%. And even that understates how much food consumption has shifted from buying and preparing food to paying someone else to prepare it and consuming the finished product, because some of the incremental grocery dollars you spend come from things like buying pre-sliced fruits and vegetables or entire prepared meals. We also snack more than we did historically.1

If grocery spending has been crowded out because we're all meeting our basic needs and spend our incremental income on discretionary purchases, then basically everyone in America has cleared the poverty line. If we're still spending-constrained because there are new needs, or newly-expensive ones, then we can potentially be poorer.

But even in that latter case, we have to be careful. The more of your spending that's absorbed by services, the more your spending turns directly into income for somebody else. We can't all be poor because of the high cost of non-tradable outputs like healthcare and education. Some of us have to be providing those services, and if buying them makes everyone else poor while GDP per capita is rising, then either the providers of those services are getting quite rich indeed, or there are middlemen minting money, and that's where all the wealth goes.

What Green does in this essay is to compare the median household income ($80k) to some estimated median expenditures for a family of four. He comes to a final poverty line—the income below which a family is desperately poor—of $140k. This seems high. The first item is $32k for childcare. Ouch!

But wait! First of all, those childcare expenditures are some combination of daycare, sitters, and tuition. But 87% of school-age children are in public schools. So when we're looking at that average, over the course of 18 years, we have maybe six years of potentially needing full-time care, and then 0.13*12 = 1.56 years of private school tuition per child. Looking at full-time care alone, the numbers only tie out if either a) we're specifically measuring poverty for families with young children, or b) we're assuming that the average cost of daycare or tuition is $76k per child-year, since we only need 7.56 child-years of care for an 18-year childhood. (Also, average earnings for families with kids are slightly higher than the national average, even though wages tend to peak later in people’s careers.)
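The child-years arithmetic above can be checked directly; the only inputs are figures already cited in this piece:

```python
# Reproducing the child-years arithmetic: how many years of an 18-year
# childhood actually require paid care or private tuition?
PUBLIC_SCHOOL_SHARE = 0.87   # share of school-age kids in public schools
SCHOOL_YEARS = 12
PRESCHOOL_YEARS = 6          # years of potential full-time care
CHILDCARE_PER_YEAR = 32_000  # the essay's childcare line item
CHILDHOOD_YEARS = 18

private_school_years = (1 - PUBLIC_SCHOOL_SHARE) * SCHOOL_YEARS  # 1.56
care_child_years = PRESCHOOL_YEARS + private_school_years        # 7.56

# If $32k/year is spent across all 18 years but only 7.56 of those
# child-years need paid care, the implied cost per child-year of care:
implied_cost = CHILDCARE_PER_YEAR * CHILDHOOD_YEARS / care_child_years

assert round(private_school_years, 2) == 1.56
assert round(implied_cost / 1_000) == 76  # ~$76k per child-year
```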

And also, let's wait some more: this family is earning $80k. And we're assuming they need full-time childcare, which implies two full-time earners. They're in the 12% Federal tax bracket and have another 7.65% in payroll taxes. Depending on where they live, they also face state taxes. Paying $32k is plausibly roughly breakeven if both members of the couple earn the same amount and they live in a state without income taxes, but it doesn't take much variance between them at all for that childcare to be a bad deal after taxes—if the split is an entirely plausible $30k/$50k, they're poorer having paid $32k for childcare to enable another parent to work even ignoring the tax impact. (Unless they happen to live in a handful of metro areas with good public transportation—where they'll be earning more, spending more, and very unlikely to have two kids with an $80k household income—the cost of an extra vehicle will probably wipe out these savings.)
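A back-of-the-envelope version of that breakeven, using only the marginal rates cited above (12% federal bracket plus 7.65% payroll, no state income tax, and treating the second earner's whole paycheck at those rates as a simplification):

```python
# Does the second earner's take-home pay clear the childcare bill?
FED_MARGINAL = 0.12
PAYROLL = 0.0765
CHILDCARE = 32_000

def second_earner_net(gross: float) -> float:
    """Take-home pay on the second earner's income, ignoring state taxes."""
    return gross * (1 - FED_MARGINAL - PAYROLL)

# Equal $40k/$40k split: roughly breakeven against $32k of childcare.
assert round(second_earner_net(40_000)) == 32_140

# $30k/$50k split: the lower earner grosses less than the childcare
# costs even before taxes.
assert 30_000 < CHILDCARE
assert second_earner_net(30_000) < CHILDCARE
```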

Now, it's easy to push back against that, but in a way that complicates the picture further. You can note that some people don't want to spend all of their time on childcare, which is absolutely true. But poverty is not the state of having to make tradeoffs that have downsides—that state is the state of existence. There are non-financial tradeoffs, of course; one member of the couple might not want a career interruption that would make it harder to reenter the workforce later. But for that to be an economically rational decision, they have to be operating on the assumption that they'll earn more relative to their childcare costs in the future. In other words, this is a snapshot of someone who looks poorer-on-paper because some of their consumption is actually an investment (in retaining access to their chosen career track) that will yield higher income in the future. It's just very hard to point to someone who is buying full-time childcare in order to have more time to pursue a career in which they expect future raises—or paying for childcare temporarily in order to maintain their position in a career where they expect stable income—and call them poor. You're either looking at someone at a low point in their discretionary spending who is also having more kids than most people of their income level (and more kids than most people in the US will have!), or you're looking at the spending patterns of someone who is sacrificing in one area because they'd prefer to spend their money on something else.

And there's pushback-to-the-pushback: if you graph hourly wages against hours worked, you see that for earners in the bottom half, lower hourly earnings predict fewer hours worked. In communities where underemployment is the norm and multigenerational families are common, there's a sort of informal in-kind welfare state: grandparents babysit, nephews crash on the couch, etc. This is less common among higher earners, but it's another way to state the claim that some consumption of services is market-based at higher incomes and informal at lower incomes, so it describes what kind of spending makes sense as the opportunity cost of your time rises, not what the baseline necessity is.

Other entries in the list are more sensible. Healthcare, for example, really is expensive, though if you're comparing average household income to average healthcare spending, 1) you need to consider that ~60% of working-age households have access to employer-subsidized health insurance, and 2) health expenditures overwhelmingly occur later in life; if you're a peak healthcare spender, it's very unlikely that you're paying for daycare, though your kids may be paying for your grandkids to go to private school. So it’s a duration mismatch. 

As a general rule, you're only thinking clearly about healthcare when someone is calling you a moron for how unsustainable your spending plans are or a monster for depriving people of care. The US is definitely rich enough to provide any one person with the most comprehensive healthcare imaginable, and we don't really have a political culture that would trade more generous health benefits for stricter regulations on unhealthy behavior. A single-payer system coupled with retina scans when you buy snack food and prison sentences for the sale or distribution of combustible tobacco products would drive down healthcare costs, at least in the short term, but would be opposed by basically every mainstream politician. So the politically-tenable choices are: various kinds of quasi-socialism with nightmarishly complicated payment systems, or, well, other kinds of quasi-socialism with different complications instead. The only places that don't have to have some mechanism for rationing healthcare are the ones that don't have any healthcare at all. For everyone else, it's a choice between some people being sick because they couldn't afford healthcare, or people being sick because it's free and there's a waitlist. In some places, if you don't want to die while waiting for treatment, there are other options; in 2023, 4.7% of Canadian deaths were through assisted suicide.

Healthcare is just full of unpleasant tradeoffs, and in general when you see what looks like an easy win, there's going to be someone who is very upset that you took away either their lifestyle subsidy or their income. You may not care about the former—personally, I'm fine with the cost of smoking falling on smokers—but in the case of the latter, you also have to worry about lobbying (and when healthcare providers defend their incomes, they'll tend to use sympathetic healthcare recipients as a human shield).

One of the most unpleasant tradeoffs is that the high fixed cost of research, and the risk-averse nature of the industry, mean that promising new treatments often start out very expensive and sometimes never get cheap. In fact, it's hard to tell what anything costs, given the level of cross-subsidies both at the level of care for individual patients and at the level of ensuring that hospitals provide emergency treatment to people who need it regardless of their ability to pay and still collect enough money to keep the lights on. But even when things do follow the path from expensive to ubiquitous, that means that at the start, their existence makes people feel relatively worse off: they're sick, there's a treatment, they can't afford it. Pharmaceutical companies do try to price-discriminate here, but it's a messy and inconsistent system.

But you can't really buy 1963-level healthcare. Nor would you want to; polio and measles were still common, though vaccination was cutting that risk drastically; Hib (Haemophilus influenzae type b), a bacterial infection that can cause deafness and learning disabilities, did not yet have a vaccine (today, cases in the developed world are almost nonexistent); rotavirus and RSV were putting lots of babies in hospitals; the survival rate for acute lymphoblastic leukemia was ~10% instead of today's ~90%; and infant mortality was four times higher. All of this has a cost; the healthcare system was a lot cheaper when the only option was for more people to die, and that's not the case any more.

So, are you poorer? In some sense, you absolutely are, but it's in the same sense that you're poorer when your boss gives you a raise and a promotion and suddenly you find that you're working longer hours and stressing out more. You can also look backwards: if the level of healthcare the median American pays for is a necessity, and poverty means being unable to afford necessities, then everyone was below the poverty line before these products were invented. But it’s not especially useful to say that the poverty rate is higher than you think, but fortunately has declined from 100%—if you’re tracking a statistic, it’s presumably because you care about the level and the trend, and a statistic that’s fixed at the most extreme reading it can give for the majority of the time series in question just isn’t a very sensitive metric.

Another important point the piece makes concerns housing. If there is a coherent theory by which output grows over time but standards of living decline, it basically has to pass through housing supply restrictions. In that case, the standard-of-living problem comes down to marginal propensity to consume: if the upside from growth accrues to property owners, but they don't spend their gains, then you get a low-growth environment. Housing has gotten more expensive, but we also use a lot more of it than we used to; the average house has 40% more square footage than in 1973, and average household size dropped 17% over that time period. Housing is another category where it's hard to even find the 1963 consumption basket of a smaller dwelling with no air conditioning. Housing policy has pretty clear economics and much trickier politics: homeowners vote more than the renters who can't yet afford a house, and obviously the counterfactual children who don't exist because their parents delayed family formation and had fewer kids due to housing costs are 0% of the electorate. And it's not a problem that can easily be solved with redistribution rather than supply increases. We'd all fight about housing less if it were easy to build more of it.

The piece does raise one very important point, which is that means-testing with cliffs leads to all sorts of crazy distortions, like when Pennsylvania's system meant that a single mom with two children would be able to consume more when earning $29k than she would earning $69k. This basically turns career progression in the bottom half of the distribution into a kind of video game, where you can make a number go up and feel proud of it but get very little real-world upside. You do get the upside of having more control over what you spend money on—if you lose a Section 8 voucher and have cash instead, maybe your preference is to live with roommates and spend that housing money on something more fun, or to save it, or whatever. But you have to clear a high hurdle before you reach the point where making more money makes you better-off.
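The cliff is easy to see with stylized numbers. The schedule below is hypothetical, not Pennsylvania's actual program; it just reproduces the shape of the distortion:

```python
# A stylized benefits cliff: means-tested support vanishes entirely once
# earnings cross a threshold, so total consumption can fall as earnings
# rise. Numbers are illustrative only.
CLIFF = 30_000     # hypothetical eligibility cutoff
BENEFITS = 45_000  # hypothetical combined value of housing/childcare/food support

def consumption(earnings: float) -> float:
    """Earnings plus in-kind benefits, which disappear above the cliff."""
    return earnings + (BENEFITS if earnings <= CLIFF else 0)

# Earning less can mean consuming more, as in the Pennsylvania example:
assert consumption(29_000) > consumption(69_000)
# And a small raise that crosses the cliff makes you worse off:
assert consumption(31_000) < consumption(29_000)
```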

But that's also a function of how taxes and redistribution work—if there's some basic standard of living that everyone needs to get, the two ways to achieve this are 1) a big flat UBI that covers those basics, and progressive taxes to fund it, or 2) means-tested transfers such that we give more to people who earn less and vice-versa, which means we flatten the curve of after-tax income graphed against pretax income. Lowering the effective marginal tax rate of the lowest earners means some combination of transferring less to the poorest and raising taxes on everyone else. And that, once again, runs into the problem that the electorate is not the same as the population: higher earners vote more, and donate much more, so they have a bigger say in how things turn out.
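The arithmetic behind that flattening is just rate-stacking: a benefit that phases out as income rises acts like an extra marginal tax. The phase-out rate below is a hypothetical, not any specific program's:

```python
# Why means-tested phase-outs raise effective marginal rates: if a
# transfer shrinks by 50 cents per extra dollar earned, that phase-out
# stacks on top of statutory taxes.
STATUTORY = 0.12 + 0.0765  # federal bracket + payroll, as cited earlier
PHASE_OUT = 0.50           # hypothetical benefit-reduction rate

def effective_marginal_rate(statutory: float, phase_out: float) -> float:
    """Share of each extra pretax dollar lost to taxes plus lost benefits."""
    return statutory + phase_out

emr = effective_marginal_rate(STATUTORY, PHASE_OUT)
assert round(emr, 4) == 0.6965  # nearly 70 cents on the marginal dollar
```

Lowering that effective rate means either a slower phase-out (transfers reaching further up the income distribution, which costs money) or smaller transfers at the bottom, which is the tradeoff described above.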

When people from outside the US visit American friends of the same relative socioeconomic status, i.e. if a middle-class French person comes to the US to hang out with a middle-class American, they don't tend to wonder why everyone is poor. They will sometimes wonder just how many pickup trucks per capita Americans need, why there are so many parking lots, and why these pickup trucks need to be roughly the size of a tank. They also sometimes remark that the Standard American Climate, in July and December and in both Maine and Phoenix, is a steady 71 degrees year-round. It takes living in a very rich country to feel poor at $140k, but if Americans weren't capable of feeling that way, we'd probably take more vacations and grind a bit less.

It's just very hard to draw an accurate poverty line in a society that's well-off in the aggregate, offers lots of means for self-expression by way of consumption, but also has big non-tradable sectors like healthcare, housing, and education that suffer from a mix of cost disease, structural inefficiency, regulatory overhead, and regulatory capture. Some of the things you need to not feel poor today are unimaginable miracles by 1963 standards, and some of the things people took for granted then, particularly unmeasurable ones, are now high-class luxuries. It's very impressive that we can have standards as high as we do, but we need to be honest about how high those standards are historically and relative to other countries, and about what specific problem we're trying to measure when we talk about poverty.

Healthcare, education, and housing have risen as a share of household budgets in part because that’s where more marginal consumption dollars go when basic needs have been met. If you’re poor enough that you can’t afford to eat, your next dollar probably goes to a meal. But if you’re reasonably well-off already and your income suddenly doubles, you’re more likely to increase the square footage of your home (or move somewhere with pricier square feet) than to double your calorie consumption. Healthcare and education can absorb surprisingly large sums in a rich country because that country has to pay people more to do those jobs; housing can absorb a lot more money because people put a high value on living in particular places. All of this would be true even if there weren’t regulatory and industry structure reasons for costs to spiral. But all of those extra layers of inefficiency are paying people, even if they shouldn’t be. So it’s inflating the cost of a middle-class lifestyle, but also inflating the incomes of workers who can’t get jobs in higher-output sectors. It’s hard to solve a cost-of-living problem purely through redistribution, but even in a very streamlined economy, higher output would make these specific problems—you have to pay service workers more per hour the richer the country gets, and these kinds of spending will represent a growing share of your spending basket—worse. 

The skeleton key to a lot of this discourse might be surprising: one reason millennials feel poorer than previous generations despite having higher inflation-adjusted net worths at the same age is that their parents, especially the richer ones, had kids later in life. If you remember when your parents were in their 20s, you probably remember your parents having to scrimp and budget a bit, but if you were born when your parents were in their mid-30s, and your first memories are from when they were in their 40s, they were already pretty well-established. It's one thing to ride the bus with Dad; it's another thing entirely to hear Dad talk about how he used to have to ride the bus to work every morning, and to hear that story from the back seat of a BMW. If your parents had you later in life, your first experience of what it's like to be in your early 20s and nearly broke is when you reach your early 20s and find that you're nearly broke. It's quite a shock!2 But it's also a kind of reenchantment: if you're an only child, having roommates as a young adult means you can truly appreciate the experience of finally being able to afford a place of your own. If you've had to run the numbers on calories per dollar because you just ran the numbers on bills due before your next paycheck, it's an incredibly liberating experience to suddenly discover that you can buy a cup of coffee without looking at the price tag.

One of the biggest political issues in the US is, and has been for decades, the question of how we decide which of the millions of people who want to come to this country get to do so. Did the whole world get poorer? Is America incredibly good at marketing an image of success that doesn't align with people's expectations, and if that's the case, why don't people bounce back to their more prosperous home countries? It's not impossible for this to be the case, but it does imply that the real question at hand is how people who were fortunate enough to grow up in the richest country in the world still feel poor.

And, at the same time: it's true that if you extrapolate the economic growth previous generations experienced, you get much higher GDP per capita numbers today. We really did have a long lull in productivity growth starting in 1971, though recent numbers actually look a lot more like what we had during that mid-century boom. That makes it a very tricky time for economic populism: there's a new economic growth engine, it will absolutely produce a mix of winners and losers, and when the losers say that they did everything right and lost their jobs anyway, they'll have a point. It was a very fortunate historical coincidence that the US had just had a unifying national triumph in the form of the Second World War, and also faced a similarly unity-inspiring enemy in the form of the Soviet Union. This round, we started with less social trust, and our last big collective lifesaving triumph, the rollout of Covid vaccines, has been more or less disowned by the politician who could be most associated with it. At least we have a geopolitical rival to rally against, though. One of the great sacrifices Americans make for the common good is to feel broke earning $20k, $100k, $500k, $2.5m, etc. If you can trick yourself into doing that, and blame yourself for it, you're going to work a lot harder than you otherwise would and take the kinds of risks long-term growth is made of. But if you turn it into a political issue rather than a personal one, you're going to be tempted to look at the numbers that feel right rather than the numbers that accurately describe reality.



1  Incidentally, specific snacking patterns are a really interesting class marker: bruschetta has a similar ingredients list to a Party Size bag of Tostitos and a bowl of Old El Paso salsa, but is also a way to demonstrate that you can withstand the temptations of flour-oil-tomato concoctions for long enough to prepare something elaborate. Long prep-time snacks are conspicuous consumption of willpower.

2  One of the Diff’s modest proposals for education reform is that schools' spending on dorms/food/amenities for students should be capped at the amount that someone earning the average salary of a graduate of that school would be able to spend on those, at least for schools that receive federal funding. It's another welfare cliff if completing your degree and entering the workforce means that you no longer have access to cheap, reasonably nutritious meals and an excellent gym—we want entry into the workforce to be a lifestyle upgrade. Granted, this would mean that some schools would have to offer such a meagre experience for students that they wouldn't be able to attract any. Good! Those are the schools that are investing more in student leisure than in student learning, and they shouldn't be getting government subsidies to do this.
