Ezra Klein is upset about inequality and about those of us who try to take a more nuanced approach to the data. Here is his first reaction to my observation that almost everyone born in America after 1920 is among the richest people in human history:
This strikes me as comparable in usefulness to telling a poor person, "well, at least you’ve got good skin!" Yes, we’re better off than we were a hundred years ago. No, that’s not a sufficient answer, or even relevant comment, to questions of distribution and justice. If you don’t think inequality is a problem or the current distribution of wealth is troubling, that’s a position. If you’re going to respond to the impoverished or the laid-off by explaining that things sure were tough in 1912, that’s not even an anecdote — it’s an utterly…
Actually the point was to show how arbitrary any claim of injustice really is by examining this slice or that slice of the income distribution. The other point was much more important. I care less about inequality and more about whether people are getting ahead and having better lives.
Klein relents a little bit in this post and admits there might be some legitimacy to my point about the historical record. But then he gives this argument:
The Libertarian choice of pre-1920s America is quite helpful to our case, because that’s right about the time inequality in this country peaked and the American economy entered a long and fruitful corrective process. Here’s the Saez-Piketty data showing the income share held by the top 1 percent:
Indeed, what you see in post-1930s America is an economic system explicitly attempting to reduce income inequality and distribute growth gains more broadly.
This is a beautiful example of how hard it is to interpret economic data. First of all, talking about the top 1% makes it sound like the data refer to the American Economic Aristocracy. The haves vs. the rest of us. But the people in the top 1% in 1929 aren’t the same people in the top 1% even in 1935, let alone much later. So this chart isn’t about a particular group and how they fare over time.
But the subtler problem with this picture is that it doesn’t tell you anything about how the rest of us are doing. It’s the share of the top 1%. When they get more in a particular year, it sounds like that means there has to be less for the rest of us. After all, if you get more of the pie, doesn’t that mean less for me? But it doesn’t, because it doesn’t tell you about how big the pie is. If I get a smaller share of a bigger pie, I can have more to eat.
The opposite is also true. If I get a bigger share of a smaller pie, I can be worse off. It’s hard to read Klein’s chart, but if you go to Piketty and Saez’s numbers (go to Table A3, column 3 for the fraction of income going to the top 1%, including capital gains) you’ll see that the share going to the top 1% peaked in 1929 at 23.94%. In 1933, the number was 16.46%.
Hurray! The rest of us, the bottom 99%, were getting a bigger share in 1933. The economy was more fair! But there was nothing to cheer about. Real GDP per capita (cool tool — don’t miss it) fell 29%.
Do you think that was a good time for the average American or a bad time? Do you think the fall in the average was a statistical artifact caused by the right-hand tail of the distribution being lopped off? No. The fall in the share of the top 1% had nothing to do with social justice or redistributive policy or the rise of unions. It had nothing to do with any attempt of any system to explicitly or implicitly do something. It was caused by a lousy economy that hurt the average person and poor people and yes, rich people. The share going to the rich fell because it became really hard to make a lot of money. But it also became really hard to make a little money.
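The pie arithmetic is easy to check with the numbers above. A quick sketch, using only the Piketty-Saez top 1% shares (23.94% in 1929, 16.46% in 1933) and the 29% fall in real GDP per capita cited above:

```python
# Top 1% income shares from Piketty-Saez (Table A3, incl. capital gains)
share_top1_1929 = 0.2394
share_top1_1933 = 0.1646

# Real GDP per capita fell 29% from 1929 to 1933; normalize 1929's pie to 1.0
pie_1929 = 1.0
pie_1933 = 1.0 - 0.29

# Absolute income going to each group, relative to the 1929 total
bottom99_1929 = pie_1929 * (1 - share_top1_1929)
bottom99_1933 = pie_1933 * (1 - share_top1_1933)
top1_1929 = pie_1929 * share_top1_1929
top1_1933 = pie_1933 * share_top1_1933

print(f"Bottom 99% income change: {bottom99_1933 / bottom99_1929 - 1:+.0%}")
print(f"Top 1% income change:     {top1_1933 / top1_1929 - 1:+.0%}")
```

The bottom 99% got a bigger slice in 1933, yet their absolute income fell roughly 22%, because the pie shrank faster than their share grew. The top 1%'s income fell by about half. A rising share told us nothing good was happening.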
And yes, I know I’m cherry-picking four years to make a point. It doesn’t disprove everything people say about income shares. But it shows the danger of using income shares as a measure of well-being. I’ll write about the longer trends in the data soon.
Alan Reynolds has recently critiqued Piketty and Saez, and Saez and others have responded. Maybe I’ll talk about that in another post. But the biggest flaw isn’t in their data. It’s in how the data are used.