
Fooling ourselves

At the University of Chicago, where I went to graduate school, a quote from Lord Kelvin was carved in stone at the Social Sciences Research Building: When you cannot measure…your knowledge is meager and unsatisfactory. We graduate students took in this credo like mother’s milk. I came out of Chicago with tremendous confidence in the power of economics and the ability to quantify that power.

But over the years, I have become increasingly skeptical of the power of statistical techniques to measure causation in complex systems. Edward Leamer’s indictment of modern econometrics, “Let’s Take the Con Out of Econometrics,” is the best-known critique of our habits as empirical economists, but it has not been taken to heart by the profession.

My thoughts on this issue came to a head with my recent podcast with Ian Ayres on his new book SuperCrunchers. The book is about the power of statistics to improve decision-making. And of course, facts and numbers are crucial for making wise decisions. And there are many examples where statistical analysis helps us in our private and political lives to overcome irrational prejudice or bad ideas. 

But in the course of preparing for the interview, I realized, in a way I hadn’t before, that how we feel about the reliability of statistical results lines up incredibly neatly with our political and ideological biases. One example I use in the podcast is the debate over whether allowing citizens to carry concealed handguns deters crime. John Lott and others say yes and trot out the analysis that proves they’re right. Their opponents trot out a different analysis and prove the advocates for concealed carry are wrong.

Now I happen to believe that concealed handguns do deter crime and that allowing them is a good thing. And I can claim that the evidence showing I’m right is “good” statistical analysis. The other side disagrees. They claim it’s “bad” statistical analysis. Who’s right? I have no idea. But what’s clear to me is that my belief in the virtues of allowing concealed handguns has little to do with the empirical evidence. And I would argue that the opponents are really in the same boat. They just don’t like guns, and they’ve dressed up their prejudices in fancy statistical analysis.

I came to this realization because Ayres thinks LoJack deters car theft. LoJack is a hidden device you put in your car that lets the police trace it. Ayres and Steve Levitt found that LoJack has an incredible deterrent effect on car theft. Yet they think John Lott’s work on concealed handguns is irreparably flawed. But LoJack works the same way as a concealed handgun: because it’s hidden, a thief can’t tell which cars have it, so it deters theft against everyone, not just the cars that carry it. Ayres and Levitt should like Lott’s work and Lott should like Ayres and Levitt. But Lott doesn’t like Ayres and Levitt, and the feeling is mutual.

It’s obvious why neither respects the other. It’s not the quality of the empirical evidence. It’s just bias. Ayres and Levitt don’t like guns, and Lott doesn’t like the idea that insurance companies (and criminals) could ignore the impact of LoJack if it’s really so big. Ayres and Levitt see LoJack as an example of market failure, and Lott thinks market failure in this case requires too much ignorance.

The nature of the analysis is such that neither side can convince the other that “their” analysis is reliable. That’s not always true. As I suggest in the podcast, Milton Friedman was able to convince the skeptics that inflation is always and everywhere a monetary phenomenon. Friedman won the debate. But how many other studies can you think of where someone staked out a controversial position and convinced the skeptics with empirical analysis? I think it can be done, but it’s rare. And in today’s world, most of the interesting empirical claims are being made in cases where the data are too incomplete and the issues too complex for us to move to a consensus. The empirical work doesn’t improve our understanding of what’s going on. It masks what’s going on. It gives a patina of science when in fact the numbers aren’t really informing the debate.

In the case of crime, to isolate the effect of LoJack or concealed handguns, you have to control for every other cause of crime and deal with the simultaneity problem: causation could be running the other way, since places with more crime may be more likely to adopt concealed carry or buy LoJack in the first place. I’m not sure that can be done. My other favorite example of this is WalMart. There are economists out there who claim that WalMart lowers wages when it comes to a town, even a big city such as Los Angeles. I don’t find this argument believable. But the proponents of such arguments claim to be just using the numbers to tell them what’s going on rather than relying on their prejudices as I am. But the numbers can’t be crunched sufficiently well to come to a conclusion on WalMart. To do that, you have to control for a bunch of factors that can’t be controlled for with real-world data.
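To see why the simultaneity problem is so treacherous, here is a toy simulation in Python. The numbers are entirely made up; this has nothing to do with the actual crime data or anyone’s actual study. Towns adopt a deterrent policy because their crime is high, the policy genuinely cuts crime, and yet a naive comparison of adopters to non-adopters gets the sign of the effect backwards:

```python
# Toy illustration (made-up numbers) of reverse causation: high-crime
# towns adopt a deterrent policy, so naively comparing adopters to
# non-adopters makes an effective policy look like it raises crime.
import numpy as np

rng = np.random.default_rng(0)
n_towns = 10_000

latent_crime = rng.normal(50, 10, n_towns)   # unobserved crime propensity
# adoption responds to crime: high-crime towns are more likely to adopt
adopt = latent_crime + rng.normal(0, 5, n_towns) > 55
true_effect = -5.0                           # the policy genuinely cuts crime
crime = latent_crime + true_effect * adopt + rng.normal(0, 5, n_towns)

# the naive estimate: difference in mean crime, adopters vs. non-adopters
naive = crime[adopt].mean() - crime[~adopt].mean()
print(f"true effect:    {true_effect:+.1f}")
print(f"naive estimate: {naive:+.2f}")       # comes out large and positive
```

With these made-up numbers the naive estimate comes out around +10 even though the true effect is -5. More data doesn’t fix this; only a credible story about why some towns adopt the policy and others don’t, and that story is exactly what the warring crime studies disagree about.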

And that’s why there are studies on the other side showing how great WalMart is for a town. But are we moving toward a consensus about the impact of WalMart? Do the analyses improve our understanding? I don’t think so. And that’s because of the way modern econometrics is done. Regression is cheap, so we buy a lot of it. Leamer’s point is that this is “faith-based” empirical work. You just keep running the regressions, including or excluding this variable or that, trying one specification after another, until you find the result that confirms the worldview you held before you started.
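Here is what that specification search looks like in miniature, again as a made-up sketch rather than anyone’s actual analysis. Regress pure noise on twenty candidate variables, keep whatever clears the usual significance bar, and the search will hand you a “finding” about once per twenty tries, every time:

```python
# A sketch of Leamer's worry: shop across candidate regressors until
# something clears t > 2. With pure noise on both sides, roughly one
# regressor in twenty will look "significant" purely by chance.
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 20                   # 100 observations, 20 candidate regressors
y = rng.normal(size=n)           # outcome with no real structure at all
X = rng.normal(size=(n, k))      # candidate explanatory variables, also noise

significant = []
for j in range(k):
    x = X[:, j]
    beta = (x @ y) / (x @ x)                           # one-variable OLS slope
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (n - 1) / (x @ x))  # conventional standard error
    if abs(beta / se) > 2:                             # roughly the 5% threshold
        significant.append(j)

print(f"'significant' regressors found: {len(significant)} of {k}")
```

Now imagine the searcher also gets to choose the sample period, the functional form, and which controls to include, and reports only the specification that worked. The published table looks like science; the search that produced it does not.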

The pragmatists (Peirce and James) and Hayek understood the dangers of excessive faith in rationality and of what is essentially fake science. I’ll write more on this another time and maybe do a podcast just on this issue.

I’ve closed comments here. If you want to comment, please listen to the Ayres podcast at EconTalk and let’s talk over there, in the comments section for that episode.
