Milton Friedman laid out his methodological approach to doing economics in his 1953 essay, The Methodology of Positive Economics. This essay gives his answer to the question: What constitutes a good economic model? Or, put differently, how would you recognize a good economic model if you saw one?
According to Friedman, “the only relevant test of the validity of a hypothesis is the comparison of its predictions with experience. The hypothesis is rejected if its predictions are contradicted; it is accepted if its predictions are not contradicted.” All that matters is whether or not a model fits the data. Assumptions? Priors? Intuition? All that stuff is just moonshine. Empirical fit reigns supreme. This is an extreme view!
For example, in Friedman’s eyes, a good model of how leaves are distributed about the canopy of an oak tree is a model in which each leaf optimally chooses its position and orientation relative to its neighbors. Yes, we know that leaves don’t have brains. They can’t actually make decisions like this. But it is ‘as if’ they could. So a model in which each leaf strategically chooses where to grow is a good model of leaf placement.
A good model of how an expert billiards player makes difficult shots would be a model in which “he knew the complicated mathematical formulas that would give the optimum directions of travel, could estimate accurately by eye the angles, could make lightning calculations from the formulas, and could then make the balls travel in the direction indicated by the formulas.” So what if the player can’t do these things? We know he regularly makes difficult shots, so it’s ‘as if’ he can. Friedman tells us to just model him like that anyway.
In Friedman’s view, “a theory cannot be tested by comparing its ‘assumptions’ directly with ‘reality.’ Indeed, there is no meaningful way in which this can be done.” In fact, Friedman argues that insisting on reasonable assumptions can be misleading. “The more significant the theory, the more unrealistic the assumptions.”
Every economist knows about Friedman’s ‘as if’ approach to model evaluation. If asked, most economists will say that Friedman’s methodological approach is, if not correct, then at least reasonable. They will argue that it’s at least important to consider ‘as if’ justifications when evaluating a model.
But here’s the thing: no working economist actually evaluates models this way! Aside from one glaring exception, no economist actually thinks ‘as if’ models are helpful. Ask yourself: Is the factor zoo a problem for asset pricing? Yes. But what is a spurious factor? It’s a factor that fits the data for wrong reasons. It is ‘as if’ investors were using it to price assets even though they aren’t. And that’s precisely the problem!
The idea that we can’t test (or shouldn’t even bother testing) the assumptions behind our economic models is simply preposterous. It’s a claim that Steven Pinker would call a “conventional absurdity: a statement that goes against all common sense but that everyone believes because they dimly recall having heard it somewhere and because it is so pregnant with implications.” No economist does research this way!
Why not replace all economic models with uninterpretable machine-learning (ML) algorithms? ML algorithms can fit the data well precisely because they contain no economic assumptions. But TANSTAAFL! It is precisely the economic assumptions about what agents are trying to do that give us confidence a model’s predictions will hold up when conditions change. In other words, these assumptions are what allow economists to use the model for counterfactual analysis—i.e., to make predictions in new and as-yet-unseen environments. The right assumptions embedded in a good economic model are responsible for its robust predictions. If you’re going to ignore all such economic restrictions, then there’s no point in writing down an economic model in the first place. There are better ways to do pure prediction.
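To make this concrete, here is a minimal simulation sketch in Python. All of the numbers are invented for illustration: returns are generated by one true factor, but if we search a large "zoo" of pure-noise candidate factors, the best of them fits the sample respectably. That fit then evaporates on fresh data, precisely because nothing economic ties the factor to returns.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_spurious = 120, 500  # months of data, size of the factor zoo (invented)

# True data-generating process: returns load on a single real factor.
true_factor = rng.normal(size=T)
returns = 0.8 * true_factor + rng.normal(scale=0.5, size=T)

# A "zoo" of candidate factors that are pure noise.
zoo = rng.normal(size=(n_spurious, T))

def r2(x, y):
    """In-sample R^2 from a univariate OLS fit of y on x."""
    beta = np.cov(x, y, ddof=0)[0, 1] / np.var(x)
    resid = y - beta * x
    return 1 - np.var(resid) / np.var(y)

# Pick the noise factor that best fits this particular sample...
best = max(range(n_spurious), key=lambda i: r2(zoo[i], returns))
print("true factor, in-sample R^2:     ", round(r2(true_factor, returns), 3))
print("best spurious, in-sample R^2:   ", round(r2(zoo[best], returns), 3))

# ...and check it against fresh draws from the same economy. The spurious
# factor is noise, so its new realizations carry no information about returns.
true_factor2 = rng.normal(size=T)
returns2 = 0.8 * true_factor2 + rng.normal(scale=0.5, size=T)
zoo2 = rng.normal(size=(n_spurious, T))
print("same spurious, out-of-sample R^2:", round(r2(zoo2[best], returns2), 3))
```

The spurious factor fits 'as if' investors were using it, yet only the factor backed by the true data-generating assumption keeps predicting when the sample changes.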
I’m by no means the first person to highlight these issues. They long predate the factor zoo and the popularity of ML algorithms. If I had to pick one person to judge the quality of an economic model, that person would be Paul Samuelson. And Samuelson strongly disagreed with Friedman’s ‘as if’ approach. Samuelson clearly recognized the importance of evaluating your assumptions, disparagingly referring to Friedman’s ‘as if’ methodology as the “F-Twist” in a 1963 discussion paper.
Moreover, in almost every context, economists approach research in a manner more consistent with Samuelson than with Friedman. They firmly believe it’s important to verify one’s assumptions. This is why we see papers with titles like Do Measures of Financial Constraints Measure Financial Constraints? getting hundreds of cites a year. This influential paper is entirely concerned with testing our working assumptions.
As far as I can tell, there is only one context in which economists actually use ‘as if’ reasoning to constrain the research process—namely, when interpreting survey data. Standard asset-pricing models assume investors are solving an optimization problem that looks something like
$$\max_{\{c_t\}} \ \mathrm{E}\!\left[\,\sum_{t=0}^{\infty} \beta^t \, u(c_t)\right] \quad \text{subject to} \quad w_{t+1} = R_{t+1}\,(w_t - c_t) \ \ \text{and} \ \ c_t \in \mathcal{C}_t. \tag{1}$$
Economists regularly test assumptions about investor preferences, $u(\cdot)$, the law of motion for wealth, $w_{t+1} = R_{t+1}(w_t - c_t)$, and various other kinds of economic constraints, $c_t \in \mathcal{C}_t$. However, for some reason, it’s entirely taboo to ask investors whether they are actually trying to solve this problem in the first place.
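For concreteness, here is a toy two-period instance of problem (1), coded in Python under an assumed log utility and invented parameter values. The "maximize" part and the "subject to" part show up as separate lines, and each is a testable assumption:

```python
import numpy as np

# Illustrative two-period consumption-savings problem: choose c0 to
# maximize u(c0) + beta * E[u(w1)] subject to w1 = R * (w0 - c0).
# Log utility and these parameter values are assumptions of this sketch.
beta, R, w0 = 0.95, 1.05, 100.0

def objective(c0):
    w1 = R * (w0 - c0)                     # "subject to": wealth dynamics
    return np.log(c0) + beta * np.log(w1)  # "maximize": preferences

# Brute-force the optimum over a fine consumption grid.
grid = np.linspace(1.0, 99.0, 9801)
c_star = grid[np.argmax(objective(grid))]

# With log utility, the closed-form answer is c0* = w0 / (1 + beta),
# so the grid search should agree with it to grid precision.
print(round(c_star, 2), round(w0 / (1 + beta), 2))
```

Researchers routinely ask whether log utility or the wealth dynamics are right; the taboo is only against asking whether investors are maximizing anything at all.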
Friedman directly calls out survey data in his 1953 essay, writing that “questionnaire studies of businessmen’s or others’ motives or beliefs about the forces affecting their behavior… seem to me almost entirely useless as a means of testing the validity of economic hypotheses.” However, he offers no concrete reasons why economists should think about the “maximize” part of investors’ optimization problem any differently than the “subject to” part. Both are assumptions. In Friedman’s eyes, both are untestable.
Yes, survey data can be misleading. Above I describe a situation where surveying economists about their views on ‘as if’ reasoning would yield specious evidence. But all data can be misleading. It’s not like NOT using survey data has resolved the factor zoo. Sometimes investors give uninformative answers which might lead researchers down the wrong path. But this doesn’t mean that we can’t learn anything concrete about how investors price assets from a well-constructed survey. Not every regression result is informative. Some regression estimates can even be misleading. None of this implies that regression analysis is worthless.
Friedman’s 1953 essay outlines a bad approach to model evaluation. There’s more to a good model than $R^2$. Paul Samuelson knew this to be true. And, except when they’re looking at survey data, every other economist knows it to be true as well. There’s no reason for us to continue applying ‘as if’ reasoning only in this particular context. It’s just not a valid argument for dismissing survey evidence about a model.