In a recent Bloomberg article, Noah Smith celebrates the rise of empirical work in economics. Purely theoretical papers are on the decline as a share of all published work, and more and more economists are using data to estimate the magnitude of various effects or to pin down specific parameters in theoretical models.
Empirical work is on the rise
The following figure from Angrist et al. (2017)1 backs up Smith’s claims: empirical work is on the rise.
Of course, the distinction between a purely theoretical paper and an empirical paper in the mainstream is quite different from the Misesian distinction between economic theory and empirical work (history). But the trend is undeniable — economists are using data in more of their research than they used to.
Not only is empirical work in general on the rise, but one particular source of data is more popular than ever: surveys. To proxy the growth in popularity of surveys, I’ve plotted the National Longitudinal Survey citations by year since 1968:
I’d like to focus on labor economics because survey data is especially popular in that field. Other fields also use survey data, though sometimes in an indirect way. Macroeconomists, for example, indirectly and perhaps unwittingly use survey data whenever they use price indices and unemployment rate data from the Bureau of Labor Statistics.
The BLS and its NLS
The National Longitudinal Surveys are a product of the Bureau of Labor Statistics. The US spends close to $3 billion a year on collecting statistics about American citizens. The largest expenditure comes from the census (over $1 billion), but the Bureau of Labor Statistics comes in second with a budget of $618 million.
The BLS administers a large collection of surveys to construct and calculate macroeconomic data (like price indices and unemployment rates), as well as to inform policymakers and bureaucrats as they carefully steer the country toward full employment and luxurious working conditions for all.
The National Longitudinal Surveys are of particular interest to labor economists because they interview “the same individuals every year or two for several decades” and collect answers to “detailed questions about all aspects of their lives”.
The NLSY97, for example, started following about 9,000 teenagers in 1997 and continues to this day, interviewing the now 30-somethings about their drug use, wages, religiosity, sexual activity, training and education, and other kinds of personal information.
Self-reported high school grades are inaccurate
Interestingly, the survey administrators asked respondents about their high school grades, but they also collected official transcripts for many of them, which gives us a chance to fact-check the self-reports. The distributions of self-reported and actual grades are presented below.
While the self-reports are noticeably lopsided toward high grades, the errors for any individual student aren’t extreme. More people over-reported than under-reported, but most errors fall within one letter grade. Notably, the errors aren’t random: they are correlated with certain personal and peer characteristics, meaning statistical analysis based on the respondents’ self-reports would be biased. The errors also grow with the time elapsed between graduating high school and being asked about those grades in the survey.
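To see why non-random reporting errors matter, here is a minimal simulation sketch in Python. It is my own illustration, not from the article or the NLSY97: it assumes a simple linear wage model and a hypothetical pattern in which weaker students over-report their GPA more, and the variable names and magnitudes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data: true high school GPA and a wage that depends on it.
gpa = np.clip(rng.normal(2.8, 0.6, n), 0, 4)
wage = 10 + 3.0 * gpa + rng.normal(0, 2, n)          # true slope: 3.0

# Non-random reporting error: assume students with lower GPAs over-report
# more, so the error is correlated with the true value rather than being
# pure noise that "averages out".
over_report = 0.5 * (4 - gpa) * rng.random(n)
reported = gpa + over_report

def slope(x, y):
    """Slope from a simple one-regressor OLS fit."""
    return np.polyfit(x, y, 1)[0]

print("slope using true GPA:    ", round(slope(gpa, wage), 2))       # ~3.0
print("slope using reported GPA:", round(slope(reported, wage), 2))  # noticeably off
```

The point is only that when the reporting error is correlated with the thing being measured (or with other characteristics of the respondent), the estimated relationship shifts away from the true one, in whichever direction the error pattern pushes it.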
In a meta-analysis of studies on the validity of self-reported grades, Kuncel, Credé, and Thomas (2005)2 find that self-reported high school grades are less accurate than self-reported college grades, and they suggest that the inaccuracies “may generalize to self-reports of other accomplishments.”
Indeed, if people lie about or can’t remember their own high school grades, what about other information? Do you know exactly how much you spent on food last month? Do we expect people to be perfectly honest about how much they donate to charity? All survey data should be called into question and used with extreme caution, and surveys certainly should not serve as the basis for US labor law.
Other problems with surveys
Surveys, it turns out, have big problems. The biggest problem is that we can’t trust people to accurately report their own personal information. The accuracy of survey data depends on the respondent’s own attention, memory, and attitude toward the survey.
Surveys were discounted by economists in the 70s and 80s for this very reason: survey respondents cannot be trusted to reveal accurate information about themselves. People tend to have an inflated view of themselves and exaggerate their personal information. Beyond self-reporting, surveys also suffer from issues like selection bias and difficulties in identifying causation in panel data.3
Selection bias happens when people have certain characteristics that make them more likely to be included in the analysis than the rest of the population. It’s a problem for surveys because it’s difficult to administer surveys randomly. It is easy, however, to survey people from a certain region, at a certain school, of a certain age, who are okay with taking a survey (maybe because they need the extra credit in their economics or psychology class), and then make the flimsy assumption that the sample is representative of the population, as the sketch below illustrates.
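Here is a minimal sketch of that problem, again my own illustration with invented numbers: a hypothetical population where wages rise with schooling, surveyed through a convenience sample that over-represents the highly schooled.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Hypothetical population: years of schooling and an hourly wage.
school = rng.integers(9, 21, n)                  # 9 to 20 years
wage = 5 + 1.5 * school + rng.normal(0, 5, n)

# Convenience sample: assume the more schooling someone has, the more
# likely they are to end up answering the survey at all.
p_respond = (school - 8) / 20
sampled = rng.random(n) < p_respond

print("population mean wage:", round(wage.mean(), 2))
print("sample mean wage:    ", round(wage[sampled].mean(), 2))  # overstates the population mean
```

No amount of extra respondents fixes this; a bigger convenience sample just gives a more precise estimate of the wrong number.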
It’s difficult to wrench cause and effect from panel survey data because life events are related to one another and because respondents differ in unobserved ways. Because the respondents’ own lives are the labor economist’s subject matter, there is no pure laboratory experiment in which one life event is administered to a treatment group that is then compared against a control group.
People live their lives and select themselves into college, moves, marriage, children, and so on, and these life events often depend on other life events that already happened as well as on the individual’s own preferences and personality. These kinds of biases aren’t the type that “average out” if you just get a large enough sample size. Isolating the ultimate cause of some labor market outcome is therefore an impossibly complex task, even if survey respondents have perfect memories and are totally honest.
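As a rough illustration of the self-selection problem (once more a sketch of my own, with invented numbers): suppose unobserved ability drives both the decision to attend college and later wages. A naive college-vs-no-college comparison from survey data then badly overstates the true college premium.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Hypothetical setup: unobserved ability drives both the decision to
# attend college and later wages.
ability = rng.normal(0, 1, n)
college = rng.random(n) < 1 / (1 + np.exp(-2 * ability))       # self-selection
wage = 20 + 5 * college + 8 * ability + rng.normal(0, 3, n)    # true college premium: 5

naive = wage[college].mean() - wage[~college].mean()
print("true college premium:     5.0")
print("naive survey comparison:", round(naive, 1))  # far larger, driven by ability, not college
```

Nothing in the survey itself distinguishes the ability channel from the college channel; the respondents simply sorted themselves.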
Conclusion
Economists should go back to discounting survey data, and we should spend less time, money, and effort administering surveys for the purpose of economic research and informing policymakers. People can’t be trusted to take the surveys seriously and report accurate information. Even if people do give accurate information on a survey, selection bias and other issues make causal inference and generalization to the whole population difficult, if not impossible.
1. Angrist, J., Azoulay, P., Ellison, G., Hill, R., & Lu, S. F. (2017). Economic Research Evolves: Fields and Styles. American Economic Review, 107(5), 293-297.
2. Kuncel, N. R., Credé, M., & Thomas, L. L. (2005). The validity of self-reported grade point averages, class ranks, and test scores: A meta-analysis and review of the literature. Review of Educational Research, 75(1), 63-82.
3. See James Morgan’s “Survey research” (in Econometrics, Palgrave Macmillan UK, 2009) for a brief review of some of these issues as well as a history of the use of survey data in economics since WWII.