How Economics Professors Can Stop Failing Us
Steven Payson
Lanham, MD: Lexington Books, 2017, xiii + 372 pp.
Steven Payson, the author of this provocatively titled book, is a former career federal government economist who has the temerity to argue that economics could be a useful science if the theoreticians of mainstream academic economics would simply adopt and employ the scientific method in a serious effort to provide an understanding of the world in which we live. Instead, he convincingly argues, the culture of academic economists encourages and rewards a mathematical modeling onanism that is not only not “seminal,” but is practically barren of any contributions to that understanding. Payson argues that the main purpose of such model-building exercises is to achieve publication in what are believed to be the top economics journals and, consequently, to garner citations in the published work of other academic economists.
Because the book is almost totally critical and contains suggestions for improvement only in the concluding chapter, I think a more appropriate title for it might be “Why Mainstream Economics Professors Are Not Contributing to Useful Knowledge, and a Few Suggestions for Improvement.” There is much anger and outrage expressed by the author in the course of his argument, and yet the book is not just a polemic. If Payson’s critique is on the mark, the question of what to do is certainly an important one. Economic policymakers face a host of real-world problems and need guidance in the face of them. What they get instead, in some important instances, is uncomprehending surprise followed by excuses and panic—a prime example being the mainstream economics profession’s response to the financial meltdown now termed “The Great Recession.” The result of that Federal Reserve-fueled debacle was the most simple-minded Keynesian money dump in decades, and with no end in sight at this writing.
The cover of the book features a chessboard showing a simple “fool’s mate.” This seems appropriate, as it is Payson’s main contention that mainstream model-building founders quickly once it is realized that most of this activity consists of making a few simple assumptions and then engaging in a rigorous mathematical exercise to “rediscover” them. Other equally defensible assumptions would produce different implications. Little effort is devoted to the rigorous derivation and defense of assumptions, or to assessing the reliability of the data on which they may be based.1 Milton Friedman argued that it was the predictive power of a model that mattered, not the realism of its assumptions, which need only be “sufficiently good approximations for the purpose at hand.” (p. 64) Too many economists took this to mean that only predictability mattered. Such an approach stands in stark contrast to the natural science practice of using the scientific method to achieve an understanding of the objective physical world that contributes to useful knowledge—that is, an “understanding of how the real world works.” (p. 53) Payson’s purpose is to counter the academic economist mainstream by pointing out that its mathematical clothing does not cover its explanatory vacuity.
By the natural science approach, Payson means the application of methods intended to reveal “the causality behind known and observable physical phenomena.” (p. 120) Natural scientists do use mathematical models to develop their understanding of causal relations; however, the mathematics is simply a tool in this quest, not a substitute for results. In natural science culture, methodological issues are key in research designed to achieve an understanding of actual phenomena. In academic economic culture, methodology is a specialized subfield and economics students seldom address the question of the use of scientific method in research. (p. 188)
Instead, graduate students in mainstream economics programs study complicated mathematical models constructed on the basis of a few restrictive assumptions, and learn to build models themselves with a view to future publication in economics journals. The question of the accuracy of the assumptions with respect to ordinary human action is less important than the question of how to rack up as many publications as possible in journals believed to be top ranked among all those published. The end goal is to garner citations by other economists in their own publications, rather than to advance an understanding of human action that has useful policy applications or provides an advance in knowledge of praxeological processes (my term, not Payson’s). In support of this claim, Payson references the Presidential Lecture by David Card to the 2016 Annual Meeting of the Western Economic Association International. In it, Card advised members of his audience to write papers intended to receive a multitude of citations if they desired publication in top-ranked journals. (pp. 111–113) That same year, at the annual conference of the Southern Economic Association, keynote speaker Andrei Shleifer was introduced as the most cited economist in the world, as if this were his key accomplishment. (p. 178) In addition, Payson adds, it is considered desirable to learn to do this research under professors who are leading lights of the profession in terms of their own citation counts in “top ranked” journals.
Payson both generalizes this activity as characteristic of mainstream academic research and provides specific examples of conference presentations and published research that fit the stereotype. So far as the purpose of this activity is concerned, students and newly minted doctorates are evaluated for employability, tenure, promotion, and career advancement based on how well they play the game. The result is an academic culture that encourages and sustains the subordination of research ends to means. The mathematical tail wags the research dog. Payson terms this “literature-only discourse,” and its result is “unscientific economic theory.” Its hallmarks are assumptions that, if slightly altered, would yield different results for the model; a methodology that is “understood, valued, and genuinely studied by a very small group of other economists with advanced expertise in that highly specific topic”; and findings that possess no real-world explanatory value. (pp. 51–52)
Although econometric testing might seem to corroborate such a paper’s conclusions, there are many problems here, he argues. Simple-minded and inaccurate assumptions, such as that there exist “constant elasticities of substitution among factor inputs” or that a variable is normally distributed across the population, are all too prevalent. Association may be mistaken for causation, even in very complex multivariate analysis. Imprecise or arbitrary proxies are often used for variables in the model. “Data mining” is used to narrow down results to the plausible, and “statistical significance” is often mistaken for “importance.” (pp. 58–63) As other researchers build on these models, a “theoretical literature” accumulates that is mistakenly viewed as a growth in “knowledge.”
Returning to the question of citation counts as a measure of scholarly achievement and a yardstick of professional ranking, Payson argues that there are a number of reasons for skepticism. For one, great discoveries in natural science are known and their authors acknowledged throughout the world. Not so for economists. Another problem concerns the ranking of what are considered to be the top economics journals. Depending on the weighting rules, the American Economic Review (AER) is either first or seventeenth, or maybe another ranking entirely.2
Another problem is the vulnerability of the system to being gamed. Researchers can solicit citations from colleagues, journals can solicit citations to particular scholars or to previous articles published in that journal, and scholars often cite their own work.3 In addition, there is no guarantee that a citation is particularly relevant to the article in which it appears. It may be that the author is simply signaling familiarity with previous scholarship on the subject, or is trying to show that his work is related to that of top-ranked scholars. The citation may even be a devastatingly critical one.
The bottom line is that citations have become a substitute for serious evaluation of the importance of publications. They simplify decision-making in hiring, tenure, promotion, and professional ranking because genuine evaluation is difficult and highly personal, and those engaged in it may feel inadequate to the task. This is especially the case if the publication’s field is highly specialized and highly mathematical, even if the economic concepts at issue are relatively simple.
At one point, Payson makes a shocking admission: he believes that college and university economics professors should be performing research and preparing lectures directly relevant to their job of teaching and mentoring students. Instead, they have very strong incentives to starve that function by devoting so much time and effort to the publication game. (pp. 88–89) The current culture of academic economics is undercutting what should be the main purpose of the academy, in this view. Further consequences include reduced time to read what is published in one’s field and a plethora of articles that are read by only a few specialists, few of them outside academia.
A number of ethical problems in the profession are briefly treated in the book. These include the failure of authors to clearly disclose when they may have conflicts of interest. The American Economic Association (AEA) has a “disclosure policy” for articles in its journals, but it may be difficult to track down the disclosure statement, and the policy only suggests that it may be in the author’s interest to make such a disclosure if acceptance is to be assured. A more serious problem is that in 2011 the AEA’s journals switched from a “double blind” to a “single blind” review process for submitted articles. The rationale was that search engines now make it too easy for referees to identify authors, if they so choose. So the Executive Committee removed the blinders, thus sanctifying what was previously considered unethical behavior. (pp. 213–217) Couple this with the AER reserving the right to reject papers without review, and the foundation for basic fairness and scientific integrity is significantly weakened.
One would think that if the research activity of most academic economists consists chiefly of publishing, for the sake of citations, journal articles that contribute little if anything to an understanding of real-world human action, it would be noticed and discussed. And, indeed, Payson cites several scholars, most notably Robert Solow, Deirdre McCloskey, and Paul Ormerod, who have been publicly critical of it. The problem is that the public discussion of this issue has led to nothing but more public discussion, while few, if any, actions have been taken to change the culture.
Payson argues that a good first step would be for the profession to adopt a code of professional ethics that promotes scientific integrity and the objectivity, reproducibility, and transparency of research in economics. He notes the existence of the Berkeley Initiative for Transparency in the Social Sciences, but argues that essentially all that is being done is to discuss the questions of ethics and scientific integrity in economic research, while taking no actual actions to attempt to change existing practices for the better. Payson founded the Association for Integrity and Responsible Leadership in Economics in 2007 in an attempt to encourage economists, especially those in academia, to take actions to change existing practices that are ethically suspect. Despite many papers and training sessions on the subject of ethics in economics, at his book’s publication there was still no code of professional ethics for economists in the United States. This may change. At this writing the AEA, under the leadership of Alvin Roth, has sent to its members for comment a draft Code of Professional Conduct. It calls for “intellectual and professional integrity” in research, objectivity, the disclosure of conflicts of interest, “civil and respectful dialogue,” and equal opportunity. It also assigns to economists the collective responsibility for “developing institutional arrangements and a professional environment that promote free expression concerning economics.” I suspect that just about anything that economists do that is not obviously a matter of simple wrongdoing, like lying or plagiarism, will survive this code. Notably absent is some statement to the effect that economists have some responsibility to the public for what they do.4
Payson would like a lot more than this to be done to change the culture of academic economics. For example, while he was a member of the board of the Society of Government Economists and organizing conference sessions, he initiated a requirement that paper proposals include a statement explaining “how the paper contributes to a better understanding of economics.” (p. 323) He was met with considerable pushback and the requirement was eliminated in two years. His conclusion: many economists “essentially have no justification, or defensible reason, for what they are doing” and resent being asked to provide one.
Payson desires an economic research culture that promotes work with tangible social benefits, and his suggestions for improvement are directed toward that end. First, those with power and authority in governmental and non-governmental grant-making institutions should stop funding research that consists of mathematical onanism. Second, senior faculty should take the lead in ending the use of citation counts in hiring, tenure, and promotion decisions. Third, economics degree-granting programs should introduce required courses in “professional ethics, scientific integrity, and responsible leadership.” (p. 335) Finally, he argues that it is the responsibility of prominent economists to take the lead in cleaning the stable. They don’t hesitate to vocally address important issues outside the profession; they should do the same within it.
In assessing the main arguments in the book, there is a glaring flaw that most economists in the Austrian School tradition will immediately see. Payson acknowledges that there exists a serious discussion of the ontological, epistemological, and thus methodological differences between the natural science research program and that of economics. (p. 189) He chooses not to address it, while maintaining that it is still possible to direct mainstream economic research toward the discovery of true causality. I suspect that the reason lies in his belief that Paul Samuelson, “one of the greatest economists who ever lived,” and whose Foundations of Economic Analysis became the Bible of mainstream economics, “did much more good than harm.” (p. 120) Well, if functionality is not causality, and there are no laws in economics that can be expressed as constant quantitative relations, what is the point of most of what mainstream economists do? How will they discover causal relations when their prime methodology is epistemologically unsuited to the task? His critique of mainstream academic economic culture and its preoccupation with mathematical onanism, rather than with seeking an understanding of the causal relations of human action, is compelling and timely. But a car is only as good as its engine, and the engine of the economics that sprang from Samuelson has seized. Replacing the maps on the onboard GPS navigator will not improve the situation.
In closing, it is fair to ask whether this book is likely to have any effect on the practices it critiques. I doubt it. Economists are well aware of the sunk cost fallacy; yet for those who populate what are widely considered to be the upper ranks of the profession, a conversion to the goals Payson advocates would have serious consequences. It would mean disavowing most of their life’s work and acting to drastically transform academic economics. Don’t hold your breath.
- 1Truman Capote is said to have once remarked that some people are writers and others are typists. Those who have little regard for the reliability of the data they use or the reality of the assumptions on which they rely may be regarded as falling into the latter category.
- 2As a young intelligence staff officer in the Office of the Deputy Chief of Staff for Intelligence at the headquarters of the United States Army, Europe, in the early 1970s, I was witness to a task force from Washington, D.C., whose job it was to rate our productivity. It finally was decided that numbers of pages in intelligence reports would do this. Particular tasks were to be rated by the time spent doing them. Yes, it was just that simple.
- 3Years ago, I reviewed a collection of articles that included one by a future Nobel laureate. Thirty-three of the seventy-five articles that he cited (44%) in his bibliography were his own.
- 4So far (June 2018) there has been no adoption of any code of professional conduct by the AEA. Instead, a January 2018 Ad Hoc Committee on the Professional Climate in Economics was created to evaluate the proposals of the Ad Hoc Committee to Consider a Code of Professional Conduct “with a particular focus on the issues faced by women and minority groups.” The result: the April 2018 creation of a new standing Committee on Equity, Diversity, and Professional Conduct, charged with evaluating and implementing the recommendations of the Ad Hoc Committee on the Professional Climate in Economics.