How to Fix the Curse of the Five

I recently came across this video of a session held at the 2017 ASSA meetings on the ‘Curse of the Top Five’. The session was organised by Jim Heckman and features a panel discussion with Heckman, George Akerlof, Angus Deaton, Drew Fudenberg and Lars Hansen. I’m going to concentrate here on the presentations by Heckman and Akerlof.


Heckman made several points in a talk informed by a series of fascinating slides that you can find linked here. He pointed out that, although many top economists publish important highly cited papers outside the top five journals, the influence of the top five is increasingly important in promotion and tenure decisions, a point I also made here.

Why is that a bad thing? One of the most insidious aspects of the curse of the five is that it concentrates power in the hands of a small group of insiders and that makes it much harder for new ideas to emerge. Figure 11, taken from Heckman’s talk, illustrates a density plot of the number of years served by editors in four of the top five journals. The QJE, a journal dominated by Harvard, is an outlier with slow turnover in editorial control. But the influence of the other top journals is also pervasive and entry to the club depends on success determined by its established members.

A friend of mine, a senior academic at a top business school, related the following story, which encapsulates much of what is wrong with the current system. A junior colleague, coming up for tenure, was waiting for a decision from the AER. In a departmental discussion, the point was made that hir tenure decision would be contingent on whether the paper was accepted there. As my friend remarked: why would we delegate our tenure decision to the editor of the AER?

George Akerlof has five recommendations, all of which I agree with. 1. Editors should take more responsibility for decisions by overruling referees more often. 2. We should revert to a situation where referees are advisors, rather than the current situation where they often get to rewrite the paper. 3. We should work to diminish the role of top-five publications in tenure decisions. 4. We should ‘shame’ deans who act as top-five bean counters. And 5. We must broaden the scope of work that we deem intellectually acceptable for admission as a tenured member of our tribe.

I have two recommendations of my own for possible ways to fix the curse of the five.

First, those of us with influence on granting agencies should recommend that more than five journals be given equal weight when ranking research. In the UK, the research output of academic departments is assessed on a regular basis, and referees are given guidelines that encourage them to give more weight to articles published in the top five journals. That guidance should be changed: referees should instead be advised to broaden the base to fifteen or twenty journals, selected, for example, by RePEc rankings.

Second, when junior faculty come up for promotion, they should be judged on their best three articles, where the three articles are self-selected and, in some cases, might be replaced by a book. The current system gives junior scholars an incentive to publish large numbers of derivative works, many of which contribute little or nothing to the social good.

When I first moved to UCLA in the late 1980s, the senior faculty would read the work of our junior colleagues and make tenure decisions based on the content of their research papers. Slowly, over the years, it became more common to rely on the decisions of others by placing weight on where papers were published as opposed to their content.

I am encouraged by the positive message that arose from the ASSA panel. As the profession grows and journal space becomes more valuable, it is time to broaden the scope of those journals we judge to be the gatekeepers of knowledge. We should trust our own judgement and carefully read the work of our colleagues. That, I believe, is the right way to fix the curse of the five.

Reflections on My Interview with Cloud Yip: Part 2


Cloud Yip is running a series of interviews under the title of “Where is the General Theory of the 21st Century” and I was privileged to be included in that series. Last week I put up my first post about the interview. This week’s post is the second in a series where I expand on my answers to Cloud. Here, I discuss my views on rational expectations and I talk about a new version of search theory, Keynesian Search Theory, that underpins my joint papers with Giovanni Nicolò on “Keynesian Economics without the Phillips Curve” and with Konstantin Platonov, “Animal Spirits in a Monetary Model”. The paper with Konstantin uses Keynesian Search Theory to provide an updated version of the IS-LM model which we call the IS-LM-NAC model. The paper with Giovanni estimates a version of this model on U.S. data and demonstrates that it provides a better way of explaining data than the failed Phillips curve. 

I have been making the argument in my books, academic articles and op eds for at least seven years that the Phillips curve is broken and there is a better alternative that I call the belief function. I presented this work at a conference in New York in honour of Edmund Phelps where the paper was discussed by Olivier Blanchard. I’m pleased to see that the importance of this topic is now being widely recognised and my Phillips Curve scepticism has become mainstream.  

Here is what I said on the topic in a previous blog post...

Policy makers at central banks have been puzzled by the fact that inflation is weak even though the unemployment rate is low and the economy is operating at or close to capacity. Their puzzlement arises from the fact that they are looking at data through the lens of the New Keynesian (NK) model in which the connection between the unemployment rate and the inflation rate is driven by the Phillips curve…
…The research programme we are engaged in should be of interest to policy makers in central banks and treasuries throughout the world who are increasingly realising that the Phillips curve is broken. In Keynesian Economics Without the Phillips Curve, we have shown how to replace the Phillips curve with the belief function, an alternative theory of the connection between unemployment and inflation that better explains the facts. 

That leads me to the main focus of today’s post: What’s wrong with rational expectations and how is that connected with my replacement for the Phillips Curve? Over to Cloud…

Q: What is your view on the role of the rational expectations approach in macroeconomics?

“F: The classical reconstruction of macroeconomics developed by Lucas and Prescott required a radical reformulation of expectations. In the Keynesian models of the 1950s, expectations were determined by a separate equation called adaptive expectations. In those models, beliefs about future prices might differ from the realized future prices. Because of that, the models needed another equation to explain how beliefs, or expectations, were determined.
Lucas, writing in 1972, removed the adaptive expectations equation and he argued that beliefs are not independent; they are endogenous and must be explained within the model. He argued the world is random. As a consequence of randomness, prices aren’t always equal to what people expect them to be and he introduced the idea of rational expectations into macroeconomics. Instead of adding an equation, adaptive expectations, to determine beliefs, Lucas closed his model by arguing that beliefs should be right on average. He argued that people wouldn't be expected to be fooled in the long run, and that we can model beliefs or expectations as probability distributions that coincide with the distribution of the actual realizations.
That all sounds very sensible, but it only makes sense in models where there is a unique equilibrium. Even in the model that Lucas wrote down in 1972, there were multiple equilibria. For me, the existence of multiple equilibria is not a problem. It is an opportunity.” 

I discussed the role of rational expectations in a world of animal spirits in a 2014 blog post linked here. When I describe multiplicity as an opportunity, I mean that it opens the possibility of marrying psychology with economics in a new and interesting way. If economic models have multiple possible equilibria, we can model how stories are transmitted through social networks to explain which equilibrium occurs in practice. Economists are good at building models of the macro economy. Psychologists are good at understanding the spread of beliefs. There are clearly gains from collaborative research, which was the topic of the conference I helped organize at the Bank of England in July of 2017.

I have been working on models of multiple equilibria since the early 1980s but my early work on this topic dealt with models where there is a unique steady state and the economy is self-stabilizing. In my survey paper on Endogenous Business Cycles I described these models as first-generation models of endogenous fluctuations and I contrasted them with second-generation models in which there is a continuum of steady state equilibria. To explain why there may be many steady state equilibria, I developed a version of search theory that I call Keynesian search theory. That is the topic that Cloud asked me about next.  Back to Cloud…

Q: What is the "Keynesian search model" that you are advocating in your book “Prosperity for All”? How is it different from the mainstream search model that you refer to as classical search theory?

“F: The Keynesian search model is a variant of what I call classical search models. By classical, I mean the work that evolved from Peter Diamond, Dale Mortensen and Chris Pissarides. In the classical search model, there is a unique equilibrium in the labour market pinned down by the bargaining power of workers relative to firms. In the Keynesian search model, there is a continuum of equilibria and the equilibrium that occurs is selected by aggregate demand, just as in the Keynesian models of the 1950s.
The Keynesian Search Model maintains Keynes' idea, which I think is important, that beliefs are fundamental. Animal spirits, confidence and self-fulfilling beliefs can influence outcomes. In every single equilibrium of the Keynesian search model there is no incentive for either firms or workers to change their behaviours. The reason has nothing to do with sticky prices; it has to do with the fact that there are incomplete factor markets.
The search model has a search technology, separate from the production technology, that moves people from home to jobs. That technology has two inputs; the searching time of workers and the searching time of the recruiting department of a firm. Because there are two inputs, for the market to function well, there must be two prices. One price for the searching time for workers and another for the searching time for recruiters.
You could imagine a recruiting firm that offers to buy the right to find an unemployed worker a job and the right to fill a company's vacancy. This market would operate a little like a dating website: the firm would take the two searching parties, match them and sell the match back to the worker-firm pair.
We do not see the market working in that way, largely because of moral hazard. If I am unemployed and you are paying me to be unemployed, I do not see why I would ever accept a job. As a consequence of the failure of that market, search externalities can support equilibria with any level of unemployment.
My Keynesian search model solves the problem of understanding Keynes's General Theory in a way that is different from the sticky price approach that Samuelson initiated and that continues to be perpetuated by New Keynesian economists today.”

Next week, I will talk about why economists should stop pretending that unemployment is voluntary. It’s time to reintroduce the term, ‘involuntary unemployment’. Stay tuned!

My Interview with Cloud Yip: Part 1


A couple of months ago, I had the pleasure of speaking with Cloud Yip. Cloud is running a series of interviews under the title of “Where is the General Theory of the 21st Century” and I was privileged to be included in that series. The interview was published in its entirety a couple of weeks ago but, because it is quite long, I will be serialising it on my blog over the next few weeks.

In his series, Cloud asks prominent macroeconomists: “Why haven’t economists come up with a new General Theory after the Great Recession?” Those of you who have been following my blog will not be surprised by my answer. The theory of macroeconomics, described in my book Prosperity for All, makes fundamental changes to the dominant paradigm. And it leads to fundamentally different policy conclusions from either classical or New Keynesian alternatives. 

Q: Do you think that there have been "revolutionary" changes in macroeconomics since the Great Recession?

F: Yes and no. In my own work, I have made some major changes to macroeconomics. I will leave it to others to decide if they are revolutionary. But in my view, most macroeconomists are carrying on with business as usual. And that is discouraging because macroeconomics needs to change.
The dominant paradigm before the Great Recession was New-Keynesian economics. That paradigm is widely perceived to have failed in two key dimensions. It didn’t include a financial sector and it had no role for unemployment. New Keynesian economists have tried to fix the NK model by adding in these features and there have been some notable contributions. But for the most part, attempts to fix the NK model are akin to rearranging the deckchairs on the Titanic.
Economics is not an experimental science. As a consequence, frequently, people pursue avenues of research that are simply wrong or mistaken.
In my view, economics took a wrong path in the 1950s. Back in 1928, Pigou published a book called "Industrial Fluctuations". It is a very rich verbal theory of the causes of business cycles. According to Pigou, there are six causes: what we would now call productivity shocks; monetary disturbances; sunspot shocks, that is, shocks to business confidence; agricultural disturbances; changes in tastes; and news shocks.
Then in 1929, there was the stock market crash, and in 1936, Keynes wrote the General Theory. The General Theory was a revolutionary change in the way we think about the world. It was revolutionary because, instead of thinking of the economic system in a capitalist economy as self-stabilizing, Keynes's vision was of a dysfunctional world in which high unemployment can persist for a very long time.
A few years ago, I wrote a book called "How the Economy Works". In it I described two metaphors. The first, Frisch's analogy for Pigou's vision, is that the economy is like a rocking horse hit repeatedly and randomly by a kid with a club. The movement of the rocking horse is partly caused by the shocks of the club and partly caused by the internal dynamics of the rocker. We've modelled this system for decades using linear stochastic difference equations.
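The rocking-horse metaphor can be written down in a few lines of code. This is purely an illustrative sketch: a second-order linear stochastic difference equation with made-up coefficients, not an estimated model.

```python
import random

def simulate_rocking_horse(a1=1.4, a2=-0.6, sigma=1.0, periods=200,
                           seed=0, y0=0.0):
    """A rocking horse hit by shocks: y[t] = a1*y[t-1] + a2*y[t-2] + shock.

    The coefficients are illustrative, chosen so that the deterministic
    dynamics are stable damped oscillations.
    """
    rng = random.Random(seed)
    y = [y0, y0]
    for _ in range(periods):
        shock = rng.gauss(0.0, sigma)              # the kid's club
        y.append(a1 * y[-1] + a2 * y[-2] + shock)  # the rocker's dynamics
    return y

path = simulate_rocking_horse()
# Switch the shocks off (sigma=0) and any initial displacement dies away:
# the internal dynamics always return the horse to the same resting point.
```

With the club taken away, the horse settles back to rest, which is exactly the sense in which this class of models has a unique attracting steady state.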
In my book, I provide a different metaphor to capture Keynes' insight that the economy can get stuck in an equilibrium with high unemployment. I call that metaphor the "windy-boat model". The economy is not like a rocking horse; it is like a sailboat on the ocean with a broken rudder. When the wind blows the boat, instead of always returning to the same point, the boat can become stranded a long way from a safe harbour. 
In the language of equilibrium theory, Frisch's analogy leads to a model with a unique steady-state equilibrium: the rocking horse always comes to rest at the same point. In the windy-boat model, which is, I think, the essence of the General Theory, the economy can get stuck with high unemployment for an extended period.
In the immediate aftermath of the Great Depression in the 1940s and 1950s, the economic model we were using was based on ideas from the General Theory. Then in 1955, Samuelson wrote the third edition of his introductory textbook, in which he introduced the concept of the neo-classical synthesis.
In Samuelson's view, a view that has dominated the discipline since 1955, the economy is classical in the long run but Keynesian in the short run. Samuelson defined the short-run as the period over which prices don't adjust. He defined the long-run as the period over which the economy has had enough time to return to a classical full-employment equilibrium. According to the neo-classical synthesis, the economy is temporarily away from the "social planning optimum", but only temporarily.
In 1982, with the birth of Real Business Cycle Theory (RBC), economists gave up on Keynesian economics and we returned to the ideas of Pigou. Real Business Cycle theory formalized Pigou's model of the economy, but instead of the rich verbal theory of Industrial Fluctuations, RBC theorists constructed complicated mathematical models. And because the mathematics was complicated, the models were very simple and, initially, driven by a single productivity shock. In the period from 1982 up through 2008, most macroeconomists were engaged in a research program that was, essentially, adding the shocks back to Pigou's vision of the rocking horse model.
What happened in 2008 and in the aftermath of the Great Recession has caused, or should cause, us to rethink the entire enterprise of macroeconomics. In my work, I have formalized the main ideas in Keynes' General Theory. These ideas are vastly different from those that preceded Keynes and they are very different from the ideas that have guided macroeconomics since the 1980s. Keynes argued that there are multiple steady-state equilibria and that the economy can get stuck in an equilibrium with persistently high involuntary unemployment. In my work, I have formalized that idea.

Q: Why, in your view, did economists, in the 1980s, give up on Keynesian economics?

F: The General Theory was incomplete. It was incomplete because it eliminated the idea of the labour supply curve but didn’t replace it with any convincing alternative.
Keynes argued that the economy is on the labour demand curve, but he threw away the labour supply curve and replaced it with the idea of involuntary unemployment. That was always somewhat unsatisfactory theoretically.  Involuntary unemployment is open to a number of criticisms. For example, why don’t firms offer to employ unemployed workers for lower wages when those workers would willingly accept a lower wage if they are involuntarily unemployed? This is a theoretical problem that was left hanging in the General Theory.

 

The other issue in the General Theory is that there was never a theory of what determines the price level. Hicks and Hansen, who interpreted the General Theory, considered it to be a short-term theory in which prices are temporarily fixed. Around the time that Samuelson was writing the third edition of his textbook, a New Zealander, William Phillips, published the article "The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861-1957". This empirical article demonstrated that there had been a stable relationship between wage inflation and unemployment in nearly a century of UK data. That relationship has been known ever since as the Phillips curve.
Samuelson used the Phillips curve to bring together the short run and the long run. He saw it as a wage adjustment equation which explained how excess demand pressure would cause wages to rise. As wages and prices changed, the economy would return to its long-run steady state. The problem with that explanation is that as soon as Phillips had written the article, the Phillips Curve disappeared. There hasn't been a stable Phillips Curve in data anywhere in any advanced economy that I know of since the mid 1960s.

Giovanni Nicolò and I wrote a paper recently (Farmer and Nicolò 2017) that replaces the Phillips Curve with an alternative equation, the Belief Function, that I introduced in my 1993 book, The Macroeconomics of Self-fulfilling Prophecies. Giovanni and I showed in our paper that a three-equation model closed with the Belief Function instead of the Phillips Curve provides a much better fit to US data. We find that a Bayesian economist who placed equal weight on both theories before confronting them with data would find overwhelming evidence that the Belief Function was the better approach.

Next week I will continue this serialisation of my interview with Cloud, and among other things, I will discuss my views on rational expectations.

And the 2017 Economics Nobel Prize goes to ...


Today’s announcement of a Nobel Prize for Richard Thaler is richly deserved and I congratulate the Nobel committee for recognising the growing influence of behavioural economics, a field that Richard helped to create. This is a significant ‘nudge’ towards recognising the importance of beliefs as fundamental, an idea that I use in my own work in a macroeconomic context. In July, I co-organized a conference at the Bank of England on the connection between behavioural economics and macroeconomics, so I am pleased that the connection of psychology to economics will be more widely perceived as significant with the award of this year’s Nobel Prize.

Richard Thaler’s work is widely cited as recognising that human beings are not rational and, in a very narrow sense, that is true. On hearing that he had won the Nobel Prize, Richard is quoted as saying the most important impact of his work is the recognition that “economic agents are humans” and “money decisions are not made strictly rationally”.

Rationality means many things to many people and there are both broad and narrow definitions of what exactly it means. Under the broad definition, one that I have always liked, it is an organising principle that categorises human action. Rationality means that we always choose our preferred action. What is our preferred action? It is the one we choose. This idea is captured in Samuelson’s discussion of revealed preference in Foundations of Economic Analysis. Although rationality by this definition is a tautology, it is a useful tautology that plays the same role in economics as the Newtonian concept of “action at a distance”.

There is another, much narrower definition of rationality that is formalized in a set of axioms introduced by John von Neumann and Oskar Morgenstern in their magisterial tome, the Theory of Games and Economic Behavior. Those axioms make a great deal of sense when applied to choice over monetary outcomes. They make much less sense when applied to complex decisions that involve sequential choices and payoffs of different commodities at different points in time. It is this second definition of rationality that has been shown to be violated in experimental situations, and it is the take-off point for Thaler’s work on how best to present choices to people in ways that help them make ‘good’ decisions.

If you want to know more about Richard’s work, I highly recommend the book Nudge, where you will learn about it in Richard’s own words, along with those of his co-author Cass Sunstein. That work has already found its way into public policy decisions and, in the U.K., led to the creation of the ‘Nudge’ unit, an arm of the U.K. government that uses Thaler’s work to influence public decisions.

Keynesian Economics Without the Phillips Curve

Policy makers at central banks have been puzzled by the fact that inflation is weak even though the unemployment rate is low and the economy is operating at or close to capacity. Their puzzlement arises from the fact that they are looking at data through the lens of the New Keynesian (NK) model in which the connection between the unemployment rate and the inflation rate is driven by the Phillips curve.


In a recent paper, joint with Giovanni Nicolò, we compared two models of the interest rate, the unemployment rate and the inflation rate. One theory, the NK model, consists of a demand equation, a policy rule and a Phillips curve. The other, the Farmer Monetary (FM) model, replaces the Phillips curve with a new equation: the belief function. We show that the FM model outperforms the NK model by a large margin when used to explain United States data.

To make this case, we ran a horse race in which we assigned equal prior probability to two models. One was a conventional New Keynesian model that consists of a demand equation, a policy rule and a Phillips curve. The other was the FM model. The FM model shares the demand curve and the policy rule in common with the NK model but replaces the Phillips curve with a new equation; the belief function.

The belief function captures the idea that psychology, aka animal spirits, drives aggregate demand. It is a fundamental equation with the same methodological status as preferences and technology. To operationalise the belief function, we assumed that people forecast future nominal income growth from observations of current nominal income growth. If x is the percentage growth rate of nominal GDP this year and E[x’] is the expected growth rate of nominal GDP next year, we assumed that E[x’] = x.
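As a minimal sketch (the growth numbers below are hypothetical, not data from the paper), the belief function we operationalised is simply:

```python
def belief_function(current_growth):
    """The belief function E[x'] = x: expected nominal GDP growth next
    year equals observed nominal GDP growth this year."""
    return current_growth

observed = [4.0, 3.5, 5.0, 2.0]   # hypothetical % nominal GDP growth rates
forecasts = [belief_function(x) for x in observed]
# Each forecast carries the latest observation forward, so a shift in
# beliefs is permanent rather than mean-reverting.
```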

We estimated both models using Bayesian statistics and we compared their posterior probabilities. Our findings are summarised in Table 2, reproduced from our paper.  The table reports what statisticians call the posterior odds ratio. As is common in this literature, we compared the models over two separate sub-samples; one for the period from 1954 to 1979 and the other from 1983 to 2007. Our findings show that an agnostic economist who placed equal prior weight on both theories would conclude that the FM model knocks the NK model out of the ball park. The data overwhelmingly favours the FM model.

[Table 2: posterior odds ratios, reproduced from Farmer and Nicolò (2017)]
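For readers unfamiliar with posterior odds, here is a sketch of the calculation. The log marginal likelihoods below are made-up numbers chosen for illustration; they are not the estimates reported in our paper.

```python
import math

def posterior_odds(log_ml_a, log_ml_b, prior_a=0.5, prior_b=0.5):
    """Posterior odds of model A over model B: the Bayes factor
    (ratio of marginal likelihoods) times the prior odds."""
    return math.exp(log_ml_a - log_ml_b) * (prior_a / prior_b)

# With equal prior weight, a 15-point log marginal likelihood advantage
# translates into posterior odds of exp(15): millions to one for model A.
odds = posterior_odds(log_ml_a=-610.0, log_ml_b=-625.0)
```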

We explain our findings in the paper by appealing to a property that mathematicians call hysteresis.

Conventional dynamical systems have a stable steady state that acts as an attractor. The economy will converge to that steady state, no matter where it starts. The FM model does not share that property. Although the economy follows a unique path from any initial condition, the FM model has a continuum of possible steady states and which one the economy ends up at depends on initial conditions.
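The contrast can be illustrated with two toy difference equations. This is an illustrative sketch, not the FM model itself:

```python
def iterate(f, x0, periods=200):
    """Iterate the map x -> f(x) from x0 and return the end point."""
    x = x0
    for _ in range(periods):
        x = f(x)
    return x

# Conventional system: x' = 0.9*x + 1 has the unique steady state x = 10;
# every starting point converges to it.
conventional = [iterate(lambda x: 0.9 * x + 1.0, x0) for x0 in (0.0, 50.0)]

# Hysteretic system: x' = x (a unit root) makes every point a steady
# state, so where you end up depends entirely on where you start.
hysteretic = [iterate(lambda x: x, x0) for x0 in (0.0, 50.0)]
```

The FM model behaves like the second system: shocks relocate the steady state permanently rather than being worked off over time.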

The FM model explains the data better than the NK model because the unemployment rate in US data does not return to any single point. In some decades, the average unemployment rate is 6%; in others, it is 3%. And in the Great Depression it did not fall below 15% for a decade. The unemployment rate, the inflation rate and the interest rate are so persistent in US data that they are better explained as co-integrated random walks than as mean-reverting processes. The FM model captures that fact. The NK model does not.

What does it mean for two series to be co-integrated? I have explained that idea elsewhere by offering the metaphor of two drunks walking down the street, tied together with a rope. The drunks can end up anywhere, but they will never end up too far apart. The same is true of the inflation rate, the unemployment rate and the interest rate in the US data.
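The drunks can be simulated directly. In this sketch (parameter values are illustrative), each series follows a random walk, but an error-correction term, the rope, pulls the two walkers back together:

```python
import random

def simulate_drunks(periods=1000, pull=0.2, sigma=1.0, seed=1):
    """Two random walks tied together by a rope: each takes random steps,
    but the gap between them is tugged back toward zero each period."""
    rng = random.Random(seed)
    a = b = 0.0
    gaps = []
    for _ in range(periods):
        gap = a - b
        a += rng.gauss(0.0, sigma) - pull * gap  # rope tugs a toward b
        b += rng.gauss(0.0, sigma) + pull * gap  # and tugs b toward a
        gaps.append(a - b)
    return a, b, gaps

a, b, gaps = simulate_drunks()
# a and b individually wander anywhere, but a - b is a stationary AR(1)
# process: the drunks never drift far apart.
```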

As I have argued on many occasions, the NK model is wrong: there has been no stable Phillips curve in the data of any country I am aware of since Phillips wrote his eponymous article in 1958. My paper with Giovanni provides further empirical evidence for the Farmer Monetary Model, an alternative paradigm that I have written about in a series of books and papers. Most recently, in Prosperity for All, I make the case for active central bank intervention in the asset markets as a complementary approach to interest rate control.

In a separate paper, Animal Spirits in a Monetary Model, Konstantin Platonov and I have explored the theory that underlies the empirical work in my joint work with Giovanni. The research programme we are engaged in should be of interest to policy makers in central banks and treasuries throughout the world who are increasingly realising that the Phillips curve is broken. In Keynesian Economics Without the Phillips Curve, we have shown how to replace the Phillips curve with the belief function, an alternative theory of the connection between unemployment and inflation that better explains the facts.