Speech
Panel Remarks at the Brookings Institution
October 15, 2015
William C. Dudley, President and Chief Executive Officer
Remarks at "The Fed at a crossroads: Where to go next?", Brookings Institution, Washington, D.C. As prepared for delivery
It is a great pleasure to be here today to participate in this panel with John Taylor. I am going to take today’s topic―“Where to go next?”―as an invitation to address the issue of how monetary policy should be conducted, an issue that is getting considerable attention among policymakers here in Washington, D.C. To put the question I wish to tackle succinctly: Is it better for policymakers to start with a formal rule as the default position, or to take a more flexible approach that considers a broader set of factors in setting the monetary policy stance? As always, what I have to say today reflects my own views and not necessarily those of the Federal Open Market Committee (FOMC) or the Federal Reserve System.1

To get right to the punch line, I favor a more flexible approach that incorporates a broader set of factors into the monetary policy decision-making process. The world is complex and ever-changing. There are many factors that can affect the economic outlook and the attainment of the Federal Reserve’s mandated objectives and, thereby, the appropriate stance of monetary policy.

At the same time, I do not favor total discretion in which the monetary policy strategy is determined in an ad hoc fashion as we go along. For monetary policy to be most effective, market participants, households and businesses need to be able to anticipate how the Federal Reserve is likely to respond to evolving conditions. That is because the transmission of monetary policy to the real economy depends not only on what policymakers decide to do today, but also on what the public anticipates that the FOMC is likely to do in the future as the economic outlook changes and evolves.

Our experience at the zero lower bound in recent years underscores how important expectations are in influencing the effectiveness of monetary policy. Policymakers thus need to act in a systematic and consistent manner so that expectations are formed accurately and economic behavior can respond consistently with those expectations. In my view, this consideration rules out a totally discretionary monetary policy.

Before I critique the use of prescriptive rules in monetary policy-making, I’d like to make it clear at the start that the Taylor Rule (by which I mean the formulation based on John’s 1993 and 1999 papers) has a number of positive attributes that make it a useful reference for policymakers. First, it has two parameters—the long-term inflation objective and the level of potential output—that map directly to the Federal Reserve’s dual mandate objectives. Second, the Rule has the desirable feature that when economic shocks push the economy away from the central bank’s objectives, the Taylor Rule prescribes a policy response that can help push the economy back toward the central bank’s goals. Third, a number of studies have shown that Taylor Rules are robust in the sense that they generally perform quite well across a range of different assumptions about how the economy is structured and operates.
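
For reference, the rule is typically written as follows. This is the conventional textbook statement of John's 1993 formulation, not an official Federal Reserve specification, with the coefficient values most commonly assumed:

\[ i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - y^{*}) \]

Here i_t is the prescribed federal funds rate, \pi_t is current inflation, \pi^{*} is the 2 percent inflation objective, (y_t - y^{*}) is the output gap, and r^{*} is the equilibrium real short-term rate, conventionally set at 2 percent. The 1999 variant raises the weight on the output gap from 0.5 to 1.0.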

Despite these attractive features, I don’t believe that any prescriptive rule, including the Taylor Rule, can take the place of a monetary policy framework that incorporates the FOMC’s collective assessment of the large number of factors that impact the economic outlook.

As I see it, the Taylor Rule has several significant shortcomings that can be detrimental to the attainment of the Federal Reserve’s mandated objectives. These shortcomings are not just theoretical; they have been very relevant to monetary policy in recent years. First, the Taylor Rule is not forward-looking. Its policy prescription is based on the current size of the output gap and the deviation of current inflation from the Fed’s objective, not on how these variables are likely to evolve in the future. So, in a rapidly changing environment, the Taylor Rule and other similar prescriptive rules will wind up being “behind the curve.” For example, in the fall of 2008, Taylor Rule prescriptions were well above the level of rates that was appropriate given the sharp and persistent deterioration in the economic outlook and the sharp tightening in financial conditions that occurred during that period.

Of course, many economists at that time recognized that such prescriptions would have been inappropriate and suggested various ad hoc modifications to the prescriptions—in fact, John himself suggested that modifications to his rule were appropriate at that time.2

Nonetheless, there was no consensus about the “right” modification to the rules at that time, in part, because the circumstances were unprecedented and the outlook so uncertain. If the FOMC had been required to justify to Congress deviations from a reference rule at that time, I believe that this would have slowed down how we responded to the crisis and would have resulted in a monetary policy that was not sufficiently accommodative. The consequence could have been a longer financial crisis and a deeper recession.

Second, the Taylor Rule, as typically used, assumes that a 2 percent real short-term interest rate is consistent with a neutral monetary policy. However, a large literature concludes that the equilibrium real short-term rate is very unlikely to be constant, with its value affected by many factors, including the pace of technological change, fiscal policy and the evolution of financial conditions.3

Sometimes it can be much higher than 2 percent. Presumably, this was the case during the late 1990s as rapid technological change lifted productivity growth. Sometimes it can be well below 2 percent. For example, when credit availability dried up during the financial crisis in late 2008, this drove down the equilibrium real rate far below 2 percent.

More recently, the slow growth rate of the economy and the low rate of inflation are evidence that the equilibrium real short-term rate today is well below the 2 percent rate assumed in the Taylor Rule. If 2 percent really was consistent with a neutral monetary policy, then the very low real rates of recent years—buttressed by our large-scale asset purchases—should have been extraordinarily accommodative. As a result, we should have grown much faster than the 2½ percent pace evident over the past couple of years and seen an inflation rate much higher than what we experienced. This conclusion is supported by a number of more formal models. For example, the Laubach-Williams model currently estimates that the equilibrium real short-term rate is around zero percent.4
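
The sensitivity of the rule's prescription to this assumption can be illustrated with a small, purely hypothetical calculation. The sketch below implements a generic Taylor-type rule in Python; the function name, coefficients and inputs are illustrative choices, not actual data or FOMC estimates:

# Illustrative sketch only: a generic Taylor-type rule with an adjustable
# equilibrium real rate (r_star). All numbers are hypothetical.
def taylor_rule(inflation, output_gap, r_star=2.0, inflation_target=2.0):
    """Prescribed nominal policy rate, in percent."""
    return (r_star + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Hypothetical economy: inflation somewhat below target and a modest
# remaining output gap.
inflation, gap = 1.5, -1.0
print(taylor_rule(inflation, gap, r_star=2.0))  # 2.75 with the conventional 2 percent r*
print(taylor_rule(inflation, gap, r_star=0.0))  # 0.75 with an equilibrium rate near zero

Holding inflation and the output gap fixed, the prescription moves one-for-one with the assumed equilibrium rate: replacing the conventional 2 percent with an estimate near zero lowers the prescribed federal funds rate by two full percentage points.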

Third, the Taylor Rule―and more broadly, any prescriptive rule for the systematic quantitative adjustment of the policy rate to changes in intermediate policy inputs such as real GDP or inflation―is incomplete because it does not fully account for factors that are crucial to how monetary policy impulses are transmitted to the real economy. Monetary policy affects economic activity through its impact on financial conditions—including the level of equity prices, bond yields, the foreign exchange value of the dollar and credit conditions. If the relationship between the federal funds rate and other indicators of financial conditions were stable, then one could just focus on the level of short-term rates.5

But, because financial conditions vary considerably relative to short-term rates, as we have seen in the financial crisis and its aftermath, one needs to consider developments in financial conditions more broadly in setting monetary policy.

In fact, at times, when short-term rates have been pinned at the zero lower bound, the Federal Reserve has taken actions that eased financial conditions without changing short-term interest rates. Such actions have included forward guidance that the FOMC was likely to keep short-term rates low for a long time and large-scale asset purchases that led to lower bond term premia.

Now, as I said at the start, just because I don’t want to follow a rule mechanically does not mean that I favor the polar opposite—that is, a fully discretionary monetary policy in which market participants, households and businesses cannot anticipate how monetary policy is likely to evolve as economic and financial market conditions and the economic outlook change. If households and businesses do not have a good notion of how the Federal Reserve will respond to changing economic and financial market conditions, then this would loosen the linkage between short-term rates and financial conditions. This would also likely lead to greater uncertainty about the outlook and higher risk premia, and it would make it more difficult for policymakers to attain their objectives.

Instead, what I favor is a careful elucidation of those factors that influence the economic outlook and how monetary policy is likely to respond to changes in the outlook. This includes fiscal policy, productivity growth, the international outlook and financial conditions, as well as how much employment and inflation deviate from the Fed’s objectives. By conducting policy in a transparent way and communicating what is important in determining the central bank’s reaction function, I think policymakers can strike the best balance between a monetary policy that fully incorporates the complexity of the world as it is, while, at the same time, retaining considerable clarity about how the FOMC is likely to respond to changing circumstances. A formal policy rule such as the Taylor Rule misses this balance by going too far in one direction.

What is important for attaining the Federal Reserve’s mandated objectives is not that monetary policy is described in terms of a formal prescriptive rule, but rather that the FOMC’s intentions and strategy are well understood by the public. This argues for clear communication through the FOMC meeting statements and minutes, the FOMC’s statement concerning its longer-term goals and monetary policy strategy, the Chair’s FOMC press conferences and testimonies before Congress, and speeches by the Chair and other FOMC participants. But it also is important that the strategy be the “right” reaction function. This means a policy approach that responds appropriately to important factors beyond the two parameters of the Taylor Rule—the output gap estimate and the rate of inflation.

Thank you for your kind attention.



1 Jonathan McCarthy, Paolo Pesenti, Argia Sbordone and Joseph Tracy assisted in preparing these remarks.

2 For example, in February 2008 John suggested lowering the standard Taylor Rule prescription by 50 basis points to take into account the increase in the LIBOR-OIS spread at that time. See Monetary Policy and the State of the Economy, Testimony before the Committee on Financial Services, U.S. House of Representatives, February 26, 2008.

3 See James D. Hamilton, Ethan S. Harris, Jan Hatzius, and Kenneth D. West (2015), “The Equilibrium Real Funds Rate: Past, Present and Future,” working paper for the U.S. Monetary Policy Forum, August.

4 See Thomas Laubach and John C. Williams, “Measuring the Natural Rate of Interest,” Review of Economics and Statistics, November 2003, Vol. 85, No. 4, pp. 1063-1070. Updated estimates available from the Federal Reserve Bank of San Francisco.

5 An analogy can be made to the applicability of a Friedman k-percent monetary aggregate rule. Just as a k-percent rule requires a stable relationship between a monetary aggregate and nominal GDP (i.e., stable money velocity), a Taylor Rule needs a stable relationship between the policy rate and financial conditions.
