In economics, a Taylor rule is a monetary-policy rule that stipulates how much the central bank should change the nominal interest rate in response to changes in inflation, output, or other economic conditions. In particular, the rule stipulates that for each one-percent increase in inflation, the central bank should raise the nominal interest rate by more than one percentage point. This aspect of the rule is often called the Taylor principle.
The rule was first proposed by John B. Taylor, and simultaneously by Dale W. Henderson and Warwick McKibbin, in 1993. It is intended to foster price stability and full employment by systematically reducing uncertainty and increasing the credibility of future actions by the central bank. It may also avoid the inefficiencies of time inconsistency from the exercise of discretionary policy. The Taylor rule synthesized, and provided a compromise between, competing schools of economic thought in a language devoid of rhetorical passion. Although many issues remain unresolved and views still differ about how the Taylor rule can best be applied in practice, research shows that the rule has advanced the practice of central banking.
According to Taylor's original version of the rule, the nominal interest rate should respond to divergences of actual inflation rates from target inflation rates and of actual Gross Domestic Product (GDP) from potential GDP:
i_t = π_t + r*_t + a_π(π_t − π*_t) + a_y(y_t − ȳ_t)

In this equation, i_t is the target short-term nominal interest rate (e.g. the federal funds rate in the US, the Bank of England base rate in the UK), π_t is the rate of inflation as measured by the GDP deflator, π*_t is the desired rate of inflation, r*_t is the assumed equilibrium real interest rate, y_t is the logarithm of real GDP, and ȳ_t is the logarithm of potential output, as determined by a linear trend.
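The rule can be sketched as a short function. This is a minimal illustration, not an official implementation; the 2% values for target inflation and the equilibrium real rate, and the 0.5 coefficients, follow Taylor's 1993 paper, and all names are illustrative.

```python
def taylor_rule(inflation, target_inflation=2.0, equilibrium_real_rate=2.0,
                output_gap=0.0, a_pi=0.5, a_y=0.5):
    """Recommended nominal interest rate, in percent.

    inflation        -- current inflation rate (GDP deflator), percent
    target_inflation -- desired inflation rate, percent
    equilibrium_real_rate -- assumed equilibrium real interest rate, percent
    output_gap       -- 100 * (log real GDP - log potential GDP)
    """
    return (inflation + equilibrium_real_rate
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)

# Inflation on target, output at potential: 2 + 2 = 4 percent.
print(taylor_rule(inflation=2.0))   # 4.0
# Inflation one point above target: the rate rises by 1.5 points.
print(taylor_rule(inflation=3.0))   # 5.5
```

Note that the recommended rate moves point-for-point with inflation through the first term, plus an extra a_π per point of inflation gap.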
In this equation, both a_π and a_y should be positive (as a rough rule of thumb, Taylor's 1993 paper proposed setting a_π = a_y = 0.5). That is, the rule "recommends" a relatively high interest rate (a "tight" monetary policy) when inflation is above its target or when output is above its full-employment level, in order to reduce inflationary pressure. It recommends a relatively low interest rate ("easy" monetary policy) in the opposite situation, to stimulate output. Sometimes monetary policy goals may conflict, as in the case of stagflation, when inflation is above its target while output is below full employment. In such a situation, a Taylor rule specifies the relative weights given to reducing inflation versus increasing output.
By specifying a_π > 0, the Taylor rule says that an increase in inflation by one percentage point should prompt the central bank to raise the nominal interest rate by more than one percentage point (specifically, by 1 + a_π, the sum of the two coefficients on π_t in the equation above). Since the real interest rate is (approximately) the nominal interest rate minus inflation, stipulating a_π > 0 implies that when inflation rises, the real interest rate should be increased. The idea that the real interest rate should be raised to cool the economy when inflation increases (requiring the nominal interest rate to increase more than inflation does) has sometimes been called the Taylor principle.
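The Taylor principle can be verified numerically: because the nominal rate rises by 1 + a_π per point of inflation, the (approximate) real rate rises by a_π. A small sketch, assuming Taylor's 1993 coefficients and illustrative values for the other terms:

```python
# Illustrative parameter values (Taylor 1993 coefficients).
a_pi, a_y = 0.5, 0.5
r_star, pi_star, output_gap = 2.0, 2.0, 0.0

def nominal_rate(pi):
    """Taylor-rule nominal rate for inflation pi, in percent."""
    return pi + r_star + a_pi * (pi - pi_star) + a_y * output_gap

for pi in (2.0, 3.0, 4.0):
    # Real rate approximated as nominal rate minus inflation.
    real = nominal_rate(pi) - pi
    print(f"inflation {pi:.1f}% -> nominal {nominal_rate(pi):.1f}%, real {real:.1f}%")
```

Each one-point rise in inflation raises the nominal rate by 1.5 points and the real rate by 0.5 points, which is the stabilizing response the Taylor principle requires.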
During an EconTalk podcast, Taylor explained the rule in simple terms using three variables: the inflation rate, GDP growth, and the interest rate. If inflation were to rise by 1%, the proper response would be to raise the interest rate by 1.5% (Taylor explains that it does not always need to be exactly 1.5%, but that it be larger than 1% is essential). If GDP falls by 1% relative to its growth path, then the proper response is to cut the interest rate by 0.5%.
While the Taylor principle has proved very influential, there is more debate about the other terms that should enter into the rule. According to some simple New Keynesian macroeconomic models, insofar as the central bank keeps inflation stable, the degree of fluctuation in output will be optimized (Blanchard and Galí call this property the 'divine coincidence'). In this case, the central bank need not take fluctuations in the output gap into account when setting interest rates (that is, it may optimally set a_y = 0.) On the other hand, other economists have proposed including additional terms in the Taylor rule to take into account the money gap or financial conditions: for example, the interest rate might be raised when stock prices, housing prices, or interest rate spreads increase.
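Setting the output-gap weight to zero yields a pure inflation-targeting variant of the rule. A minimal sketch, with illustrative parameter values:

```python
def strict_inflation_rule(pi, pi_star=2.0, r_star=2.0, a_pi=0.5):
    """Taylor-type rule with a_y = 0: the output gap gets zero weight."""
    return pi + r_star + a_pi * (pi - pi_star)

# The recommendation depends only on inflation, not on the output gap.
print(strict_inflation_rule(3.0))   # 5.5
```

Under the 'divine coincidence' described above, this simpler rule would be optimal, since stabilizing inflation also stabilizes output; the additional-terms proposals amount to rejecting that simplification.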
Although the Federal Reserve does not explicitly follow the Taylor rule, many analysts have argued that the rule provides a fairly accurate summary of US monetary policy under Paul Volcker and Alan Greenspan. Similar observations have been made about central banks in other developed economies, both in countries like Canada and New Zealand that have officially adopted inflation targeting rules, and in others like Germany where the Bundesbank's policy did not officially target the inflation rate. This observation has been cited by Clarida, Galí, and Gertler as a reason why inflation had remained under control and the economy had been relatively stable (the so-called 'Great Moderation') in most developed countries from the 1980s through the 2000s. However, according to Taylor, the rule was not followed in part of the 2000s, possibly leading to the housing bubble. Certain research has determined that some households form their expectations about the future path of interest rates, inflation, and unemployment in a way that is consistent with Taylor-type rules.
Athanasios Orphanides (2003) argues that the Taylor rule can mislead policy makers because they must act on real-time data, which is incomplete and subject to later revision. He shows that the Taylor rule fits the US federal funds rate less well when these informational limitations are taken into account, and that an activist policy following the Taylor rule would have resulted in inferior macroeconomic performance during the Great Inflation of the 1970s.