The Economic Crisis and New-Keynesian DSGE Models: an Evaluation
Wim Meeusen
University of Antwerp (Belgium)
June 2009
1. Introduction
Economics as a scientific discipline was severely battered in the wake of the Credit Crunch and the ensuing worldwide economic crisis. Especially the strand of what came to be known as ‘mainstream’ macroeconomics, in the shape of the ‘New-Keynesian Dynamic Stochastic General Equilibrium (DSGE) model’, took a severe blow. Very few had seen the crisis coming, and the few distinguished economists who indeed did, like Nouriel Roubini (‘Doctor Doom’) and Robert Shiller (1), were not taken seriously. The signs of a coming catastrophe were nevertheless ominous. To name only some of the most blatant signs in the US:

• The appearance of an asset price boom: the S&P/Case-Shiller home price index, equal to 100 in January 2000, had climbed to a peak of 226.29 in June 2006;
• The increasing importance on the mortgage market of sub-prime and Alt-A loans: in 2003 sub-prime and Alt-A loans amounted to 48% of the total, which is of course already alarming; in 2006 this had increased to 74% (2);
• The explosion of the (OTC) market of credit default swaps (CDS), ‘weapons of financial mass destruction’ in the prophetic words of Warren Buffet (3): by the end of 2007 this market had a value of 45 trillion USD, more than 20 trillion of which was purely speculative (4);
• In the summer of 2004, the spread between short- (3 months) and long-term (20 years) interest rates was at its highest level in half a century, leading to
  - the rapid expansion of bank credit above its long-run trend (in March 1990 the proportion of bank credit to M1 was 3.30; in March 2008 this had risen to 6.90), and
  - the development of a fragile debt structure of the banks, excessively dependent on liquidity;
• The growth of the US financial sector beyond what is sustainable: between 1973 and 1985 the US financial sector represented approximately 16% of domestic corporate profits; in the 1990s this fluctuated in a range from 21 to 30%; in the beginning of the present century it soared to 41% (5);
• Over-consumption by US households which, from 1999 onwards, increasingly spent more than they earned, net financial investment as a percentage of disposable personal income falling to a record low of -8.25 in 2005 and remaining negative in the years after;
• The deficit on the US current account, increasing from a near-equilibrium position in 1991 to a record 6.15% of GDP in 2006 (about 1.8% of world GDP) and remaining in the red in the years after that; the concomitant accumulation of huge dollar reserves by the Chinese government, reinvested in US government bonds, made it possible for the relative value of both currencies to remain stable.

(1) E.g., Roubini, 2004 and Shiller, 1989. (2) R. Dodd, 2007. (3) W. Buffet, Berkshire Hathaway Annual Report for 2002. (4) R.R. Zabel, 2008. (5) S. Johnson, 2009.
All this notwithstanding, for the large majority of so-called ‘mainstream’ macroeconomists it was ‘business as usual’. Michael Woodford, one of the most prominent present-day DSGE modellers, published as late as the first issue of the new AEA publication American Economic Journal: Macroeconomics (2009) an article that proclaimed the ‘new convergence’ of ideas in modern macroeconomics, stating that “while there is not yet agreement on the degree to which the greater stability of the U.S. and other economies in recent decades (sic) can be attributed to improvements in the conduct of monetary policy, the hypothesis that monetary policy has become conducive to stability, for reasons argued by Taylor among others, is certainly consistent with the general view of business fluctuations presented by current-generation empirical DSGE models” (Woodford, 2009, p. 273) (6). Patricia Cohen, in her New York Times afterthoughts on the last yearly meeting of the AEA (Cohen’s title is ‘Ivory Tower Unswayed by Crashing Economy’), cites Robert Shiller, who blames ‘groupthink’, i.e. “the tendency to agree with the consensus. People don’t deviate from the conventional wisdom for fear they won’t be taken seriously. (…) Wander too far and you find yourself on the fringe. The pattern is self-replicating. Graduate students who stray too far from the dominant theory and methods seriously reduce their chances of getting an academic job.” (7)

(6) The atmosphere of broad consensus conveyed by Woodford’s text is expressed by the numerous use of expressions like “it is now widely accepted…” (8 times in the space of the 6 pages on which he documents the ‘New Synthesis’). (7) P. Cohen, NYT, 5/3/2009.
It is thus worthwhile to analyse what has happened on the academic scene in somewhat greater detail. The practice of ‘slicing and dicing’ debt and ‘securitisation’, itself a result of the hubris of bankers and institutional investors, but also of the policy-makers’ urge to deregulate, was based on the so-called ‘Efficient Market Hypothesis’, which assumes – erroneously – allocative and expectational rationality and asset prices reflecting market fundamentals (8). A minority of distinguished scholars have in the past heavily criticised these assumptions. There is the vast body of empirical work by R.J. Shiller (e.g. his 1989 book), but also the earlier contribution of Tobin (1984), who concluded that financial markets show neither Arrow-Debreu full insurance efficiency (a tall order anyway, because this would require that all assets and their prices are defined not only by their obvious characteristics, but also by all the contingencies at which they can possibly be exchanged), nor information arbitrage efficiency (the impossibility of earning a profit on the basis of publicly available information), fundamental valuation efficiency or functional (i.e. macroeconomic) efficiency. Financial markets, in his view, are often not even technically efficient (i.e. such that it is possible to buy or sell large quantities at very low transaction costs) (see also Buiter, 2009c).

(8) Alan Greenspan, in a congressional testimony in October 2008, described himself as being “in a state of shocked disbelief [over the failure of the] self-interest of lending institutions to protect shareholders’ equity” (cited by M. Wolf, 2009).
The present events should now convince every reasonable economist that the EMH is a fallacy. The paradigmatic issue laid bare by the present economic crisis is, however, of a wider nature. At stake are a number of additional traditional assumptions made in the ‘mainstream’ macroeconomic literature:

- the existence of ‘representative’ agents (households and firms)
- that maximise an intertemporal utility function, resp. the present value of present and future profits
- under perfect foresight or rational expectations.

These assumptions are made by new-classical as well as new-Keynesian macroeconomists and lead, if coupled with an ‘appropriate’ so-called ‘transversality’ condition, to particularly optimistic conclusions on the inherent (saddle path) stability of the general equilibrium involved. New-Keynesian economists nevertheless try to keep in touch with reality, relatively speaking, by using this framework in models by means of which they explore the implications of imperfect or monopolistic competition under conditions of price and/or wage stickiness. New-classical economists, on the contrary, add insult to injury by moreover assuming

- perfect competition and
- flexible prices and wages.
We need only go into a discussion of the hard-line new-classical theories and policy prescriptions insofar as elements of their credo survive – which they obviously do – in present-day theorising. John Kay, a senior editorialist of the Financial Times (not exactly a left-wing publication) and one-time staunch defender of neo-liberal recipes, recently wrote: “[…] these people discredit themselves by opening their mouths” (Kay, 2009). Hard-core new-classical theories have indeed been in full retreat for some time, after having dominated academia and policy circles in the days of Margaret Thatcher and Ronald Reagan.
The ‘grand old man of economics’, Robert Solow – still active and still very critical of what is going on in the profession – already in his AEA Presidential Address of 1980 called it “foolishly restrictive” for the new classical economists to rule out by assumption the existence of wage and price rigidities and the possibility that markets do not clear. “I remember reading once that it is still not understood how the giraffe manages to pump an adequate blood supply all the way up to its head; but it is hard to imagine that anyone would therefore conclude that giraffes do not have long necks” (9).

(9) A. Klamer, 1984.
The ubiquitous presence of new-Keynesian DSGE models in present-day ‘mainstream’ macroeconomic research since the publication of Obstfeld and Rogoff’s seminal ‘redux’ paper in 1995 is, however, an altogether different matter. In section 2 we discuss a baseline new-Keynesian DSGE model and its variants and extensions. In section 3 we look at the solution of these models. The next section deals with calibration and estimation issues. In section 5 we draw conclusions for policy and economic theory.
2. The specification of new-Keynesian DSGE models
We concentrate on a particular variant of the baseline new-Keynesian DSGE model, and consider extensions and variants along the way. A representative household j in the continuum j ∈ [0,1] maximises the following intertemporal utility function:

$$E_0 \sum_{t=0}^{\infty} \beta^t u_t^j \qquad [1]$$

where $E_0$ is the expectations operator, conditional on the information available to household j in period 0, $\beta < 1$ is a uniform discount factor, and

$$u_t^j = \frac{1}{1-\sigma_c}\,(c_t^j)^{1-\sigma_c} - \frac{\lambda_l}{1+\sigma_l}\,(l_t^j)^{1+\sigma_l} + \frac{\lambda_m}{1-\sigma_m}\left(\frac{m_t^j}{P_t}\right)^{1-\sigma_m} \qquad [2]$$
is the instantaneous utility of the j-th household, $l_t^j$ being labour supply by that household, $m_t^j / P_t$ being the real value of its money holdings, and $c_t^j$ being aggregate consumption by the household, usually given by a Dixit-Stiglitz aggregator function (10):

$$c_t^j = \left[\int_0^1 (c_{it}^j)^{\frac{\varepsilon-1}{\varepsilon}}\,di\right]^{\frac{\varepsilon}{\varepsilon-1}} \qquad [3]$$

where $c_{it}^j$ is consumption of good i by the j-th household (i ∈ [0,1]) and $\varepsilon > 1$ is the contemporaneous elasticity of substitution between the different goods (11). $\sigma_c$ is the inverse of the intertemporal elasticity of substitution w.r.t. aggregate consumption, $\sigma_l$ is the inverse of the elasticity of labour supply w.r.t. the real wage, and $\sigma_m$ is the inverse of the intertemporal elasticity of substitution w.r.t. the holding of money; $\lambda_l$ and $\lambda_m$ are (positive) relative weights of work effort and money holdings in the utility function. It holds, for the labour supply of the j-th household, that

$$l_t^j = \int_0^1 l_{it}^j\,di \qquad [4]$$

where $l_{it}^j$ is the supply of labour by the j-th household to the monopolistically competitive firm producing the i-th good. [1] is maximised over the unknown functions $c_i^j$, $m^j/P$ and $l^j$, subject to the following period budget constraint:

$$\int_0^1 p_{it}\,c_{it}^j\,di + m_t^j = w_t^j l_t^j + m_{t-1}^j + \pi_t^j. \qquad [5]$$

$\pi_t^j$ are the profits accruing to the j-th household (it is assumed that the households are the owners of the firms; it is also assumed that the households share the revenues from owning the firms in equal proportion).

(10) Instead of a continuum of different consumption goods, each produced by a single firm on a monopolistically competitive market, a number of authors have considered an economy with a single final good traded under perfect competition, but produced with a technology using a continuum of intermediate inputs, each of them produced by a single firm on a monopolistically competitive market (see e.g. Smets and Wouters (2003), Christiano, Eichenbaum and Evans (2005)).
(11) Lower-case alphabetic symbols denote variables defined at the level of households and firms. Capitals denote variables defined at the macroeconomic level.

The first order conditions for $c_{it}^j$ yield the usual demand equation:
$$c_{it}^j = \left(\frac{p_{it}}{P_t}\right)^{-\varepsilon} c_t^j \qquad [6]$$

$p_{it}$ being the price of the i-th good, and the price index $P_t$ being given by

$$P_t = \left[\int_0^1 p_{it}^{1-\varepsilon}\,di\right]^{\frac{1}{1-\varepsilon}}. \qquad [7]$$

It holds that

$$\int_0^1 p_{it}\,c_{it}^j\,di = P_t\,c_t^j. \qquad [8]$$
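A minimal sketch of the standard expenditure-minimisation algebra behind [6]-[8], which the text leaves implicit: household j minimises $\int_0^1 p_{it}\,c_{it}^j\,di$ subject to the aggregator [3] delivering a given $c_t^j$. With $\Lambda_t$ the multiplier on that constraint, the first-order condition is

$$p_{it} = \Lambda_t \left(\frac{c_{it}^j}{c_t^j}\right)^{-1/\varepsilon} \;\Longrightarrow\; c_{it}^j = \left(\frac{p_{it}}{\Lambda_t}\right)^{-\varepsilon} c_t^j,$$

and substituting back into the aggregator yields $\Lambda_t = P_t$ as defined in [7], which gives the demand curve [6]; multiplying [6] by $p_{it}$ and integrating over i gives the expenditure identity [8].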
The intertemporal path of aggregate real consumption and the optimal value for the real wage rate that is set by the household are yielded by the familiar Euler conditions:

$$(c_t^j)^{-\sigma_c} = \beta\,E_t\!\left[(c_{t+1}^j)^{-\sigma_c}\,\frac{P_t}{P_{t+1}}\right]$$

$$\frac{w_t^j}{P_t}\,(c_t^j)^{-\sigma_c} = \lambda_l\,(l_t^j)^{\sigma_l} \qquad [9]$$

$$(c_t^j)^{-\sigma_c} = \lambda_m\left(\frac{m_t^j}{P_t}\right)^{-\sigma_m}.$$

The first-order conditions in expressions [9], together with the budget constraint in [5] and transversality conditions that are assumed to be satisfied, ruling out explosive developments, determine the optimal time-path of consumption, work time and money holdings of the representative household. In this baseline version of the new-Keynesian DSGE model we simplify the supply side of the economy by assuming that there is a continuum of firms, each producing a differentiated good i with the following linear technology:
$$\int_0^1 c_{it}^j\,dj \equiv c_{it} = \left(\frac{p_{it}}{P_t}\right)^{-\varepsilon}\int_0^1 c_t^j\,dj = \left(\frac{p_{it}}{P_t}\right)^{-\varepsilon} C_t = y_{it} = Z_t\,l_{it}, \qquad [10]$$

where $l_{it}$ is a composite CES labour quantity specified by

$$l_{it} = \left[\int_0^1 (l_{it}^j)^{\frac{\phi-1}{\phi}}\,dj\right]^{\frac{\phi}{\phi-1}}, \qquad [11]$$

and $Z_t = Z_{t-1}\exp(\eta_t)$ is a stochastic aggregate technology index, with $\eta_t$ being an independently distributed Gaussian process. $C_t$ is of course the national consumption level. $\phi > 1$ is the elasticity of substitution between different sorts of labour. According to the usual Dixit-Stiglitz logic, the wage index can be written as follows:

$$W_t = \left[\int_0^1 (w_t^j)^{1-\phi}\,dj\right]^{\frac{1}{1-\phi}}. \qquad [12]$$

It again holds, as in [8], that

$$\int_0^1 w_t^j\,l_{it}^j\,dj = W_t\,l_{it}. \qquad [13]$$
Since in this version of the model there is no physical capital (labour is the only primary factor of production), there is no accumulation equation, and profit maximisation by firms reduces to the static case. The pricing decision takes, in this setting of monopolistic competition, the familiar simple mark-up form:

$$p_{it} = \frac{\varepsilon}{\varepsilon-1}\,\frac{W_t}{Z_t}. \qquad [14]$$

It also holds, for labour demand by the i-th firm for the j-th variety of labour, that

$$l_{it}^j = \left(\frac{w_t^j}{W_t}\right)^{-\phi} y_{it}, \qquad [15]$$

which coincides with labour supply in expressions [2] and [4]. The model implies full employment, since each household sells labour according to its own preferences. [14], unsurprisingly in view of the uniform values of all the parameters across households and firms, implies symmetry. This allows us to write the period profits of the i-th firm as follows:

$$\pi_{it} = \pi_t = \frac{p_{it}\,y_{it}}{\varepsilon} = \frac{p_t\,y_t}{\varepsilon} = \pi_t^j. \qquad [16]$$

$\pi_t$, $p_t$ and $y_t$ are the representative profits of individual firms, resp. the representative price and output of individual goods.
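A minimal sketch of the static profit maximisation behind [14] and [16], which the text leaves implicit: firm i chooses $p_{it}$ to maximise the margin times the demand from [10], $(p_{it} - W_t/Z_t)(p_{it}/P_t)^{-\varepsilon} C_t$, with marginal cost $W_t/Z_t$. The first-order condition

$$(1-\varepsilon)\,p_{it}^{-\varepsilon} + \varepsilon\,\frac{W_t}{Z_t}\,p_{it}^{-\varepsilon-1} = 0 \;\Longrightarrow\; p_{it} = \frac{\varepsilon}{\varepsilon-1}\,\frac{W_t}{Z_t}$$

reproduces [14], and the margin is then a fraction $1/\varepsilon$ of revenue:

$$\pi_{it} = \left(p_{it} - \frac{W_t}{Z_t}\right) y_{it} = \left(1 - \frac{\varepsilon-1}{\varepsilon}\right) p_{it}\,y_{it} = \frac{p_{it}\,y_{it}}{\varepsilon},$$

which is [16].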
The symmetry that is present in the model allows us to model the money supply in a very simple way:

$$\int_0^1 m_t^j\,dj = m_t = M_t = M_{t-1}\exp(\xi_t + \gamma\eta_t), \qquad [17]$$

with $\xi_t$ being another Gaussian white noise process and $\gamma$ being a reaction parameter of the monetary authority with respect to technological shocks.
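A minimal simulation sketch of the two exogenous driving processes in [10] and [17]; the parameter values are illustrative, not taken from the text:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    sigma_eta, sigma_xi, gamma = 0.007, 0.005, 0.1  # illustrative std. deviations and policy reaction

    eta = rng.normal(0.0, sigma_eta, T)  # technology innovations eta_t, eq. [10]
    xi = rng.normal(0.0, sigma_xi, T)    # monetary innovations xi_t, eq. [17]

    log_Z = np.cumsum(eta)               # log Z_t = log Z_{t-1} + eta_t
    log_M = np.cumsum(xi + gamma * eta)  # log M_t = log M_{t-1} + xi_t + gamma * eta_t

Note that both indices are random walks in logs, so every innovation has a permanent effect on their levels.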
In its above form the model exhibits flexible prices and wages, and is therefore in essence not much more than a new-classical RBC model in the tradition initiated by Kydland and Prescott (1982) and Long and Plosser (1983), plus the assumption of monopolistically competitive markets. The model becomes ‘new-Keynesian’ with the additional assumption of price and/or wage rigidity. A popular approach among new-Keynesian DSGE modellers is to use the Calvo specification (Calvo, 1983). Wage- and price-setters receive, as it were, ‘green’ or ‘red’ light signals enabling or preventing them to adjust their prices. These signals arrive with a given fixed probability. Let $\omega_w$ and $\omega_p$ be the respective probabilities that households and firms are not able to ‘re-optimise’ their prices in a given period. Optimal wage-setting by the households and optimal price-setting by the firms that are ‘permitted’ to re-optimise is now more complicated than suggested by the Euler conditions in [9] and the simple mark-up pricing equation in [14], because both types of economic agents have to consider the possibility that they may not be able to adjust prices/wages in the future. The expected future costs that this entails have to be accounted for in the optimal decision taken today (see e.g. Erceg, Henderson and Levin (2000) for details). This introduces additional dynamics into the model and accentuates the role of future expectations.

Let $\tilde p_{it}$ – as the alternative for $p_{it}$ in expression [14] – be the optimal price obtained in this way, and let $\tilde w_t^j$ be the alternative optimal real wage set by the j-th household (12). Symmetry now allows us to redefine $P_t$ and $W_t$ as follows:
$$P_t^{1-\varepsilon} = (1-\omega_p)\,(\tilde p_t)^{1-\varepsilon} + \omega_p\,(P_{t-1})^{1-\varepsilon}$$
$$W_t^{1-\phi} = (1-\omega_w)\,(\tilde w_t)^{1-\phi} + \omega_w\,(W_{t-1})^{1-\phi}. \qquad [18]$$

(12) Smets and Wouters (2003, 2007), among others, assume that those agents that are not allowed to re-optimise in the Calvo sense adjust their price/wage to past inflation levels. This slightly changes the form of the expressions in [18].
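To make the Calvo mechanism concrete, here is a small simulation sketch that checks the price-index recursion in [18]; the parameter values are illustrative, and a large finite number of firms stands in for the continuum:

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 100_000, 40     # firms and periods; large finite N approximates the continuum
    eps = 6.0              # elasticity of substitution between goods
    omega_p = 0.75         # Calvo probability of NOT being allowed to re-optimise
    p = np.ones(N)         # individual prices, all starting at 1
    max_gap = 0.0

    for t in range(T):
        p_tilde = 1.0 + 0.01 * (t + 1)    # common reset price (an arbitrary drifting target here)
        P_old = np.mean(p ** (1 - eps)) ** (1 / (1 - eps))
        green = rng.random(N) > omega_p   # firms receiving a 'green light' may re-optimise
        p[green] = p_tilde
        P_new = np.mean(p ** (1 - eps)) ** (1 / (1 - eps))
        # recursion [18]: P_t^(1-eps) = (1-omega_p)*p_tilde^(1-eps) + omega_p*P_{t-1}^(1-eps)
        P_rec = ((1 - omega_p) * p_tilde ** (1 - eps)
                 + omega_p * P_old ** (1 - eps)) ** (1 / (1 - eps))
        max_gap = max(max_gap, abs(P_new - P_rec))

    print(f"largest gap between the simulated index and recursion [18]: {max_gap:.2e}")

The gap shrinks with N: in the continuum the non-adjusting firms are an exactly representative sample of last period's price distribution, which is what makes the simple recursion in [18] exact.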
This baseline version of the new-Keynesian DSGE model has in recent years been adapted and extended in a number of directions. We review the most important instances.

• Physical capital as a second primary input. Capital is owned by households and rented to the firms. The capital accumulation equation adds to the dynamics of the model. Capital income is assumed to equal the marginal productivity of capital, which becomes an endogenous variable in the model (see e.g. Christiano et al., 2005). Some authors also consider fixed costs in the production sphere (e.g. Adolfson et al., 2007).
• Households can invest part of their wealth in government bonds at an interest rate that is set by the central bank. Monetary policy is in that case modelled by means of a Taylor reaction rule (an illustrative specification follows this list). This option is chosen in many papers.
• Variable capacity use of capital and labour. Galì (1999), for instance, considers the disutility from work in the utility function as a positive function of both hours worked and effort supplied. Christiano et al. (2005) and also Smets and Wouters (2003, 2007) include the rate of capital utilisation, next to the investment decision, in the decision set of the representative household.
• Habit formation in the consumption function (e.g. Smets and Wouters, 2003, 2007).
• Wage stickiness modelled either through the intermediate role of a monopolistic trade union, or through a Nash bargaining process between a union and a representative firm, possibly combined with Calvo-type rigidity (e.g. Smets and Wouters, 2007), or through the use of a search friction model (Gertler et al., 2008).
• Open economy aspects. Adolfson et al. (2005) extend the Christiano et al. (2005) model to a small open economy. Other contributions in this field include Galì and Monacelli (2005) and Lindé et al. (2008). Two-country new-Keynesian DSGE models are analysed in Lubik and Schorfheide (2005) and Rabanal and Tuesta Reátegui (2006). Galì and Monacelli (2008) examine monetary and fiscal policy in a currency union.
• In open economy models, incomplete markets are introduced by considering transaction costs for undertaking positions in the foreign bonds market, and by gradual exchange rate pass-through, i.e. import prices do not immediately reflect prices on the world market expressed in domestic currency (see e.g. Adolfson et al. (2007), Lindé et al. (2008) and Benigno (2009)).
• Additional types of shocks. The Smets and Wouters paper of 2007 goes far along this path: it considers shocks to technology, investment relative prices, intertemporal preferences, government spending (including net exports), monetary policy, the price mark-up and the wage mark-up. Rabanal and Tuesta Reátegui (2006), in their two-country modellisation, also consider country-specific technology shocks and UIP shocks.
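The Taylor reaction rule referred to in the list above can, in its common smoothed form, be written as follows; the specification and the coefficient values are conventional illustrations, not those of any particular paper cited here:

$$i_t = \rho\,i_{t-1} + (1-\rho)\left[\bar r + \pi^{*} + \phi_\pi(\pi_t - \pi^{*}) + \phi_y\,\hat y_t\right] + \varepsilon_t^i, \qquad \text{e.g. } \rho = 0.8,\ \phi_\pi = 1.5,\ \phi_y = 0.5,$$

with $\hat y_t$ the output gap and $\varepsilon_t^i$ a monetary policy shock. The requirement $\phi_\pi > 1$ (the 'Taylor principle') is what typically delivers a determinate equilibrium in the Blanchard-Kahn sense discussed in section 3.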
Three important variants/extensions of the baseline model merit specific attention: the specification of monetary policy, the presence of a commercial banking sector and the issue of unemployment.

Although optimal monetary policy decisions are one of the main focuses of the large majority of (new-Keynesian) DSGE model papers, the concept of money is nearly always only weakly defined. In the much cited papers of Smets and Wouters (2003, 2007), for instance, a so-called ‘cashless limit economy’ is considered. Money as such is absent in the model, even if there is a central bank pursuing a monetary policy in the form of a Taylor interest rule. The background of this modelling choice is the old Walras and Arrow-Debreu general equilibrium concept of an economy under perfect competition. These models, which surely had no pretence to describe reality, were insufficiently detailed to deal with the ways in which people pay for goods, other than by saying that they had to stay within the borders of a budget constraint. If these models wanted to say something meaningful about the money supply or monetary policy, they had to make simplifying assumptions like the ‘cash-in-advance’ hypothesis, which states that each economic agent must have the necessary cash available before buying goods.

Another simplifying option is the one that Woodford chooses in his ‘neo-Wicksellian’ approach (cf. Woodford (1998) and Woodford’s magnum opus Interest and Prices, 2003). Woodford – Smets and Wouters and a number of other authors follow suit – observing that paper and metal currency is gradually losing importance, assumes that the limit case where paper and metal money have disappeared and only electronic money remains continues to yield in their DSGE models a meaningful solution for the nominal price level and the nominal rate of interest. Buiter (2002) strongly objects. He states that “Woodford’s cashless limit is simply the real equilibrium solution to an accounting system of exchange to which money or credit, be it cash (in-advance or in-arrears) or electronic transfer, is an inessential addition”. Cashless limit economies in the sense of Woodford produce an equilibrium by means of the computing power of the auctioneer in an Arrow-Debreu auction, and should not be confused with an electronic money system (see also Rogers, 2006). Cashless limit models in the sense of Woodford may have pedagogical merits, but are unable to describe what is going on in a modern, highly monetised economy, let alone to say something meaningful about the way in which the central bank should act.

This is not to say that DSGE models that include a money supply variable are much more realistic. The basic problem remains that in DSGE models savers and investors are united in the same economic agent, the ‘representative’ household (13). This implies frictionless financial markets, and also no hierarchy of interest rates. The single interest rate set by the central bank is at the same time the rate of return on capital, the rate of return earned by firms and households on savings, and the rate paid by borrowers. There is no place, and no need, for a commercial bank sector that acts as intermediary. Recently, in Cúrdia and Woodford (2008), an (exogenous) credit friction was introduced, allowing for a time-varying wedge between the debit and credit interest rate, but in the continuing absence of commercial banks. If there are no commercial banks in the model, insolvency and illiquidity problems caused by these banks are not answered. Obviously, the models do not allow such questions to be asked in the first place.

The full employment implication of, specifically, new-Keynesian DSGE models is another sore point. The reason for this feature is of course the symmetry in the continuum of households. Each household is ‘representative’ in its own right. If one household finds employment, all do. No involuntary unemployment can occur, only voluntary movements in hours of work or intensity of effort, i.e. movements on the ‘intensive’ margin. This remains true regardless of the particular form taken by wage or price rigidity. Both Blanchard and Galì (2008) and Gertler et al. (2008) provide examples of new-Keynesian DSGE models in which there are movements in employment along the extensive margin (14). They do so by redefining the representative household as consisting of family members with and without a job, and combining this feature with a wage bargaining process. Gertler et al. also consider the probability of a match between unemployed workers and vacancies. We note in passing that both models are of the ‘cashless limit’ type.

(13) An interesting alternative is analysed in De Graeve et al. (2008), who introduce some degree of heterogeneity by considering three different types of households: workers, shareholders and bondholders.
(14) Blanchard and Galì start their analysis by noting that the absence of involuntary unemployment was viewed as one of the main weaknesses of the RBC model (see e.g. Summers, 1991), but was then ‘exported’ to new-Keynesian DSGE models.
This brings us to the fundamental weaknesses of (new-Keynesian) DSGE models. We discuss successively

• the issue of representative economic agents, and the symmetry it entails,
• the rationality assumption,
• imposed stability: the transversality condition,
• the ‘efficient markets’ and ‘complete markets’ paradigms.
It should be well understood that the use of representative economic agents in DSGE models is a way to circumvent the ‘fallacy of composition’, i.e. the implications of the Sonnenschein-Mantel-Debreu theorem, which states that the properties of individual behaviours, after aggregation, generally do not carry over into equally nice and transparent properties of the aggregated entities (see e.g. Kirman, 1992). The only way out that preserves logical consistency is therefore to assume that all agents are alike. Symmetry in this context is automatic and inevitable. But does this make representative agents an acceptable scientific concept? The answer is ‘no’ if one uses the traditional argument, as voiced by Atkinson (2009), that in the real world people have different, often conflicting, interests and aspirations, and that by neglecting these differences one rules out the most interesting welfare-economic problems. It is certainly again ‘no’ if we realise that individual agents that are clones of each other act on their own, and therefore do not interact. This is what is called the agent coordination problem. Macroeconomics is different from microeconomics in the sense that it should study the complex properties of the whole that emerge from the interaction of individual agents. The whole is not equal to the sum of its parts. Representative agent models fail to address this very basic macroeconomic question. Howitt et al. (2008) therefore ask what is so sound about the ‘sound microfoundations’ that DSGE modellers insist on.

The representative household, then, maximises an intertemporal utility function under a budget constraint. Firms maximise an intertemporal profit function under constraints, like the available production technology and the time path followed by the capital stock. The implied rationality, also with respect to the formation of expectations, and the ability of these agents to get hold of the necessary information, is taken for granted. No one says it better than Solow (2008, pp. 243-244): “(…) basically this is the Ramsey model transformed from a normative account of socially optimal growth into a positive story that is supposed to describe day-to-day behavior in a modern industrial capitalist economy. It is taken as an advantage that the same model applies in the short run, the long run, and every run with no awkward shifting of gears. And the whole thing is given the honorific label of ‘dynamic stochastic general equilibrium’. No one would be driven to accept this story because of its obvious ‘rightness’. After all, a modern economy is populated by consumers, workers, pensioners, owners, managers, investors, entrepreneurs, bankers, and others, with different and sometimes conflicting desires, information, expectations, capacities, beliefs, and rules of behavior. Their interactions in markets and elsewhere are studied in other branches of economics; mechanisms based on those interactions have been plausibly implicated in macro-economic fluctuations. To ignore all this in principle does not seem to qualify as mere abstraction – that is setting aside inessential details. It seems more like the arbitrary suppression of clues merely because they are inconvenient for cherished preconceptions. I have no objection to the assumption, at least as a first approximation, that individual agents optimize as best they can. That does not imply – or even suggest – that the whole economy acts like a single optimizer under the simplest possible constraints. So in what sense is this ‘dynamic stochastic general equilibrium’ model firmly grounded in the principles of economic theory?”

Buiter (2009a) concurs and points out that Ramsey’s model actually was a model for a social planner trying to determine the long-run optimal savings rate. The mathematical programming problem to be solved by the central planning agency only leads to a meaningful solution if this agency, at the same time, also makes sure that terminal boundary conditions (the so-called ‘transversality conditions’), which preclude explosive time-paths, are met. These conditions express the necessity that the influence on the present of what happens in an infinitely distant future vanishes. DSGE modellers transplant this social planner’s programming problem to the ‘real life’ situation of a ‘representative’ individual, expecting to describe in this way not only his long-run behaviour, but also his behaviour in the short and the medium run. Only, in a decentralised market economy, there is no such thing as a mathematical programmer that imposes the necessary terminal conditions. There is no real-life counterpart to the transversality conditions imposed on Ramsey’s social planner. Panics, manias and crashes do happen, and are not confined to the nearly cataclysmic events of the Credit Crunch. Post-war economic history abounds with examples. In the period since the New York Stock Exchange crash of October 1987 alone, we have had, successively, the Mexican Crisis (1994), the Asian Crisis (1997), the LTCM Crisis (1998 to early 2000), the bursting of the dot-com bubble (2000-2001), and the threatening panic following 9/11/2001.

Much has been written in the last forty years on rationality and the questionable existence of the ‘homo economicus’. This is not the place to expand on the important contributions by economists and social psychologists like Tversky, Kahneman, Selten, Fehr a.o., documenting, by means of controlled experiments, the systematic deviation of economic actors from rational behaviour. The rationality assumption, especially when applied to asset markets – regardless of the model uncertainty that is always present – has however recently received additional severe blows. Obvious phenomena in the Credit Boom and Bust were hubris (15) and power play by the main actors, and herd behaviour by the crowd. With respect to the latter phenomenon, it is now clear that most investors and bankers who bought the securitised mortgages did so mainly because other smart people, who were supposed to be knowledgeable, did so too. DSGE models completely miss Keynes’ ‘animal spirits’ point (see Akerlof and Shiller (2009), Shiller (2009), and an interesting paper by De Grauwe (2008) using a ‘heuristic’ expectations formalisation).

With respect to the formation of expectations, there is however more to it than outright irrationality. The issue is foremost one of the unknowability of the future as a result of so-called ‘Knightian uncertainty’. Knight drew the distinction between ‘risk’ and ‘uncertainty’: risk is randomness with a known probability distribution and therefore insurable; (Knightian) uncertainty is randomness with an unknown or even unknowable probability distribution and therefore uninsurable. Phelps (2009) argued that the risk management of banks related to ‘risk’ observed as variability over some recent past.

(15) Buiter (2009b) e.g. refers to the role of testosterone in traders’ rooms.
This was understood as variability around some equilibrium path, while the volatility of the equilibrium path itself was not considered. Stable and knowable equilibrium paths play however a crucial role in (new-Keynesian) DSGE models (see further in section 3).

Another illuminating angle from which to approach this unknowability problem is to see that, on the micro- as well as the macro-scale, the long-run dynamics are most of the time path-dependent. Examples of hysteresis have been well documented in international trade, industrial innovation, the localisation of industries, consumer behaviour, the functioning of labour markets, and consequently in the determination of the long-run rate of economic growth itself (see Cross (2008) on DSGE modelling and hysteresis). DSGE modellers seem moreover to have neglected the important insights offered by endogenous growth theory. Instead they have regressed to the old Solow-Cass-Koopmans growth model used by the first RBC theorists (16).

This brings us to what perhaps is the most crucial assumption made by DSGE modellers: ‘complete and efficient markets’. The ‘Complete Market Hypothesis’ refers to the existence of markets. A ‘complete system of markets’ is one in which there is a market for every good, in the broadest possible sense of the word. A ‘good’ is then, as in the Arrow-Debreu approach of general equilibrium, defined not only in terms of its immanent physical properties, but is also indexed by time, place and state of nature or state of the world. It is then possible for agents to instantaneously enter into any position with respect to whatever future state of the economy. The ‘Efficient Market Hypothesis’ refers to the working of markets. Allocative and expectational rationality holds and market prices reflect market fundamentals. Add to this the assumption made by DSGE modellers that intertemporal budget constraints are always satisfied, and one gets an ‘economy’ where there are no contract enforcement problems, no funding or market illiquidity, no insolvency, no defaults and no bankruptcies.

The comments of Goodhart, former member of the Monetary Policy Committee of the Bank of England, are devastating: “This makes all agents perfectly creditworthy. Over any horizon there is only one interest rate facing all agents, i.e. no risk premia. All transactions can be undertaken in capital markets; there is no role for banks. Since all IOUs are perfectly creditworthy, there is no need for money. There are no credit constraints. Everyone is angelic; there is no fraud; and this is supposed to be properly microfounded!” (Goodhart, 2008).

(16) Solow is very much aware of this and distances himself from the use by DSGE modellers of his own growth theory (Solow, 2008).
3. The solution of new-Keynesian DSGE models
The baseline model presented in the previous section, and of course also its extensions, are highly non-linear. In order to obtain a workable and estimable version, it is current procedure to (log)linearise the model around the equilibrium path and to reduce stochasticity in the model to well-behaved additive, normally distributed disturbances with a given distribution (17). In the determination of the optimal time-paths (in levels) of the different variables of the model it was assumed that the transversality conditions were satisfied. This, in principle, should have ruled out explosive behaviour of these variables, but, since these transversality conditions do not actually intervene in the derivation of the optimal time-paths (most DSGE modellers do not even bother to mention them), saddle path stability of the long-run equilibrium is not automatically ensured. The latter is however a necessary condition for the long-run equilibrium to be meaningful in the presence of rational expectations. To this end the linearised version of the model is subjected to the so-called Blanchard-Kahn test. This test requires that, in order to have a unique and stable path, the number of eigenvalues of the linearised system smaller than 1 in absolute value be equal to the number of predetermined endogenous variables, and the number of eigenvalues with absolute value larger than 1 be equal to the number of anticipated variables (Blanchard and Kahn, 1980). The problem of course is that this test can, in nearly all cases, only be carried out when the parameters of the model are known, either through calibration of the model or through econometric analysis (see next section).

(17) Some authors have started to experiment with second-order Taylor expansions as an alternative to linearisation (see e.g. Schmitt-Grohé and Uribe, 2004).
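A minimal sketch of the counting exercise behind the Blanchard-Kahn test; the 2×2 matrix below is purely illustrative and stands for the state transition matrix of some linearised model with the predetermined variables ordered first:

    import numpy as np

    # Hypothetical linearised system x_{t+1} = A x_t with one predetermined
    # variable (e.g. capital) and one anticipated variable (e.g. consumption).
    A = np.array([[0.9, 0.1],
                  [0.3, 1.2]])
    n_predetermined = 1
    n_anticipated = A.shape[0] - n_predetermined

    eig = np.linalg.eigvals(A)
    n_unstable = int(np.sum(np.abs(eig) > 1))  # eigenvalues outside the unit circle

    # Blanchard and Kahn (1980): a unique stable (saddle) path exists iff the number
    # of unstable eigenvalues equals the number of anticipated variables.
    if n_unstable == n_anticipated:
        print("unique stable path (saddle path)")
    elif n_unstable < n_anticipated:
        print("indeterminacy: multiple stable paths")
    else:
        print("no stable solution")

For the matrix above the eigenvalues are roughly 0.82 and 1.28, so the condition is satisfied; changing A (i.e. the model's deep parameters) can easily tip the system into indeterminacy or instability, which is why the test can in practice only be run once the parameters have been calibrated or estimated.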
The linearisation takes place around the steady state solution of the model. But this steady state, by its very nature, does not refer to average situations, but to the extreme situations of full capacity use, zero average inflation, purchasing power parity (in open economy models), etc. A good illustration of this is the point conceded by Christiano et al. when they comment on the fact that they take zero profits as the steady state value: “Finally, it is worth noting that since profits are stochastic, the fact that they are zero, on average, implies that they are often negative. As a consequence, our assumption that firms cannot exit is binding. Allowing for firm entry and exit dynamics would considerably complicate our analysis” (Christiano et al., 2005, p. 14). Perhaps zero profits are an interesting benchmark, but they can hardly be a steady state value in a monopolistically competitive environment.

Combined with the requirement that shocks in a linearised version of a non-linear model have to remain small, one cannot but conclude that, in the very best of cases, new-Keynesian DSGE models can only describe what happens in the immediate neighbourhood of a state of blissful tranquillity. The need to linearise around a steady state also implies that one has to limit the analysis to the effects of temporary shocks. Permanent shocks cannot be accommodated (see e.g. Mancini Griffoli, 2007). More fundamentally, stripping a non-linear model of its non-linearities may very well mean – the more so if you consider the interaction of these non-linearities with uncertainty – that you delete from the model everything that makes the dynamics of reality interesting: threshold effects, critical mass effects, regime-switching points, etc. If there is one thing that recent economic history has made clear, it is that economic systems can be tranquil (i.e. ‘stable’) for some time, but that, once in a while, unforeseen events push the system out of the ‘corridor of stability’. Linear systems, by their very nature, cannot have this corridor property.

The nature of stochasticity in linearised DSGE models (Buiter (2009a) cynically speaks of ‘trivialising’) is another sore issue. Firstly, linear models with independently distributed disturbances have the ‘certainty equivalence’ property. Linearising, as far as the mean of the solved time path goes, reduces the model in actual fact to a deterministic one. Secondly, if one assumes that the disturbances are normally distributed, as DSGE modellers traditionally do, one dramatically misses one of the essential aspects of, in particular, movements of prices on asset markets. As an illustration of this, De Grauwe, in a witty contribution, has shown that the 10.88% fall of the Dow Jones Industrial Average on 28/10/2008, if one assumes an underlying normal distribution, would only take place once every 73,357,946,799,753,900,000,000 years, exceeding by far the assumed age of the universe (De Grauwe, 2009).
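The flavour of De Grauwe's calculation can be reproduced in a few lines; the daily standard deviation below is an illustrative stand-in for the one he estimated from the data, so the resulting number of years differs from his exact figure:

    import math

    move = -10.88   # daily percentage change to be 'explained'
    sigma = 1.0     # illustrative daily standard deviation (in %) under the normal assumption
    z = abs(move) / sigma

    # One-sided normal tail probability P(X <= -10.88%); erfc remains accurate in the far tail.
    p = 0.5 * math.erfc(z / math.sqrt(2.0))
    trading_days_per_year = 252
    waiting_time_years = 1.0 / (p * trading_days_per_year)

    print(f"tail probability per trading day: {p:.2e}")
    print(f"expected waiting time: {waiting_time_years:.2e} years")

With these inputs the expected waiting time already comes out at around 10^24 years; fat-tailed distributions, by contrast, assign such moves a non-negligible probability.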
4. Calibration and estimation of DSGE models
In older DSGE models, in line with what was common in new-classical RBC models, parameters were a-prioristically chosen so that the dynamic qualities of the solution, in terms of the lower moments of the underlying distributions, conformed with what was observed. This ‘calibration’ approach, as opposed to a traditional econometric approach, was preferred because of the complicated, highly non-linear nature of the models, and presumably also because RBC theorists and early DSGE modellers – probably unconsciously – did not wish to confront their very sketchy and unrealistic models directly with the data. Solow is very caustic on this practice. We cite: “The typical ‘test’ of the model, when thus calibrated, seems to be a very weak one. It asks whether simulations of the model with reasonable disturbances can reproduce a few of the low moments of observed time series: ratios of variances or correlation coefficients, for instance. I would not know how to assess the significance level associated with this kind of test. It seems offhand like a rather low hurdle. What strikes me as more important, however, is the likelihood that this kind of test has no power to speak of against reasonable alternatives. How are we to know that there are not scores of quite different macro models that could leap the same low hurdle or a higher one? That question verges on the rhetorical, I admit. But I am left with the feeling that there is nothing in the empirical performance of these models that could come close to overcoming a modest skepticism. And more certainly, there is nothing to justify reliance on them for serious policy analysis” (Solow, 2008, p. 245).

In more recent DSGE models one usually follows a mixed strategy, but the inauspicious heritage of calibration lingers on. It does so in two ways. Firstly, part of the often numerous parameters are still calibrated. Secondly, another part is estimated with Bayesian procedures in which the choice of priors, whether or not inspired by calibrated values taken from previous studies, by the very nature of the Bayesian philosophy, heavily biases the ultimate estimates. One of the reasons to opt for Bayesian estimation techniques is that likelihood functions of DSGE models often show numerous local maxima and nearly flat surfaces at the global maximum. Traditional maximum likelihood estimation strategies therefore often fail (see Fernandez-Villaverde, 2009). But, rather than opting for the flight forward and resorting to Bayesian techniques, this should perhaps warn one that DSGE models do not marry well with real-life data.

In the frequently cited Christiano et al. paper, the estimation strategy is, to be sure, more careful, in the sense that the authors in a preparatory step use an unrestricted VAR procedure to estimate the impulse responses of eight key macroeconomic variables of the model to a monetary policy shock, in order, in a second step, to minimise a distance measure between these estimated IRFs and the corresponding reaction functions implied by the model. However, eight other very crucial parameters are fixed a priori (among which the discount factor, the parameters of the utility function of the households, the steady state share of capital in national income, the annual depreciation rate, the fixed cost term in the profit function, the elasticity of substitution of labour inputs in the production function, and the mean growth rate of the money supply). This implies of course that the remaining ‘free’ parameters are highly restricted and thus remain heavily biased.

In the case of normality, when the variance-covariance matrix of the disturbances is known, the posterior mean can be written as a matrix-weighted average of the prior mean and the least-squares coefficient estimates, where the weights are the inverses of the prior and the conditional covariance matrices. If the variance-covariance matrix is not known, as is nearly always the case, the relation between prior and posterior values of the parameters is of course more complicated, but the general picture remains valid (see e.g. Green, 2003).

The conclusion is that the practice of calibration is still widespread. Bayesian statistical techniques produce a particular kind of hysteresis effect. Parameter values, once fixed by an ‘authoritative’ source, live on in the priors of subsequent studies, which in turn perpetuate possible errors. Blanchard, although himself the author of a few new-Keynesian DSGE papers, worries that “once introduced, these assumptions [about the priors and a priori fixed parameters used in models] can then be blamed on others. They have often become standard, passed on from model to model with little discussion” (Blanchard, 2008).
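The matrix-weighted-average formula referred to above (normal case, known variance; see Green, 2003) can be made concrete in a few lines; the data-generating process and the prior below are invented purely for illustration:

    import numpy as np

    rng = np.random.default_rng(2)

    # Invented data: y = X b + e with true coefficients b = (0.2, 0.8)
    n, sigma2 = 50, 1.0
    X = rng.normal(size=(n, 2))
    y = X @ np.array([0.2, 0.8]) + rng.normal(0.0, np.sqrt(sigma2), n)

    b_ols = np.linalg.solve(X.T @ X, X.T @ y)  # least-squares estimate
    V_ols = sigma2 * np.linalg.inv(X.T @ X)    # its conditional covariance matrix

    b_prior = np.zeros(2)                      # a 'calibrated' prior mean
    V_prior = 0.01 * np.eye(2)                 # a tight prior covariance: heavy weight on the prior

    # Posterior mean: matrix-weighted average of prior mean and OLS estimate,
    # the weights being the inverses of the respective covariance matrices.
    W_prior, W_ols = np.linalg.inv(V_prior), np.linalg.inv(V_ols)
    b_post = np.linalg.solve(W_prior + W_ols, W_prior @ b_prior + W_ols @ b_ols)

    print("OLS estimate:  ", b_ols.round(3))
    print("posterior mean:", b_post.round(3))  # pulled strongly towards the prior mean

With a prior this tight the posterior mean sits much closer to the 'calibrated' prior than to what the data say, which is exactly the hysteresis mechanism described in the text.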
5. Conclusions on policy and theory
New-Keynesian DSGE models and new-Keynesianism as such are not the same thing. Criticism of the former does not mean that one does not recognise the progress that has been made by economists who fall under the latter denomination. Mankiw, in a brilliant review of the present state of economic theory (Mankiw, 2006), distinguishes four phases in ‘modern’ Keynesian theory: 1) the Keynesian-neoclassical synthesis of Samuelson, Modigliani and Tobin; 2) the first so-called new-Keynesian wave, with Malinvaud and Barro and Grossman’s disequilibrium approach; 3) the second new-Keynesian wave, with Akerlof, Mankiw, Summers, Stiglitz and Blanchard and Kiyotaki’s insights on the sources of wage and price rigidity; and 4) the third, new-Keynesian DSGE, wave. Mankiw (as, for that matter, Krugman (2009)) sees a scientific progression in each of the first three phases, but discerns a regression in the fourth.

It is, for that matter, an open question whether new-Keynesian DSGE models are ‘new-Keynesian’ in the original sense of the word. It is of course the case that price and wage stickiness is one of the basic ingredients of these models, but the simple fact that a representative household at all times maximises its intertemporal utility (even when, as in Blanchard and Galì, not every member of that household does so) implies that new-Keynesian DSGE models are, in actual effect, nothing more than revamped new-classical RBC models. Goodfriend and King (1997), followed by Clarida, Galì and Gertler (1999), Woodford (2003), but also Smets and Wouters (2007), have understood this and dubbed this, in their view ‘consensus’, paradigm the ‘New Neoclassical Synthesis’.

Armed with this ‘consensus’ view on the economy, DSGE modellers also claim to be able to formulate a ‘consensus’ view on optimal monetary policy. Woodford (2009, p. 273) summarises the obtained DSGE result in this way: “Monetary policy is now widely agreed to be effective, especially as a means of inflation control. The fact that central banks can control inflation if they want to (and are allowed to) can no longer be debated, after the worldwide success of disinflationary policies in the 1980s and 1990s; and it is now widely accepted as well that it is reasonable to charge central banks with responsibility for keeping the inflation rate within reasonable bounds.” Apart from the fact that this sounds very much like an application of the ‘Coué Method’, it should not come as a surprise that inflation-targeting by the central bank comes out as a result (even in models, like Smets and Wouters’ 2007 exercise, that do not contain a money supply variable in the first place), given that most recent new-Keynesian DSGE papers model the behaviour of the central bank in the form of a Taylor rule, and given that the validation of DSGE models is at best shaky.

The record of the last decade tells however a very different story, especially if we consider the monetary policy pursued by the Fed. Instead of following an explicit inflation-targeting policy, Alan Greenspan obviously, on practically a day-to-day basis, manipulated the official discount rate with a view to stabilising financial markets. Not only was this ‘Greenspan Put’ exercised with systematic regularity; in the years 2002-2004 interest rates were kept too long at an abnormally low level. Only on the surface could this appear as evidence of an inflation-targeting policy: inflation, as measured by the CPI, remained low in that period, the implication then being that the interest rate level is ‘right’, whatever the level of interest rates at each particular moment. According to Leijonhufvud (2008), the inflation rate could however remain low only because US consumer goods prices were stabilised through competition with (massive) price-elastic imports from countries like China, which had chosen not to let its currency appreciate, in view of the amount of US dollars it continued to accumulate as a result of the continuing deficit on the US current account (see also James K. Galbraith (2008) for a very critical account of the monetary policy of the Fed and of present-day economic theories that try to explain it).

Goodhart (2008, p. 14), a central banker himself, asks: “How on earth did central banks get suckered into giving credence to a model which is so patently unsatisfactory?” Mankiw (2006), for that matter, disputes in a closely reasoned argument that central bankers have indeed based their actual policy on the results obtained by DSGE models, countering in this way the sometimes boisterous claims by new-Keynesian DSGE modellers (e.g. Woodford and Goodfriend in many of their publications) (see Woodford (2009) for a rebuttal).

Central bank independence, i.e. the doctrine that there should be a separation of responsibility for monetary and fiscal policy, with the latter ‘flying on auto-pilot’, is another principle hallowed by new-classicals and new-Keynesian DSGE modellers. It broke down once the going got rough, especially in the US and the UK (see e.g. Buiter (2009b) on this, and also Leijonhufvud (2008)) (18), the Fed acting, in the words of Buiter, “like an off-balance and off-budget ‘special purpose vehicle’ of the US Treasury”.

Solow (2008) asks himself what accounts for the ability of new-Keynesian DSGE modelling “to win hearts and minds among bright and enterprising academic economists”. One type of answer is purely psychological: the ‘purist streak’ of young people, the search for a ‘theory of everything’, as one also witnesses in the efforts of elementary particle physicists in their quest for first principles (19). Another answer is sociological. Streeten (2002, pp. 15-16) comes up with an interesting theory: “The problem with American undergraduate education is that most American schools (with a few notable exceptions) teach so badly that the young people have to go through remedial training in their early university years. They are often almost illiterate when they enter university. At the same time, these youngsters are often eager to learn, have open minds, and are asking big questions. But while their minds are open and while they are eager to ask these large questions, they do not have the basic training to explore them. By the time they reach graduate studies, the groundwork has been done, but the need to chase after credits and learn the required techniques tends to drive out time and interest in exploring wider areas, asking interesting questions. As a result, only very few exceptional young people are led to approach the subject with a sense of reality and vision. The majority is stuck in the mould of narrow experts.” Rodrik (2009) comes to a similar conclusion.
(18) The ECB has of course been able, by the very structure of the EMU, to safeguard its independence, but one might perhaps at the same time question its relevance in the face of the economic crisis.
(19) It is, for instance, illustrative that some DSGE modellers, when they speak of (log)linearising their models, use the term ‘perturbation techniques’ (as used by quantum physicists) (see e.g. Fernandez-Villaverde, 2009).

As already mentioned in the introduction, ‘groupthink’ is of course the obvious third explanation (20).

It is time to move to a final conclusion. It is better for macroeconomics, if it wants to regain relevance, to move away from formalism, even if it is logically consistent and therefore aesthetic, towards a more engineering-like approach. This point is made very forcefully by Mankiw (2006) and Howitt et al. (2008). There is nothing ‘unscientific’ about this. It is indeed difficult to agree with Woodford (2009, p. 274) when he writes that “one hears expressions of scepticism about the degree of progress in macroeconomics from both sides of this debate – from those who complain that macroeconomics is too little concerned with scientific rigor, and from those who complain that the field has been too exclusively concerned with it.” One should rather say that DSGE modelling is a matter not of scientific rigor, but of formal rigor. Better no micro foundations than bad micro foundations. New-Keynesian DSGE models are “self-referential, inward-looking distractions at best” (Buiter, 2009a) – toy models in the words of Blanchard (2008). Is it not sufficiently ambitious, with the limited knowledge that macroeconomists have about the real world, to move on (back), as Solow (2008) suggests, to small, transparent, tailored, often partial-equilibrium models? Could it be that present-day mainstream macroeconomics is a ‘degenerative research programme’ in the sense that Imre Lakatos gave to that term (Lakatos, 1970)?

(20) Lee Smolin (2006) tells of a similar situation of a dominating paradigm in elementary particle physics (string theory), with what he sees (rightly or wrongly) as a possible case of ‘groupthink’.
References

Adolfson, M., S. Laséen, J. Lindé and M. Villani (2005), ‘Bayesian Estimation of an Open Economy DSGE Model with Incomplete Pass-through’, Journal of International Economics, 72: 481-511.
Akerlof, G.A. and R.J. Shiller (2009), Animal Spirits: How Human Psychology Drives the Economy and Why It Matters for Global Capitalism (Princeton: Princeton University Press).
Atkinson, A.B. (2009), ‘Economics as a Moral Science’, Economica, published online at http://www3.interscience.wiley.com/journal/122314362/abstract?CRETRY=1&SRETRY=0
Benigno, P. (2009), ‘Price Stability with Imperfect Financial Integration’, Journal of Money, Credit and Banking, 41(suppl.): 121-149.
Blanchard, O.J. (2008), ‘The State of Macro’, NBER Working Paper 14259, August 2008.
Blanchard, O.J. and J. Galì (2008), ‘A New-Keynesian Model with Unemployment’, CEPR Discussion Paper DP 6765.
Blanchard, O.J. and C.M. Kahn (1980), ‘The Solution of Linear Difference Models under Rational Expectations’, Econometrica, 48: 1305-1313.
Buffet, W. (2002), Berkshire Hathaway Annual Report for 2002 (www.fintools.com/docs/Warren%20Buffet%20on%20Derivatives.pdf).
Buiter, W.H. (2002), ‘The Fallacy of the Fiscal Theory of the Price Level: a critique’, Economic Journal, 112: 459-480.
Buiter, W.H. (2009a), ‘The Unfortunate Uselessness of most ‘State of the Art’ Academic Monetary Economics’, FT.com/Maverecon, 3/3/2009 (http://blogs.ft.com/maverecon/2009/03/the-unfortunate-uselessness-of-most-state-of-the-art-academic-monetary-economics/#more-667).
Buiter, W.H. (2009b), ‘The Green Shoots are Weeds Through the Rubble in the Ruins of the Global Economy’, FT.com/Maverecon, 8/4/2009 (http://blogs.ft.com/maverecon/2009/04/the-green-shoots-are-weeds-growing-through-the-rubble-in-the-ruins-of-the-global-economy/#more-1276).
Buiter, W.H. (2009c), ‘Useless Finance, Harmful Finance and Useful Finance’, FT.com/Maverecon, 12/4/2009 (http://blogs.ft.com/maverecon/2009/04/useless-finance-harmful-finance-and-useful-finance/#more-1357).
Calvo, G.A. (1983), ‘Staggered Prices in a Utility-maximizing Framework’, Journal of Monetary Economics, 12: 383-398.
Christiano, L.J., M. Eichenbaum and C.L. Evans (2005), ‘Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy’, Journal of Political Economy, 113: 1-45.
Clarida, R., J. Galì and M. Gertler (1999), ‘The Science of Monetary Policy: a new-Keynesian perspective’, Journal of Economic Literature, 37: 1661-1707.
Cohen, P. (2009), ‘Ivory Tower Unswayed by Crashing Economy’, New York Times, 5/3/2009.
Cross, R. (2008), ‘Mach, Methodology, Hysteresis and Economics’, Journal of Physics: Conference Series, 138: 1-7.
Cúrdia, V. and M. Woodford (2008), ‘Credit Frictions and Optimal Monetary Policy’, National Bank of Belgium Working Paper 146.
De Graeve, F., M. Dossche, H. Sneessens and R. Wouters (2008), ‘Risk Premiums and Macroeconomic Dynamics in a Heterogeneous Agent Model’, National Bank of Belgium Working Paper 150.
De Grauwe, P. (2008), ‘DSGE-modelling when Agents are Imperfectly Informed’, ECB Working Paper Series no. 897, May 2008.
De Grauwe, P. (2009), ‘The Banking Crisis: cause, consequences and remedies’, Itinera Institute Memo, 19/2/2009.
Dodd, R. (2007), ‘Subprime: tentacles of a crisis’, Finance and Development, IMF, 44(4).
Erceg, C.J., D.W. Henderson and A.T. Levin (2000), ‘Optimal Monetary Policy with Staggered Wage and Price Contracts’, Journal of Monetary Economics, 46: 281-313.
Fernandez-Villaverde, J. (2009), ‘The Econometrics of DSGE Models’, NBER Working Paper 14677, January 2009.
Galbraith, James K. (2008), ‘The Collapse of Monetarism and the Irrelevance of the New Monetary Consensus’, The Levy Economics Institute at Bard College, Policy Note, 2008(1).
Galì, J. and T. Monacelli (2005), ‘Monetary Policy and Exchange Rate Volatility in a Small Open Economy’, Review of Economic Studies, 72: 707-734.
Galì, J. and T. Monacelli (2008), ‘Optimal Monetary and Fiscal Policy in a Currency Union’, Journal of International Economics, 76: 116-132.
Gertler, M., L. Sala and A. Trigari (2008), ‘An Estimated Monetary DSGE Model with Unemployment and Staggered Wage Bargaining’, Journal of Money, Credit and Banking, 40: 1713-1764.
Goodfriend, M. and R. King (1997), ‘The New Neoclassical Synthesis and the Role of Monetary Policy’, NBER Macroeconomics Annual 1997: 231-283.
Goodhart, C.A.E. (2008), ‘The Continuing Muddle of Monetary Theory: a steadfast refusal to face facts’, Financial Markets Group, London School of Economics.
Green, W.H. (2003), Econometric Analysis, 5th ed. (Upper Saddle River, NJ: Prentice Hall).
Howitt, P., A. Kirman, A. Leijonhufvud, P. Mehrling and D. Colander (2008), ‘Beyond DSGE Models: toward an empirically based macroeconomics’, American Economic Review: Papers and Proceedings, 98: 236-240.
Johnson, S. (2009), ‘The Quiet Coup’, Atlantic Monthly, May 2009.
Kay, J. (2009), ‘How Economics Lost Sight of the Real World’, Financial Times, 22/4/2009.
Kirman, A. (1992), ‘Whom or What Does the Representative Individual Represent?’, Journal of Economic Perspectives, 6: 117-136.
Klamer, A. (1984), Conversations with Economists (Totowa, NJ: Rowman and Allanheld).
Krugman, P. (2009), ‘A Dark Age of Macroeconomics’, New York Times, 27/1/2009.
Kydland, F.E. and E.C. Prescott (1982), ‘Time to Build and Aggregate Fluctuations’, Econometrica, 50: 1345-1370.
Lakatos, I. (1970), ‘Falsification and the Methodology of Scientific Research Programmes’, in: I. Lakatos and A. Musgrave (eds), Criticism and the Growth of Knowledge (Cambridge: Cambridge University Press), pp. 91-196.
Leijonhufvud, A. (2008), ‘Keynes and the Crisis’, CEPR Policy Insight, no. 23, May 2008.
Lindé, J., M. Nessén and U. Söderström (2008), ‘Monetary Policy in an Estimated Open-economy Model with Imperfect Pass-through’, International Journal of Finance & Economics, published online at http://www3.interscience.wiley.com/cgi-bin/fulltext/119877054/PDFSTART
Long Jr., J.B. and C.I. Plosser (1983), ‘Real Business Cycles’, Journal of Political Economy, 91: 39-69.
Lubik, T. and F. Schorfheide (2005), ‘A Bayesian Look at New Open Economy Macroeconomics’, NBER Macroeconomics Annual 2005: 313-366.
Mancini Griffoli, T. (2007), Dynare v4 – User Guide.
Mankiw, N.G. (2006), ‘The Macroeconomist as Scientist and Engineer’, Journal of Economic Perspectives, 20(4): 29-46.
Obstfeld, M. and K. Rogoff (1995), ‘Exchange Rate Dynamics Redux’, Journal of Political Economy, 103: 624-660.
Phelps, E.S. (2009), ‘Uncertainty Bedevils the Best System’, Financial Times, 14/4/2009.
Rabanal, P. and V. Tuesta Reátegui (2006), ‘Euro-dollar Real Exchange Rate Dynamics in an Estimated Two-Country Model: what is important and what is not’, CEPR Discussion Paper 5957.
Rodrik, D. (2009), ‘Blame the Economists, not Economics’, Harvard Kennedy School, 11/3/2009 (http://hks.harvard.edu/news-events/commentary/blame-the-economists.htm).
Rogers, C. (2006), ‘Doing Without Money: a critical assessment of Woodford’s analysis’, Cambridge Journal of Economics, 30: 293-306.
Roubini, N. (2004), ‘The Upcoming Twin Financial Train Wrecks of the US’, RGE EconoMonitor, 5/11/2004.
Schmitt-Grohé, S. and M. Uribe (2004), ‘Solving Dynamic General Equilibrium Models Using a Second-order Approximation to the Policy Function’, Journal of Economic Dynamics and Control, 28: 755-775.
Shiller, R.J. (1989), Market Volatility (Cambridge, Mass.: MIT Press).
Shiller, R.J. (2009), ‘A Failure to Control the Animal Spirits’, Financial Times, 8/3/2009.
Smets, F. and R. Wouters (2003), ‘An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area’, Journal of the European Economic Association, 1: 1123-1175.
Smets, F. and R. Wouters (2007), ‘Shocks and Frictions in US Business Cycles: a Bayesian DSGE approach’, American Economic Review, 97: 586-606.
Smolin, L. (2006), The Trouble with Physics (New York: Houghton Mifflin Harcourt).
Solow, R.M. (2008), ‘The State of Macroeconomics’, Journal of Economic Perspectives, 22: 243-249.
Streeten, P. (2002), ‘What’s Wrong with Contemporary Economics?’, Interdisciplinary Science Reviews, 27: 13-24.
Summers, L. (1991), ‘The Scientific Illusion in Empirical Macroeconomics’, Scandinavian Journal of Economics, 93: 129-148.
Tobin, J. (1984), ‘On the Efficiency of the Financial System’, Lloyds Bank Review, no. 153, July 1984, 1-15.
Wolf, M. (2009), ‘Seeds of its own Destruction’, Financial Times, 8/3/2009.
Woodford, M. (1998), ‘Doing without Money: controlling inflation in a post-monetary world’, Review of Economic Dynamics, 1: 173-219.
Woodford, M. (2003), Interest and Prices: Foundations of a Theory of Monetary Policy (Princeton: Princeton University Press).
Woodford, M. (2009), ‘Convergence in Macroeconomics: elements of the New Synthesis’, American Economic Journal: Macroeconomics, 1: 267-279.
Zabel, R.R. (2008), ‘Credit Default Swaps: from protection to speculation’, Pratt’s Journal of Bankruptcy Law, September 2008.