Sunday, September 30, 2012

Eugene Genovese and modes of production

Eugene Genovese has passed away. He was a historian of American slavery, and his views were not particularly popular or much discussed in economics courses, as far as I know. Mainstream quantitative historians argued that his view that slavery in the South was not profitable was incorrect (in particular Engerman and Fogel, the latter a winner of the Bank of Sweden prize, also known as the Nobel). The other thing that, I imagine, made his work particularly difficult for mainstream authors was his use of Marxist categories, including one that I still find essential for historical analysis, namely, the notion of a mode of production.

Genovese (I read only his The Political Economy of Slavery, and not the classic Roll, Jordan, Roll, often praised as his best work) argued that slavery was an economic drag on the master class, and that the system remained a pre-capitalist formation, with the profit motive playing a secondary role in the process of social reproduction. The idea was that, even if the South was connected by trade (mostly cotton) with capitalist production in England (and New England too), commercial relations were not central to the relations of production in the plantation system [I find his argument very unconvincing, by the way].

In other words, very much like Dobb in the Transition debate with Sweezy, Genovese argued that the trade link was not central and could not define the South as part of the capitalist mode of production. In this sense, Southern slavery was for him a distinctive mode of production, one that Genovese increasingly saw from a positive angle as he became more conservative while remaining, interestingly, critical of the capitalist mode of production (which in a sense puts him more in tune with old conservatives who also repudiated the mercantilization of social relations).

Note that Genovese, like Engerman and Fogel in this case, thought that the peculiar institution was considerably more benign than is usually thought. In this sense, he reminds me of the quintessential Brazilian analysis of slavery, The Masters and the Slaves by Gilberto Freyre, who also, even if for different theoretical reasons, saw Brazilian slavery (not Southern, even if he saw similarities) as benign, and who helped create the myth of Brazil as a racial democracy.

PS: On the slavery debates Nate Cline suggests this paper by Wallerstein (subscription required).

Friday, September 28, 2012

Unemployment during the Great Depression

Unemployment remained above normal for a long period after the recovery from the Depression started in 1933. So much so that Galbraith père's dictum became famous: “Hitler, having ended unemployment in Germany, had gone on to end it for its enemies.” That is, even if the New Deal was successful, it was unable to completely eliminate unemployment, something that only World War II did. The graph below shows two different series for unemployment: one by Lebergott, which follows closely the official BLS series, and one by Darby.
The main difference is that Darby counts workers in the emergency government labor force as employed – the most important programs being the Civil Works Administration (CWA) and the Works Progress Administration, later renamed Work Projects Administration (WPA). Once the workfare programs are accounted for, unemployment fell from 22.9% in 1932 to 9.1% in 1937, a reduction of 13.8 percentage points, which can hardly be seen as a failure, even if the 1937 level is certainly not full employment.

Also, the increase in unemployment with the 1937-38 recession was only 3.4 percentage points in Darby’s data, and by 1940 the rate was already back at 9.5%, falling precipitously with the onset of the war. Mind you, Darby was a traditional Monetarist, at that point at least, having studied in Chicago, and he argued that the reduction in unemployment, once corrected to include the New Deal programs, reveals: "a strong movement toward the natural unemployment rate after 1933 [sic]."
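To keep the units straight: the decline is measured in percentage points of the labor force, not as a percentage change. A quick check with the figures quoted above:

```python
# Darby's corrected unemployment rates, per cent, as quoted in the post.
darby = {1932: 22.9, 1937: 9.1, 1940: 9.5}
drop = darby[1932] - darby[1937]   # fall over the New Deal recovery
rise_1937_38 = 3.4                 # increase in the 1937-38 recession
print(f"1932-37 fall: {drop:.1f} percentage points")  # 13.8, not a 13.8% change
```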

The notion that 9.1% unemployment (much lower as a result of direct employment programs) is in any sense natural is peculiar, to say the least. From my perspective it shows that the New Deal was actually considerably more successful than normally presumed. But that's me. By the way, due to other activities I'll be blogging slightly less in the coming days.

PS: For more go to this paper.

Tuesday, September 25, 2012

Rochon and Docherty on the future of PK economics

The financial crisis led many to believe that neoclassical economics might pay a price for its inability to foresee the unsustainable processes that were at the heart of the crisis. This paper deals with the problems of the mainstream and the possible strategies that heterodox groups, in particular post-Keynesians, might use to gain influence. From the abstract:
This paper examines the reasons for the difficulties Post Keynesian economics has had in supplanting mainstream neoclassical theory and for its resulting marginalization. Three explanations are given: intellectual, sociological and political, where the latter two are largely responsible for the current relationship of Post Keynesian economics to the mainstream. The paper also reviews various strategies for improving the future of Post Keynesian economics, including a focus on methodological issues by maintaining an ‘open systems’ approach; a strategy of ‘embattled survival’; the development of a positive alternative to mainstream economics; a strategy of ‘constructive engagement’ with the mainstream; and a dialogue with policymakers. While the global financial crisis has increased the potential for constructive engagement with the mainstream, significant barriers remain to the effectiveness of this approach. The crisis has, however, enhanced the possibility of engaging directly with policymakers and gaining a greater role in management education.
Read the rest here.

Let’s not get ‘carried away’ by Bernanke’s latest twist

By Kevin P. Gallagher

Ben Bernanke, chairman of the US Federal Reserve, should be applauded for boldly putting employment over price stability in his latest move to keep interest rates low and to purchase mortgage-backed securities. Bernanke’s critics (and Bernanke himself) have rightly said, however, that monetary policy is not enough. To truly generate employment-led growth in the US, those critics say, more fiscal policy is needed.

There is also a need for stronger financial regulation in order to ensure that financial institutions do not steer newfound liquidity into currency and commodity speculation in emerging markets and developing countries—speculation that can wreak havoc on developing countries’ financial systems and growth prospects. Such was the case during previous rounds of interest rate declines and quantitative easing in the US, and could occur again.

Investors may choose not to go down Bernanke’s path but rather to use the carry trade to speculate on foreign currencies. The carry trade is a strategy where investors borrow in low interest rate countries and invest in higher interest rate countries, with the “carry” being the difference between the two rates. Profits can increase by orders of magnitude if investors are significantly leveraged and bet against the funding country's currency and on the target country's currency.
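As a stylized illustration of the mechanics (all numbers are hypothetical, chosen only to show how leverage amplifies the carry plus the currency bet):

```python
# Hypothetical carry trade: borrow cheap, invest at a higher rate, with leverage.
funding_rate = 0.0025   # e.g. borrow at 0.25% in the low-rate country
target_rate = 0.0800    # e.g. invest at 8% in the high-rate country
carry = target_rate - funding_rate   # the "carry": difference between the rates

leverage = 10           # positions are a multiple of the investor's own capital
fx_move = 0.03          # assumed appreciation of the target currency over a year

# Return on the investor's own capital: leveraged carry plus the currency bet.
return_on_equity = leverage * (carry + fx_move)
print(f"carry: {carry:.2%}, leveraged return on equity: {return_on_equity:.1%}")
```

Of course, the same leverage magnifies losses if the currency bet goes the other way, which is precisely the source of the instability discussed above.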

Earlier this year, the IMF reported that lower interest rates in the US and higher economic growth in emerging markets were associated with a higher probability of a capital inflow “surge”. Surges in capital inflows can cause currency appreciation and asset bubbles that can make exports more expensive and destabilise domestic financial systems. According to that IMF report, one third of the time such surges were accompanied by a sudden reversal of capital flows.

The IMF’s 2011 World Economic Outlook report documents how a “sudden stop” in capital flows can unwind emerging markets and developing economies as well. It shows that a 5 basis point increase in US rates could cause capital flight worth 0.5-1.25 per cent of GDP out of the developing world. This is not a short-term problem given that Bernanke has committed to keeping rates low into the future. However, global risk aversion, such as continued euro jitters, can also cause sudden reversals of capital flows.

In 2010 and 2011, many emerging markets and developing countries deployed counter-cyclical capital account regulations such as taxes on inflows or reserve requirements on derivatives transactions to curb the negative effects of cross-border capital volatility. Like earlier studies by the National Bureau of Economic Research and others confirming that regulating capital flows can change the composition of inflows, make for more independent monetary policy and ease exchange rate tensions, new studies by the IMF and others show how countries such as Brazil, Taiwan and South Korea have been at least moderately successful during this recent go-around.

Echoing but formalising work that dates back to Keynes, a new IMF report finds that industrialised countries may need to regulate the outflow of capital as well. The new IMF paper, “Multilateral Aspects of Managing the Capital Account”, argues that when regulating capital inflows is costly or relatively ineffective for borrowing countries, or if the proper regulation would cost too much “collateral damage”, then nations such as the US may need to regulate the outflow of capital.

It may come as a big surprise to learn that the US regulated outflows of speculative capital for close to 10 years, from 1963 to 1973. During that period the US administered the Interest Equalisation Tax (IET). The IET was a 15 per cent tax on the purchase of foreign equities. For bonds the tax varied with the maturity of the bond, ranging from 2.75 per cent on a three-year bond up to 15 per cent on a 28.5-year bond. Borrowers looking to float bonds would thus pay approximately 1 per cent more than interest rates in the US, thereby flattening the interest rate differential between the US and Europe.
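A rough way to see why that sliding scale worked out to roughly 1 per cent a year is to amortize the one-time tax as an annuity over the bond's life at the going interest rate. This back-of-the-envelope sketch is my own (the ~5% market rate is an assumption; only the tax rates and maturities come from the text):

```python
# Annualize the one-time IET purchase tax as an annuity over the bond's life.
def annualized_tax(tax_rate, maturity_years, market_rate=0.05):
    """Constant annual payment equivalent to paying `tax_rate` up front."""
    annuity_factor = market_rate / (1 - (1 + market_rate) ** -maturity_years)
    return tax_rate * annuity_factor

print(f"3-year bond (2.75% tax):  {annualized_tax(0.0275, 3.0):.2%} per year")
print(f"28.5-year bond (15% tax): {annualized_tax(0.15, 28.5):.2%} per year")
```

Both maturities come out close to 1 per cent per year, consistent with the flattening of the US-Europe interest differential described above.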

The proposed Volcker Rule would make it harder for US banks to speculate on foreign countries via the carry trade with US deposits. However, an increasing amount of carry trade transactions occur outside the commercial banking system. Moreover, financial interests have led to measures in US trade treaties that make it illegal for trading partners to regulate cross-border finance as well.

Later this autumn, the IMF is set to release a new set of guidelines that will reiterate the need to regulate global financial flows. The fund would do well to incorporate its latest work that shows how industrialised nations may need to regulate capital flows as well. Doing so will help nations across the global economy, regardless of their level of development, achieve their stated economic goals without getting “carried away” by footloose finance.

Published originally here.

Sraffa and the Confidence Fairy

Sraffa's views on macroeconomic issues are difficult to pin down, since he didn't write directly on the subject. However, there is some evidence that he believed in the accelerator, that is, the idea that investment is induced by demand, depending on variations in the level of income. Franklin Serrano points out the following passage in a paper by Terenzio Cozzi (my translation, so remember that in this case it is true that il traduttore è un traditore):
“In the spring of 1963 or the fall of 1964, I asked Sraffa why he had chosen to take as given not the wage rate but the rate of profit, and had suggested that the level of the latter could be influenced in particular by the monetary rate of interest. He replied that in the past entrepreneurs, in deciding how much to invest, were strongly influenced by the general state of the economy and by how the level of activity had evolved in recent periods. At the first sign of a decline in production levels, they decided to stop investing. Now, however – we are in '63 or '64 – entrepreneurs expect that the authorities will in any case be able to steer the system back to its normal growth path. They expect, therefore, that the profitability of their investments will quickly return to its normal level. Even bankers think in this way. That's why the interest rate is an indicator of the normal rate of profit. And that's also why the latter no longer shows the fluctuations of the past, and the growth of the system is more stable.” Terenzio Cozzi (1986), “Un teoria con un grado di libertà,” in Riccardo Bellofiore (a cura di), Tra teoria economica e grande cultura europea: Piero Sraffa, Franco Angeli, Milano, p. 208.
In the original:
“Nella primavera del 1963 o nell'autunno del 1964, ho chiesto a Sraffa come mai egli avesse preferito assumere come dato non il tasso di salario, ma il tasso di profitto, e avesse sostenuto che il livello di quest'ultimo poteva essere influenzato in particolare dai livelli dei tassi dell'interesse monetario. Mi rispose che nel passato gli imprenditori, nel decidere quanto investire, erano fortemente influenzati dall'andamento generale dell'economia e da come si erano manifestati questi andamenti nei periodi recenti. Al primo accenno di caduta dei livelli produttivi, decidevano di bloccare gli investimenti. Attualmente invece - siamo nel '63 o '64 - essi si aspettano che le autorità saranno comunque in grado di regolare l'andamento del sistema riportandolo in condizioni di crescita normale. Si aspettano, quindi, che la redditività dei loro investimenti tornerà rapidamente al livello normale. Anche i banchieri ragionano in questo modo. Ecco perché i tassi di interesse rappresentano un indicatore del tasso di profitto normale. Ed ecco perché questo non ha più le oscillazioni del passato, e così la crescita del sistema è più stabile.”
No doubt that Sraffa didn't believe in confidence fairies.

Monday, September 24, 2012

Employment-Population ratio, how useful is it?

Several authors, in particular Brad DeLong, have correctly pointed out (here, for example) that the employment-population ratio provides a good picture of the problems in the labor market. Krugman and I (see here) also used the same measure to show why the improvements in the employment situation in the current recovery have been small so far. But recently I've been poking at the data, and found that there is more to it than meets the eye. Below is a longer series than is usually presented.
Note that there is an increasing trend from the 1970s to the 1980s, which seems to reverse in the last decade. The ratio was about 56% before the mid-1970s, and about 61% ever since. Why is that so, you ask? The graph below shows the evolution of the male and female employment-population ratios.
Disaggregating by gender, we find that the constant trend from the 1950s to the 1970s was associated with a decline in the male employment-population ratio from about 80% to around 70%, compensated by an increase in the women's ratio from about 30% to 40%. The increase in the overall ratio was caused by a stabilization of the male ratio around 70% and a further increase in the female ratio to more than 50%. The terrible conditions of the 2000s have been associated not only with a significant decrease in the male rate since 2007 (it was still around 70% in 2006), but also with a negative trend in the female ratio, which peaked at 57.5% in 2000. The fact that both ratios are falling is a new phenomenon.
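Since the aggregate ratio is just a population-weighted average of the two gender ratios, the decomposition can be checked with round numbers (the population share and the levels below are illustrative, not BLS figures):

```python
# Aggregate employment-population ratio as a weighted average of gender ratios.
male_share = 0.48                     # assumed share of men in the adult population
epop_male, epop_female = 0.70, 0.40   # rough 1970s-era levels from the text
epop_total = male_share * epop_male + (1 - male_share) * epop_female
print(f"aggregate ratio: {epop_total:.1%}")  # roughly the mid-50s range
```

This is why the aggregate series could stay flat through the 1950s-70s: the falling male ratio and the rising female ratio roughly offset each other in the weighted average.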

Sunday, September 23, 2012

Quantitative easing isn't magic

"... a candid review of what central banks cannot do. Yes, they can usually forestall panic. Yes, for better or worse they can keep zombie banks alive. No, they cannot bring on economic recovery or solve any of our deeper economic problems, from unemployment and foreclosures in America to unemployment and economic collapse in Greece and elsewhere."


By James K. Galbraith


What should we make of the latest moves to kickstart the US economy, and to save the euro? As the late, great Harvard chaplain Peter Gomes said to my graduating class many years ago, about our degrees: "There is less there than meets the eye."

Quantitative easing, the third tranche of which was announced in the US last week (QE3), is just a fancy phrase for buying bonds, notably mortgage-backed securities, an operation in which the Federal Reserve takes assets from the banks and gives them cash. This raises the bond price and lowers the yield. It also tends to boost stock prices – very nice for people who own stock – and it can spur mortgage refinancing, improving the cashflow of solvent homeowners.

Read the rest here.

Saturday, September 22, 2012

Heterodox Central Bankers: Robert Triffin

Robert Triffin (c. 1940)

Robert Triffin (of Triffin Dilemma fame) worked for the Federal Reserve in the 1940s, after his PhD at Harvard and before joining the IMF. He became the most prominent 'American' money doctor of the 1940s (he was in fact Belgian born), participating in missions to the Dominican Republic, Guatemala and Paraguay [other US money doctors in this period were Arthur Bloomfield, Bray Hammond, Henry Wallich, and John Williams; a list of missions is available here, at the end of the file]. Unlike the Kemmerer missions of the 1920s and 1930s, or the British missions led by Sir Otto Niemeyer in the same period, which advised on the creation of independent central banks strictly adhering to Gold Standard rules, the new Fed missions were quite heterodox.

In his National Central Banking and the International Economy, Triffin suggests that peripheral countries (agricultural in his terminology) were victims of the fluctuation of their terms of trade. He says:
"for most agricultural countries, large export receipts and favorable balances of payments usually coincide with high and not with low levels of domestic and export prices. The reason for this is that their ex- port volume and export prices fluctuate as much with demand as with supply conditions, if not more. That is, they are largely determined by international rather than domestic factors. Major fluctuations in export values result primarily from cyclical movements in economic activity and income in the buying countries, and not from changes in the relationship of domestic price or cost levels to prices and costs in other competing or buying countries. Thus, for many agricultural and raw material countries, the international cycle is mainly an imported product."
Crises in the center dominated and caused crises in the periphery. The global cycle, very similar to Raúl Prebisch's description of the nature of the crisis in the periphery at that time, implied that the automatic forces of the Gold Standard imposed a deflationary adjustment that only made things worse. In Triffin's view, the solution was to intervene in financial markets with exchange controls and to promote expansionary policies in a recession. In his words:
"Capital tended to flow toward them in times of prosperity and away from them in times of depression, irrespective of their discount policy. The effect of such fluctuations in capital movements was to smooth down cyclical monetary and credit fluctuations in the creditor countries, but to accentuate them in the debtor countries. To that extent the finan- cial centers could shift part of the burden of readjustment upon the weaker countries in the world economy. Their only mechanism of defense was the policy consistently followed by the Central Bank of Argentina in the recent past with such remarkable success: to offset external drains from or accretions to its reserves through domestic policies of expansion or con- traction."
Note that the head of the Central Bank of Argentina at the time was Prebisch. By the time this study was published Triffin was at the International Monetary Fund.

Thursday, September 20, 2012

Nick Rowe's misconceptions about Sraffians II

As promised here are my additional responses to Nick Rowe’s assumptions (here) on the Sraffian or Cambridge UK side of the capital debates. I had agreed to comment also on assumptions 3 and 4, which stated that:

3. But they still couldn't explain the rate of interest. Because it's hard to explain the rate of interest if you don't want to talk about time preferences. And all the other prices depend on the rate of interest, as well as on technology. So they assumed the rate of interest was exogenous;

4. Some economists in Cambridge US made a very special assumption that let them explain the rate of interest without talking about time preferences. They assumed that there was only one good, and it could be converted back and forth between the consumption good and the capital good by waving a wand.
Before we get to why Sraffa argued that the rate of interest is exogenously determined by the monetary authority, let me discuss the neoclassical assumptions behind Nick’s proposition. I would argue that point 3 is exactly backwards, that is, it is impossible (not merely hard) to explain the rate of interest on the basis of subjective preferences.

Böhm-Bawerk famously argued that there are three conditions for the determination of the rate of interest, namely: (1) the differences between wants and provision in different periods of time; (2) the systematic underestimation of future wants and the means available to satisfy them; and (3) the technical superiority of present compared with future goods of the same quality and quantity. The first two are related to subjective preferences, and are behind the supply of savings or abstinence from consumption, while the third is related to productivity. With both thriftiness and productivity one gets a version of the neoclassical loanable funds theory of the natural rate of interest.*

Note that the subjective basis for the determination of the rate of interest is incredibly shaky. The marginalist approach suggests that there is a positive rate of time preference, that is, people prefer to consume now rather than later, and are willing to part with consumption now in order to get more at some future date. That is why there must be a positive rate of interest to convince consumers to postpone the immediate fruition of pleasure. Yet it is far from clear that a positive time preference precedes the positive rate of interest. It seems rather more logical to assume that, given a positive rate of interest, some people might be willing to postpone consumption. The neoclassical subjective analysis is no more than a tautology with very dubious assumptions about causality, to say the least. It is hard to see why one would base a theory of interest on such uncertain foundations.

Remember that classical authors were very skeptical of subjective individual behavior. They actually referred to social utility when they talked about preferences. In that sense, Sraffa not only thought that the foundations of subjective theories were unsound, but also that, from a methodological point of view, they were not particularly relevant. Interest rates were not positive because some individual preferred things now rather than later; they had an institutional foundation, associated with the fact that certain social groups could extract a surplus from society as a whole.

What about the productivity part of the marginalist/neoclassical argument? That’s the part that the capital debates disqualified, as was accepted by none other than Paul Samuelson. I’m not going to discuss the whole issue again, but suffice it to say that there is no logical way to determine the quantity of capital independently of the rate of interest, which implies circular reasoning.

Sraffa had determined very early in his investigation of the determination of relative prices, as early as his first equations of 1927 (with the help of Ramsey), that he could solve the system of simultaneous equations simply with the technical coefficients of production and an exogenous rate of interest (see DeVivo, 2003; subscription required). After several changes and developments of his basic equations, Sraffa eventually settled (by the 1940s) on the notion that the rate of profit is determined exogenously by the monetary rate of interest (a proposition not unlike that of certain classical authors, in particular Thomas Tooke, and similar to Keynes' idea of a conventional normal rate of interest in the General Theory), in the famous paragraph 44 of PCMC.

Note that classical authors for the most part assumed that the real wage was the exogenously determined distributive variable. The reasons why Sraffa settled on a monetary theory of distribution require a different post. However, it should be clear that the exogenous rate of interest is not an arbitrary assumption, as Nick suggests, but is required for the logical solution of the system of simultaneous equations (which demands that the rate of profit be determined independently of relative prices, something that marginalist theory is unable to do in a system with a uniform rate of profit). Finally, the important part of the exogeneity of the rate of interest, besides the fact that it fits the historical/institutional framework of the capitalist economies we live in, where central banks actually do determine the rate of interest, is that institutions play a role in the classical-Keynesian theory of distribution. As noted above, it is class and power that are behind a positive rate of interest, not your aunt's preference for chocolate cake tomorrow.

Regarding point 4, there is considerable confusion in Nick's comment. Sraffa’s system never assumes aggregate production or a one-good economy. There is a composite commodity in the construction of the standard commodity and standard system, but production is a circular process. Even if it has properties similar to the Ricardian corn model, it is actually composed of several commodities. It is in fact neoclassical theory, including the disaggregated Walrasian model (in its Arrow-Debreu version), that requires a one-commodity world to bring investment (the demand for a quantity of capital) into equilibrium with full employment savings. It is the marginalist theory of the natural rate of interest that lacks any logical foundation.
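The logical point, that the price system closes once the technical coefficients and the rate of profit are given, is easy to verify numerically. Below is a minimal sketch; the two-commodity economy, its coefficients, and the numeraire choice are all hypothetical, and this is of course not Sraffa's own notation:

```python
import numpy as np

# Hypothetical two-commodity economy: a_ij = amount of commodity i needed to
# produce one unit of commodity j; l_j = direct labor per unit of j.
A = np.array([[0.2, 0.3],
              [0.1, 0.2]])
l = np.array([0.5, 0.4])
r = 0.05  # exogenous rate of profit, given by the monetary rate of interest

# Price equations with a uniform rate of profit: p = (1 + r) * A^T p + w * l.
# Take commodity 1 as numeraire (p1 = 1) and solve the two remaining
# equations for the unknowns p2 and w.
M = np.eye(2) - (1 + r) * A.T
coeff = np.array([[M[0, 1], -l[0]],
                  [M[1, 1], -l[1]]])
rhs = -np.array([M[0, 0], M[1, 0]])  # terms moved to the RHS since p1 = 1
p2, w = np.linalg.solve(coeff, rhs)
p = np.array([1.0, p2])

# Check: prices reproduce themselves at the uniform rate of profit.
assert np.allclose(p, (1 + r) * A.T @ p + w * l)
print(f"p = {p}, w = {w:.4f}")
```

Nothing in the solution required a one-good world or an aggregate production function: the technical coefficients and the exogenous rate of profit suffice, which is precisely the claim in the post.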

* Irving Fisher was critical of the limitations of Böhm-Bawerk’s theory even within the neoclassical paradigm. For the debate between them see Avi Cohen (2011).

Monday, September 17, 2012

Nick Rowe's misconceptions about Sraffians I

Nick Rowe continues his very welcome discussion of the issues related to the capital debates in a recent post (see also the reply by Unlearning Economics). However, there are several misconceptions in his post that are worth clarifying. I'm going to deal with his first four assumptions (1 and 2 in this post, and 3 and 4 in a subsequent one), which are the more substantial from a theoretical point of view, namely:
1. Some economists in Cambridge UK wanted to explain prices without talking about preferences. I don't know why they didn't want to talk about preferences;

2. They made some special assumptions that helped them explain prices from technology alone, without talking about preferences. Like: all labour is identical; all technology is linear; prices never change over time;

3. But they still couldn't explain the rate of interest. Because it's hard to explain the rate of interest if you don't want to talk about time preferences. And all the other prices depend on the rate of interest, as well as on technology. So they assumed the rate of interest was exogenous;

4. Some economists in Cambridge US made a very special assumption that let them explain the rate of interest without talking about time preferences. They assumed that there was only one good, and it could be converted back and forth between the consumption good and the capital good by waving a wand.

The Cambridge economists are, of course, Sraffa and his followers. First, one of the most frequent confusions about Sraffa is the claim that he assumed that demand, and as a result preferences, were irrelevant. Before I tackle the issue per se, it is worth quoting this passage, brought to my attention in Robert Vienneau’s blog:
"I am sorry to have kept your MS so long - and with so little result. The fact is that your opening sentence is for me an obstacle which I am unable to get over. You write: 'It is a basic proposition of the Sraffa theory that prices are determined exclusively by the physical requirements of production and the social wage-profit division with consumers demand playing a purely passive role.' Never have I said this: certainly not in the two places to which you refer in your note 2. Nothing, in my view, could be more suicidal than to make such a statement. You are asking me to put my head on the block so that the first fool who comes along can cut it off neatly. Whatever you do, please do not represent me as saying such a thing." -- Piero Sraffa (1964). Letter to Arun Bose (italics added).
Clearly Sraffa says that demand plays a role. However, the role is not the same as in marginalist theory. One has to understand what role demand played in the surplus approach to the determination of relative prices to get what Sraffa is saying.

Classical authors, in particular Adam Smith and David Ricardo (and certainly not Marx), did not think in terms of individual utility. When they talk about utility they are referring to social utility. Hence, commodities to be produced must be socially useful, otherwise they would not be produced, since nobody would buy them; but their price, their exchange value, is not based on or connected with their use value. For example, Smith, in his discussion of the diamond/water paradox (Wealth of Nations, Book I, ch. IV), argues that things with a high use value often have little or no exchange value, since things that are not costly to produce will command no price, even if they are useful. It is only with Thomas De Quincey, after Ricardo and the demise of classical economics, that the notion that utility (and use value) has a functional relation to exchange value becomes entrenched in economics, an idea that was picked up by John Stuart Mill, and through him by Marshall (marginalism, or neoclassical economics), as is well documented by Krishna Bharadwaj (subscription required).

As Bharadwaj says of the Quincey/Mill/Marshall notion:

"This was a different notion of use-value than that accepted by Smith and Ricardo, for whom use-value was a necessary condition for a commodity to possess in order to be an object of exchange, but referred to the physical properties socially known to belong to a commodity, and not dependent upon the individual's estimation of its capacity to gratify subjective inclinations, measured in quantitative terms. In fact, use-value and exchange-value were incomparable in so far as the former covered the qualitative aspect and the latter was a quantitative notion. In De Quincey and Mill, the two notions had become quantitatively comparable (one acting as an extreme limit upon another) and this was only a step towards the later resolution of the paradox in terms of 'total' and 'marginal' utility."
In other words, social utility not individual estimation is behind the classical notion of preferences. The point then is that social utility and, as a result, demand considerations are essential for the determination of long term prices. However, these preferences are not the subjective preferences of individuals, about which nothing scientific can be said, since there are no regularities and they can change for irrational and circumstantial reasons.

The utility that society attaches to a particular good, say a car, can however be taken as given at a particular point in time. The reasons are not only directly connected to objective characteristics, like the fact that a car is a means of transportation or, negatively, that cars worsen environmental conditions, but also that cars may socially be a source of status, as Veblen later suggested. In other words, preferences (and demand) are socially determined and taken as given for the determination of relative prices, but there is a role for historical and institutional analysis in understanding why and how demand and preferences change over time. The idea was also that social preferences are relatively slow to change, and for that reason one can take them as given.

Thus, in the surplus approach there is a role for historical/institutional analysis (not just of social preferences, but of income distribution too, e.g. the discussion of the determination of the exogenous real wage), and a different role for theoretical analysis (the determination of exchange value). Note that the assumption of given social preferences, as correctly noted by Unlearning Economics, can be seen as a ceteris paribus clause. So Nick is right that there is no reason not to talk about preferences. However, it is far from clear that knowledge has been advanced by the marginalist treatment of individual subjective preferences. As I noted in my comments to his post, the reason for assuming convex, homothetic preferences is not dictated by knowledge of any regularity in people’s behavior, but simply by the teleological need to find a solution to the maximization problem. I would say, rather, that there is no reason to talk about individual preferences. Social utility is fine.

Point 2 is just a poorly built straw man of the Sraffian/surplus approach model. There is no assumption of a linear technology. In fact, that’s typical of neoclassical aggregative models. The only situation in which a linear relation between wages and profits obtains in the Sraffian model is the case of the standard commodity [I still owe you all a post on that topic; didn't forget], if this is what Nick means by linearity, which would be the only relevant case (unless he is against input-output models). The input-output framework of any system in which production is done by using commodities to produce commodities (a feature of the real world, by the way) does not imply that there is no technical change either. Technical input-output coefficients can change.
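To see the point about non-linearity, here is a minimal numerical sketch (a hypothetical two-commodity economy with made-up coefficients, not any empirical case): solving the Sraffian price equations p = (1 + r)Ap + wl for different profit rates shows that the wage-profit relation is generally not a straight line when an ordinary commodity, rather than the standard commodity, serves as numeraire.

```python
import numpy as np

# Hypothetical 2-commodity economy: A[i, j] = input of commodity j
# needed per unit of commodity i; l[i] = direct labor per unit of i.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
l = np.array([0.5, 1.0])

def wage_at(r):
    """Solve p = (1 + r) A p + w l with commodity 0 as numeraire (p[0] = 1).

    Rearranging, (I - (1+r)A) p = w l; imposing p[0] = 1 leaves two
    unknowns (p[1], w) in two linear equations.
    """
    M = np.eye(2) - (1 + r) * A
    # M[:, 0] * 1 + M[:, 1] * p1 = w * l  ->  M[:,1]*p1 - l*w = -M[:,0]
    coeff = np.column_stack([M[:, 1], -l])
    rhs = -M[:, 0]
    p1, w = np.linalg.solve(coeff, rhs)
    return w

# Trace the wage-profit frontier at several profit rates.
rates = [0.0, 0.1, 0.2, 0.3]
wages = [wage_at(r) for r in rates]
diffs = np.diff(wages)

print([round(w, 4) for w in wages])         # wage falls as r rises
print(bool(np.allclose(diffs, diffs[0])))   # False: frontier is non-linear
```

The wage falls as the profit rate rises, but by unequal steps: the frontier is a curve, not a line, exactly because the numeraire here is an arbitrary commodity and not Sraffa's standard commodity.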

Note that any theory has to say something about the determination of relative prices for a given technology anyway. By the way, in this case Sraffa assumes, like the classical authors, a given level of output (again, a ceteris paribus condition; a different theory of the determination of output is needed, and we know through Garegnani, Sraffa’s disciple, that effective demand is what Sraffa had in mind, not some version of Say’s Law). Hence, that does not imply constant returns to scale, since constant returns would require output to increase in proportion to inputs, something that cannot happen when output is given. Remember that output is given for the theory of long term prices only.

Finally, there is enough literature showing that one can, theoretically speaking, reduce labor of different qualities to a uniform type. It is ironic that a neoclassical author would complain about this, since the production function, which is beset with unfathomable problems, also assumes identical labor; in fact, it also assumes a unique capital good (not many means of production) and is fundamentally a world of only one commodity (on this and Nick's point 4, more in the following post).
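One common device for that reduction (a textbook simplification using given relative wages, not the only route in the literature, and with purely illustrative numbers) converts hours of skilled labor into equivalent hours of simple labor in proportion to wage relativities:

```python
# Hypothetical data: hours worked and hourly wages by labor type.
hours = {"unskilled": 100.0, "skilled": 40.0}
wages = {"unskilled": 10.0, "skilled": 25.0}

# Each hour of skilled labor counts as (w_skilled / w_unskilled)
# hours of simple (unskilled) labor.
simple_labor = sum(hours[k] * wages[k] / wages["unskilled"] for k in hours)

print(simple_labor)  # 100 + 40 * 2.5 = 200.0 hours of simple labor
```

Whether wage relativities are the right conversion weights is itself debated, but the point stands: heterogeneous labor poses no more (and arguably less) of a problem for the surplus approach than heterogeneous capital goods pose for the aggregate production function.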

TO BE CONTINUED

Friday, September 14, 2012

Path Dependency and Hysteresis

I promised to discuss the difference between these two concepts a while ago. The idea of path dependency is related to Joan Robinson’s famous objection according to which equilibrium is not an actual outcome of real economic processes, and it is for that reason an inadequate tool for analyzing accumulation.  Her view would suggest that path dependency should be seen as a property of models that break with conventional methodological stances, and, in particular, with the dominant neoclassical school.

It is important to note that mainstream defenders of the idea that ‘history matters’, like Paul David (of QWERTY fame), tend to disagree with the view that path dependency implies a rupture with neoclassical economics. David (2001, p. 22) says, in this regard: “imagine … my utter surprise to find this approach being attacked as a rival paradigm of economic analysis, whose only relevance consisted in the degree to which it could be held to represent a direct rejection of the normative, laissez-faire message of neoclassical economics!”  For David, path dependency is a property of dynamic and stochastic processes and cannot be used to assert anything about models and propositions derived in static and deterministic settings [which is, apparently, what he thinks neoclassical economics is all about].

Mark Setterfield's research might hold the key to this issue, by differentiating hysteresis [a concept from physics, which shows that mainstream economists do have physics envy!] from path dependency, and suggesting that the former, more typical of mainstream models, is a special case of the latter, which is more general and is the concept most often linked to heterodox models. He suggests that hysteresis is a variation of traditional equilibrium analysis, which implies that some displacements from equilibrium would be self-correcting while others would not. Hysteresis results from the non-uniqueness of equilibrium, and under certain conditions the economic system would adjust to a new equilibrium. On the other hand, Setterfield argues that the typical path-dependent model is based on cumulative causation, a concept that harks back to Gunnar Myrdal's and Nicholas Kaldor's contributions to economics. In this case, transitory shocks always have permanent effects.

A simple example might illustrate the difference between the more restricted notion of hysteresis and cumulative causation. In the conventional mainstream description of labor markets, an increase in unemployment insurance that allows workers to hold out longer for better paid jobs increases the natural rate of unemployment. After a fall in demand (an external shock), if structural changes to the labor market like higher benefits take place, the level of unemployment will increase and eventually fall, as real wages fall, but only to the new and higher natural rate [think of Gordon's Time-Varying NAIRU]. Hysteresis implies that history matters, but the system is still self-adjusting.

The quintessential example of cumulative causation is associated with the Kaldor-Verdoorn Law, which says that output growth leads to rising labor productivity. Thus, higher demand leads to higher output growth, which implies higher productivity, lower costs, and higher income in a virtuous circle of expansion. There are several possible expansion paths, depending on the strength of the multiplier-accelerator forces and the Kaldor-Verdoorn coefficient, rather than a single equilibrium to which the system adjusts. There is no adjustment to an optimal equilibrium level, no natural rate, fixed or varying. The heterodox notion demands the rejection of the natural rate.
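The contrast can be sketched with a toy cumulative-causation model in the spirit of Dixon and Thirlwall's formalization of Kaldor (all parameters hypothetical): growth feeds productivity, which feeds back into growth, so a purely transitory demand shock leaves the output level permanently higher.

```python
# Toy Kaldor-Verdoorn circle: g_t = c + lam * g_{t-1} + shock, where
# lam stands in for (multiplier) x (Verdoorn coefficient). Parameters
# are hypothetical, chosen only to illustrate the mechanism.
def output_path(shock, c=1.0, lam=0.8, g0=2.0, T=60):
    """Return the output level path; `shock` hits growth once, at t = 5."""
    g, y = g0, 100.0
    levels = []
    for t in range(T):
        d = shock if t == 5 else 0.0   # transitory demand shock
        g = c + lam * g + d            # growth feeds on past growth
        y *= 1 + g / 100               # cumulate the output level
        levels.append(y)
    return levels

base = output_path(0.0)
shocked = output_path(1.0)

print(shocked[-1] > base[-1])  # True: the one-period shock never washes out
```

With lam below one the growth rates eventually converge, but the shocked economy's output level stays permanently above the baseline; with lam at or above one even the growth rates diverge. Either way, there is no natural rate pulling the system back to a predetermined path.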

Thursday, September 13, 2012

EPI's The State of Working America 2012

The latest edition of the Economic Policy Institute's State of Working America is out. A lot of data and, more importantly, serious and rigorous analysis. Here, just one graph, which shows the relation between unemployment and changes in median wages from 1991 to 2011.

As you can see, real wages are pro-cyclical, going up in a boom when unemployment falls, and down in a recession. We have known this since at least Tarshis's and Dunlop's critique of Keynes in the 1930s. One more reason why full employment is an important policy goal.

Trade and Development Report 2012

The United Nations Conference on Trade and Development has just published the TDR 2012, on growth and inequality. Not surprisingly the TDR says that "rising inequality is [not] a necessary condition for sound economic growth." The table below shows the regional evolution of inequality since the 1980s.
As the report shows: "Empirical evidence shows that increasing income inequality has been a feature in the world economy since the early 1980s. However, in the 2000s in Latin America and in parts of Africa and South East-Asia income inequality fell in a context of improved external conditions. The evidence suggests that the relationship between growth and inequality is complex and can be altered by proactive economic and social policies."

Wednesday, September 12, 2012

A good book on the Cambridge Capital Controversy

I have received a few questions about what one can read on the topic. I suggest the recent book by Andrés Lazzarini, which is based on his PhD in Rome under Ciccone and Garegnani. From the preface:
This book is the result of my research on the post-war Cambridge capital theory controversies conducted at the Department of Economics, Roma Tre University, to obtain my Ph.D. in Economics (Dottorato di Ricerca in Economia Politica), which I eventually achieved in April 2008. Since then I have presented and discussed several works drawn from my original dissertation which immensely helped to improve the present version for publication at the Pavia University Press.

The so-called Cambridge controversy in the theory of capital took place between the beginning of the 1950s and the mid-1970s, though arguably it got its heyday after the publication of Sraffa’s 1960 book. That there existed a controversy between Cambridge (UK) and Cambridge, Massachusetts (US), could hardly be ignored by any practitioner of the discipline. Yet the recognition of its most relevant results is still lacking even though they touch upon the core of the basic economic principles underlying the core of mainstream neoclassical economics.

Certainly, after more than fifty years since the first exchanges in these debates many articles and books have been written about this theoretical conflict in the economic theory, and in fact the present book has dealt with most if not all of them in our reconstruction. But, unlike many of the reviews available, our reconstruction attempted to present an account which, first, does not follow a chronological survey of the contributions to the Cambridge controversy but rather focuses on the analytical arguments necessary to understand the implications of the phenomena of ‘reswitching’ and ‘reverse capital deepening’. In order to understand why these phenomena affect the very core of neoclassical economics, we have thoroughly examined the ultimate reason of their theoretical existence: the specification of capital in value terms among the data of the theory in order to determine equilibrium variables through supply and demand forces which ultimately rely on the factor substitution principle. These arguments, from a chronological viewpoint, were visibly recognised in the last contributions of a first phase of the debates. Secondly, we present and discuss some contributions of a later phase of the controversy in which the treatment of capital among the data of the theory had already been shifted to its Walrasian specification, but which have not been thoroughly dealt with by previous surveys and which deserve due attention to better understand the present state of the controversy between the neoclassical authors and the critical side.

Another issue examined in the present study, and which also deserves due attention for understanding the evolution of the controversy, is the different – and sometimes we may say radically different – positions not only between the two camps of the debate but also within the critical side of the controversy, in particular during its later phase. Despite the fact that the critically oriented (Cambridge, UK) side agreed that the main results of the controversy had put the foundations of neoclassical economics at serious risks – and hence the latter had to be replaced with an alternative economic theory – differences in the method among the critical side (and also the silence on the part of the marginalist authors in the second phase as to accepting the controversy’s results) helped to prevent that alternative from surfacing at that time.

India’s Growth Model: A Need for Change

By Suranjana Nabar-Bhaduri (Guest Blogger)

India has been cited as an example of an alternative development strategy under which economic growth in the early stages of development is service sector-led rather than manufacturing-led. The international press has heralded its exemplary growth performance, projecting it as one of the emerging market economies that will take over the world economy. As expected in the process of development, the share of the agricultural sector in GDP has decreased over time. However, the share of the manufacturing sector has not shown any significant increase. Rather, the services sector has emerged as the main contributor to India’s economic growth, especially since the 1990s. Evidence suggests that between 1993 and 2007, more than 60 per cent of the increase in India’s GDP was driven by an increase in services GDP. This growing importance of the service sector is partly the result of a meteoric rise in services exports, mainly software and information technology (IT)-enabled services. This performance has been greatly associated with the offshoring process in the developed world, and India’s ability to provide English-speaking workers at relatively low wages. India’s trade balance and current account have shown persistent deficits, and it has relied on earnings from services exports, remittance inflows, and capital inflows to sustain these deficits.

When one evaluates the ability of this current growth path to generate inclusive and sustainable development, the picture is far from promising. The contribution of the IT-enabled services and the IT industry to employment generation has been minuscule, given the size of the Indian workforce, and the fact that a major part of this workforce remains rural and unskilled. While the total estimated size of the Indian workforce is more than 450 million, total employment in these services is only around 2 million workers. The rest of the employment in the services sector has been in low-productivity self-employment services in the unorganized sector. Furthermore, employment in IT-enabled services and the IT sector falls way short of the annual increment of around 12 million in the Indian workforce. Sixty-five per cent of India’s population of nearly 1.2 billion people is now below the age of 25, leading to the emergence of a young population, a fall in the dependency ratio and a rise in the worker-population ratio. Without concrete policy efforts to accelerate the growth and expansion of agriculture and manufacturing, India cannot tap into the demographic advantage of a relatively young population by providing productive employment for both expanding output and making the process of growth more inclusive. Equally important, there remain the questions of meeting the needs of food, clothing, investment and industrial products that must constitute a large part of consumption before a sufficiently high standard of living can be attained.

It has been generally argued that India’s trade and current account deficits can be financed and sustained by earnings from services exports, remittances and capital inflows, particularly portfolio investment inflows. Though India is nowhere close to a balance of payments crisis, this argument neglects the constraint imposed by external demand. There is no guarantee that the strong export performance of India’s services can be indefinitely sustained, and generate sufficient foreign exchange earnings to finance rising deficits. The major destinations of India’s IT-enabled services exports, and the main sources of remittances (since the mid-1990s), have been the US and Europe. The slow economic recovery in the US, economic recession in Europe against the backdrop of the Euro crisis, and the possibility of tighter immigration laws in Europe have the potential to significantly affect India’s exports of services and remittances. Even the potential to significantly increase receipts from the Middle East, another major source of India’s remittances, has narrowed with the slowing down of the oil boom in these countries in the late 1990s and early 2000s, and the plateauing out of the Indian diaspora in this region with respect to size and economic scope. Moreover, short-term inflows such as portfolio investment appreciate the real effective exchange rate, and further widen trade and current account deficits. The persistence of large trade deficits can, over time, reduce investor confidence, ultimately resulting in a reversal of inflows and speculative attacks on the domestic currency.

What the Indian economy strongly needs are proactive policy efforts directed towards accelerating the growth and expansion of agriculture and industry. This calls for more research and development (R&D) programs through public-private partnerships; credit policies that will make it easier for industrial entrepreneurs to replace outdated or inefficient capital equipment; the establishment of more development financial institutions; and subsidies to firms for investing in R&D. Public investment, education policies, vocational training programmes and government procurement policies need to be directed to the increase of labor skills and well paid, high-productivity jobs that reduce the need for imports, and the dependence on services exports, remittances, and volatile capital flows. There is also a need for more comprehensive employment generation initiatives through infrastructural development and rural development programs. India’s development strategy needs to be one that promotes the growth of the domestic market in order to raise the living standards of its population without hitting the external demand constraint. It should not merely seek to integrate into global markets through a reliance on low-wage services exports, implying the exploitation of its workers, for the benefit of global consumers.

(Originally published in Spanish in Página/12 with information on the author here)

Tuesday, September 11, 2012

2013 Leontief Prize

Wassily Leontief (1905-99)

Tufts University’s Global Development And Environment Institute announced today that it will award its 2013 Leontief Prize for Advancing the Frontiers of Economic Thought to Albert O. Hirschman and Frances Stewart. This year's award, titled "Development in Hard Times," recognizes the critical role played by these researchers in crossing disciplines to forge new theories and policies to promote international development. The ceremony and lectures will take place on March 7, 2013 at Tufts University’s Medford campus.

“Development economics is experiencing a deserved revival, as developing countries increasingly seek to define the appropriate role for the state in a global market economy that is suffering upheavals from politics, economics, and resource constraints,” says GDAE Co-director Neva Goodwin. “A serious return to development theory must start with the work of Albert Hirschman, one of the early leaders in the field. Frances Stewart’s practical and theoretical work on the challenges of modern development further advance such interdisciplinary approaches to international development.”

The Global Development And Environment Institute, which is jointly affiliated with Tufts’ Fletcher School of Law and Diplomacy and the Graduate School of Arts and Sciences, inaugurated its economics award in 2000 in memory of Nobel Prize-winning economist and Institute advisory board member Wassily Leontief, who had passed away the previous year. The Leontief Prize for Advancing the Frontiers of Economic Thought recognizes economists whose work, like that of the institute and Leontief himself, combines theoretical and empirical research to promote a more comprehensive understanding of social and environmental processes. The inaugural prizes were awarded in 2000 to John Kenneth Galbraith and Nobel Prize winner Amartya Sen.

2013 Awardees
Albert Hirschman needs no introduction to those in the field of development. He has been an eminent figure at Columbia, Yale, Harvard, and at the Institute for Advanced Study in Princeton, with which he is currently affiliated. He is considered a pioneer in the field of political economy in developing countries, with a long history of work in Latin America. He has always seen development as a process of creating economy-wide benefits for all, and he understands deeply the nature of “unbalanced growth” and the importance of fostering industrialization and innovation. He has authored some of the most insightful works in the social sciences, straddling economics, psychology, and political theory. His key works include National Power and the Structure of Foreign Trade (University of California Press, 1980 edition), The Strategy of Economic Development (Yale University Press, 1958), Exit, Voice, and Loyalty (Harvard University Press, 1970), and The Passions and the Interests: Political Arguments for Capitalism before its Triumph (1977). In 2007, the Social Sciences Research Council established an annual award in his honor.

Frances Stewart is emeritus Professor of Development Economics at the University of Oxford and was director of Oxford's Department of International Development and the Centre for Research on Inequality, Human Security and Ethnicity (CRISE). Her 1977 book, Technology and Underdevelopment (Macmillan), presents a comprehensive approach to technology choice, challenging neo-classical assumptions. Adjustment with a Human Face (co-authored with Andrea Cornia and Richard Jolly), published in 1987, was highly influential in challenging IMF approaches to adjustment. She has worked on the Human Development Reports of the UNDP since the first Report, and in 2009 was awarded the Mahbub ul Haq prize for lifetime contributions to Human Development. Her long-term project on poverty compares four different approaches – monetary, capabilities, social exclusion, and participatory – from both a theoretical and a policy perspective. Most recently she introduced the concept of “horizontal inequalities” (i.e. inequalities in economic and political resources between culturally defined groups) and has shown how such inequalities constitute a major cause of conflict. Her 2008 book, Horizontal Inequalities and Conflict: Understanding Group Conflict in Multiethnic Societies (Palgrave Macmillan), documents her rich interdisciplinary approach to development.

The Global Development And Environment Institute was founded in 1993 with the goal of promoting a better understanding of how societies can pursue their economic and community goals in an environmentally and socially sustainable manner. The Institute develops textbooks and course materials, published on paper and on its web site, that incorporate a broad understanding of social, financial and environmental sustainability. The Institute also carries out policy-relevant research on climate change, the role of the market in environmental policy, and globalization and sustainable development.

In addition to Amartya Sen and John Kenneth Galbraith, GDAE has awarded the Leontief Prize to Paul Streeten, Herman Daly, Alice Amsden, Dani Rodrik, Nancy Folbre, Robert Frank, Richard Nelson, Ha-Joon Chang, Samuel Bowles, Juliet Schor, Jomo Kwame Sundaram, Stephen DeCanio, José Antonio Ocampo, Robert Wade, Bina Agarwal, Daniel Kahneman, Martin Weitzman, Nicholas Stern, C. Peter Timmer, and Michael Lipton.

Learn more about the Leontief Prize for Advancing the Frontiers of Economic Thought and view a list of previous award recipients
Learn more about the Global Development and Environment Institute
Learn more about Leontief and Input-Output Analysis

The Sinister Irreversibility of the Euro

Giancarlo Bergamini and Sergio Cesaratto (Guest Bloggers)

Draghi's decision to provide unlimited support to the short-term bonds of countries that submit their public finances to European control has been greeted with widespread acclaim in Italy.

Albeit necessary to cut the spreads, which had reached unbearable levels, the European Central Bank (ECB) initiative is by no means decisive and, under the current terms, risks being counterproductive. For starters, it is politically indigestible for Spain and Italy, which in fact hope to scrape through without subscribing to any austere “precautionary program” imposed by Europe and policed by the IMF. In other words, they hope that the expectations triggered by the ECB's announcement can do the trick of lowering the spread on their sovereign bonds, even if nothing concrete follows without conditionality constraints. In reality, if nothing happens the spreads are likely to increase, possibly because the markets expect that the bailout will be requested too late. Let's not get carried away by market euphoria. On the occasion of the previous ECB interventions, the Securities Markets Programme (SMP) of 2010-2011 and the two Long Term Refinancing Operations (LTROs), we saw the same immediate reactions, only to see them unwind after a few weeks. And this time we haven't even had ECB intervention to speak of, just the threat of it, subject to an abstruse mechanism (a request for aid by the country concerned, the signing of a Memorandum of Understanding (MOU), participation of the European Financial Stability Facility/European Stability Mechanism (EFSF/ESM) in the bond auctions, and at last ECB purchases in the secondary market). It seems highly impractical, save that in the process the applicant country may lose access to the markets.

How much the ECB intends to shrink the spreads is left unknown. However, the presumable inadequacy of its “Outright Monetary Transactions” and the hardening of the austerity clauses attached thereto will make it more and more difficult for the applicant countries to comply with the prescribed terms. If countries do not comply with the objectives agreed upon, the ECB may withdraw its support, thus sanctioning a possible breakup of the Euro.

A true mess indeed, which renders Draghi's move the umpteenth kicking of the can down the road. Yet it confirms what heterodox economists (including those subscribing to “Modern Money Theory”) have always asserted: interest rates are determined by central banks, not by the markets. Hence the deduction that the bulk of the fire of the past two years has been set by the ECB itself, subservient to the European elites' diktat that the welfare state and trade unions be wiped out by means of a fiscal crisis, first in the periphery, but as a lesson to German unions as well.

The fact is, monetary unions are set up with the primary goal of constraining member countries (and their working classes) into a devastating deflationary competition. This teaching derives from Keynes, but few left-wing economists are culturally capable of drawing out its dire consequences. Indeed, the ECB has acted in conformity with its mandate. Of the three sources of the Eurozone crisis (the Euro itself, the two-year-long weakness of the ECB's actions, and austerity policies), Draghi's move softens the second, but at the price of exacerbating the third, and without doing anything to deal with the first.

Draghi's move should be read as a response to the fear that the fire could bring down the very reason for the ECB's existence, namely the Euro, and that peripheral countries' citizens might call for an end to this exasperating agony. The patient is, thus, kept barely alive, so that augmented doses of the other treatment, austerity, effectively annihilate any remaining willingness to react. Therefore, the implications of Draghi's message on the irreversibility of the Euro are pretty sinister, rather than progressive as some commentators seem to infer.

Are there alternative routes? The unconditional intervention of the ECB, while affirming its role as lender of last resort, makes sense insofar as it allows the peripheral economies to execute a growth strategy aimed at restoring their competitiveness, with a view to dealing with the huge intra-European trade imbalances. To this end, stabilisation (not reduction) of the debt-to-GDP ratio should be sought. This objective would hopefully reassure the markets, while leaving room for more expansive fiscal policies. However, this would still not be enough.

A rapid increase in the European public budget should also be pursued, with a strong redistributive bias from core to periphery, whereas the role of national budgets should correspondingly decrease (as in the USA, in short). This vision of a Federal Europe is tantamount to a transfer union divided between those who subsidize and those who are subsidized, which would prove unacceptable to both, and not because of the “national and nationalistic idiosyncrasies” that one commentator regards as obstacles.

Above all, the hard truth is that Europe is headed in another direction, in keeping with the true purpose of the Euro.

(An earlier version of this article appeared in Il manifesto, September 8, 2012)

Monday, September 10, 2012

Why Tax the Rich?

A question that has been central in this electoral season is whether taxation of the wealthy matters. Republicans suggest that it reduces growth, since it creates negative incentives to 'job-creators', while Democrats argue that it is a question of fairness. The figure shows some data which sheds light on the second issue.
The graph shows the top marginal tax rate from 1913 to 2010 on the left-hand axis, with a low of 7% in 1913 and an all-time high in 1953, when 92% of the marginal dollar was taxed, and the income share of the top 1% (the non-99%) on the right-hand axis. The lowest share of total income going to the top 1% was 7.7% in 1973, and the highest levels were 19.6% in 1928 (right before the 1929 crash) and 18.3% in 2007 (before the 2008 Lehman collapse). There is a clear negative correlation. And, by the way, it is no coincidence that both times a financial crash followed the increase in inequality.

Saturday, September 8, 2012

Economists do it with models, again


Updating my blog on Nate Silver's FiveThirtyEight.com model forecasts, I left the story in the aftermath of the RNC when it looked like any Romney bounce was below the expectations built into the model. Prior blog linked here.

The model is now responding to inputs from the DNC period, and is showing substantial gains in President Obama's probability of winning in the Electoral College. His probability as of today's (9/8/12) model update is 79.8%, up 2.5 percentage points since the end of the DNC, up 3.5 percentage points since the beginning of the DNC, up 8.2 percentage points since the end of the RNC, and up 10.5 percentage points since the beginning of the RNC. So the model has been consistently increasing the probability of an Obama win for the last 13 days. This is also the model's highest reading since it began in June. The early individual poll returns indicate that the Obama bounce is at or exceeding model expectations.

At roughly an 80% probability of winning, an Obama victory should be categorized as likely on the scale election forecasters appear to use.

I write these words, of course, with pleasure given my political bias. But also with fascination at the extent to which Nate Silver has gone to build a model which responds to the most important inputs to the eventual election outcome. Econometricians, good ones, rock!

This is a clumsy way to show the model output, but the site does not permit a link to just the graphs, as far as I know. And I wanted to visually document the clear uptrend in Obama's winning probability as of this date.


Early Sunday update: Nate Silver has found faith in his model (it's OK Nate, you have the best model I am aware of): link here.

Friday, September 7, 2012

Europe’s Adjustment: How much has happened?

Austerity to reduce spending and decrease the need for imports, and liberalization reforms to reduce real wages and promote internal devaluation, have been going on for a while in the European periphery to promote rebalancing, that is, to reduce current account deficits and allow for the continuous service of the debt.

How much fiscal adjustment and wage reduction has already been pushed by the Troika (ECB, EU, IMF)? Quite a bit in fact. Figure 1 below shows the fiscal adjustment (all data from the European Commission’s Statistical Annex of the European Economy, Spring 2012). [Note that the crisis was not fiscal]
Read the rest here.

Employment numbers

Bureau of Labor Statistics (BLS) Employment Situation Summary posted. Total nonfarm payroll employment rose by 96,000 in August, and the unemployment rate edged down to 8.1%. This is weaker than last month, which suggests that the economy is slowing down.
The participation rate fell from 63.7% to 63.5%, meaning that the number of people not in the labor force edged up. The employment-population ratio fell from 58.4% to 58.3%, as shown below.
Note that while the employment-population ratio has been flat since the last recession, the previous boom (the Bush housing bubble) was so weak that the ratio never topped the previous peak (the Clinton dot-com bubble one).

Thursday, September 6, 2012

Income Inequality in the US (1917-2010)

Atkinson, Piketty and Saez have a new website on income inequality that provides free access to a lot of data. Below is a taste, showing the ratio of average income of the bottom 90% to the average income of the top 10% in the US from 1917 to 2010.
It is clear that World War II, and the policies enacted during the 1930s, allowed a significant compression of incomes at the top, which has been basically reversed in the last three decades, after Reagan and the rise of the Conservative movement. Nothing new, but good to see it this clearly.

Wednesday, September 5, 2012

Heterodox central bankers again

A new version of the paper Heterodox Central Bankers: Eccles, Prebisch and Financial Reform, co-authored with Esteban Pérez Caldentey, has been published in the Network Ideas Working Paper Series. Again from the abstract: The Great Depression led to a need to rethink the principles of central banking, as much as it had led to the rethinking of economics in general, with the Keynesian Revolution at the forefront of the theoretical changes. This paper suggests that the role of the monetary authority as a fiscal agent of government and the abandonment of the view of the economy as self-regulated were the central changes in central banking in the center. In the periphery, the change in central banking was related to insulating the economy from the worst effects of balance of payments crises, while the use of capital controls became more common. The experiences of Marriner S. Eccles in the United States following the Great Depression, and Raúl Prebisch in Argentina in the 1930s and in Latin America in the 1940s, are paradigmatic examples of those new tendencies in central banking at the time.

Is Growth Still Possible?

Paul Krugman has recently pointed out a very pessimistic, but very thought-provoking, paper by Robert Gordon about the possibilities of long run growth. Gordon suggests, very boldly, that the "rapid progress made over the past 250 years could well turn out to be a unique episode in human history." In his view, long-term stagnation is a very possible outcome. The reasons are associated with the effects of technical progress on investment.

Gordon argues that, while the first (steam, cotton textiles, railroads) and particularly the second (automobiles, chemicals, electricity, oil) Industrial Revolutions (IR) led to a significant increase in investment, the third IR (information technology) has been less prone to lead to significant increases in investment. Further, the advantages of the first and second IRs were reinforced by demographic changes and the process of urbanization, which created the need for investment in infrastructure.

Read the rest here.

Tuesday, September 4, 2012

The IMF and stylized fiction

The IMF has posted their Top 20 list of most popular entries since the launch of the blog. At #3 they have the Ten Commandments of Fiscal Adjustment in Advanced Economies, which is from 2010 but still worth reading, since their views have hardly changed. I am not going to go through the whole list, even though it does merit careful analysis. I just want to point out a few problems with three of the commandments (do they really need the religious analogy?). These ten commandments are based on the IMF's views on the stylized facts of fiscal consolidations.

Note that the IMF wants a reduction in debt-to-GDP ratios in the long run (commandment #3), even if nobody knows exactly what difference it makes to have a 40% ratio, which they recommend for 'emerging markets' (meaning developing economies), or a 250% one, as the UK had during the Napoleonic Wars (here). My first concern is with the idea that consolidation (by which they mean austerity) should be done by cutting spending and not increasing taxes (#4), because this is supposedly more conducive to growth.

This is a proposition they repeat in their last Fiscal Monitor (2012: p. 35), where we are told that:
"a number of earlier studies have shown that expenditure-based fiscal consolidations have a more favorable effect on output than revenue-based consolidations, in spite of the standard multiplier analysis … Chapter 3 of the October 2010 World Economic Outlook reaches the same conclusion (IMF, 2010b) and notes that this result is partly because, on average, central banks lower interest rates more in the case of expenditure-based consolidations (perhaps because they regard them as more long-lasting)."
Note, however, that the reason for the superior performance of cutting spending instead of raising taxes (on the rich, one would hope) is that the Central Bank does not hike rates in the former case, since it is part of a conservative plan to reduce the size of government (note that the IMF asks for consolidations to be fair, #6, but then wants to cut social spending, #5). Worse, the notion is also based on the idea that lower (higher) spending brings down (up) the rate of interest and leads to crowding in (out) of private investment. The problem is that the evidence for a positive (negative) effect of fiscal deficits (surpluses), or public spending increases (reductions), on interest rates is almost non-existent (see UNCTAD, 2011, chapter 3 for a review).

The other point is related to the last commandment (#10), which says that you should coordinate your macroeconomic policies with other countries. I'm not even going to deal with the problems of coordination. My problem is that the arguments tend to be based on the Mundell-Fleming (MF) model (the ISLMBP with perfect capital mobility), which suggests that fiscal policy is less effective in a small open economy. In this case, fiscal expansion raises the rate of interest; with capital mobility, the resulting pressures for inflows lead to an appreciation of the currency and lower trade surpluses. Instead of crowding out, meaning lower investment, one gets lower output from the external accounts. That's why they say in their last Fiscal Monitor that "in line with the theory, fiscal multipliers tend to be smaller in more open economies" (2012, p. 33).
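The openness claim can also be seen in the textbook Keynesian multiplier, 1/(1 − c(1 − t) + m), where m is the marginal propensity to import: a higher m shrinks the multiplier because more of any demand injection leaks abroad. A minimal sketch, with parameter values that are purely illustrative assumptions (not estimates from the Fiscal Monitor):

```python
# Textbook open-economy Keynesian spending multiplier:
#   1 / (1 - c*(1 - t) + m)
# c = propensity to consume, t = tax rate, m = import propensity.
# Parameter values below are illustrative assumptions only.

def multiplier(c=0.8, t=0.25, m=0.0):
    """Spending multiplier; higher import propensity m lowers it."""
    return 1.0 / (1.0 - c * (1.0 - t) + m)

for m in (0.0, 0.1, 0.3):
    print(f"m = {m:.1f}: multiplier = {multiplier(m=m):.2f}")
```

With c = 0.8 and t = 0.25, the multiplier falls from 2.5 in a closed economy to about 1.4 when a third of marginal spending leaks into imports, which is the mechanism behind the IMF's quoted claim.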

Again, this depends on a weak empirical relation. In the United States it is seldom the case that expansionary fiscal policy causes higher rates of interest. In fact, the strong dollar policy, with its impact on manufacturing output and exports, has often been detached from fiscal expansionism or higher rates of interest (e.g. the Clinton years, in which a strong dollar went hand in hand with fiscal consolidation and monetary easing to feed the dot-com bubble).

Finally, note that even small open economies in several periods were able to have very effective fiscal policies, because in spite of relatively flexible exchange rates, they used capital controls to avoid the effects of volatile capital flows on their external accounts. In this sense, the world of relatively regulated capital flows, rather than of fixed exchange rates (even if sometimes the two are confounded as a result of the Bretton Woods arrangement), seems to be more conducive to effective fiscal policy. So the lesson should not be that small open economies cannot do effective fiscal policy, but that capital controls (which, contrary to what you might have heard, the IMF is not quite okay with, but I leave that for another post) are necessary.

PS: For a more consistent theoretical critique of the MF model see Serrano and Summa (2012).

Monday, September 3, 2012

Productivity slowdown and the return of secular stagnation

Robert Gordon has recently argued that secular stagnation is a likely possibility. Alvin Hansen, one of the most influential of the neoclassical synthesis Keynesians, was the father of the idea. For Hansen the reasons were associated with declining population growth, the disappearance of labor-saving technology, and the closing of new frontiers.

His views were published in a famous book titled Full Recovery or Stagnation?, in which he argued that investment opportunities were lacking. Since Keynes's theory was also dependent on the expansion of autonomous investment, and Hansen was a Keynesian, the stagnationist thesis was considered a Keynesian theory. All in all, Hansen's views are not very different from Gordon's position. However, it is important to note that Gordon presumes that productivity determines growth, which is not a very Keynesian proposition.

The table below shows the rate of labor productivity growth from 1948 to 2011, and the two sub-periods that start with the productivity slowdown in 1973. Note that productivity growth follows the rate of growth of output (GDP).
This is not the Okun's Law discussed before, since these are averages that eliminate the cyclical relation between productivity and growth. In other words, the relation above is about the trend, something referred to as the Kaldor-Verdoorn Law (KVL). KVL suggests that productivity growth is not the cause of growth, but the result, and it assumes that growth is demand led, in Keynesian fashion. This would suggest, at least from the technological point of view, that there are fewer reasons than Gordon suggests for his pessimism. Of course one may very well think that stagnation in the US will result from the political forces that do not allow demand expansion. And yes, everything hinges on the thorny issue of causality.
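The KVL is usually written as a simple linear relation, p = a + b·g, regressing productivity growth p on output growth g, with a Verdoorn coefficient b conventionally found around 0.5. A minimal sketch of how such a fit works; the three sub-period data points below are made-up illustrations, not the actual BLS averages from the table:

```python
# Sketch of a Kaldor-Verdoorn fit: p = a + b*g by simple OLS.
# The three (g, p) sub-period observations are illustrative
# placeholders, NOT the actual BLS series shown in the table.

def ols(xs, ys):
    """Ordinary least squares for one regressor: returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

g = [4.0, 3.2, 2.5]  # hypothetical average GDP growth by sub-period (%)
p = [2.8, 2.0, 1.5]  # hypothetical average productivity growth (%)

a, b = ols(g, p)
print(f"Verdoorn coefficient b = {b:.2f}")  # b > 0: productivity follows demand
```

A positive b is consistent with the demand-led reading in the text, though, as the post says, the regression itself cannot settle the direction of causality.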

Saturday, September 1, 2012

Election anti-bounce - economists do it with models


A very brief observation on economists, models, and the US presidential race.

Nate Silver, an economist (who successfully escaped the University of Chicago after completing his BA there), and a person who got hooked on econometrics, has the best electoral race forecasting model I am aware of.

During the 2008 race, he forecast the outcome within 0.1%.

His current model (as of 9/1/12) has President Obama's probability of winning at 72.0%. That is an increase of 2.7 percentage points during this last week of the Republican national convention. The Republican anti-bounce? Maybe they needed more of Dirty Harry, er, Clint Eastwood?

More importantly, this lead has been fairly consistent, perhaps slowly widening, since June. Those who call the race close are paying too much attention to the national horse-race polls. There, Obama's current projected lead is 1.8%, which is fairly close, but at this point it has little bearing on the forecast outcome.

For the more wonkish of you out there, check out Nate's blog FiveThirtyEight.com here. Essentially, Nate's model is a (Bayesian?) weighted average of poll results and economic indicators. His results last time were impressive. It's the best political forecasting model I am aware of.
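To give a flavor of what a weighted poll average looks like, here is a minimal sketch; the weighting scheme (recency decay plus sample size) and the poll numbers are my own illustrative assumptions, not FiveThirtyEight's actual methodology:

```python
# Sketch of a poll-averaging step: weight each poll by recency
# (exponential decay) and sample size (square root). The scheme
# and the numbers are illustrative assumptions, not Silver's model.
import math

def weighted_poll_average(polls, half_life_days=14.0):
    """polls: list of (candidate_share_pct, sample_size, days_old)."""
    num = den = 0.0
    for share, n, age in polls:
        w = math.sqrt(n) * 0.5 ** (age / half_life_days)
        num += w * share
        den += w
    return num / den

# hypothetical polls: (share %, sample size, days old)
polls = [(48.0, 1000, 2), (50.0, 600, 5), (46.0, 1500, 20)]
print(f"weighted average: {weighted_poll_average(polls):.1f}%")
```

The design point is that stale polls with big samples should not drown out fresh ones; a real model would layer pollster house effects and economic fundamentals on top of this.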

Update 9/3/12. Happy Labor Day! Some have suggested, including an anonymous comment here, that I am being too optimistic in interpreting the Silver models' outputs. No, I am interpreting the models' output. If they are trending toward Obama, that is what the model is saying.

So it is incumbent on me that, if the model goes negative, I also report that output. Silver's Now-cast went negative early today, indicating a 3.1 percentage point decline from the intra-RNC peak to 71.0%. Note that this level is 0.3 percentage points lower than the model indicated at the beginning of the RNC, so interpretation is sensitive to the choice of starting points. Some, including Silver wearing his hyper-conservative hat, say this is evidence of a Romney RNC bounce.

OTOH, the paired Nov. 6 Forecast model, which has an adjustment for relative convention bounce as well as an adjustment for relative economic indicators, indicates a further widening in the probability of an Obama win, to 74.5%, a model high.

So the convention period was, in terms of these two models, a toss-up, or as Silver says a split-decision. No one knows which model is more accurate at this point in time; I simply point out that Silver built the Nov. 6 Forecast model to specifically adjust for relative convention performance. (Don't go too wobbly, Nate Silver). What is clear right now is that Romney has underperformed recent convention performances.

On to Charlotte to continue the saga.