I started out saying: I find my peers, as they age, become increasingly unwilling to mark their beliefs to market. .... So let me ... spend my time this lunchtime detailing four points in economics at which the world has surprised me over the past decade, and in which as a result reality has led me to shift my beliefs.
The world has turned out to be more Keynesian than I would have imagined a decade ago.
Low-tax, low-service U.S. state level political economy has proved to be ineffective as an economic development model. I was always pretty sure that it was a lousy bet from the standpoint of societal welfare. But a decade ago I thought it at least boosted state-level GDP. Now I do not.
The success of the implementation of Obamacare has raised my estimation of the administrative competence of the government.
And the aggregate economic costs to America of local NIMBYism now appear to me to be much larger than I would have thought reasonable a decade ago: we are no longer a country in which people can afford to move to places where they will be more productive and more highly paid because high-productivity places refuse to upgrade their residential density.
All this, I said, has powerful political consequences. And the politics of the last decade has also been very surprising to me. But I did not have time to get into that in any depth…
The biggest surprise for me, and perhaps it shouldn't have been, is the degree to which politicians are willing to put political interests ahead of helping people in need. Watching the political/policy reaction to the Great Recession was both disappointing and eye-opening.
This is the first part of an interview of Claudia Goldin by the Richmond Fed (later she also talks about her work on education and inequality, among other things):
Econ Focus: Much of your work has focused on the history of women's employment in the United States. You've described the past few decades of that history as a "quiet revolution." What do you mean by that?
Goldin: The quiet revolution is a change in how young women perceive the courses their lives are going to take. One of the places we see this is the National Longitudinal Survey, which began in 1968 with women who were between 14 and 24 years old. One of the questions the survey asked was, "What do you think you're going to be doing when you're 35 years old?" In 1968, young women essentially answered this question as if they were their mothers. They would say, "Well, I'm going to be a homemaker, I'm going to be at home with my kids." Some did say they would be working in the labor market, but the fraction that said they would be out of the home was much smaller than the fraction that actually did end up working outside the home.
But as these women matured and as successive cohorts were interviewed, their perceptions of their futures, their own aspirations, began to change. And so their expectations when young about being in the labor force began to match their actual participation rates once they were older. That meant these young women could engage in different forms of investment in themselves; they attended college to prepare for a career, not to meet a suitable spouse. College women began to major in subjects that were more investment oriented, like business and biology, rather than consumption oriented, like literature and languages, and they greatly increased their attendance at professional and graduate schools.
EF: What changed in society that allowed this revolution to occur?
Goldin: One of the most important changes was the appearance of reliable, female-controlled birth control. The pill lowered the cost to women of making long-term career investments. Before reliable birth control, a woman faced a nontrivial probability of having her career derailed by an unplanned pregnancy — or she had to pay the penalty of abstinence. The lack of highly reliable birth control also meant a set of institutions developed around dating and sex to create commitment: Couples would "go steady," then they would get "pinned," then they would get engaged. If you're pinned or engaged when you're 19 or 20 years old, you're not going to wait until you're 28 to get married. So a lot of women got married within a year or two of graduating college. That meant women who pursued a career also paid a penalty in the marriage market. But the pill made it possible for women who were "on the pill" to delay marriage, and that, in turn, created a "thicker" marriage market for all women to marry later and further lowered the cost to women of investing in a career.
EF: What happened during previous periods of change in women's labor force participation?
Goldin: A large fraction of employment in the early 20th century, outside of agriculture, was in manufacturing. And manufacturing jobs were not particularly nice jobs. White-collar jobs in offices greatly expanded in the 1910s and 1920s, but they required one to be literate and possibly numerate, and women who were older at the time would not have had the education to move into those jobs. And so there developed a social norm against married women working. It was OK if you were single, it was often OK if you were an immigrant or African American, but it wasn't OK if you were an American-born white woman from a reasonable family, especially if you had kids.
New technologies further increased the demand for white-collar workers, and the high school movement produced a huge increase in women's education during the early decades of the 20th century. More positions were created that were considered "good" jobs, those that young women could start after high school and keep after marriage with far less social stigma.
The income effect and the substitution effect come from a set of preferences. If individual families have more income in a period when there are various constraints on women's work, they're going to purchase the leisure and consumption time of the women in the family, and the income effect will be higher. But if well-paying jobs with lower hours and better working conditions open up, then the income effect will decrease and the substitution effect will increase and both will serve to move women into the labor force. ...
The Obama administration is risking its credibility over the trade deal:
Trade and Trust, by Paul Krugman, Commentary, NY Times: One of the Obama administration’s underrated virtues is its intellectual honesty. Yes, Republicans see deception and sinister ulterior motives everywhere, but they’re just projecting. The truth is that, in the policy areas I follow, this White House has been remarkably clear and straightforward about what it’s doing and why.
Every area, that is, except one: international trade and investment.
I don’t know why the president has chosen to make the proposed Trans-Pacific Partnership such a policy priority. Still, there is an argument to be made for such a deal, and some reasonable, well-intentioned people are supporting the initiative.
But other reasonable, well-intentioned people have serious questions about what’s going on. ...
The administration’s main analytical defense of the trade deal came earlier this month, in a report from the Council of Economic Advisers. Strangely, however, the report didn’t actually analyze the Pacific trade pact. Instead, it was a paean to the virtues of free trade, which was irrelevant to the question at hand.
First of all, whatever you may say about the benefits of free trade, most of those benefits have already been realized. ...
In any case, the Pacific trade deal isn’t really about trade. Some already low tariffs would come down, but the main thrust of the proposed deal involves strengthening intellectual property rights — things like drug patents and movie copyrights — and changing the way companies and countries settle disputes. And it’s by no means clear that either of those changes is good for America. ...
As I see it, the big problem here is one of trust.
International economic agreements are, inevitably, complex, and you don’t want to find out at the last minute ... that a lot of bad stuff has been incorporated into the text. So you want reassurance that the people negotiating the deal are listening to valid concerns, that they are serving the national interest rather than the interests of well-connected corporations.
Instead of addressing real concerns, however, the Obama administration has been dismissive, trying to portray skeptics as uninformed hacks who don’t understand the virtues of trade. But they’re not...
It’s really disappointing and disheartening to see this kind of thing from a White House that has, as I said, been quite forthright on other issues. And the fact that the administration evidently doesn’t feel that it can make an honest case for the Trans-Pacific Partnership suggests that this isn’t a deal we should support.
Conservatives and Keynes: ...the debate over business-cycle economics has always been a left-right thing. Specifically, the right has always been deeply hostile to the notion that expansionary fiscal policy can ever be helpful or austerity harmful; most of the time it has been hostile to expansionary monetary policy too... So the politicization of the macro debate isn’t some happenstance, it evidently has deep roots.
Oh, and some of us have been discussing those roots in articles and blog posts for years now. We’ve noted that after World War II there was a concerted, disgraceful effort by conservatives and business interests to prevent the teaching of Keynesian economics in the universities, an effort that succeeded in killing the first real Keynesian textbook. Samuelson, luckily, managed to get past that barrier — and many were the complaints. ...
What’s it all about, then? The best stories seem to involve ulterior political motives. Keynesian economics, if true, would mean that governments don’t have to be deeply concerned about business confidence, and don’t have to respond to recessions by slashing social programs. Therefore it must not be true, and must be opposed. ...
If you think I’m being too flip, too conspiracy-minded, or both, OK — but what’s your explanation? For conservative hostility to Keynes is not an intellectual fad of the moment. It has been absolutely consistent for generations, and is clearly very deep-seated.
1776: The Revolt Against Austerity: Was the Declaration of Independence a powerful indictment of British austerity policies? Does America’s founding document need to be seen as part of an economic debate about the British Empire? ... Just as political debates in Britain and the United States today turn in large part on the response to the great recession of 2008, so the events that made the United States were shaped by the British imperial government’s reaction to the debt crisis of the 1760s. What made the Declaration so offensive to British politicians then ... is that America’s founders offered a blueprint for a different kind of state response to fiscal crisis. ... [explains how debt crisis led to austerity policies for the colonies] ...
What alternative strategy did the authors of the Declaration propose? Today, we tend to regard the practice of using government spending to stimulate economic growth as an invention of John Maynard Keynes in the 1930s. But already in the eighteenth century, self-styled Patriots, followers of Pitt on both sides of the Atlantic, argued that what the British Empire needed if it was to recover from the fiscal crisis was not austerity but an economic stimulus. ...
Twenty-first century American politicians routinely draw our attention to our founding moment and founding document... But they fail to understand the economic arguments that in large measure shaped what Thomas Jefferson and his colleagues wrote. When Governor Scott Walker of Wisconsin proudly proclaims that “we celebrate the fourth of July and not April 15, because in America we celebrate our independence from the government, not our dependence on them [sic],” he fails to see that our founders blamed George III and his government not for taxing too much but for doing too little to stimulate consumer demand. ...
America’s founding document called for an American state that would promote economic growth just as the British state had done before the shift toward balancing the books. ... Had George III and his ministers not adopted austerity measures in the 1760s and 1770s, had they chosen to follow Pitt’s policies of economic stimulus, America’s founders might not have needed to declare their independence at all.
[That's only a small part of the essay -- there's a lot more in the full post, e.g. an argument that Adam Smith supported expansionary policy for the colonies.]
Before heading to one of my least favorite places, the dentist, here's one from the Liberty Street Economics Blog at the NY Fed:
Why Are Interest Rates So Low?, by Marco Del Negro, Marc Giannoni, Matthew Cocci, Sara Shahanaghi, and Micah Smith: Second post in the series: In a recent series of blog posts, the former Chairman of the Federal Reserve System, Ben Bernanke, has asked the question: “Why are interest rates so low?” (See part 1, part 2, and part 3.) He refers, of course, to the fact that the U.S. government is able to borrow at an annualized rate of around 2 percent for ten years, or around 3 percent for thirty years. If you expect that inflation is going to be on average 2 percent over the next ten or thirty years, this implies that the U.S. government can borrow at real rates of interest between 0 and 1 percent at the ten- and thirty-year maturities. This phenomenon is by no means limited to the United States. Governments in Japan and Germany are able to borrow for ten years at nominal rates below 1 percent, and the ten-year yield on Swiss government debt is slightly negative. Why is that?
To answer this question, it is useful to consider the concept of the “natural rate of interest,” introduced by Knut Wicksell in 1898 and fully integrated in modern macroeconomic models by Michael Woodford. This natural rate refers to the real interest rate consistent with full employment of labor and capital resources. More specifically, it can be viewed as the rate of interest that would obtain if all prices and wages had adjusted so as to bring the level of economic activity to its full-employment level. The natural rate of interest can vary substantially over time, as it is driven by numerous factors such as the long-run potential growth rate of the economy, demographic composition of the population, desirability of saving on the part of households, perceived profitability of investment opportunities, government spending, and taxes. Importantly, by construction, the natural rate of interest does not depend on the stance of monetary policy: when prices and wages are assumed to adjust instantaneously, economic activity fully employs all available resources, and there is little monetary policy can do to affect economic activity.
According to Wicksell, the natural rate of interest is the right benchmark for determining the extent to which monetary policy is accommodative. He argues that, “it is not a high or low rate of interest in the absolute sense which must be regarded as influencing the demand for raw materials, labour, and land or other productive resources, and so indirectly as determining the movement of prices. The causative factor is the current rate of interest on loans as compared to [the natural rate].” An implication is that monetary policy is not by itself expansionary if interest rates are low and restrictive if interest rates are high. Instead, monetary policy turns out to be expansionary if rates are below the natural rate and restrictive if rates are above the natural rate.
One key difficulty, however, is that the natural rate is not directly observable, as it is a counterfactual rate that would obtain only if all the economy’s resources were fully employed. To get a sense of where the natural rate is, economists have employed various techniques. In a recent paper, Jim Hamilton, Ethan Harris, Jan Hatzius, and Kenneth West use moving averages of the actual real rate of interest over a relatively long period of time as a proxy for the natural rate of interest. The idea is that we can estimate the natural rate of interest by averaging the actual interest rate in periods when the actual rate is below the natural rate and periods when the actual rate is above the natural rate. They assess that recent estimates of the real rate are low as a result of temporary headwinds on investment, deleveraging, and so on, but that the long-run equilibrium U.S. real interest rate remains significantly positive. While the measure provided by these authors is very useful to understand low frequency changes in the actual real rate of interest, it arguably does not correspond to the Wicksellian notion of the natural rate. As Paul Krugman points out in a recent blog post, when monetary policy is constrained (by, for example, the zero lower bound) the actual and natural rates may not coincide, and if the constraint binds for a long time, the difference between the two can be quite persistent. The practical implication is that this long-run measure of the effective real rate cannot be used to assess the stance of monetary policy in those instances.
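The moving-average idea behind the Hamilton, Harris, Hatzius, and West proxy is simple enough to sketch in a few lines. The code below is an illustration only, not their actual procedure: it uses a made-up real-rate series in which the actual rate fluctuates around a constant natural rate of 2 percent, and shows that a long trailing average washes out the cyclical gaps.

```python
import random

def trailing_average(series, window):
    """Trailing moving average: the proxy for the natural rate at date i
    is the mean of the actual real rate over the preceding `window` periods."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Synthetic real-rate series (illustrative): a constant "natural" rate of
# 2 percent plus a persistent AR(1) cyclical gap.
random.seed(1)
natural = 2.0
gap = 0.0
actual = []
for _ in range(400):  # 100 years of quarterly data
    gap = 0.9 * gap + random.gauss(0.0, 0.5)
    actual.append(natural + gap)

proxy = trailing_average(actual, window=40)  # ten-year trailing average
print(f"last proxy value: {proxy[-1]:.2f} (true natural rate: {natural})")
```

The averaging only recovers the natural rate when the actual rate is above it about as often as below — which is exactly the assumption that fails, per Krugman's point, when the zero lower bound keeps the actual rate pinned above the natural rate for years at a time.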
Another approach, proposed by Thomas Laubach and John C. Williams, involves estimating a statistical model linking real GDP, inflation, and a short-term interest rate, and assuming that the gap between real GDP and its long-run trend depends on the past gaps between the actual interest rate and the natural rate. This model allows one to disentangle movements in the natural rate driven by long-run growth considerations from those driven by cyclical considerations. However, the estimated measure is best suited for a longer-run measure of the natural rate of interest, as discussed more recently in an article by San Francisco Fed President Williams.
DSGE models, such as the New York Fed’s DSGE model, provide an alternative approach for estimating the natural rate of interest by imposing on the relationships among economic variables a structure informed by modern economic theory. This model, which builds on the model with financial frictions used in Del Negro, Giannoni, and Schorfheide (2015), is estimated using data on real GDP, consumption, investment, hours worked, real wages, two distinct measures of inflation (the GDP deflator and core PCE inflation), the federal funds rate, and the ten-year Treasury yield. We also use survey-based long-run inflation expectations to capture information about the public’s perception of the Fed’s inflation objective, and market data on expectations of future federal funds rates to incorporate the effects of forward guidance on the policy rate. Finally, the model allows for persistent shocks to both the level and the growth rate of productivity, in an attempt to allow for the possibility of secular stagnation, and uses data on the growth rate of productivity. We discuss the model’s forecasts in the first post of the series.
Having a model makes it possible to define, and compute, the Wicksellian notions of “full employment” output and interest rates, precisely because we can construct a counterfactual economy. Specifically, we construct the natural rate as the equilibrium interest rate that would obtain if prices and wages were perfectly flexible (so that output and employment would be at their “potential”), if there were no shocks to the markup on goods and labor markets, and no financial frictions. Robert Barsky, Alejandro Justiniano, and Leonardo Melosi have used a similar model to estimate the natural rate of interest.
The red line in the chart below shows the model’s estimate of the nominal natural rate of interest (that is, the sum of the real natural rate of interest and expected inflation) along with its forecast. For comparison, the chart also shows the recent evolution of the nominal federal funds rate (solid blue line).
The chart shows that the estimated quarterly natural rate of interest is quite volatile in the short run, mostly because of fluctuations in quarterly consumption. As these short-term fluctuations are averaged out (right-hand panel), the estimated natural rate paints a fairly consistent picture: The natural rate fell sharply during the crisis, from above 6 percent in early 2007 to about -2 percent in mid-2009. The natural rate was slightly above the actual rate for the period preceding the Great Recession, and well below it for the entire post-Recession period, indicating that the zero lower bound imposed a constraint on interest rate policy. The natural rate is currently close to, but still below, the actual rate, suggesting that policy is not particularly accommodative. Finally, the natural rate is projected to increase in the near future, since the factors that brought down the natural rate during the crisis are dissipating, as discussed in our first post.
What are the factors that have led to such a precipitous drop in the natural rate and that have kept the rate at such a low level? The DSGE model allows us to trace the evolution of the natural rate back to the original shocks perturbing the economy. The next chart shows the real natural rate of interest, in deviations from its long-run mean. The colored bars show the contribution of various shocks to the evolution of the natural rate.
The dark blue bars refer to household “discount factor” shocks, that is, to disturbances to the household’s willingness to consume or save. The chart shows that while in 2007, households appeared more willing than normal to consume, they have since reversed this tendency by saving more than usual. This factor boosted the real natural rate above its long-run average by 2 percentage points in early 2007 and depressed the rate by about 1 ½ percentage points in 2012-13. The light blue bars refer to shocks in firms’ willingness to invest in physical capital. The chart reveals that in 2007 and 2008, firms were very willing to invest. However, since 2009, they have been much more prudent, which contributed to lowering the natural rate by more than one percentage point. These effects are projected to abate slowly as consumers are able and willing to consume more again and firms are projected to invest more. Changes in total factor productivity are also responsible for large drops in the natural rate, from 2008 to late 2014, as the orange bars show. Finally, other aggregate demand factors, such as government expenditures, have pushed up the natural rate in late 2008 but have exerted a downward pressure on rates since then.
Several factors are missing from the analysis. For instance, a potentially important omission relates to the assumption of a closed economy. Properly accounting for international factors would likely result in a different estimate of the natural rate. Explanations pertaining to the “global saving glut” advanced by Ben Bernanke suggest that foreign saving might push the natural rate of interest to even lower levels than estimated here.
In conclusion, the low level of interest rates experienced since 2008 is largely attributable to a reduction in the natural rate of interest, which reflects cautious behavior on the part of households and firms. Monetary policy has largely accommodated the decline in the natural rate of interest, in order to mitigate the adverse effects of the crisis, but the zero lower bound on interest rates has imposed a constraint on the ability of interest rate policy to stabilize the economy. Looking ahead, we expect these headwinds to continue to abate, and the natural rate of interest to return closer to historical levels.
Disclaimer The views expressed in this post are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.
From the blog Three-Toed Sloth by Cosma Shalizi (this also appeared in yesterday's links):
Any P-Value Distinguishable from Zero is Insufficiently Informative: After ten years of teaching statistics, I feel pretty confident in saying that one of the hardest points to get through to undergrads is what "statistically significant" actually means. (The word doesn't help; "statistically detectable" or "statistically discernible" might've been better.) They have a persistent tendency to think that parameters which are significantly different from 0 matter, that ones which are insignificantly different from 0 don't matter, and that the smaller the p-value, the more important the parameter. Similarly, if one parameter is "significantly" larger than another, then they'll say the difference between them matters, but if not, not. If this was just about undergrads, I'd grumble over a beer with my colleagues and otherwise suck it up, but reading and refereeing for non-statistics journals shows me that many scientists in many fields are subject to exactly the same confusions as The Kids, and talking with friends in industry makes it plain that the same thing happens outside academia, even to "data scientists". ... To be fair, one meets some statisticians who succumb to these confusions.
One reason for this, I think, is that we fail to teach well how, with enough data, any non-zero parameter or difference becomes statistically significant at arbitrarily small levels. The proverbial expression of this, due I believe to Andy Gelman, is that "the p-value is a measure of sample size". More exactly, a p-value generally runs together the size of the parameter, how well we can estimate the parameter, and the sample size. The p-value reflects how much information the data has about the parameter, and we can think of "information" as the product of sample size and precision (in the sense of inverse variance) of estimation, say n/σ2. In some cases, this heuristic is actually exactly right, and what I just called "information" really is the Fisher information.
As a public service, I've written up some notes on this... [The mathematics comes next.]
Quite often, the facts are consistent with either theory. For example, the well-attested momentum anomaly - the tendency for assets that have risen in price recently to continue rising - is "consistent with" both a cognitive bias (under-reaction) and with rational behaviour; fund managers' desire to avoid benchmark risk.
My point here should be well-known. The Duhem-Quine thesis warns us that facts under-determine theory: they are "consistent with" multiple theories. ...
So, how can we guard against the "consistent with" error? One thing we need is history: this helps tell us how things actually happened. And - horrific as it might seem to some economists - we also need sociology: we need to know how people actually behave and not merely that their behaviour is "consistent with" some theory. Economics, then, cannot be a stand-alone discipline but part of the social sciences and humanities...
Alan Krueger kicks off a debate on the relationship between inequality and mobility:
The great utility of the Great Gatsby Curve: Every so often an academic finding gets into the political bloodstream. A leading example is "The Great Gatsby Curve," describing an inverse relationship between income inequality and intergenerational mobility. Born in 2011, the Curve has attracted plaudits and opprobrium in almost equal measure. Over the next couple of weeks, Social Mobility Memos is airing opinions from both sides of the argument, starting today with Prof Alan Krueger, the man who made the Curve famous.
Building on the work of Miles Corak, Anders Björklund, Markus Jantti, and others, I proposed the “Great Gatsby Curve” in a speech in January 2012. The idea is straightforward: greater income inequality in one generation amplifies the consequences of having rich or poor parents for the economic status of the next generation.
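The mobility measure on the Curve's vertical axis is typically the intergenerational elasticity (IGE): the slope from regressing children's log income on parents' log income. The sketch below estimates it on synthetic data — the true elasticity of 0.4 and the income distributions are assumptions chosen purely for illustration, not estimates from any real survey.

```python
import random

def intergenerational_elasticity(parent_log_inc, child_log_inc):
    """OLS slope of child log income on parent log income."""
    n = len(parent_log_inc)
    mx = sum(parent_log_inc) / n
    my = sum(child_log_inc) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(parent_log_inc, child_log_inc))
    var = sum((x - mx) ** 2 for x in parent_log_inc)
    return cov / var

random.seed(42)
true_ige = 0.4  # illustrative: 40% of a (log) income advantage persists
parents = [random.gauss(10.5, 0.8) for _ in range(5000)]
children = [true_ige * p + random.gauss(6.3, 0.7) for p in parents]

est = intergenerational_elasticity(parents, children)
print(f"estimated IGE: {est:.2f}")
```

With the elasticity held fixed, a wider spread of parent incomes mechanically produces larger absolute gaps among children — which is the amplification the Great Gatsby Curve summarizes when mobility (a high IGE) and inequality move together across countries.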
The curve is predicted by economic theory…
There are strong theoretical underpinnings for the Great Gatsby Curve. Gary Solon has shown, for example, that the relationship is predicted by a standard intergenerational model if the payoff to education increases over time. This causes inequality to rise in one generation, but also increases the significance of this inequality for children’s economic success, since well-off parents have more resources and more incentive to invest in their children’s education.
Other mechanisms could also underlie the Great Gatsby Curve. For example, if social connections are important for success in the economy (e.g., getting the right summer internship), and wealthy parents have access to job networks, then a spreading out of the income distribution would leave children from the bottom of the distribution in a more disadvantaged position in terms of gaining access to networks that will ultimately lead to a higher paid job.
Consistent with the Great Gatsby Curve, several studies also point to a growing gap in the resources devoted to education between high- and low-income American families. As predicted by the Great Gatsby Curve, it appears that the dramatic rise in income inequality has created a more tilted playing field for the next generation. ...
The two key remaining questions now are:
What are the main mechanisms underlying the Great Gatsby Curve?
What policy actions can be taken to improve economic opportunities for children born in disadvantaged circumstances?
Learning more about the former can help us to achieve the latter — which is, in the end, the most important goal of all.
The situation where there is no way to make some people better off without making anyone worse off is often referred to as “Pareto optimal” after the Italian economist and political theorist Vilfredo Pareto, who developed the underlying concept. “Pareto optimal” is, arguably, the most misleading term in economics (and there are plenty of contenders). ...
Describing a situation as “optimal” implies that it is the unique best outcome. As we shall see this is not the case. Pareto, and followers like Hazlitt, seek to claim unique social desirability for market outcomes by definition rather than demonstration. ...
If that were true, then only the market outcome associated with the existing distribution of property rights would be Pareto optimal. Hazlitt, like many subsequent free market advocates, implicitly assumes that this is the case. In reality, though, there are infinitely many possible allocations of property rights, and infinitely many allocations of goods and services that meet the definition of “Pareto optimality”. A highly egalitarian allocation can be Pareto optimal. So can any allocation where one person has all the wealth and everyone else is reduced to a bare subsistence. ...
Restoring the Public’s Trust in Economists: The belief that economics has become politicized is a big reason the general public has lost faith in the ability of economists to give advice on important policy questions. For most issues, like raising the minimum wage, the effects of government spending, international trade, whether CEOs deserve their high compensation, etc., etc., it seems as though economists who also happen to be Republicans will mostly line up on one side of the issue, while economists who are Democrats mostly take the other. Members of the general public, not knowing who to believe and unable to rely upon the press to sort it out, either throw up their hands in frustration or follow the side that agrees with their preconceived notions and ideological beliefs.
But why is it so hard to sort out? Why can’t the press do a better job of avoiding “he said – she said” reporting and give the public direct and specific answers to these important policy questions? One reason is the “mathiness” that has infected our economic models, something economist Paul Romer recently identified as a big problem with economic theory. ...
Errors and Lies, by Paul Krugman, Commentary, NY Times: Surprise! It turns out that there’s something to be said for having the brother of a failed president make his own run for the White House. Thanks to Jeb Bush, we may finally have the frank discussion of the Iraq invasion we should have had a decade ago...
The Iraq war wasn’t an innocent mistake, a venture undertaken on the basis of intelligence that turned out to be wrong. America invaded Iraq because the Bush administration wanted a war. The public justifications for the invasion were nothing but pretexts, and falsified pretexts at that. We were, in a fundamental sense, lied into war. ...
This was, in short, a war the White House wanted, and all of the supposed mistakes that, as Jeb puts it, “were made” by someone unnamed actually flowed from this underlying desire. ...
Now, you can understand why many political and media figures would prefer not to talk about any of this. Some of them ... may have fallen for the obvious lies, which doesn’t say much about their judgment. More, I suspect, were complicit: they realized that the official case for war was a pretext, but had their own reasons for wanting a war, or, alternatively, allowed themselves to be intimidated into going along. ...
On top of these personal motives, our news media in general have a hard time coping with policy dishonesty. Reporters are reluctant to call politicians on their lies, even when these involve mundane issues like budget numbers, for fear of seeming partisan. In fact, the bigger the lie, the clearer it is that major political figures are engaged in outright fraud, the more hesitant the reporting. And it doesn’t get much bigger — indeed, more or less criminal — than lying America into war.
But truth matters, and not just because those who refuse to learn from history are doomed in some general sense to repeat it. The campaign of lies that took us into Iraq was recent enough that it’s still important to hold the guilty individuals accountable. Never mind Jeb Bush’s verbal stumbles. Think, instead, about his foreign-policy team, led by people who were directly involved in concocting a false case for war.
So let’s get the Iraq story right. Yes, from a national point of view the invasion was a mistake. But (with apologies to Talleyrand) it was worse than a mistake, it was a crime.
Efficiency matters! It can make the economy better off. Understanding efficiency in manufacturing and retail markets can help guide policy, according to prize-winning research by Daniel Muller and Fabian Herweg in The Economic Journal. See the summary here: http://www.res.org.uk/details/mediabr...
The interview was recorded at the Royal Economic Society annual conference at The University of Manchester in Spring 2015 and produced by Econ Films.
Blaming Keynes: A few people have asked me to respond to this FT piece from Niall Ferguson. I was reluctant to, because it is really just a bit of triumphalist Tory tosh. That such things get published in the Financial Times is unfortunate but I’m afraid not surprising in this case. However I want to write later about something else that made reference to it, so saying a few things here first might be useful.
The most important point concerns style. This is not the kind of thing an academic should want to write. It makes no attempt to be true to evidence, and just cherry picks numbers to support its argument. I know a small number of academics think they can drop their normal standards when it comes to writing political propaganda, but I think they are wrong to do so. ...
Ed Prescott is No Robert Solow, No Gary Becker: In his comment on my Mathiness paper, Noah Smith asks for more evidence that the theory in the McGrattan-Prescott paper that I cite is any worse than the theory I compare it to by Robert Solow and Gary Becker. I agree with Brad DeLong’s defense of the Solow model. I’ll elaborate, by using the familiar analogy that theory is to the world as a map is to terrain.
There is no such thing as the perfect map. This does not mean that the incoherent scribbling of McGrattan and Prescott are on a par with the coherent, low-resolution Solow map that is so simple that all economists have memorized it. Nor with the Becker map that has become part of the everyday mental model of people inside and outside of economics.
Noah also notes that I go into more detail about the problems in the Lucas and Moll (2014) paper. Just to be clear, this is not because it is worse than the papers by McGrattan and Prescott or Boldrin and Levine. Honestly, I’d be hard pressed to say which is the worst. They all display the sloppy mixture of words and symbols that I’m calling mathiness. Each is awful in its own special way.
What should worry economists is the pattern, not any one of these papers. And our response. Why do we seem resigned to tolerating papers like this? What cumulative harm are they doing?
The resignation is why I conjectured that we are stuck in a lemons equilibrium in the market for mathematical theory. Noah’s jaded question–Is the theory of McGrattan-Prescott really any worse than the theory of Solow and Becker?–may be indicative of what many economists feel after years of being bullied by bad theory. And as I note in the paper, this resignation may be why empirically minded economists like Piketty and Zucman stay as far away from theory as possible. ...
[He goes on to give more details using examples from the papers.]
Factoryless Goods Producing Firms: Andrew B. Bernard and Teresa C. Fort sketch what is known about the "Factoryless Goods Producing Firm" in the May 2015 issue of the American Economic Review: Papers and Proceedings (vol. 105:5, pp. 518-523). The AER is not freely available on-line, but many readers will have access through a library subscription. Succumbing to acronyms, Bernard and Fort write: "We define a FGPF as a firm that has no manufacturing establishments in the United States, but performs pre-production activities such as design and engineering itself and is involved in production activities, either directly or through purchases of contract manufacturing services (CMS)."
The best-known example of a factoryless goods producer is Apple Inc. Apple designs, engineers, develops, and sells consumer electronics, software, and computers. For the vast majority of its products, including iPhones, iPads, and MacBooks, Apple does none of the production and the actual manufacturing is performed by other firms in China and elsewhere. While Apple is known for its goods and services and closely controls all aspects of a product, almost none of Apple’s US establishments would be in the manufacturing sector. ...
How prominent are factoryless goods producing firms in the US economy, and how much have they expanded over time? By definition, you don't find these firms in the manufacturing sector of the economy. Bernard and Fort look at statistics on the wholesale trade sector of the economy. As background, wholesale trade is about 6% of US GDP when measured in value-added terms, which is about half the size of the manufacturing sector, or half the size of the professional and business services sector. Here are a few facts from Bernard and Fort about factoryless goods producing firms:
In 2007, the total number of factoryless goods producing firms was 13,500, and these firms employed 672,000 workers.
Industries where factoryless goods producing firms tend to focus include electrical machinery and equipment, machine and mechanical appliances and computers, pharmaceuticals, and apparel.
Compared to other firms in the wholesale industry, the factoryless goods producing firms tend to be larger and to pay higher wages.
If you go back to 1992, and look at the factoryless goods producing firms of that time, you find that many of them were manufacturing in the US at some point. Indeed, "it is likely that the current set of FGPFs are a mix of different types of firms including former manufacturing firms, new firms created as FGPFs from their inception, and other firms that have made the transition to the design and manufacture of products. More work is needed to understand the evolution of FGPFs over time."
The imports of factoryless goods producing firms are equal to about 38% of their total sales. Thus, a majority of money spent at such firms ends up flowing to non-manufacturing inputs from the US economy.
The growth of factoryless goods producing firms may have effects on wages, employment, and productivity. It's a phenomenon worth understanding. ...
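The "majority" claim above is just arithmetic on the 38% figure, but it is worth making explicit (a back-of-the-envelope check using only the numbers quoted in the excerpt):

```python
# If FGPF imports equal about 38% of total sales (figure from the text),
# the remaining share of each sales dollar goes to US-based inputs and
# margins -- which, since FGPFs have no US factories, are largely
# non-manufacturing activities like design, engineering, and distribution.
import_share = 0.38
domestic_share = 1 - import_share
print(f"domestic share of sales: {domestic_share:.0%}")  # 62% -- a majority
```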
The point of the paper is that if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model. As scientists, we have to hold ourselves to a standard that requires us to reach a consensus about which model is right, and then to move on to other questions.
The alternative to science is academic politics, where persistent disagreement is encouraged as a way to create distinctive sub-group identities.
The usual way to protect a scientific discussion from the factionalism of academic politics is to exclude people who opt out of the norms of science. The challenge lies in knowing how to identify them.
From my paper:
The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.
Persistent disagreement is a sign that some of the participants in a discussion are not committed to the norms of science. Mathiness is a symptom of this deeper problem, but one that is particularly damaging because it can generate a broad backlash against the genuine mathematical theory that it mimics. If the participants in a discussion are committed to science, mathematical theory can encourage a unique clarity and precision in both reasoning and communication. It would be a serious setback for our discipline if economists lose their commitment to careful mathematical reasoning.
I focus on mathiness in growth models because growth is the field I know best, one that gave me a chance to observe closely the behavior I describe. ...
The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. ... Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other investment a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.
[I cut quite a bit -- see the full post for more.]
Take a moment to savor the cowardice and vileness of that last remark. ... Mr. Bush is trying to hide behind the troops, pretending that any criticism ... is an attack on the courage and patriotism of those who paid the price for their superiors’ mistakes. That’s sinking very low, and it tells us a lot ... about the candidate’s character...
Wait, there’s more: Incredibly, Mr. Bush resorted to the old passive-voice dodge, admitting only that “mistakes were made.” Indeed. By whom? Well, earlier this year Mr. Bush released a list of his chief advisers on foreign policy, and it was a who’s who of mistake-makers ... in the Iraq disaster and other debacles. ...
In Bushworld, in other words, playing a central role in catastrophic policy failure doesn’t disqualify you from future influence. ...
Take my usual focus, economic policy. ... Having been completely wrong about the economy, like having been completely wrong about Iraq, seems to be a required credential.
What’s going on here? My best explanation is that we’re witnessing the effects of extreme tribalism. On the modern right, everything is a political litmus test. Anyone who tried to think through the pros and cons of the Iraq war was, by definition, an enemy of President George W. Bush and probably hated America; anyone who questioned whether the Federal Reserve was really debasing the currency was surely an enemy of capitalism and freedom.
It doesn’t matter that the skeptics have been proved right. Simply raising questions about the orthodoxies of the moment leads to excommunication, from which there is no coming back. So the only “experts” left standing are those who made all the approved mistakes. It’s kind of a fraternity of failure: men and women united by a shared history of getting everything wrong, and refusing to admit it. Will they get the chance to add more chapters to their reign of error?
If this forecast holds, then the first half of 2015 will be very weak if not flat, slow enough that commentators might be tempted to refer to growth as at "stall speed". But quarterly GDP numbers are fairly volatile. Would two consecutive weak quarters be terribly unexpected, or even suggestive of a troubling undercurrent in the economy? It is somewhat difficult to panic about the GDP numbers just yet, especially in the context of the continuous slide in the forward-looking unemployment claims indicator:
Moreover, should we be surprised by the occasional weak GDP number in the context of lower estimates of potential growth? As Calculated Risk likes to say:
Right now, due to demographics, 2% GDP growth is the new 4%.
A simple way to think about this is to look at the confidence interval around the one-step ahead GDP forecast from an AR2 model:
Prior to the Great Recession, it would be very unusual for the confidence interval to include a negative read on GDP outside of a recession. Following the Great Recession, however, the confidence interval around the forecast almost always captures the possibility of a negative outcome. This is likely the consequence of two factors: the downshifting of GDP growth described by Calculated Risk, and increased GDP growth volatility in the most recent sample.
Bottom Line: We probably need to get used to the occasional negative GDP growth numbers in the context of overall expansion for the US economy. The concept of "stall speed" will need to be revised accordingly.
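The AR(2) exercise described above can be sketched in a few lines. This is a minimal version on synthetic quarterly growth data (the post uses actual US GDP data; the coefficients and volatility here are my own illustrative choices), fitting the autoregression by OLS and forming a one-step-ahead 95% interval:

```python
# Sketch: fit an AR(2) to (synthetic) quarterly GDP growth by OLS and
# form a one-step-ahead ~95% confidence interval for the next quarter.
import numpy as np

rng = np.random.default_rng(0)

# synthetic annualized quarterly growth: mean ~2%, mildly persistent, noisy
n = 200
g = np.empty(n)
g[:2] = 2.0
for t in range(2, n):
    g[t] = 0.8 + 0.4 * g[t - 1] + 0.2 * g[t - 2] + rng.normal(0, 2.0)

# OLS regression of g_t on (1, g_{t-1}, g_{t-2})
y = g[2:]
X = np.column_stack([np.ones(n - 2), g[1:-1], g[:-2]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma = resid.std(ddof=3)

# one-step-ahead forecast and ~95% band (ignoring parameter uncertainty)
forecast = beta @ np.array([1.0, g[-1], g[-2]])
lo, hi = forecast - 1.96 * sigma, forecast + 1.96 * sigma
print(f"forecast {forecast:.1f}%, 95% CI [{lo:.1f}%, {hi:.1f}%]")
```

With trend growth around 2% and quarterly volatility of this size, the lower edge of the band typically extends below zero, which is the post's point: a negative quarter is consistent with an ongoing expansion.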
Fighting for History, by Paul Krugman: ...Progressives ... are much too willing to cede history to the other side. Legends about the past matter. Really bad economics flourishes in part because Republicans constantly extol the Reagan record, while Democrats rarely mention how shabby that record was compared with the growth in jobs and incomes under Clinton. The combination of lies, incompetence, and corruption that made the Iraq venture the moral and policy disaster it was should not be allowed to slip into the mists. ...
There’s a reason conservatives constantly publish books and articles glorifying Harding and Coolidge while sliming FDR; there’s a reason they’re still running against Jimmy Carter; and there’s a reason they’re doing their best to rehabilitate W. And progressives need to fight back.
Defend Workers and the Environment Before Voting Fast Track: President Barack Obama is making a full-court press for two new international business agreements, one with Asian-Pacific countries known as Trans-Pacific Partnership (TPP) and the other with European countries known as the Trans-Atlantic Trade and Investment Partnership (TTIP). To secure these, he is calling on Congress to pass Trade Promotion Authority (TPA), also known as "fast track," so that when TPP and TTIP come up for a Congressional vote, they can only be voted up or down, without amendments. ...
The president portrays TPP and TTIP as part of an overall program of "middle-class economics" in which "everybody gets a fair shot, everyone does his fair share, and everybody plays by the same set of rules." That means "making sure that everybody has got a good education," "women are getting paid the same as men for doing the same work," "making sure that folks have to have sick leave and family leave," and "increasing the minimum wage across the country." It means pushing for investments in infrastructure and faster Internet.
The problem, however, is that the president has not succeeded in getting any of those middle-class policies in place. ...
If the U.S. were a fairer society, in which Obama's vision of everybody getting a fair shot truly applied, then TPP and TTIP would be much easier calls. The losers from trade and offshoring would reliably get help from the winners; workers hit by the agreements would have a clear path to new skills, re-training, family support, adjustment assistance, a higher minimum wage, and all of the other protections that the president rightly seeks but can't secure. Yet America today is not that kind of society. The TPP and TTIP would hand another gift to the multinational companies that are lobbying so hard for the two agreements without providing real protections for workers (and for the environment as well). ...
Obama and the Republicans in Congress have not made the case to American workers that trade policies under TPP and TTIP will be part of a fair, middle-class, and environmentally sustainable economy.
Here's a takeaway figure. It's a measure of those who are foreign-born, and who were living outside the US a year ago--in other words, it's a measure of migration to the US in the previous year.
As I have noted in the past, immigration from Mexico has dropped off substantially in the last few years. Indeed, a few years ago when the U.S. unemployment rate was still so elevated in the aftermath of the Great Recession, net migration from Mexico to the US--that is, new arrivals minus departures--may have been slightly negative. Over the last decade or so, a combination of stronger enforcement at the border, a gradually stronger economy in Mexico, and fewer children per woman in Mexico has meant fewer young people on the move looking for work. ...
…the new law makes it a crime to gather data about the condition of the environment across most of the state if you plan to share that data with the state or federal government. The reason? The state wants to conceal the fact that many of its streams are contaminated by E. coli bacteria, strains of which can cause serious health problems, even death. ... Rather than engaging in an honest public debate about the cause or extent of the problem, Wyoming prefers to pretend the problem doesn’t exist. And under the new law, the state threatens anyone who would challenge that belief by producing information to the contrary with a term in jail...
The new law is of breathtaking scope. It makes it a crime to “collect resource data” from any “open land,” meaning any land outside of a city or town, whether it’s federal, state, or privately owned. The statute defines the word collect as any method to “preserve information in any form,” including taking a “photograph” so long as the person gathering that information intends to submit it to a federal or state agency. In other words, if you discover an environmental disaster in Wyoming, even one that poses an imminent threat to public health, you’re obliged, according to this law, to keep it to yourself.
For me personally, the timing is ironic, as I’ve spent the last week involved in various agriculture-related microbiology meetings, and the constant refrain was “we need more data on what people are doing” (e.g., how are they using antibiotics?). In the areas of food and water safety, we desperately need more data. ...
As we argue, inequality is not inevitable: it is a choice that we’ve made with the rules that structure our economy. Over the past 35 years, the rules, or the regulatory, legal and institutional frameworks, that make up the economy and condition the market have changed. These rules are a major driver of the income distribution we see, including runaway top incomes and weak or precarious income growth for most others. Crucially, however, these changes in the rules have not made our economy better off than we would be otherwise; in many cases we are weaker for these changes. We also now know that “deregulation” is, in fact, “reregulation”—that is, a new set of rules for governing the economy that favor a specific set of actors. But what were these changes? ...
This report describes what has happened, going far deeper than this summary here. It also has a policy agenda focused on both taming the top and growing the rest of the economy. Some may emphasize some pieces more than others; but no matter what this argument about the rules is what is missing in the current debates over the economy. ...
Roberto M. Billi, senior researcher at the Sveriges Riksbank, has a new paper on nominal GDP targeting:
”A Note on Nominal GDP Targeting and the Zero Lower Bound,” Sveriges Riksbank Working Paper Series No. 270, Revised May 2015: Abstract: I compare nominal GDP level targeting to strict price level targeting in a small New Keynesian model, with the central bank operating under optimal discretion and facing a zero lower bound on nominal interest rates. I show that, if the economy is only buffeted by purely temporary shocks to inflation, nominal GDP level targeting may be preferable because it requires the burden of the shocks to be shared by prices and output. But in the presence of persistent supply and demand shocks, strict price level targeting may be superior because it induces greater policy inertia and improves the tradeoffs faced by the central bank. During lower bound episodes, somewhat paradoxically, nominal GDP level targeting leads to larger falls in nominal GDP.
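The burden-sharing intuition in the abstract can be illustrated with a deliberately tiny log-linear sketch. To be clear, this is my own toy example, not Billi's model: I assume a simple Phillips-type relation with an arbitrary slope, and compare how a temporary cost-push shock is absorbed under the two targets.

```python
# Toy illustration of burden-sharing (not the paper's New Keynesian model).
# Work in log deviations from target: nominal GDP n = p + y.
# A cost-push shock u raises prices unless policy leans against it;
# assume policy moves the economy along p = u + k*y, slope k assumed 0.5.

k = 0.5   # assumed Phillips-curve slope (illustrative)
u = 1.0   # temporary inflation (cost-push) shock, in percent

# Strict price-level targeting: force p back to 0  =>  0 = u + k*y
y_plt = -u / k
p_plt = 0.0

# Nominal GDP level targeting: force p + y = 0  =>  -y = u + k*y
y_ngdp = -u / (1 + k)
p_ngdp = -y_ngdp

print(f"price-level target: p = {p_plt:+.2f}, y = {y_plt:+.2f}")
print(f"NGDP level target:  p = {p_ngdp:+.2f}, y = {y_ngdp:+.2f}")
```

Under the price-level target, output absorbs the whole shock (y falls by 2.0); under the NGDP level target, prices rise about 0.67 and output falls about 0.67, so the burden is shared between prices and output, which is the mechanism Billi describes for purely temporary inflation shocks.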
We find your WSJ op-ed (Wednesday, May 6) misleading, short-sighted, self-serving, and very disappointing.
Vanguard has been in the forefront of providing low-cost, reliable access to U.S. and global capital markets to millions of customers, including ourselves. Following the financial crisis of 2007-2009, the firm naturally should be a leader in promoting a more resilient financial system. Your op-ed sadly goes in the opposite direction.
Let’s start with the most stunning example: your defense of money market mutual funds. MMMFs are simply banks masquerading as professionally managed investment products. Like banks, they engage in liquidity and maturity transformation. Like banks, they faced runs in 2008 that ended only when the federal government provided a guarantee that put taxpayers at risk. Even with that guarantee, the government still had to support many healthy U.S. corporations with household names that – having previously relied on MMMF purchases of their commercial paper – suddenly faced a severe credit crunch. And, to limit a fire sale amidst the crisis, the Federal Reserve had to provide special funding to buyers to help MMMFs unload their assets.
Unsurprisingly, fund sponsors and their clients – both creditors and borrowers – want to keep these opaque federal subsidies (especially the implicit guarantees that only become explicit and transparent in a crisis). Like them, you make the false, but popular claim that power-hungry regulators (who wish to limit the subsidies that make future crises more likely) are attacking (taxing!) Main Street instead of Wall Street.
In fact, the investment company industry captured its primary regulator long ago, and hasn’t let go. The Securities and Exchange Commission’s 2014 “reform” of MMMFs is exhibit A. It almost surely makes these funds more, not less, liable to runs (see here and here). And – what a surprise – Congress seems to find protecting U.S. taxpayers from contingent liabilities (like implicit financial guarantees to your industry) less attractive than the largesse of financial lobbyists. Even the voluminous Dodd-Frank Act didn’t address MMMFs! ...
After quite a bit more, they conclude with:
As the CEO of one of the largest mutual fund companies in the world that is dedicated to serving and protecting small investors, you should be in the vanguard of advocating reforms that enhance stability.
Instead of complaining about regulation under the guise of protecting Main Street, you should highlight the vulnerabilities in our financial system and make the case for efficient regulation that treats all activities equally. You should also promote investment vehicles that are likely to prove robust in a crisis, while warning about existing products that probably won’t be.
Only greater resilience in the system can make investors confident that capital markets here and elsewhere will remain strong. That is in Vanguard’s interest, too.
Wall Street Vampires, by Paul Krugman, Commentary, NY Times: Last year the vampires of finance bought themselves a Congress. I know it’s not nice to call them that, but I have my reasons, which I’ll explain in a bit. For now, however, let’s just note that these days Wall Street, which used to split its support between the parties, overwhelmingly favors the G.O.P. And the Republicans who came to power this year are returning the favor by trying to kill Dodd-Frank, the financial reform enacted in 2010.
And why must Dodd-Frank die? Because it’s working. ...
For one thing, the Consumer Financial Protection Bureau — the brainchild of Senator Elizabeth Warren — is, by all accounts, having a major chilling effect on abusive lending practices. And early indications are that enhanced regulation of financial derivatives — which played a major role in the 2008 crisis — is having similar effects, increasing transparency and reducing the profits of middlemen.
What about the problem of ... “too big to fail”? There, too, Dodd-Frank seems to be yielding real results, in fact, more than many supporters expected. ...
All of this seems to be working: “Shadow banking,” which created bank-type risks while evading bank-type regulation, is in retreat. ...
But the vampires are fighting back.
O.K., why do I call them that? Not because they drain the economy of its lifeblood, although they do: there’s a lot of evidence that oversize, overpaid financial industries — like ours — hurt economic growth and stability. Even the International Monetary Fund agrees.
But what really makes the word apt in this context is that the enemies of reform can’t withstand sunlight. Open defenses of Wall Street’s right to go back to its old ways are hard to find. When right-wing think tanks do try to claim that regulation is a bad thing that will hurt the economy, their hearts don’t seem to be in it. ...
Republicans would love to undo Dodd-Frank, but they are, rightly, afraid of the glare of publicity that defenders of reform like Senator Warren — who inspires a remarkable amount of fear in the unrighteous — would shine on their efforts.
Does this mean that all is well on the financial front? Of course not. Dodd-Frank is much better than nothing, but far from being all we need. And the vampires are still lurking in their coffins, waiting to strike again. But things could be worse.
Robert Merton: Measuring the Connectedness of the Financial System: Implications for Systemic Risk Measurement and Management. Abstract: Macrofinancial systemic risk is an enormous issue for both governments and large asset pools. The increasing globalization of the financial system, while surely a positive for economic development and growth, does increase the potential impact of systemic risk propagation across geopolitical borders, making its control and repairing the damage caused a more complex and longer process. As we have seen, the impact of the realization of systemic risk can be devastating for entire economies. The Financial Crisis of 2008-2009 and the subsequent European Debt Crisis were centered around credit risk, particularly credit risk of financial institutions and sovereigns, and the interplay of the two. The propagation of credit risk among financial institutions and sovereigns is related to the degree of “connectedness” among them. The effective measurement of potential systemic risk exposures from credit risk may allow the realization of that risk to be avoided through policy actions. Even if it is not feasible to avoid the systemic effects, the impact of those effects on the economy may be reduced by dissemination of that information and subsequent actions to protect against those effects and to subsequently repair the damage more rapidly. This paper applies the structural credit models of finance to develop a model of systemic risk propagation among financial institutions and sovereigns. Tools for applying the model for measuring connectedness and its dynamic changes are presented using network theory and econometric techniques. Unlike other methods that require accounting or institutional positions data as inputs for determining connectedness, the approach taken here develops a reduced-form model applying only capital market data to implement it.
Thus, this model can be refreshed almost continuously with “forward-looking” data at low cost and therefore, may be more effective in identifying dynamic changes in connectedness more rapidly than the traditional models. This new research is still in progress. The basic approach and the empirical findings are encouraging and it would seem that at a minimum, this approach will provide “good” questions, if not always their answers, so that overseers and policy makers know better where to look and devote resources to discovery among the myriad of places within the global financial system. In particular, it holds promise for creating endogenously specified stress test formulations. The talk closes with some discussion of the importance of a more integrated approach to monetary, fiscal and stability policies so as to better recognize the unintended consequences of policy actions in one of these on the others.
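To give a flavor of the network-theory side of this agenda, here is a minimal sketch of measuring "connectedness" on a toy exposure network. This is my own illustration with made-up numbers, not Merton's structural credit model: it takes an assumed matrix of pairwise exposures among institutions and summarizes each node by its weighted degree and its eigenvector centrality.

```python
# Minimal network-connectedness sketch (illustrative only, not the
# reduced-form structural model described in the abstract).
import numpy as np

names = ["Bank A", "Bank B", "Bank C", "Sovereign"]
# exposures[i, j]: assumed credit exposure of institution i to j
exposures = np.array([
    [0.0, 0.3, 0.1, 0.6],
    [0.2, 0.0, 0.4, 0.4],
    [0.1, 0.5, 0.0, 0.2],
    [0.3, 0.3, 0.1, 0.0],
])

# weighted degree: total exposure to and from each node
degree = exposures.sum(axis=1) + exposures.sum(axis=0)

# eigenvector centrality on the symmetrized network: nodes linked to
# highly connected nodes score higher (leading eigenvector of sym)
sym = exposures + exposures.T
vals, vecs = np.linalg.eigh(sym)       # eigenvalues in ascending order
centrality = np.abs(vecs[:, np.argmax(vals)])
centrality /= centrality.sum()         # normalize to shares

for name, d, c in sorted(zip(names, degree, centrality), key=lambda t: -t[2]):
    print(f"{name:10s} degree={d:.2f} centrality={c:.2f}")
```

The real model instead backs out connectedness from capital market data via structural credit models, but the output has this same shape: a ranking of institutions and sovereigns by how central they are to potential risk propagation.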