Thursday, September 18, 2014

This is my short and short-attention-span review of Noah Smith's full 5-part review of Big Ideas in Macroeconomics by Karthik Athreya

Dumb luck, maybe. At the top of Noah's sidebar, a link to his review of the book Big Ideas in Macroeconomics. From its title, the book sounds like it might be a useful summary of the big ideas. It isn't. Or at least, I couldn't tell if it is, from as much as I read of Noah's post.

But he did irk me, Noah did. So here we are.

Noah writes:
Deep within my cultural memory is buried a legend - the legend of the Scholastics. The legend goes like this: At the dawn of the modern age, when European rationalists and scientists began to unleash an explosion of creativity and free thought, there were a tribe of very smart, very learned people called the Scholastics, who devoted all their mental powers to defending the old Medieval understanding of the Universe. They produced exhaustive treatises defending old dogmas, and honed their logical thinking to a fine edge, but in the end they could not stand in the way of progress and were swept away. Deep within my cultural memory lies the boyish fantasy of confronting and defeating a Scholastic in an intellectual confrontation, in the name of a new scientific revolution. The 14-year-old in me still wants to be a fictionalized, Hollywood-ized version of Descartes, Galileo, or Francis Bacon, fighting for rationality, enlightenment, etc. etc.

Yeah, I'm no history buff, but I think Noah has the timeline jumbled. The Scholastics were in the 1200s and 1300s. Occam (of Occam's Razor) and Thomas Aquinas and them. The "dawn of the modern age" came a little later, with Descartes in the 1500s, and after.

Oh, crap. Noah's Wikipedia link dates Scholasticism at "about 1100 to 1700".

Oh, yeah, but: Under "Early Scholasticism" they say Charlemagne "attracted scholars" and "By decree in AD 787, he established schools in every abbey in his empire." Wow... he set a standard that lasted more than a thousand years. Only now are we trying to think it might be better to "privatize" schools. We, in our dark-age thinking. Oh, well.

Under "High Scholasticism" they say:

The 13th and early 14th centuries are generally seen as the high period of scholasticism.

Yeah, see? The 1200s and 1300s.

Then, under "Late Scholasticism" there is nothing but a link to some other article! I was right. Noah has things jumbled. Or maybe he's just focused on the dregs of Scholasticism, the unimportant part. Sheesh.

Descartes, it turns out, was a little later than I thought: the first half of the 1600s. But what's interesting, I think, is that when Descartes decided to throw away everything he knew and start from scratch, the stuff he was throwing away was mostly Scholasticism. What he was throwing away was the stuff Wikipedia describes as "a method of critical thought" and "a program of employing that method in articulating and defending dogma".

Interesting fellow, Descartes.

So, Noah:
But anyway, this is the bias I have to overcome when thinking about Big Ideas in Macroeconomics. It definitely has the feel of a Scholastic apologia. The book is clearly intended as a response to the outside criticisms of academic macroeconomics that have proliferated since the 2008 crisis and recession. Some of the critics are bloggers like Paul Krugman or writers like John Quiggin, who criticize macro in the public sphere; others are economists like Dan Hamermesh, who had this to say in 2011:

The economics profession is not in disrepute. Macroeconomics is in disrepute. The micro stuff that people like myself and most of us do has contributed tremendously and continues to contribute. Our thoughts have had enormous influence. It just happens that macroeconomics, firstly, has been done terribly and, secondly, in terms of academic macroeconomics, these guys are absolutely useless, most of them. Ask your brother-in-law. I’m sure he thinks, as do 90% of us, that most of what the macro guys do in academia is just worthless rubbish. Worthless, useless, uninteresting rubbish, catering to a very few people in their own little cliques.


A good number of people within the macro field agree, of course. But not all...

... Athreya seems to believe that most of macro's critics just don't know enough about the field...

But does this mean "outside" criticisms of macro should be discarded, just because they come from outside? Athreya does acknowledge, two pages later, that this process can lead to "capture" of critics - if the only legitimate critics of something are people who do it for a living, then the set of potential legitimate critics is pre-selected for people who will not want to be critics. (Athreya fails to mention a second, more cynical kind of "capture", which is that internal critics of a field are automatically "pissing where they sleep".)

This matters for society, because they're the ones who pay macroeconomists' salaries. Granted, it's not a large burden - if there are 3000 macroeconomics profs and govt. researchers in America and they earn an average of $150,000 each, then that's less than half a billion dollars total each year (the cost of two or three F-35s), for a field that has a shot at preventing trillions of dollars in lost output.

It's all gossip. Sure, in the end it all comes down to money. Of course. But Noah's discussion is not about the economy. It's about personalities and disagreements and zingers. It's all gossip.

It's the money that's important. The people who have money know it. The people who don't have money know it. So, why is Noah writing about economics? Why isn't he writing about the economy?

Noah, you need to throw away everything you know, and start from scratch. Don't worry, it'll be no great loss. You'll be giving up nothing except a method of critical thought and a program of employing that method in articulating and defending dogma. Throw it away, Noah.

It's what you wanted since you were 14.

Wednesday, September 17, 2014

Not perfect, but it's close.

The red line on this graph shows Real GDP, percent change in Real GDP, actually, shifted down one and a half percentage points:

Suppose you wanted to draw a trend line through the red line, a smoothed, gradually sloping line that provides some idea of how the "average" value of Real GDP had changed over time.

I think it would look a lot like the blue line on the graph.

The blue line is a measure of nominal GDP relative to total credit market debt. The downtrend in real economic growth looks like a result of accumulating debt.
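The arithmetic behind the two lines is simple; here's a minimal pandas sketch with made-up numbers standing in for the actual series (the real versions live at FRED):

```python
import pandas as pd

# Toy annual data (hypothetical numbers, not actual FRED values):
# real GDP, nominal GDP, and total credit market debt.
years = list(range(2000, 2006))
real_gdp = pd.Series([100.0, 103.0, 106.0, 108.0, 111.0, 113.0], index=years)
nominal_gdp = pd.Series([100.0, 105.0, 110.0, 115.0, 120.0, 125.0], index=years)
total_debt = pd.Series([180.0, 195.0, 212.0, 230.0, 250.0, 272.0], index=years)

# The red line: percent change in real GDP, shifted down 1.5 points.
red_line = real_gdp.pct_change() * 100 - 1.5

# The blue line: nominal GDP relative to total credit market debt.
blue_line = nominal_gdp / total_debt
```

With debt growing faster than nominal GDP, the blue line drifts down, just as on the graph.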

Tuesday, September 16, 2014

It didn't reduce private debt

From Sumner's post to Glasner...

David Glasner:

In a monetary economy suffering from debt deflation, one would certainly want to use monetary policy to alleviate the debt burden, but using monetary policy to alleviate the debt burden is different from using monetary policy to eliminate an excess demand for money.

I get this.

I don't really know what the "demand for money" is. I think maybe that's the problem Quantitative Easing was designed to solve. I don't know. But I do know that Quantitative Easing did nothing to alleviate the debt burden.

Scott Sumner:

As far as I know the demand for money is usually defined as either M/P or the Cambridge K. In either case, a debt crisis might raise the demand for money, and cause a recession if the supply of money is fixed. Or the Fed could adjust the supply of money to offset the change in the demand for money, and this would prevent any change in AD, P, and NGDP.

I don't get this.

Or maybe I do. A debt crisis might raise the demand for money, because creditors worry about loans going bad. So they lend less. They hang on to more of their money. Their "demand for money" goes up.

So then, Sumner is saying that if there is a debt crisis and the demand for money goes up, then to avoid a recession the Fed should "adjust the supply of money" to compensate for the increased demand for money. But isn't that what the Fed was doing for the last five years? Or trying to do?

It didn't work. But even if it *could* work, it wouldn't alleviate the debt burden.

Actually, that's why the Fed's plan didn't work: It didn't reduce private debt.

So I think Glasner has it right, and Sumner has it wrong.

Monday, September 15, 2014

Prime less inflation

Showed this Quandl graph the other day, the Real interest rate:

Graph #1: Real Interest Rate
SOURCE: Quandl
As I discovered when I tried to duplicate their graph at FRED, they don't specify the interest rate used in their calculation. I took a shot and guessed the prime rate. For inflation I subtracted the GDP Deflator, which they specified; I used percent change from year ago of the Deflator, which they didn't specify.

Then, as my graph turned out much more jiggy than theirs, I specified annual frequency for the data on my FRED graph:

Graph #2: The Prime Rate less Inflation, Annual Data
That's a pretty good match for the Quandl data. Theirs starts in the early 1960s, and lacks the rounded tops the FRED graph shows before 1960. And my FRED numbers may be a little higher -- but not much. Anyway the trends are similar, and that's the main thing.

Here's the jiggy one:

Graph #3: The Prime Rate less Inflation, Quarterly Data
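For what it's worth, here's the calculation as a pandas sketch, with invented numbers in place of the actual FRED data:

```python
import pandas as pd

# Quarterly toy data (hypothetical, not actual FRED values):
# the prime rate and the GDP deflator index.
idx = pd.period_range("2000Q1", periods=8, freq="Q")
prime = pd.Series([9.0, 9.25, 9.5, 9.5, 9.0, 8.75, 8.5, 8.0], index=idx)
deflator = pd.Series([100.0, 100.5, 101.0, 101.5,
                      102.0, 102.6, 103.1, 103.7], index=idx)

# Inflation as percent change from a year ago (4 quarters back).
inflation = deflator.pct_change(4) * 100

# The quarterly ("jiggy") real rate, then annual averages to smooth it.
real_rate_q = prime - inflation
real_rate_a = real_rate_q.groupby(real_rate_q.index.year).mean()
```

The annual averaging is what takes the jiggy quarterly line down to something like the Quandl version.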

Sunday, September 14, 2014

Hoodwinked by the indexing

Back in February I came across this graph at The Current Moment:

Graph #1
Also this one, which they called "Doug Henwood's Graph":

Graph #2
Both graphs show productivity. The one graph compares productivity to compensation, the other compares it to wages. Both wages and compensation lag productivity. But compensation increases a lot more than wages, because compensation (as the source site explains) is wages plus benefits.

I find it useful when people provide background information like that. But that's not what caught my eye. It's the difference in separation points that caught my eye: On Graph #1 the "wages" line breaks away from the productivity line suddenly, just around 1974. On Graph #2 the "compensation" line breaks away from the productivity line gradually, but also much earlier -- possibly as early as 1960. This is the kind of thing that fascinates me.

I did notice that the first graph is indexed "relative to 1970" while the second is "rebased to 1960". In other words, the lines on Graph #1 cross at 1970 and the lines on Graph #2 cross at 1960. The indexing (or the choice of a "base year") influences the location of the break-away point on the graphs, and affects the impression the graphs give us. I think we are hoodwinked by the indexing.
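Rebasing is a one-liner, and a toy example shows how the base year moves the crossing point around:

```python
import pandas as pd

def rebase(series, base_year):
    """Re-index a series so its value in base_year equals 100."""
    return series / series[base_year] * 100

# Toy productivity and wage indexes (hypothetical numbers).
years = [1950, 1956, 1970, 1980]
productivity = pd.Series([80.0, 90.0, 120.0, 150.0], index=years)
wages = pd.Series([82.0, 90.0, 112.0, 125.0], index=years)

# Indexed to 1970 the lines cross at 1970; indexed to 1956 they
# cross at 1956, and wages visibly lag everywhere after that.
p70, w70 = rebase(productivity, 1970), rebase(wages, 1970)
p56, w56 = rebase(productivity, 1956), rebase(wages, 1956)
```

Same data, different base year, different-looking break-away point. That's the hoodwink.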

On Graph #1 in particular, between 1955 and 1975, it's pretty clear that the orange line is going up faster than the red line. If you could take the whole red line and just move it down a little bit, you could make the two lines touch at just one point. That point would be about 1956. And if you looked at that graph, you would see real wages falling behind productivity since 1956.

You should find that disturbing.

The second graph, Henwood's graph, seems to show compensation falling behind productivity since about 1960. But it doesn't show earlier data, so it leaves the door open. 1956 is not shown to be wrong.

I followed the Current Moment link back to Doug Henwood's and liked what I found there. I emailed Henwood:
Hi. I recently came upon an old page (2001) at Left Business Observer

I'd like to use your "productivity and compensation" graph on my blog. The page says I should get permission first, so I'm asking.

All the similar graphs that I've seen focus on the separation beginning mid-1970s. Your graph shows the separation beginning much earlier. It was eye-opening for me.

"Heavens," Henwood replied. "I have much more recent versions of that - let me find one for you tomorrow." He did, too:

Graph #3
"Here's the latest," he wrote last February. "Haven't updated it since November, but it's through the third quarter of last year. 'Compensation' includes fringe benefits - since much of that is health insurance, much of the real value is eaten up by medical inflation. Direct pay is pay without fringe benefits. All are inflation-adjusted and indexed so the base year = 100."

The three lines on this graph combine the three different series shown on the first two graphs above. In addition, on Graph #3 the series are indexed to 1964. And -- don't you know it! -- the break-away point this time looks to be 1964, or before.

A little over a week ago -- it seems much longer -- I was reading a discussion at Reddit. The topic was "What might actually be holding back workers’ wages".

Somebody blamed Reaganomics. One guy, I'll call him Joe, rejected that idea: "Your idiotic blaming of Reaganomics would be dependent on Reaganomics traveling back in time a decade and starting the trend in the mid 70's," Joe said.

I love it. That's one of my themes: You can't just blame the guy you don't like, especially if the things you're blaming him for happened before his turn at bat. Reminds me of a Mike Kimel quote that Jazzbumpa has in his sidebar:

#15 Time moves in a single direction.

No doubt.

Anyway, Joe provided a link to a "productivity and real wages" graph -- the same graph from The Current Moment that I have as Graph #1 above. Small world.

I complimented Joe on the graph. But then I went off-topic, so much that Joe had to disagree with me. I pointed out that the graph is indexed "relative to 1970". I said: "If the graph was indexed relative to 1956 we would see the slowing of wages begin in the mid-1950s and then accelerate in the mid-1970s."

Joe replied: "That is not how the graph would change changing the index year at all. Here is one indexed to 1947. There is still a trend starting at 1970 where wages and output are decoupled."

He showed this graph:

Graph #4:

Yeah, the red and blue lines are comparable to those in the graphs above. And yeah, the base year is 1947 this time.

But this graph has a third line -- the purple line -- that shows the ratio of the red and blue. Nice! I was going to get to showing a ratio.

The purple line shows how productivity ("hourly output") is changing relative to wages. And if you look at the purple line you can see it starts going up around 1956 or 1957. That means productivity started gaining on wages around 1956 or 1957.

Wages started falling behind productivity around 1956 or 1957. So if you want to blame Reaganomics... or if you want to blame Jimmy Carter or Gerald Ford or Richard Nixon or Lyndon Johnson -- or John F. Kennedy for that matter -- to do it you will have to make time go in the wrong direction. You'll be breaking Rule #15.
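Here's the thing about the ratio: unlike the indexed lines, it doesn't care what base year you pick. A toy sketch, with made-up numbers:

```python
import pandas as pd

# Toy indexes (hypothetical numbers): flat until 1956, diverging after.
years = [1950, 1956, 1960, 1970]
productivity = pd.Series([88.0, 90.0, 100.0, 120.0], index=years)
wages = pd.Series([88.0, 90.0, 98.0, 112.0], index=years)

# The purple-line idea: productivity relative to wages.
ratio = productivity / wages

# Rebasing both series only scales the ratio by a constant, so the
# year the ratio starts rising does not depend on the base year.
for base in years:
    rebased = (productivity / productivity[base]) / (wages / wages[base])
    assert ((rebased / rebased[1950]) - (ratio / ratio[1950])).abs().max() < 1e-12
```

Whatever year the graph is indexed to, the ratio turns up at the same date. That's why the purple line is the honest one.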

The failure of wages and compensation to keep up with productivity is a problem that began in the 1950s.

Yesterday, at The State of Working America I found this graph from the Economic Policy Institute:

It is similar to the graphs above. The dark blue "productivity" line goes up faster than the light blue "compensation" line since the mid-1950s. But the indexing has the two lines tangled together so you don't notice compensation falling behind until the mid-1970s.

This graph is important because it comes with this data. (Excel XLSX, 37KB) So of course I took the file, uploaded it to Zoho, and customized it for on-line use.

Change the 4-digit year value in the yellow cell to "rebase" the two lines to the year of your choice. The graph runs from 1948 to 2013. You can use any year in that range. 1956 is about right.

Saturday, September 13, 2014

He's kidding, right?

Benoît Cœuré of the ECB asks:

"Why is the effect of finance on growth non-linear?"

He's kidding, right?

You have to take the verb "finance" and consider what it means. It means to borrow money. Borrowing money is good for economic growth. A dollar borrowed is a dollar spent. The numbers go up. You might even want to conclude there is a one-for-one relation between borrowing and spending, or between borrowing and growth. But it's a bit early yet for conclusions.

Borrowing money is good for economic growth. But when you borrow a dollar, you create a dollar of debt. (Think of debt as the record of money borrowed.) Debt has to be repaid. And just as borrowing puts money into the economy, repayment of debt takes money out. You know this. Even Benoît Cœuré knows this.

So, borrowing money is good for growth. But paying down debt is bad for growth. It's yin and yang. It's shadow and light. It's the two sides of the coin.

The effect of finance on growth goes something like this: The change in growth is equal to the amount of new borrowing, less principal repaid, less interest paid, plus interest withdrawn and spent into the economy.

And then you have to factor in the fact that spending recycles income, and the fact that not all spending contributes to growth, and the fact that debt accumulates.

It becomes too complicated for me to put a number on it. But you can easily see it's not a simple linear relation.
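If you want it as arithmetic, here's a toy bookkeeping version of that sentence. The numbers are invented; the point is only the shape of the relation:

```python
def growth_effect(new_borrowing, principal_repaid, interest_paid, interest_respent):
    """The post's rough accounting: the change in growth equals new
    borrowing, less principal repaid, less interest paid, plus interest
    withdrawn and spent back into the economy."""
    return new_borrowing - principal_repaid - interest_paid + interest_respent

# Early on, little debt has accumulated, so debt service is small and
# a dollar of borrowing adds nearly a dollar of spending.
early = growth_effect(100.0, 10.0, 5.0, 2.0)

# Later, with a large accumulated debt, the same borrowing is mostly
# eaten up by principal and interest on the old debt.
late = growth_effect(100.0, 70.0, 40.0, 10.0)
```

Same hundred dollars of new borrowing, very different effect on growth. The accumulation of debt is what bends the line.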

I think you already knew this. But maybe Benoît Cœuré didn't know.

The linear, one-for-one relation between borrowing and economic growth is made non-linear by the cost of debt.

I think that answers Benoît Cœuré's question.

Friday, September 12, 2014

The Bankers' Solution

Notes on the speech by Benoît Cœuré of the ECB.
The crisis has made us understand that the size of the financial sector can exacerbate the trade-off between economic efficiency and financial stability. While finance per se is necessary for growth, an oversized financial industry can be detrimental to real economic activity. Of course, the question of what constitutes an “oversized” financial sector is a complex one...

I have written:

Would you wait until adding to total debt decreases GDP to say there is a harmful effect? Or would you say what I say: If adding to total debt produces a smaller increase in GDP than it formerly did, then harm has already been done.

The question is really not complex. If the next dollar of debt increases GDP less than the previous dollar did, then you are on the wrong side of the Laffer Limit:

If you watch an economy using credit for growth, using more credit increases growth up to a point; beyond the Laffer Limit it starts ruining the economy.
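You can put the test in a few lines of code. Toy numbers, but they show what "the wrong side of the Laffer Limit" looks like in the data:

```python
# Toy annual figures (hypothetical): total debt and GDP.
debt = [100.0, 120.0, 145.0, 175.0, 215.0]
gdp = [100.0, 112.0, 124.0, 134.0, 142.0]

# Marginal effect: extra GDP per extra dollar of debt, year over year.
marginal = [(gdp[i] - gdp[i - 1]) / (debt[i] - debt[i - 1])
            for i in range(1, len(gdp))]

# By the test above, harm begins as soon as the marginal effect falls,
# not only when it finally goes negative.
past_the_limit = [marginal[i] < marginal[i - 1]
                  for i in range(1, len(marginal))]
```

In this toy run every new dollar of debt buys less GDP than the one before it, so every year is on the wrong side of the limit, even though GDP is still growing.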

Benoît Cœuré:
As regards the link between finance and growth, for a long time, it was common to say that finance affects growth in a positive, monotonic way, as if more is always better. Recent empirical evidence, however, has qualified this conventional wisdom. The analysis has now been refined to show that, beyond a threshold size, the effect of finance on long-term economic growth can weaken and even become negative.


Why is the effect of finance on growth non-linear?

Wrong question.

Cœuré, the banker:
In the wake of the financial crisis, the global regulatory architecture has evolved to meet the challenge posed by the financial sector’s potential to generate economic distress. For one, we have proceeded towards a more integrated governance of the banking sector in Europe. The first components of a genuine banking union – the Single Supervisory Mechanism and the Single Resolution Mechanism – are already being implemented. By aiming to make the banking activities conducted in Europe safer, the banking union implicitly touches on some of the aspects of “oversized finance” in general, and “overbanking” in particular...

We believe that we have learned our lesson...

The academic literature, as well as our own experience in the crisis, has proven that the size of the financial sector can exacerbate the trade-off between economic efficiency and financial stability. I hope that the regulatory reforms that are now being enacted and the recent changes in the European supervisory architecture, combined with the renewed push for a single market for capital in Europe, will go some way in addressing this trade-off.


There is no "trade-off between economic efficiency and financial stability". If Benoît Cœuré thinks there is, then Benoît Cœuré fails to understand the Laffer Limit.

The banker's solution makes finance bigger. My solution makes finance smaller.

We must not allow bankers to design the future.

Wednesday, September 10, 2014

A fly in the ointment

You know the economy is screwed up. That's why you're here. It's what you do on the Internet, think about the economy. Me, too.

You know we need to fix it. But everything has an economic component, a money component. So the conversation can go anywhere. Makes it hard to focus. Makes it hard to know what's relevant. That's unfortunate.

You know our world is full of well-written explanations and well-accepted memes, not all of which are correct. Not all of which can be correct. So it goes.

Maybe you are reading Frances Coppola's Ultra-Liquidity at Pieria. Well-written, certainly seems right.


All assets can be regarded as “money” to a greater or lesser extent: the extent to which assets have “moneyness” is really a matter of liquidity.

Sure. The concept of "near money" isn't new. But I guess Coppola's point is that liquidity is different now. Liquidity is "ultra".

She gives some excellent examples, beginning with her stay at Lindau. Using a VISA debit card she could "pay for a meal in Euros directly from my sterling account":

Because of technological changes, what was formerly an illiquid asset in Germany (sterling) has become transparently liquid.

Coppola says we could easily do the same thing with "a smartcard or a smartphone app". This helps me understand a little of what Steve Roth has been writing about.

Me? I don't use smartcards or smartphone apps or a VISA debit card.

It was interesting, I thought, that Coppola said this:

... suppose that instead of a sterling bank account, a smartcard or a smartphone app enabled me to pay a bill in Euros directly from my holdings of UK gilts? This is not as unlikely as it sounds. It would actually be two transactions...

Yeah: It's two transactions. One of them pays for dinner. The other pays for the currency swap: a financial transaction. Coppola's significant fact is that "This pair of transactions in today's liquid markets could be done instantaneously."

My significant fact is that financial transactions increase costs.

Coppola makes a good point:

ANYTHING that can be used to settle transactions really should be regarded as money: and as technology increases liquidity in all asset classes, so they become easier to use for transaction settlement and therefore more money-like.

Okay. This is a significant change: We now have technological liquidity. Like gunpowder and the bomb, it probably won't be going away.

But it doesn't come free, either. And the more financial transactions we participate in, the greater the cost of finance. This is true, for any given rate of interest. It is true, even if the transactions occur "instantaneously".
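A trivial sketch of the point, with an invented fee:

```python
# Toy illustration (invented numbers): each financial transaction
# carries some cost -- a fee or a dealer's spread -- even if it
# settles instantly.
fee_per_transaction = 0.25  # hypothetical cost in currency units

def cost_of_finance(n_transactions):
    """Total financial cost scales with the number of transactions,
    regardless of how fast each one settles."""
    return n_transactions * fee_per_transaction

# Paying from a gilt holding is two transactions (the asset sale plus
# the payment), so it costs twice what a one-transaction payment does.
two_step = cost_of_finance(2)
one_step = cost_of_finance(1)
```

Speed changes when the cost is paid, not whether it is paid.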

I see financial cost as a problem. My solution is the same, always: reduce the size of finance. Rely more on government-issue money and less on bank-issue.

The cost of finance doesn't seem to cross Frances Coppola's mind.

The article is about the increase in the liquidity of non-monetary financial assets, a change that has turned them into money. But as Coppola points out, ultra-liquidity is not always reliable:

CDOs were highly liquid near-money assets prior to the financial crisis. Now, they are about as liquid as flies in amber, and worth considerably less.

Again, Coppola:

No wonder yield-hunters go for intrinsically illiquid assets such as property, or assets that carry significant credit risk such as junk bonds. They can't get any sort of return on safer and/or more liquid assets. And the more technology improves market liquidity – and the more regulators push for improvements to market liquidity – the lower returns will be across all classes of financial asset.

And the more regulators push for improvements to market liquidity, she says.

It is policy to "push for improvements" in liquidity. By "improvements" Coppola means "increases" in liquidity. Policy increases liquidity, Coppola says, even though it drives investors into junk.

In the conclusion Coppola writes:

Above all, though, we need to rethink how we do monetary policy... Liquidity is becoming to all intents and purposes free. The question for policy makers now is how to influence the returns earned on illiquid investments such as property.

In other words, she accepts this brave new world and wants to find a way to deal with it. I think that's the wrong approach. I think Coppola, like the regulators, is heading for the wrong goal. We want to minimize finance, not maximize it.

Tuesday, September 9, 2014

The limits of limits

It's four years old, but I am fascinated by Thomas Palley's The Limits of Minsky’s Financial Instability Hypothesis as an Explanation of the Crisis. Minsky makes a good point, Palley says, but "his theory only provides a partial and incomplete account of the current crisis."

I'm still reading, but according to Palley, the view of Tim Geithner and Larry Summers -- the Treasury Secretary and the President's chief economic counselor when the piece was written -- was that "financial excess was the only problem, and normal growth will return once that problem is remedied." Palley disagrees, and pushes the "roots of the crisis" back to the late 1970s and early '80s:

By giving free rein to the Minsky mechanisms of financial innovation, financial deregulation, regulatory escape, and increased appetite for financial risk, policymakers (like former Federal Reserve Chairman Alan Greenspan) extended the life of the neoliberal model. The sting in the tail was that this made the crisis deeper and more abrupt when financial markets eventually reached their limits. Without financial innovation and financial deregulation the neoliberal model would have got stuck in stagnation a decade earlier, but it would have been stagnation without the pyrotechnics of financial crisis.

When Palley goes back in time to the root of the crisis, he finds people doing what Minsky said they'd be doing. So I haven't figured out why Palley says Minsky's theory provides only a partial explanation. But like I said, I'm still reading.

"The sting in the tail", Palley says, was that the long duration of the policy made the crisis worse "when financial markets eventually reached their limits." Sure: Preventing small forest fires makes the big ones bigger. But I have trouble with the words "reached their limits".

Palley doesn't say it, but to me the word "limits" implies we might somehow be able to calculate those limits, and that way avoid a crisis the next time. This brings to mind a calculation by Richard Vague, recently referenced by Steve Keen:

This recovery is starting from an unprecedented level of private debt: whereas the last post-recession recovery in America began from a debt level of 115 per cent of GDP, this one is commencing from 155 per cent (see Figure 2). If it only reaches the level of the previous peak (177 per cent) in the next five years, it will have fulfilled Richard Vague’s empirical rule of thumb for the cause of an economic crisis (a private debt to GDP ratio above 150 per cent, and an 18 per cent increase in that ratio over 5 or less years).
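Vague's rule of thumb is easy enough to write down as a check. Note I'm reading the "18 per cent increase" as 18 percentage points, which may not be the intended reading, and the sample paths here are hypothetical:

```python
def vague_rule_triggered(debt_to_gdp):
    """Richard Vague's empirical rule of thumb, as quoted by Keen:
    a private debt to GDP ratio above 150 percent, plus an 18 point
    rise in that ratio over 5 or fewer years. Takes a list of annual
    ratios, in percent, and reports whether the rule ever trips."""
    for i, level in enumerate(debt_to_gdp):
        if level <= 150:
            continue
        # Look back up to 5 years for an 18-point rise.
        for j in range(max(0, i - 5), i):
            if level - debt_to_gdp[j] >= 18:
                return True
    return False

# Hypothetical paths, not actual data:
fast_rise = vague_rule_triggered([140, 148, 155, 163, 170])
slow_high = vague_rule_triggered([152, 153, 154, 155, 156])
```

Notice the second path sits above 150 the whole time and never trips the rule. That's the trouble with magic numbers: the rule is really about speed and fragility, not a line on the pavement.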

I think it conveys the wrong idea to say a crisis occurs when markets reach their limits. It implies there is a number, 18 or 5 or maybe 42, a magic number that when you reach it all hell breaks loose.

I didn't read Minsky, so maybe I have this wrong. (If you think I'm wrong, quote Minsky to me. Don't quote somebody else.) But here's what I'm pretty sure Minsky said: Financial instability increases.

Until what Thomas Palley calls Minsky's "thwarting institutions" are restored, instability increases. As instability increases, the economy becomes increasingly fragile. Eventually, the weight of a feather is enough to bring it down. An analogy that comes to mind: a bicycle going slower and slower, wobbling and righting itself once, wobbling again, then falling over in a sudden crash.

If the bicycle kept going fast enough, it might have stayed upright. If the economy didn't grow increasingly fragile, it might have withstood the shock that brought it down. It's not like there's a limit-line on the pavement and the bicycle will fall when it reaches the limit.

I think a shock that would scarcely be noticed in a strong economy is enough to topple a weak one. You don't calculate the "limits" at which an economy comes crashing down. You figure out what's making the economy weak, and you calculate how best to strengthen it.