Tuesday, September 27, 2016

Firming up my best guess

Yesterday's conclusion:

So what does this tell me? If I remove even one piece of data from the Bulge 3 calculation, the predicted date of recession could change substantially. Or if I do this calculation again after the next release of data, the predicted recession date could change substantially. I guess nothing can be done about that.

After I said that, I went on to guess what my graph would look like if I changed it. I made a guess about how soon the graph would predict recession... uh, I mean a guess about the soonest a recession could be predicted to occur based on changing the selection of data on which the prediction is based. If you get my drift.

I don't like guessing. So again I will start with Graph #6 from Sunday's post.

Graph #1: Predicting the Closing of the Current Bulge
I'll start by eliminating the blue line and chopping off everything before 2007 on the red line.

Graph #2: The Current Bulge
Now, that's a pretty bulge.

The solid line shows Federal debt data from FRED (the series GFDEBTN) indexed to 1980 Q1, with each year's average of indexed Federal and Private debt subtracted out. It's a simple calculation, but I haven't been looking at it long enough yet to describe it easily in words.
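The post's work was done in a spreadsheet, but the calculation as I read it can be sketched in a few lines of Python. Everything here is an assumption for illustration: the numbers are made up, the post names only GFDEBTN (not the private-debt series), and "each year's average" is my guess at how the averaging works.

```python
# Hedged sketch of the bulge calculation: index both debt series to 100.0
# at the first quarter, then for each year subtract that year's average of
# the two indexed series from the indexed Federal series.
# All values below are made up for illustration.
federal = [863.5, 877.0, 908.7, 930.2, 964.5, 977.3, 998.8, 1028.7]   # hypothetical
private = [3600.0, 3650.0, 3700.0, 3760.0, 3810.0, 3860.0, 3920.0, 3990.0]  # hypothetical

def index_to_first(series):
    """Index a series so its first observation equals 100.0."""
    return [100.0 * v / series[0] for v in series]

fed_i = index_to_first(federal)
priv_i = index_to_first(private)

bulge = []
for year_start in range(0, len(fed_i), 4):          # 4 quarters per year
    f_yr = fed_i[year_start:year_start + 4]
    p_yr = priv_i[year_start:year_start + 4]
    # that year's average of the indexed Federal and Private values
    yr_avg = (sum(f_yr) + sum(p_yr)) / (len(f_yr) + len(p_yr))
    bulge.extend(f - yr_avg for f in f_yr)
```

In a spreadsheet this is just an INDEX column for each series, a yearly AVERAGE of the two, and a subtraction; the sketch above only spells out the same steps.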

No matter. What I want to do today is show a bunch of trend lines for different subsets of the data and see how they compare to the dashed red line. If they curl down faster than the dashed line, it indicates the recession will occur sooner than the date I predict. If they curl down more slowly, it suggests the recession will occur later.

I anticipate that recession will occur where the dashed red line crosses the 100.0 level. That's about where the solid red line was the last time a recession started. So when we look at the trend lines I'll add, we should look at the 100.0 level, look straight down from there, and get the date from the x axis.
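Reading the date off the graph can also be done algebraically. A second-order polynomial trend is y = ax² + bx + c, so the crossing of the 100.0 level is just a quadratic root. The coefficients below are hypothetical, and mapping x back to a quarter assumes x counts quarters from 2007Q1; neither comes from the post.

```python
import math

# Made-up second-order trend coefficients, purely for illustration.
a, b, c = -0.05, 0.8, 108.0
target = 100.0

# Solve a*x^2 + b*x + (c - target) = 0 and take the later root,
# which is the downward crossing when a < 0.
disc = b * b - 4 * a * (c - target)
x_cross = (-b - math.sqrt(disc)) / (2 * a)

def x_to_quarter(x, base_year=2007, base_q=1):
    """Convert a quarterly x index (0 = base quarter) to 'YYYYQn'."""
    q = (base_q - 1) + int(round(x))
    return f"{base_year + q // 4}Q{q % 4 + 1}"
```

With these made-up coefficients, `x_to_quarter(x_cross)` names the quarter where the trend reaches 100.0, which is the "look straight down and read the x axis" step done in arithmetic.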


I'm not going to start by showing the trend lines. I'm going to start by looking at R-Squared values for different subsets of the solid red line. I'll get the value for all the data shown (2007Q1 to 2016Q1), then drop the first data item and get the value again, then drop the next data item, and repeat. My goal is to get both the highest R-Squared and the most data. But R-Squared is my higher priority.
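The drop-the-first-point R-Squared search described above is easy to mimic outside the spreadsheet. This sketch uses made-up data (an exact quadratic with a distorted first point) and assumes the R-Squared values come from a second-order polynomial fit, since that is the trend type used later in the post.

```python
# For each candidate start index, fit a quadratic trend to the remaining
# data and record R-squared. Data here are made up: an exact quadratic
# whose first observation has been pushed off-trend.
ys = [110 + 2 * x - 0.3 * x * x for x in range(12)]
ys[0] += 5.0                      # distort the first data item
xs = list(range(len(ys)))

def fit_quadratic(xs, ys):
    """Least-squares y = a*x^2 + b*x + c via 3x3 normal equations (Cramer's rule)."""
    n = len(xs)
    Sx = sum(xs); Sx2 = sum(x * x for x in xs)
    Sx3 = sum(x ** 3 for x in xs); Sx4 = sum(x ** 4 for x in xs)
    Sy = sum(ys)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    Sx2y = sum(x * x * y for x, y in zip(xs, ys))
    m = [[Sx4, Sx3, Sx2], [Sx3, Sx2, Sx], [Sx2, Sx, n]]
    v = [Sx2y, Sxy, Sy]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(m)
    coefs = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        coefs.append(det(mi) / d)
    return tuple(coefs)            # (a, b, c)

def r_squared(xs, ys, coef):
    a, b, c = coef
    pred = [a * x * x + b * x + c for x in xs]
    mean = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Drop the first data item, refit, repeat.
r2_by_start = []
for start in range(0, len(ys) - 4):
    coef = fit_quadratic(xs[start:], ys[start:])
    r2_by_start.append((start, r_squared(xs[start:], ys[start:], coef)))
```

The winner is the start index with the highest R-Squared; when values are effectively tied, the earlier start keeps more data, matching the stated priorities.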

I'm writing the R-Squared values in the spreadsheet linked below.

R-Squared reaches a peak of 0.99504834 when the data starts at 2008 Q1. So I will use 2008 Q1 as the start-date for the data on which the trend lines are based.

Graph #3: Finding the Start-Point for Trend Data Selection
Oh, just for the record: The title of this graph speaks of finding the start-point. That's my intent. But I just do this for a hobby. I think I have a good feel for what I'm doing. But if I have something wrong and you have a suggestion you think I can understand, do let me know.

Now that I've got a starting point, I can select a bunch of different end-points and use these different subsets of the data to create different polynomial trend lines. There should be a lot of them when I'm done, and it would be too messy to identify each one. So I won't identify them on the graph. (You can pick thru the spreadsheet if you want.)

My objective is to get an impression from the several trend lines I'll create. I'll call this impression my best guess of what the trend line should be. That best guess will give me a start-date for our next recession. That's what I want to see.


Just reading this over. It's amazing how many times I say I "start":

  •  I will start with Graph #6 ...
  •  I'll start by eliminating the blue line ...
  •  I'm going to start by looking at R-Squared values ...

Now I'll start with the last available data (2016 Q1) and use the 2008Q1-2016Q1 period as the base data for my trend line. Then I'll drop the last data used, giving me a different subset, and add a trend line for that one. And I'll keep repeating the process until I get tired of it. Or till I discover something interesting.
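The end-date loop just described (and the little VBA mentioned below) can be sketched in Python. The data here are made up — an exact quadratic standing in for the 33 quarters of 2008Q1 through 2016Q1 — so the only point of the sketch is the mechanics: shrink the window from the right, refit a second-order polynomial each time, keep the coefficients.

```python
# Made-up stand-in for the 2008Q1-2016Q1 bulge data (33 quarters):
# an exact quadratic, so every fit should recover the same coefficients.
ys = [120 - 0.1 * x - 0.02 * x * x for x in range(33)]

def fit_quadratic(xs, ys):
    """Least-squares y = a*x^2 + b*x + c via 3x3 normal equations (Cramer's rule)."""
    n = len(xs)
    Sx = sum(xs); Sx2 = sum(x * x for x in xs)
    Sx3 = sum(x ** 3 for x in xs); Sx4 = sum(x ** 4 for x in xs)
    Sy = sum(ys)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    Sx2y = sum(x * x * y for x, y in zip(xs, ys))
    m = [[Sx4, Sx3, Sx2], [Sx3, Sx2, Sx], [Sx2, Sx, n]]
    v = [Sx2y, Sxy, Sy]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(m)
    coefs = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        coefs.append(det(mi) / d)
    return tuple(coefs)            # (a, b, c)

# Start with the full window, then drop the last data point and refit.
trends = []
for end in range(len(ys), len(ys) - 15, -1):
    trends.append(fit_quadratic(list(range(end)), ys[:end]))
```

Each tuple in `trends` is one trend line; on real data the coefficients would drift as the end-date moves, and comparing them is the whole exercise.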


Okay. Wrote a little VBA to delete and add trend lines for me, and make them second order polynomials. Here's the first bunch of trends I came up with:

Graph #4: Trendlines Based on 2008Q1 Start Date and 2011Q1 to 2016Q1 End Dates
Damn, I'm good. There's one trendline gone wild, like a hair that won't stay combed. The rest are all pretty well clustered around the faint gray line that used to be my dashed red prediction.

The wild hair is the trendline for the series that ends with 2011 Q1. As you can see on Graph #2, 2011Q1 is on the early part of the line labeled "Public", maybe just before the jiggies start. Continuing this trend line out till it reaches down to the 100.0 level would bring us to our next recession some time around 2050.

I know we'll have another recession long before 2050, so I'm going to throw the wild hair away.

Working our way down the right edge of the graph from that wild hair, the trend lines we come to (in order) are: 2011Q2, 2012Q1, and 2012Q2. After that they get pretty dense. I'm going to delete the six earliest-ending data series (2011Q1 thru 2012Q2) and look at what's left.

Hey, it looks better already. I also want to start the low value of the vertical axis at 100.0 because, when the trend line gets to that level, that's when I think the recession starts.

Plus I put some tick-marks on the x axis:

Graph #5: Trendlines Based on 2008Q1 Start Date and 2012Q3 to 2016Q1 End Dates
Pretty neat. Assuming that the recession starts when the trend line hits the 100.0 level, our next recession should start somewhere between 2022Q2 and 2024Q2. Looks like most of the trend lines fall in the early half of that time period, so I'd venture the second half of 2022 or the first half of 2023 for the start of recession.

Eh, I'll be dead by then.

// the Excel file
