538 Article on Departing Starters / Performance - Michigan Focus

Submitted by EastCoast Esq. on

Interesting article over at 538 about the correlation between a college football team losing starters and seeing a drop in its offense/defense. The metric the author uses is VERY rough, but it's still worth a read.

https://fivethirtyeight.com/features/michigans-lineup-was-gutted-how-much-will-it-matter/

EDIT: I should mention that I emailed the author and pointed out that not all back-ups are created equal; that Mo Hurst, Rashan Gary, and Chase Winovich all saw significant snaps. He wasn't defensive at all and said I had a good point.

LKLIII

August 31st, 2017 at 4:31 PM ^

Good read but very rough metric.  Rather than a generic "starter" designation, I think a better one would be the % of snaps returning at each position.  Or, even more exacting, the % of non-garbage-time snaps returning.

Farnn

August 31st, 2017 at 4:36 PM ^

They actually came closer than most, and were given a lot of crap prior to the election that they were way off by giving Trump a 33% chance.  And they had to keep reminding everyone that the election wasn't the slam dunk everyone was saying it would be, that there was a legit chance of Trump winning.

Not sure why they should lose credibility when something they said had a 1/3 chance of happening actually happened.

FauxMo

August 31st, 2017 at 4:42 PM ^

This had literally zero to do with politics. It had to do with predictive statistical models.

It's true, 538 was one of the more skeptical outlets about the result being a slam dunk, but they still had the odds at about 2-to-1 in favor of the losing candidate. With the amount of data they had, that's not great. Of course, caveats apply regarding the national popular vote and the EC, but they integrated state-level polls in their results too, a ton of them.

joeyb

August 31st, 2017 at 5:01 PM ^

The data that they were using was collected by third parties. If all of the data they are collecting says Hillary has a lead, then they can't come to some other conclusion. What they do with that data is use it to generate something like 10,000 scenarios with different potential turnouts in the voting population. What this showed is that of those 10k scenarios, Hillary won 7k and Trump won 3k.

Now, where everything actually went wrong is that the polls weren't able to correctly predict the turnout for the election. When polls are done, they take in 1000 responses and try to match the ratios of demographics to the turnout that is expected. There is a margin of error with this method that is around 2-3% that 538 correctly utilized while other sites did not.

As for how that applies to this, we're not sampling popular opinion, we're modeling based on statistics. The number of samples is much lower and the variation higher, so their model may not be very accurate, but in terms of building models out of the data available, I would consider 538 one of the best.
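The scenario-counting idea above is easy to see in a quick Python sketch. This is a toy illustration, not 538's actual model: the 3-point polled lead, the 2.5-point error margin, and the normal error distribution are all assumptions made up for the example.

```python
import random

def simulate_election(n_trials=10_000, polled_margin=0.03, polling_error=0.025, seed=0):
    """Toy version of the scenario generation described above: draw a
    random polling error for each scenario, shift the polled margin by
    it, and count how often the polling leader still wins."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(polled_margin, polling_error) > 0 for _ in range(n_trials))
    return wins / n_trials

# A 3-point polled lead with a 2.5-point error margin still loses in a
# noticeable minority of scenarios -- the "7k vs. 3k" picture above.
print(simulate_election())
```

The point of the exercise: even a clear polling lead, once you account for a realistic error margin, leaves the trailing candidate winning a meaningful fraction of simulated scenarios.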

FauxMo

August 31st, 2017 at 5:16 PM ^

I do survey sampling and probability statistics for a living (no, that was not an "ooooh, I'm an expert, don't question me!" comment, just an FYI). I know almost exactly what 538 and similar outfits do, from the aggregating of diverse survey samples to maximize available data for analysis, to the estimation of turnout, to blending state and aggregate results to minimize error, to the Monte Carlo simulations, etc. Some of the better aggregators - 538, RealClearPolitics, even HuffPo - actually came incredibly close to predicting the national vote margin, within +/- one point. What they didn't account for nearly enough was that, well, the national vote only matters to the casual observer watching CNN or whatever. A good model - and I have said this since long before the last election, honest to goodness - should eliminate modeling for all but about 10 states (Penn, Florida, Ohio, Michigan, Wisconsin, NH, NC, etc.). Because frankly, it really doesn't matter what the national vote is. Instead, conduct your own tracking studies of the swing states (data collection is cheap now), add that to whatever polls are available in those states as a backcheck (which are surprisingly few, but still), and then predict those ten states. Instead, they all want to "get press" and do it on the cheap, so they poach data from others and build models for the national horserace, get caught up in that, and this time, they failed pretty miserably. 

Anyway, bringing this all back to football: while they may not be modeling football outcomes "based on popular opinions," a strong argument could be made that they are doing something far less reliable: they are modeling static, point-in-time historical data and making inferences about the future. I'd be willing to bet that their error margins are bigger than those of their political models.

Procumbo

August 31st, 2017 at 5:24 PM ^

A lot of people have said this, and it always confuses me. 33% isn't 0%. If they correctly predict something as a 2-1 dog, it will happen 1 out of 3 times. So if their predictions are perfectly calibrated, you'll "lose faith" in statistical predictions in 1 out of 3 events.
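That calibration point can be checked with a quick Python simulation (my own illustration, not anything from the article): if an event truly has a 1-in-3 chance, a perfectly calibrated forecaster will see it happen about a third of the time.

```python
import random

# Simulate many independent events that each truly have a 1-in-3 chance
# of happening, and check how often they actually occur.  A forecaster
# who called each one "33%" was right, even though the events happened.
rng = random.Random(1)
trials = 100_000
hits = sum(rng.random() < 1 / 3 for _ in range(trials))
print(hits / trials)  # close to 1/3
```

So a 33% call "coming true" is exactly what calibrated predictions look like one time out of three.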

FauxMo

August 31st, 2017 at 5:54 PM ^

Yeah, I should have been clearer. In addition to the modeling gripes I mentioned above, my beef with Silver was how he handled it throughout the process, and especially in October and November. At times his model had Clinton up as much as 85-15 in late October. His last readings had it about 71-29 in favor of Clinton. Sure, the underdog still can pull it out at 70-30 (and will, 30 out of 100 times), but those are long odds, and as a modeler it is your job simply to say, "this is what my model is predicting, that X will happen." 

Yet his refrain was, "well, sometimes underdogs pull it out, and while my model says X, I'm just not sure." He followed that up with a ridiculous "we were wrong, but closer than anyone else!" article after the election. In short, he seemingly never trusted his own model throughout the entire process, so he was constantly hedging and equivocating. I so desperately wanted to ask him, "hey, if you don't trust your own results, why are you doing this?"

DavidP814

August 31st, 2017 at 4:47 PM ^

I think the odds 538 gave Trump were closer to 1/6 (~16%, IIRC).  Those odds, while low, were way higher than those from outfits like the NY Times, which had the election at about 100-1 for HRC.

Election prognostication aside, the author gives no data on the strength of any of the regressions he ran, so I didn't find much value to the article.  538 seems to have been dumbed down from its early years.  

Sopwith

August 31st, 2017 at 5:02 PM ^

IIRC, Trump's odds ticked up at the last second for reasons I can't remember, but what I do remember clearly was Silver pointing out that with FL a 50/50 proposition, if you turned that state red, the model swung to 50%+ for a Trump win. About two or three weeks out, he went on a tweetstorm vs. the NYT to call them out on an overly optimistic (90%) call for HRC.

I still think he has a lot of credibility, but if you want to hear an interesting mea culpa nonetheless: he was a guest on NPR's Radiolab soon after and explained some of the modeling.

Finally, what makes people say he was wrong? It's a probability. If I have a coin that will be flipped twice and I tell you the odds that it will be heads both times is only 25%, and it comes up heads both times, was I wrong?
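The coin-flip arithmetic above checks out; here's a tiny Python sketch of it (just an illustration of the probability, nothing more):

```python
from itertools import product

# All four equally likely outcomes of two fair coin flips; exactly one
# is heads-heads, so the "only 25%" forecast was correct even when both
# flips come up heads.
outcomes = list(product("HT", repeat=2))
p_both_heads = sum(o == ("H", "H") for o in outcomes) / len(outcomes)
print(p_both_heads)  # 0.25
```

The forecast isn't wrong when the 25% outcome occurs; it would only be wrong if heads-heads happened far more or less often than a quarter of the time over many repetitions.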


MGoCombs

August 31st, 2017 at 6:41 PM ^

I agree that there are better, albeit probably more labor-intensive, ways of doing this. Another thing to consider would be the average recruiting ranking of the teams. I assume teams with solid recruiting classes in previous years don't lose as much efficiency upon replacement.

Also, annoying how he snuck in the never-dying trope about Harbaugh wearing out his welcome. How does that keep getting brought up when there's virtually no evidence of it?

thespacepope

August 31st, 2017 at 7:15 PM ^

This article does not take into account the Don Brown year-two bump that could offset the loss of 10 of 11 starters on defense.  Statistics are interesting, but there are always outliers, and Don Brown is going to be a positive outlier for Michigan this year.