Squiggle 2017


Interesting read this week Max, and great to see all the movement. I noted you mentioned St Kilda's state is really only a result of the Hawthorn take-down. In a similar vein, the media leading up to last weekend was only about the few wins we'd had, with no mention of us being 'in' games (matching the Cats and WC) until the 4th quarter. Their class, and us running out of gas, saw us beaten easily both times. Is there any modelling which looks beyond the final siren score?

There's always Roby's advanced calculus.

I model on the final siren score and very little else, but that's the whole score, not just the win-loss.
 
Round 7, 2017

[Image: Round 7, 2017 squiggle chart]



Lots of validation this week for the squiggle theory that teams don't suddenly get a lot better or worse very often, with Richmond, Fremantle, and Essendon looking more like their owners put them up on blocks and spent the summer tinkering with the engine, rather than going out and buying a new car.

But not Sydney! Sydney are the strange one: a team that looked like a sports car throughout 2016 but then seems to have traded it in for my old "fire engine red" 1978 Gemini.

[Image: 1976 Holden TX Gemini sedan]

The 2017 Sydney Swans

Animated squiggling:

[Animation: squiggle chart]


The Hawks crashed again, but the only surprise there was how bad it was; we knew they were wobbling toward the middle of the road with the bumper hanging off, but we didn't know there was a tram coming the other way. A number 96 tram, bound for St Kilda, this week.

But it was all good for Adelaide, who cruised past the Tigers while Geelong lost to the Pies and GWS barely outlasted the Bulldogs.

The Bulldogs currently look locked in a battle for a Top 4 spot with Port Adelaide, who also had a good week. In fact, the Power may well have a real vehicle here, as they haven't had a bad squiggle all year, so could wind up leaving the Cats and Dogs to fight for 4th. But for now, it's looking like this:

[Image: projected ladder]

Or in animated form:

[Animation: projected ladder]

Tons of uncertainty through the middle, with many teams capable of finishing all over the place.

It's worth noting how bad Brisbane are. They were terrible late last year and have been consistently terrible all this year. I keep hearing talk about how improved they are and I don't get it. Only a handful of teams in the last 10 years have had a sub-40 Defence rating in the squiggle, including the expansion clubs, and Brisbane is just camping out there. It's pretty hard to find wins when you can't stop the opposition from scoring.

And for all the talk of the Tigers' new attacking game style, they're still a defensive team. A couple of games in the wet haven't helped, but there isn't much evidence that they can score well against good opposition.

So after 6 rounds we have:
  • Teams with improved 2017 models: Port Adelaide, Richmond, Adelaide, Gold Coast, Essendon, Fremantle, St Kilda (only because of their last game), Geelong (being generous).

  • Teams still driving their 2016 models: Brisbane, West Coast, GWS, Collingwood, North Melbourne, Melbourne, Carlton, Western Bulldogs (being generous).

  • Teams still driving their 2016 models and there's this weird noise whenever you brake that you should have had checked out months ago and now there's smoke coming out: Hawthorn, Sydney.

Flagpole! Squiggle hasn't rated the Tigers as highly as their 5-0 start would suggest, but the Crows' 76-point win was still enough for yet another week of "yay Adelaide."

[Animation: Flagpole premiership ratings]


There was a question earlier about how Sydney can be rated so highly when they face a challenge to even make finals. And the answer is yes, this is more or less an "if they make finals, how will they go" rating. More specifically, it's an algorithm that survived a deathmatch against tens of thousands of other algorithms in a competition to rank the eventual premier highly during the season. It hasn't been trained to care about teams lower down the pecking order, so long as they're not bumping out the eventual premier. And it's completely ignorant of how likely the team is to make finals and whether they get home games or double chances if/when they get there.

I can probably improve this now that squiggle is actually running season simulations, but for the moment, it's a "premiership form" rating: it rates most highly the teams delivering results most similar to those of premiers from the last 20 or 30 years.
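For anyone curious what that kind of "deathmatch" selection could look like, here's a minimal sketch. The function names, the algorithm interface, and the data layout are all hypothetical, just to illustrate scoring candidate algorithms by how highly they ranked eventual premiers:

```python
# Hypothetical fitness test: score a candidate rating algorithm by how
# highly it ranked the eventual premier, week by week, over past seasons.
# The `algorithm` interface and `seasons` layout are assumptions for
# illustration, not Squiggle's actual code.

def premiership_fitness(algorithm, seasons):
    """Average ladder rank the algorithm gave the eventual premier.

    `seasons` is a list of (rounds, premier) pairs, where `rounds` is
    that season's results in round order. Lower is better: a perfect
    algorithm ranks the premier 1st every week.
    """
    total_rank = weeks = 0
    for rounds, premier in seasons:
        ratings = algorithm.initial_ratings()   # team -> rating
        for round_results in rounds:
            ratings = algorithm.update(ratings, round_results)
            order = sorted(ratings, key=ratings.get, reverse=True)
            total_rank += order.index(premier) + 1
            weeks += 1
    return total_rank / weeks

# The "deathmatch": keep whichever candidate ranks premiers highest.
# best = min(candidates, key=lambda a: premiership_fitness(a, seasons))
```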

Live squiggles!

Squiggle dials!
The Hawthorn Tower of Power Movement is the best new comedy of 2017.
 
"Is there any modelling which looks beyond the final siren score?"

Fremantle board - it turns out, on average, the final score is generally the best reflection of the relative ability of the two teams.
 


"Is there any modelling which looks beyond the final siren score?"

I don't see the point in that. You don't get points for being 'in' games for part of them, just like you don't get points for losing close games. Games are played over 4 quarters. If you're lacking fitness compared to other teams, or have an unsustainable game plan that you can only follow for 2-3 quarters, then that will be reflected in the final score.

It's similar to teams winning comfortably at 3/4 time. Often you will see them put the cue in the rack and just play out time in the final quarter.
 
"Is there any modelling which looks beyond the final siren score?"
Most models, including Squiggle, use some kind of venue input to model home advantage.

Beyond that:

Figuring Footy and Matter of Stats both use shot quality data, which allow them to tell the difference between a side that missed easy shots and a side that couldn't generate good scoring opportunities in the first place. The theory here is that the side that's missing easy shots is likely to convert more of them next week.

The Arc is experimenting with player ratings, so it can tell the likely impact when a particular player is added to or removed from the team.

Mostly, though, models do indeed lean heavily on the final score alone. That said, I don't think anyone else shares their algorithm details, so we can't tell for sure.

My experience with testing lots of different inputs over the years is this:
1. "I wonder if you can tip better if you put some weight on who won the week before."
2. Test
3. "Nope."

So I now have 182 different algorithms.
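To make that test loop concrete, here's a toy version of step 2. The rating model is a deliberately simple stand-in (none of the names or numbers are Squiggle's); it just shows how you'd check whether a "won last week" bonus improves tipping accuracy:

```python
# A toy version of the "wonder / test / nope" loop above: does giving extra
# weight to "won last week" tip better than ratings alone? The rating model
# here is a simple stand-in, not Squiggle's.

def tip_accuracy(games, momentum_weight=0.0, k=0.1):
    """Fraction of games tipped correctly.

    `games` is assumed to be (home, away, home_score, away_score)
    tuples in chronological order.
    """
    ratings = {}    # team -> rating points
    won_last = {}   # team -> won their previous game?
    correct = total = 0
    for home, away, home_score, away_score in games:
        r_home = ratings.get(home, 0.0) + momentum_weight * won_last.get(home, False)
        r_away = ratings.get(away, 0.0) + momentum_weight * won_last.get(away, False)
        tipped_home = r_home >= r_away
        home_won = home_score > away_score
        correct += tipped_home == home_won
        total += 1
        # Nudge both ratings toward the actual margin.
        error = (home_score - away_score) - (r_home - r_away)
        ratings[home] = ratings.get(home, 0.0) + k * error
        ratings[away] = ratings.get(away, 0.0) - k * error
        won_last[home], won_last[away] = home_won, not home_won
    return correct / total

# Step 1: wonder.  Step 2: test.  Step 3, usually:
# tip_accuracy(games, momentum_weight=5.0) <= tip_accuracy(games, 0.0)
```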
 
Would you really want to moderate that R1 movement? The magnitude shows that, at the time, it was a very unexpected result, which is still useful information.

I'm also not sure that squiggle positions now are any more representative of each team's true R1 position than the squiggle starting point. The Swans appear to be appreciably worse now than they were a month ago (the mob that wilted against Carlton wouldn't be coming back to lead in the 4th against the Dogs). Injuries happen. As the season plays out the squiggle adjusts; the squiggle knows what it's doing.
Right, this is a great point. Teams do seem to become easier or harder to beat during the season, for whatever reason - they lose key players to injuries, or get "worked out," or fatigued, or they're Essendon and it's after Round 12. You don't always want to go back and say a team's earlier games should now be treated as if they're at their current strength.
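For what it's worth, one generic way to strike that balance (letting current form dominate without rewriting history) is to down-weight older games exponentially. A minimal sketch of the idea, and purely an illustration rather than Squiggle's actual approach:

```python
# Generic recency weighting: estimate a team's current strength from its
# margins, letting a game `half_life` rounds old count half as much as the
# most recent one. An illustration of the idea, not Squiggle's method.

import math

def current_strength(margins, half_life=6.0):
    """Exponentially weighted average margin, oldest-first input."""
    decay = math.log(2) / half_life
    num = den = 0.0
    for age, margin in enumerate(reversed(margins)):  # age 0 = last week
        w = math.exp(-decay * age)
        num += w * margin
        den += w
    return num / den

# A team that started -30 but has been +10 lately reads as improving:
# current_strength([-30, -25, -20, 5, 10, 12]) sits above the raw average.
```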
 
Final Siren I just wanted to ask a question about the sliding doors again - specifically the luck %.
Just opening some doors, with 5% luck, and you get results such as:

St Kilda 51 (89-38) losing to West Coast 118 (84+34)

The banner at the top says "On average, how much of a team's final score is due to luck?" How can 5% luck change scores by 40-50%? I assume it's intended that way, and not a bug, but just wondering how the 5% comes into it?
 
Most models, including Squiggle, use some kind of venue input to model home advantage.

Beyond that:

Figuring Footy and Matter of Stats both use shot quality data, which allow them to tell the difference between a side that missed easy shots and a side that couldn't generate good scoring opportunities in the first place. The theory here is that the side that's missing easy shots is likely to convert more of them next week.

The Arc is experimenting with player ratings, so it can tell the likely impact when a particular player is added to or removed from the team.

Mostly, though, models do indeed lean heavily on the final score alone. Although I don't think anyone else shares their algorithm details, so we can't tell for sure.

My experience with testing lots of different inputs over the years is this:
1. "I wonder if you can tip better if you put some weight on who won the week before."
2. Test
3. "Nope."

So I now have 182 different algorithms.
Thanks for the detailed response FS
 


"How can 5% luck change scores by 40-50%?"
Good question! There's an answer buried in the "Help & Information" link inside each Door. Scores are 5% different on average (across all alternate realities), but individual results can vary by a lot more than that.

The amount of random adjustment to each score is distributed like this:

[Image: normal distribution of score adjustments]

... so the majority of the time, scores are only adjusted a little, and there's a progressively smaller chance of bigger adjustments. Because that's how things work in the real world. So with 5% luck, most scores are different by less than one goal, but you can still get a rare, freakish result like the one you mention, where it's very different. It's just not very likely.

Now when you hunt through hundreds of 5% luck alternate realities for an interesting one, you're counteracting the luck factor, because you wind up with a situation that's pretty unlikely, but you discarded all the likely ones. This is like rolling a bunch of dice and getting all 6s: the result is unlikely, but not when you have enough tries at it.
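As a rough illustration of how a single "5% luck" draw could work, here's a sketch. It assumes "5%" means the average absolute change to a score and that adjustments are normally distributed, as described above; the calibration is a reading of that description, not Squiggle's actual code:

```python
# Sketch of a "5% luck" score adjustment, assuming luck is the *average
# absolute* change and adjustments are normally distributed. A guess at
# the behaviour described above, not Squiggle's implementation.

import math
import random

def lucky_score(score, luck=0.05):
    # For X ~ Normal(0, sigma^2), E|X| = sigma * sqrt(2/pi), so this sigma
    # makes the average absolute adjustment equal luck * score.
    sigma = luck * score * math.sqrt(math.pi / 2)
    return max(0, round(score + random.gauss(0, sigma)))

# With luck=0.05 and score=89, sigma is about 5.6 points: most draws move
# the score by less than a goal, and big swings sit far out in the tail.
```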
 
I've been thinking about this - given that the most accurate prediction models, and the bookie's line, usually have an average margin error of about 25-30 points, shouldn't the most "realistic" way of simulating luck for the rest of the season incorporate a percentage of luck such that the change in margin is about that amount? I have no idea what percentage that is in the squiggle, but I'd think the best way of simulating the season would be to incorporate that amount of luck, simulate the season 10,000 times, and then see what percentage of the time certain teams finish in different positions. Is that what the tower already does?
 
"Is that what the tower already does?"
Yep, you're exactly right! The Tower of Power is generated from 100,000 season simulations with 10% luck.

The especially nice thing is it plays out the actual fixture, so you're always generating genuinely possible ladders, not just calculating odds in a vacuum.

The only reason not to do it this way, I think, is the computing power required. Live Squiggle regenerates the Tower every 5 minutes or so while games are in progress by running 2,500 sims, which is about the limit of what my web host will tolerate. It takes 10 minutes of 100% CPU usage to do the 100k sims at the end of each round (although this will get shorter over time, as there are fewer remaining rounds to simulate), which I have to do on my home machine.

For more complex models, I've read it can take hours!
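For a sense of what those season sims involve, here's a minimal sketch of the general approach: play the remaining fixture many times with a luck adjustment and tally finishing positions. The function names, luck calibration, and tie-breaking are all illustrative assumptions, not the Tower's actual code:

```python
# Minimal Monte Carlo season simulation in the spirit described above.
# `fixture`, `ladder_points`, and `predict_score` are stand-ins for what a
# real model would supply; this is not the Tower's actual implementation.

import math
import random
from collections import Counter, defaultdict

def simulate_season(fixture, ladder_points, predict_score,
                    n_sims=10_000, luck=0.10):
    """Return, per team, the fraction of sims it finished in each position.

    `fixture`: remaining (home, away) games; `ladder_points`: current
    premiership points per team; `predict_score(a, b)`: the model's
    expected score for team a against team b.
    """
    sigma_scale = luck * math.sqrt(math.pi / 2)  # mean |adjustment| = luck * score
    finishes = defaultdict(Counter)
    for _ in range(n_sims):
        pts = dict(ladder_points)
        for home, away in fixture:
            h = predict_score(home, away)
            a = predict_score(away, home)
            h += random.gauss(0, sigma_scale * h)  # apply the luck draw
            a += random.gauss(0, sigma_scale * a)
            if h > a:
                pts[home] += 4
            elif a > h:
                pts[away] += 4
            else:
                pts[home] += 2
                pts[away] += 2
        # Rank by points; a real sim would break ties on percentage.
        order = sorted(pts, key=pts.get, reverse=True)
        for pos, team in enumerate(order, start=1):
            finishes[team][pos] += 1
    return {t: {pos: n / n_sims for pos, n in c.items()}
            for t, c in finishes.items()}
```

Because every sim plays out the real fixture, every tallied ladder is one that could genuinely occur, which is the advantage over calculating odds in a vacuum.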
 
GWS from day 1 to now would be a good one?!?
The notable thing about GWS is how bad they were. This is maybe best to see on a plot of all squiggles from 2000-2017:

[Image: squiggles for all teams, 2000-2017]

With GWS highlighted:

[Image: the same plot with GWS highlighted]

GWS were worse than any other team this century. And they were that bad for a long time. Only Melbourne are really comparable.

Then they got better!

 
