Certified Legendary Thread The Squiggle is back in 2023 (and other analytics)


It's a mean ladder. 11 wins wouldn't get you 7th. You are conflating the two.

It’s all we have now. What did the 8th team in 2021 win? How many did the Hawks win? I realise the ladder is ‘averaged’ and 17th probably won’t win 9 games.

Anyway, my point was: what’s to complain about for Hawks fans with a ladder predicting around 9 wins?
 


My subjective view would be that the addition of Gunston, Sicily, Day, Impey, Jiath, Moore and Breust to the side is of far greater significance than the loss of Ceglar, Burgoyne, O'Brien and Cousins, but the squiggle doesn't value Gunston or Sicily.

Ask anyone if they'd take those 4 players over the returning 6 and you'd get crickets.

I don't want the squiggle to have subjective criteria, but that doesn't mean that the modelling has to go unquestioned.

Except you're arguing exactly that: the modelling isn't taking into account subjective things and therefore it's wrong.

It's fine to argue against what the model is saying, but unless you've got some form of better alternative, quantifiable metric to use, you're just arguing for a subjective interpretation.
 
Aren't you in here arguing against modelling?

Or do you just not rate squiggle because the model doesn't rate your team?
He hasn't been arguing against modelling. Only questioning how this particular model has gone from ranking Hawthorn 10th to 16th when the only significant change is that players who weren't going to be best 22 this season have mostly been replaced by players who are easily among our best.

A fair question given every other model featured on The Squiggle's site ranks them higher than The Squiggle does. And that it's very clear to anyone with even half an idea about the Hawthorn list that on balance the players who've been moved on aren't even close to as valuable as the players who are returning.

This model has changed over the years to take into account extra variables, and it will surely change into the future. The points raised by SYL are a good basis to discuss the way the player rating system has been implemented into the model.

Obvious weaknesses that jump out to me having read the last few pages:
  • The AFL Player Ratings system has seemingly been selected just because it was there - why not some other rating system?
  • The fact that an important player's rating points drop when they miss games long term doesn't make much sense if a team's position on the Squiggle deviates negatively because of their absence. It would make more sense that if a player misses games and the team then slips, that player's rating would actually increase, reflecting their obvious importance.
  • That any value at all is attached to players who were delisted and not picked up by any other club. And conversely that draftees who will slot straight into the best 22 aren't afforded any value.

Now I don't care what the Squiggle currently rates my team, as it will have no influence on how Hawthorn actually performs (or doesn't perform) in 2022 - I've always preferred it as a tool to observe trends of what has happened rather than predictions on what might happen anyway. But I've been following this model since it was first shared on BigFooty and have always enjoyed discussing it. People should be free to use their in-depth knowledge of their club to point out discrepancies with what the model is saying so that maybe it can be tweaked to become a more accurate model. It's certainly not perfect; otherwise it wouldn't have changed so much over the years.
 
Aren't you in here arguing against modelling?

Or do you just not rate squiggle because the model doesn't rate your team?
Wow, is that really what you think you read?

I love the squiggle, always have. Doesn’t mean you can’t query or discuss it with Final Siren.
 
Except you're arguing exactly that: the modelling isn't taking into account subjective things and therefore it's wrong.

It's fine to argue against what the model is saying, but unless you've got some form of better alternative, quantifiable metric to use, you're just arguing for a subjective interpretation.
No, I’ve never said the modelling is wrong at all. That would be incorrect.
 
He hasn't been arguing against modelling. Only questioning how this particular model has gone from ranking Hawthorn 10th to 16th when the only significant change is that players who weren't going to be best 22 this season have mostly been replaced by players who are easily among our best.

A fair question given every other model featured on The Squiggle's site ranks them higher than The Squiggle does. And that it's very clear to anyone with even half an idea about the Hawthorn list that on balance the players who've been moved on aren't even close to as valuable as the players who are returning.

This model has changed over the years to take into account extra variables, and it will surely change into the future. The points raised by SYL are a good basis to discuss the way the player rating system has been implemented into the model.

Obvious weaknesses that jump out to me having read the last few pages:
  • The AFL Player Ratings system has seemingly been selected just because it was there - why not some other rating system?
  • The fact that an important player's rating points drop when they miss games long term doesn't make much sense if a team's position on the Squiggle deviates negatively because of their absence. It would make more sense that if a player misses games and the team then slips, that player's rating would actually increase, reflecting their obvious importance.
  • That any value at all is attached to players who were delisted and not picked up by any other club. And conversely that draftees who will slot straight into the best 22 aren't afforded any value.

Now I don't care what the Squiggle currently rates my team, as it will have no influence on how Hawthorn actually performs (or doesn't perform) in 2022 - I've always preferred it as a tool to observe trends of what has happened rather than predictions on what might happen anyway. But I've been following this model since it was first shared on BigFooty and have always enjoyed discussing it. People should be free to use their in-depth knowledge of their club to point out discrepancies with what the model is saying so that maybe it can be tweaked to become a more accurate model. It's certainly not perfect; otherwise it wouldn't have changed so much over the years.

FS answered this though.

Hawthorn has improved their list less in the off-season than teams around them. Therefore the ladder position slides.

In the case of Gunston; if a player has only played 1 game since 2020, why would their rating not slip?

If all people want to do is argue that a quantitative model isn’t taking into account subjective information because Hawthorn, then it’s not a productive discussion.

If you don’t believe AFL Player Ratings are the best source of rating data, then I’m sure FS is more than happy to hear alternative suggestions for sources. Which no one has offered.

Funny that it’s a few Hawks supporters that are suddenly taking issue with Squiggle’s predictions.
 
Funny that it’s a few Hawks supporters that are suddenly taking issue with Squiggle’s predictions.
This hasn’t happened.

You’re the one stomping your feet the hardest.
 
FS answered this though.

Hawthorn has improved their list less in the off-season than teams around them. Therefore the ladder position slides.
Yes, he did. But the way that list improvement has been calculated has weaknesses. Direct example: James Cousins, a fringe midfielder, was delisted and no other club picked him up but 140 points are still deducted from Hawthorn. Josh Ward is drafted, looks likely he'll be part of the best 22, but he's got no points. No club would take Cousins over Ward yet the model values the loss of Cousins significantly more than the incoming Ward.

In the case of Gunston; if a player has only played 1 game since 2020, why would their rating not slip?
In the case of Sicily; why would he have no rating at all? Again, are we supposed to believe he has less value than players who were delisted and not picked up elsewhere?

If all people want to do is argue that a quantitative model isn’t taking into account subjective information because Hawthorn, then it’s not a productive discussion.
It's not about subjective information not being taken into account in the model. It's about using critical thinking to identify logical inconsistencies between observations (i.e. Cousins delisted and not picked up by anyone) and the data the model uses to draw conclusions (i.e. that Cousins > Ward, Sicily, Day, Impey, Bramble) for the purposes of refining that model.

If you don’t believe AFL Player Ratings are the best source of rating data, then I’m sure FS is more than happy to have alternative suggestions for sources. Which no one has done.
Here's my suggestion then. Don't include one at all. The whole thing seems shoehorned in. Garbage in, garbage out.

Funny that it’s a few Hawks supporters that are suddenly taking issue with Squiggle’s predictions.
Like SYL, I'm a big fan of the Squiggle. I want to see it be the best it can be. We've made observations that suggest there's an issue with how it works. If you want to write those off as sour grapes then so be it. But again, just because it says something doesn't mean it will happen. I'd be questioning it all the same (actually more) if it was predicting Hawthorn to win the flag.
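For what it's worth, the asymmetry being described is easy to see in miniature. Below is a hypothetical Python sketch of a ratings-based list-change calculation - not Squiggle's actual method, and the only figure taken from the thread is the ~140 points attributed to Cousins:

```python
# Hypothetical sketch of a ratings-based "list change" calculation.
# Structure and numbers are illustrative, not Squiggle's actual method;
# the 140 points for Cousins is the figure cited in this thread.

def list_change_delta(arrivals, outgoing):
    """Net rating change for a list: points arriving minus points departing."""
    return sum(arrivals.values()) - sum(outgoing.values())

# A delisted fringe player still carries his accumulated rating points out...
outgoing = {"James Cousins": 140.0}
# ...while a draftee has no senior games, so a ratings-based source gives him 0.
arrivals = {"Josh Ward": 0.0}

print(list_change_delta(arrivals, outgoing))  # -140.0: a pure loss on paper
```

However Ward actually plays, a purely ratings-based delta can only ever score this swap as a loss.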
 


Yes, he did. But the way that list improvement has been calculated has weaknesses. Direct example: James Cousins, a fringe midfielder, was delisted and no other club picked him up but 140 points are still deducted from Hawthorn. Josh Ward is drafted, looks likely he'll be part of the best 22, but he's got no points. No club would take Cousins over Ward yet the model values the loss of Cousins significantly more than the incoming Ward.


In the case of Sicily; why would he have no rating at all? Again, are we supposed to believe he has less value than players who were delisted and not picked up elsewhere?


It's not about subjective information not being taken into account in the model. It's about using critical thinking to identify logical inconsistencies between observations (i.e. Cousins delisted and not picked up by anyone) and the data the model uses to draw conclusions (i.e. that Cousins > Ward, Sicily, Day, Impey, Bramble) for the purposes of refining that model.


Here's my suggestion then. Don't include one at all. The whole thing seems shoehorned in. Garbage in, garbage out.


Like SYL, I'm a big fan of the Squiggle. I want to see it be the best it can be. We've made observations that suggest there's an issue with how it works. If you want to write those off as sour grapes then so be it. But again, just because it says something doesn't mean it will happen. I'd be questioning it all the same (actually more) if it was predicting Hawthorn to win the flag.

So for all that, you have 1 single suggestion, the bolded.

The rest is just you wanting subjective information to be used to modify a quantitative input.
 
How often has it been better than the regular media predictions at the start of the year?
Maybe Final Siren has a model on how his model performs against media predictions?
😂
 
So for all that, you have 1 single suggestion, the bolded.

The rest is just you wanting subjective information to be used to modify a quantitative input.
You do realise that just because something is quantitative doesn't automatically make it accurate? It just means it can be quantified and measured against similar data.

And just because someone makes an observation regarding their team doesn't automatically make it subjective. It can be qualitatively verified that James Cousins isn't worth more to Hawthorn than 5 of its other players by virtue of the fact he was delisted and not picked up by any other club, and that 4 of those players when fully fit are selected in the senior side. Those aren't opinions, those are facts. So any quantitative model that utilises data that rates a delisted player no one else wanted over a bunch of best 22 players is likely not going to be producing a particularly valuable result (with consideration to how much that player rating aspect is weighted in the model output).

Anyway, I've said my piece on this and FS can do what he likes with it.
 
How often has it been better than the regular media predictions at the start of the year?
I started tracking this in 2019, because I suspected the answer was "almost always." Preseason ladder predictions in the media can be really wild, and only rarely did anyone ever go back and check how they held up.

Unfortunately we only got in one regular year before the massive disruption of 2020, where the fixture completely changed one round after the season started. And 2021 wasn't back to normal, either, with most of the fixture yet to be written at the start of the year. That meant models couldn't calculate which teams had easier/harder draws - or, in 2020, calculated it from a fixture that wasn't used in practice.

But in 2019, Squiggle's preseason ladder prediction was better than 29 out of 30 media ladders (details here). In 2020, it was better than 21 out of 38 media ladders (details here). In 2021, it was better than 19 out of 25 media ladders (details here).

I'm pretty sure at least a couple of these media types check the models, too! So their predictions are actually not completely independent.
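One simple way to score a preseason ladder against the final ladder - a sketch only, not necessarily the metric used for the comparisons above - is the mean absolute error of each team's predicted position. The four-team ladders below are hypothetical:

```python
# Score a predicted ladder against the actual final ladder using
# mean absolute error of ladder positions. Ladders are hypothetical
# four-team examples, not real season data.

def ladder_mae(predicted, actual):
    """Mean absolute difference in ladder position across all teams."""
    positions = {team: i + 1 for i, team in enumerate(actual)}
    errors = [abs((i + 1) - positions[team]) for i, team in enumerate(predicted)]
    return sum(errors) / len(errors)

model_ladder = ["Geelong", "Richmond", "Collingwood", "Hawthorn"]
media_ladder = ["Richmond", "Hawthorn", "Geelong", "Collingwood"]
final_ladder = ["Geelong", "Collingwood", "Richmond", "Hawthorn"]

print(ladder_mae(model_ladder, final_ladder))  # 0.5 - closer to the final ladder
print(ladder_mae(media_ladder, final_ladder))  # 2.0 - further from it
```

A lower score means a prediction that landed closer to the final ladder, which is the sense in which one ladder can be "better" than another.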
 
You do realise that just because something is quantitative doesn't automatically make it accurate? It just means it can be quantified and measured against similar data.

Who said it was accurate? People are always free to disagree with how Squiggle rates their side. It just seems like SYL was upset that his subjective views weren't being accounted for in the model.

And just because someone makes an observation regarding their team doesn't automatically make it subjective.

Kinda does.

It can be qualitatively verified that James Cousins isn't worth more to Hawthorn than 5 of its other players by virtue of the fact he was delisted and not picked up by any other club

That doesn't automatically make this true.

that 4 of those players when fully fit are selected in the senior side.

That's nice, but not a quantitative metric that can be modelled.

Those aren't opinions, those are facts.

The value of a player selected in a future side isn't a measurable input however.

So any quantitative model that utilises data that rates a delisted player no one else wanted over a bunch of best 22 players is likely not going to be producing a particularly valuable result

Again, being delisted doesn't inherently provide the value of a player. Jarrod Lyons was delisted, for example. Different clubs, with different list needs, can value different players differently.

On actual game output in 2021 though, Cousins for example was Top-10 average/game at Hawthorn for Disposals (8th), Tackles (5th), i50s (2nd), Goal Assists (2nd), Clearances (9th), and Score Involvements (5th). Why shouldn't his player rating reflect his value based on actual output, instead of your perceived value of him based on whether he's listed or not?

Anyway, I've said my piece on this and FS can do what he likes with it.

He sure can, despite people demanding their subjective interpretation of things somehow be added into a model. As I said, if there's an alternative data source or metric that might produce a more accurate model, then I'm sure he'd be happy to add it in.

Instead this whole argument is: Hawthorn supporters think the players coming back will improve their side more than Squiggle does, and thus they think their side will outperform Squiggle's predicted ladder position.

Also note, FS has already said Squiggle rates Hawthorn's Round 1 2022 side as better than their Round 23 2021 side; it's just improved less than the teams around them, so the ladder position slides.
 
He hasn't been arguing against modelling. Only questioning how this particular model has gone from ranking Hawthorn 10th to 16th with the only significant change being players who weren't going to be best 22 this season having been mostly replaced with players who are easily among our best.

A fair question given every other model featured on The Squiggle's site ranks them higher than The Squiggle does. And that it's very clear to anyone with even half an idea about the Hawthorn list that on balance the players who've been moved on aren't even close to as valuable as the players who are returning.

This model has changed over the years to take into account extra variables, and it will surely change into the future. The points raised by SYL are a good basis to discuss the way the player rating system has been implemented into the model.

Obvious weaknesses that jump out to me having read the last few pages:
  • The AFL Player Ratings system has seemingly been selected just because it was there - why not some other rating system?
  • The fact that an important player's rating points drop when they miss games long term doesn't make much sense if a team's position on the Squiggle deviates negatively because of their absence. It would make more sense that if a player misses games and the team then slips, that player's rating would actually increase, reflecting their obvious importance.
  • That any value at all is attached to players who were delisted and not picked up by any other club. And conversely that draftees who will slot straight into the best 22 aren't afforded any value.

Now I don't care what the Squiggle currently rates my team, as it will have no influence on how Hawthorn actually performs (or doesn't perform) in 2022 - I've always preferred it as a tool to observe trends of what has happened rather than predictions on what might happen anyway. But I've been following this model since it was first shared on BigFooty and have always enjoyed discussing it. People should be free to use their in-depth knowledge of their club to point out discrepancies with what the model is saying so that maybe it can be tweaked to become a more accurate model. It's certainly not perfect; otherwise it wouldn't have changed so much over the years.
Right, and I do my best to make Squiggle accurate, but it's very far from infallible! Squiggle gets things wrong all the time. I'm just happy if it gets fewer things wrong than most people.

And this stuff is all probabilistic, anyway. Squiggle thinks Hawthorn are less likely to have a good season than Carlton. It doesn't think it's impossible. And it could have a totally different opinion after Round 1.

AFL Player Ratings - I use this for two reasons. Firstly, it is incredibly difficult to create a player rating system for AFL that doesn't suck donkey balls. Many people have tried, smarter people than me, and even the best of their efforts still do seem to suck donkey balls to various extents. But because players matter, and therefore we want to be able to quantify the kind of scoreboard impact we should expect when a particular player goes in or out of the team, there is value in these systems, even though we can identify particular situations in which they clearly suck donkey balls.

Secondly, I've always found player-based metrics to be relatively weak compared to team-based metrics, so even if AFL Player Ratings is just ball-parking it, that's good enough - the added precision I'd get from a superior system wouldn't make a detectable difference in most cases. It'd be more fruitful to increase accuracy on injury lists - whether Player X is likely to miss the first month or the first three. That can be hard to know, and makes a sizeable impact. If I remember right, Squiggle was quite bullish on Hawthorn a year ago (or two years ago?) before some off-season strife.

Returning Players - I haven't studied it, to be honest, but I do tend to back Champion Data that players returning from long layoffs are more likely to come back at a reduced level. There are exceptions, of course. But players often take time to get back to their best.

Value of Delisted Players - Well it's very common, of course, for teams to give games to kids who are objectively worse at that stage of their careers than other players who are being shown the door. I dunno how much we could learn about a team's future performance based on who drafted / didn't draft their ex-players.
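A toy illustration of two of the ideas above - damping player-based adjustments relative to team form, and discounting returning players after long layoffs. The weights, names and numbers are invented for the sketch; they are not Squiggle's actual parameters:

```python
# Illustrative sketch (not Squiggle's actual formula): a team-based rating
# nudged by player ins/outs, with the player effect damped because player
# metrics are noisier than team metrics, and long-layoff returnees discounted.

PLAYER_WEIGHT = 0.5    # damping: trust team form more than player ratings
LAYOFF_DISCOUNT = 0.5  # assume a long-layoff player returns below their rating

def adjusted_rating(team_rating, ins, outs, long_layoff=()):
    """team_rating plus damped net player points; layoff returnees discounted."""
    net = 0.0
    for name, pts in ins.items():
        net += pts * (LAYOFF_DISCOUNT if name in long_layoff else 1.0)
    for pts in outs.values():
        net -= pts
    return team_rating + PLAYER_WEIGHT * net

rating = adjusted_rating(
    team_rating=100.0,
    ins={"Returning Forward": 120.0},
    outs={"Delisted Midfielder": 140.0},
    long_layoff={"Returning Forward"},
)
print(rating)  # 100 + 0.5 * (120 * 0.5 - 140) = 60.0
```

Under this kind of structure a team can rate better than last year yet still slide, because everything depends on how the damped net compares with the teams around it.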
 

Don’t think you aren’t appreciated, mate. You have a huge following. Anyway, I like to think the squiggle is about what should happen, not what some random thinks will happen, with all their unconscious bias.

9 wins for Hawthorn is not that wild a statement anyway.
 
Thanks for the detailed and reasoned response.

I totally get you on the "far from infallible", "happy if it gets fewer things wrong than most people" approach. That's why I enjoy it so much and like to contribute to discussions that might help it get even fewer things wrong in the future.

On the AFL Player Ratings system: there used to be a whole section on the AFL website explaining how it worked, though I'm not sure where it's gone. From memory, scores were based on a rolling two-season window and were meant for comparing players within the same position category. Essentially it was harsh on new players fewer than two seasons in and on players with long-term injuries, but favourable to fringe players who consistently got games.

I wonder whether translating SuperCoach player prices into workable numbers (you could just drop the $, or express them as a percentage of the highest-priced player) and using those instead would be more useful. Reason being that at least provides some value for players who've been out with long-term injuries (with discounts based on length) and for draftees. It's far more sensitive than the AFL Player Ratings system, so it would much more quickly register how impactful a player like Matt Rowell was before injury sidelined him, or in general any player having a breakout season or falling off a cliff. Much like the Squiggle, it weights recent form more heavily but isn't overly reactive: one poor game isn't necessarily going to tank a player's rating if they follow it up with their typical output.
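A minimal sketch of that normalisation, with made-up prices - the point being that even injured players and draftees carry a non-zero number:

```python
# Sketch of the suggested normalisation: convert SuperCoach-style prices
# into ratings relative to the most expensive player. Prices are invented
# for illustration, not real SuperCoach figures.

def normalise_prices(prices):
    """Express each player's price as a percentage of the highest price."""
    top = max(prices.values())
    return {name: round(100 * p / top, 1) for name, p in prices.items()}

prices = {
    "Star Midfielder": 750_000,
    "Returning Key Defender": 450_000,  # discounted after a long injury layoff
    "First-Year Draftee": 200_000,      # draftees still carry a base price
}
print(normalise_prices(prices))
# {'Star Midfielder': 100.0, 'Returning Key Defender': 60.0, 'First-Year Draftee': 26.7}
```

Unlike a games-based rating, the price-derived numbers never bottom out at zero for listed players, which addresses the Sicily/Ward objections raised earlier in the thread.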
 
