The Squiggle is back in 2023 (and other analytics)

i hope this disproves the 'junk time' myth.
when a team kicks 3 goals in the last 5 minutes and wins the game we don't say it's junk time, IF they were behind in the last qtr
only if they were already in front

squiggle i daresay wouldn't look at anything so arbitrary
Heh, that's true, although no doubt sides really do try harder when the game is on the line, and I reckon motivation is a real factor that models can't detect.

But when people dismiss junk time goals as meaningless, I think that's usually wrong, because a better side wouldn't have allowed them through, junk time or not. Or to put it another way, if all sides tend to relax a bit once they have the game won, the side that only lets through a couple of goals in their relaxed state is still better than the side that lets through seven.
 
spot on

melbourne fans had september off last year.......all because of two junk time goals in 22 games that they didn't score
 
A bit from column A and a bit from column B.

You often watch your players put the cue in the rack - especially if finals are around the corner and no one wants to be injured.

Adelaide as a team had the cue in the rack against us last game last year, for sure.
 
Yep, this is correct, except the Eagles vs Saints game comes out as 79-68 (not 89-60), an 11-point win.

That's almost the same as the real-life 13-point margin, since there was hardly any difference in goalkicking accuracy between the two sides -- they were both very accurate. So scoring shots wasn't a factor here.

I think the compensation for inaccuracy is a good addition to the algorithm, and it's also good that Squiggle tries to keep the algorithm simple and focus on the key elements. The picture is more complex, though. Melbourne have been very accurate this year, and that has boosted their winning margins, but a big part of the reason is the way they play: they score a lot from very close range, and that's why they don't miss much.
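As a rough sketch of the general idea (not Squiggle's actual formula; the 53% conversion rate below is an assumed league average), the compensation amounts to re-scoring each side's scoring shots at average accuracy:

```python
# Sketch: re-score a side's scoring shots at league-average accuracy, so the
# margin isn't distorted by unusually good or bad goalkicking on the day.
AVG_GOAL_RATE = 0.53  # assumed share of scoring shots that become goals

def shot_adjusted_score(goals: int, behinds: int) -> float:
    """Points a side would score from the same shots at average accuracy."""
    shots = goals + behinds
    points_per_shot = AVG_GOAL_RATE * 6 + (1 - AVG_GOAL_RATE) * 1
    return shots * points_per_shot

# Hypothetical example: 10.15 (75 actual points) is 25 scoring shots,
# worth about 25 x 3.65 = 91 adjusted points.
print(round(shot_adjusted_score(10, 15)))  # 91
```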
 


By now there's still some hangover effect from 2017, but it's not huge.

This is a Squiggle of 2018 results alone, i.e. with all teams starting at 50 Attack and 50 Defence.

[Image: SoGdtt4.png - Squiggle of 2018 results]

This is glorious for North.
 
Agree. It tells me that we are on the right path this year.

Richmond and Melbourne are scary good according to Squiggle (all hail thee). Will be interesting to see if they fall back to the peloton later in the year.

This week is huge. Squiggle is tipping Geelong to win 81-70. Be interesting to see what happens.
 
There was a site that looked into this question of accuracy and shot quality, http://figuringfooty.com/, whose author had basically mapped every shot location and result he could to determine the percentage chance of an average AFL player kicking a goal from any position. The results aren't exactly a surprise (if you go further out and on an angle, it's harder to kick a goal), but he used them to compare a side's actual scores to the scores that would be expected if an average player took the same shots. If a team gets more shots in more successful areas, its expected score would rise.
I'd love to see those expected scores put into a squiggle algorithm, to see what would change, but the site owner was picked up by a club so I doubt it is possible in the near future.
 
My view of those Hawthorn sides is that they were great sides all year that became exceptional during finals and then simply unstoppable on GF day - 2014 and 15 specifically. That ability to turn it on in finals regularly is what sets them apart from other great sides for me. The last 2 years have seen the Bulldogs and Tigers noticeably 'timing their run', but this trend goes back to at least those Hawthorn sides.
Hawthorn actually had a couple of close prelims though, but maybe those are the close losses of the season being converted.

Definitely turned up to play on GF days...except for 2012...
 
Champion Data's 'expected score' is printed for every match in the Herald Sun post-game.
 
Let me start out by saying I love me some Squiggle; I just have some questions about the Squiggle's rating methods.

It's my understanding that the motion of a team on the squiggle is purely isolated to the individual game said team is playing that week. Is there potential to get a more accurate rating by having a dynamic rating system, in which week-on-week results affect previous weeks' projected scorelines?

I'll give a bit of an example to outline what I mean; this example focuses on the matches leading up to a round 3 matchup:

Team A:
Round 1 - Beats a bottom 4 team
Round 2 - Beats a bottom 4 team
Round 3 - Beats Team B: 95 - 75 (Projected Score - L 55 - 80)

Team B:
Round 1 - Beats a Top 4 team
Round 2 - Beats a Top 4 team
Round 3 - Loses to Team A: 75 - 95 (Projected Score - W 80 - 55)

You would expect Team A to get a nice boost on the squiggle, as they would have most likely been projected to take an L (the projected score I have picked is a placeholder for a general idea). However, what if the next 3 weeks' results go like this:

Team A:
Round 4 - Beats a Top 4 team
Round 5 - Beats a Top 4 team
Round 6 - Beats a Top 4 team

Team B:
Round 4 - Loses to a bottom 4 team
Round 5 - Loses to a bottom 4 team
Round 6 - Loses to a bottom 4 team

These 3 rounds' outcomes would suggest that Team B was actually not as good as the squiggle originally rated them when it generated the projected score, and hence Team A should not have received the squiggle movement they did.

In this scenario, if the squiggle was able to retrospectively compute a projected score for the game in round 3 as though the results of the following 3 weeks had occurred prior to the game, isn't there potential to get a more accurate rating for a team?

Again, love me some squiggle, just stuck in Sydney airport with too much time on my hands. Would love to get some opinions from other posters.
 
Yes, "Expected Score" is a more sophisticated method of doing the same thing, i.e. removing some random noise from goalkicking accuracy. It's more popular in soccer.

To get an Expected Score, you rate every shot taken based on a range of factors including where it was taken from and how much pressure the kicker was under at the time. So if the data tells you that a scrambled kick under pressure from 40 metres out on an angle results in a behind 50% of the time, a goal 25% of the time, and out of bounds 25% of the time, then that shot has an Expected Score of 2 points, which is (1 x 0.5) + (6 x 0.25) + (0 x 0.25). Then every time your model sees a shot like that, it should score it as 2 points, regardless of whether in reality it scored a goal, a behind, or nothing, on the theory that you're removing the luck component.
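To make that arithmetic concrete, here's a toy version in Python (the shot profiles beyond the one described above are made up):

```python
# Toy Expected Score calculation, matching the arithmetic described above.
GOAL, BEHIND = 6, 1

def expected_points(p_goal: float, p_behind: float) -> float:
    """Average points a shot of this type yields over many attempts."""
    # Whatever probability is left over is no score (out on the full etc.)
    return GOAL * p_goal + BEHIND * p_behind

# The pressured 40m shot from above: 25% goal, 50% behind, 25% nothing.
print(expected_points(p_goal=0.25, p_behind=0.50))  # 2.0

# A team's Expected Score is then the sum over every shot it took.
shots = [(0.25, 0.50), (0.60, 0.30), (0.45, 0.45)]  # hypothetical shot profiles
print(sum(expected_points(g, b) for g, b in shots))  # 9.05
```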

Champion Data produces Expected Scores and publishes them for most games in the Herald Sun. I know these are used by at least one well-known public model: Ratings Software / AFL Live Ladders. Figuring Footy, as you mentioned, produced his own Expected Scores using a combination of public and CD data until he was hired by Port.

Expected Scores require a fair bit of human judgement, like a lot of "pressure" stats, so I do wonder a little about how objective they are. They also suffer from a bit of generalization because (AFAIK) they're not individualized for players, so Eddie Betts having a shot from the pocket gets the same Expected Score as everyone else.

The North v Brisbane game last weekend was an interesting one, because the real score was 141 to 87, but CD's Expected Score was 103 to 100! That is, CD says a true measure of that contest was a 3-point win to North, not a 54-point win. Which is a heck of a difference. On Squiggle's scoring shots metric, it was a 36-pt win.
 
It's come up before but requires so much work to implement and test that I've never done it properly. The lowest-hanging fruit for Squiggle at the moment is probably player-based ratings, so it could adjust based on who's in and out of the team this week.
 


Sydney +43 v Carlton
Western Bulldogs v Melbourne +49
Hawthorn v Port Adelaide +3
Gold Coast v Geelong +22
Essendon v Richmond +26
West Coast +58 v St Kilda
North Melbourne +31 v Brisbane
Collingwood +25 v Fremantle
Adelaide +26 v GWS

7/9. Was a kick out on Hawthorn, and lost on Adelaide's continuing form slump

1. Richmond 30.4 (+2)
2. Melbourne 30.3 (-1)
3. West Coast 25.2 (-1)
4. North Melbourne 19.7
5. Geelong 16.5 (+2)
6. Collingwood 11.9 (+2)
7. Sydney 11.45 (-2)
8. Port Adelaide 9.3 (-2)
9. Hawthorn 3.5 (+1)
10. Adelaide -2.8 (-1)
11. Essendon -4.5
12. GWS -7.9 (+2)
13. Brisbane -11.8 (-1)
14. St Kilda -12.0 (+3)
15. Fremantle -13.3 (-2)
16. Western Bulldogs -22.8
17. Carlton -23.6 (+1)
18. Gold Coast -27.0 (-3)

Port Adelaide v Richmond +15
Geelong v North Melbourne +3
GWS +28 v Gold Coast
St Kilda v Sydney +18
Brisbane v Essendon +1
Fremantle +2 v Adelaide
Melbourne +18 v Collingwood

And the predictive ladder

1. Richmond 17.9 (+1)
2. West Coast 17.8 (-1)
3. Melbourne 17.0
4. North Melbourne 15.1
5. Sydney 14.0
6. Geelong 13.9 (+1)
7. Port Adelaide 13.44 (-1)
8. Collingwood 13.36
9. Hawthorn 12.9
10. Adelaide 10.2
11. GWS 9.7 (+2)
12. Essendon 8.4 (-1)
13. Fremantle 8.0 (-1)
14. Western Bulldogs 5.6 (+1)
15. Brisbane 5.42 (+1)
16. St Kilda 5.37 (+2)
17. Gold Coast 5.0 (-3)
18. Carlton 4.8 (-1)
 

Melbourne missed the 8 because Adelaide had a junk time game: resting key players allowed WC to boost their % and grab that last spot.
 
Fundamentally disagree. It was Melbourne's responsibility to finish high enough to play finals, not the Crows' to get them there. When it's that close, an infinite number of things could have changed the outcome.
 
Just curious... Does it factor in the various grounds, just home and away, or simply overall form? I.e. Geelong could play a home game at any of Etihad, the G or Kardinia, and I'd assume that their form at each of the 3 varies (strong at Kardinia, good at the G and decent at Etihad). I'm not particularly well versed in all of this.
 
As of this year, it uses a "familiarity" algorithm that assigns home ground advantage based on how often the team has played at this venue and in this state in recent years.

That's a small upgrade on Squiggle v1, which used a simple 12-pt bonus for home games against interstate opposition.

It doesn't attempt to figure out which teams play which grounds better, so for example doesn't give Sydney any special bonus for playing at Kardinia, despite their positive track record there. Some models do that, but I've never found it very reliable.

This week there are five games with HGA of 10-12 points, with the Saints getting only 8 points vs Sydney due to the Swans playing at Docklands a bit, while Melbourne v Collingwood at the G is basically neutral.
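As a guess at the shape of such a familiarity algorithm (the windows, weights, and 12-point cap below are illustrative assumptions, not the actual coefficients):

```python
# Illustrative sketch of a familiarity-based HGA. The 20/40-game windows,
# 70/30 weighting, and 12-point cap are assumptions for demonstration only.

def familiarity(venue_games: int, state_games: int) -> float:
    """0..1 measure of how 'at home' a team is, from recent games played."""
    venue_part = min(venue_games, 20) / 20   # games at this exact venue
    state_part = min(state_games, 40) / 40   # games anywhere in this state
    return 0.7 * venue_part + 0.3 * state_part

def home_ground_advantage(home: tuple, away: tuple, max_hga: float = 12.0) -> float:
    """Points credited to the home side: the familiarity gap, scaled to a cap."""
    gap = familiarity(*home) - familiarity(*away)
    return max_hga * max(gap, 0.0)

# e.g. a Docklands tenant hosting Sydney: the Swans' occasional games at the
# venue shrink the gap, so the HGA lands below the maximum.
print(round(home_ground_advantage(home=(20, 40), away=(7, 10))))  # 8
```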
 
Melbourne missed the 8 because Adelaide had a junk time game. Resting key players allowing WC to get the % to get that last spot.
melbourne had themselves to blame, junk time is an excuse for the weak-minded
 



If Melbourne had beaten Collingwood, as they should have, then the WC vs Adelaide game would've been meaningless.
 

So player-based ratings next week???

:D
 

My algorithm avoids this problem by evaluating every past result against the team's current rating. That has problems too - many teams genuinely play well or badly for part of the season. One day I might put in the effort to produce a hybrid version.
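For what it's worth, one toy reading of "evaluating every past result against the current rating" is a repeated re-fit over the whole season, something like this (the 0.1 learning rate and the simple margin model are assumptions, not the actual algorithm):

```python
# Toy re-fit: keep re-scoring every past game against today's ratings until
# they settle, instead of making a single forward pass through the season.

def refit_ratings(games, ratings, lr=0.1, sweeps=100):
    """games: [(home, away, actual_margin)]; ratings: {team: points above average}."""
    for _ in range(sweeps):
        for home, away, margin in games:
            error = margin - (ratings[home] - ratings[away])
            ratings[home] += lr * error / 2  # split the correction evenly
            ratings[away] -= lr * error / 2
    return ratings

# Every sweep revisits Round 3 with post-Round-6 ratings, so Team A's upset
# win over Team B gets re-valued once Team B's later losses drag it down.
ratings = refit_ratings([("A", "B", 20), ("B", "C", -30)], {"A": 0.0, "B": 0.0, "C": 0.0})
```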
 
This may have been answered, but how did Richmond jump so dramatically after Round 4?

Yes they won 110-17 but it was against Brisbane at the M.C.G. - surely that is not worthy of such a jump! If they had beaten a decent team away from home by a similar margin then I might understand such a jump relative to the competition
 

Holding any team to less than 3 goals will have you smash any Squiggle prediction.

Thus the huge defence gain.
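A sketch of the mechanics (the update rate and the predicted score below are hypothetical, not Squiggle's actual numbers):

```python
# Toy version of the defence update: a rating moves in proportion to how far
# the conceded score landed from the model's prediction.
K = 0.2  # assumed responsiveness of a rating to a single game

def update_defence(defence: float, predicted_against: float, actual_against: float) -> float:
    """Higher defence = better here; conceding far less than predicted lifts it."""
    error = predicted_against - actual_against  # positive = defended better than expected
    return defence + K * error

# Hypothetical: if the model expected ~75 points conceded and Brisbane managed
# only 17, that's a 58-point error from a single game.
print(update_defence(defence=50.0, predicted_against=75, actual_against=17))  # 61.6
```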
 
