
Certified Legendary Thread Race for the flag, in squiggly lines


All the algorithms have gone up by 20-25 correct tips, while there were 11 extra matches in 2011 and another 11 in 2012 (adding up to an increase of 22 from 2010 to 2012). Which means the algorithms are roughly tipping all the extra matches correctly.

Which means it is more accurate now. It isn't tipping the extra games at the same level of accuracy; it's doing so at an increased level of accuracy.

This is because equalisation in the AFL doesn't work as effectively as it used to.
If you look at the results, the addition of 11 games in 2011 and a further 11 in 2012 actually reduced the number of competitive games by about 11, with each expansion side having close to 20 guaranteed losses each year.

I would expect the squiggle to get back to a similar percentage of correct tips in a year or so. That level should put it around 140 to 145 correct annually.

2014 was actually close to the mark, but 2015 had another basket case in Carlton added to the mix.
 
The difference between 2010 and last year appears to be about 20 tips. With 22 extra games per year this seems about right: 18 extra tips would be expected if tipping at 80%.

 
So, if we take the red squiggle (6 points home adv.) which has been the more accurate in 2 of the past 3 seasons, would the tips simply change by 6 points each team (e.g. instead of Adelaide defeating Geelong by 5 points, Geelong defeats Adelaide by 1), or is it more complicated than that?
A little more complicated. Teams move on the chart based on how much the scores differ from expectation. Since the 6-pt algorithm has a different expectation to regular squiggle, it will move teams to slightly different positions after each interstate match. Which means it rates teams a little differently, and thus produces different tips.

This isn't a big effect, though.
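The squiggle's internal maths isn't spelled out in the thread, but the effect described above can be sketched with an Elo-style margin update. Everything here is a hypothetical stand-in — the `expected_margin` formula, the `k` step size, and the ratings are made up for illustration:

```python
# Hedged sketch: an Elo-style update showing why a different home-ground
# advantage (HGA) value moves teams to slightly different positions over
# time, and so eventually produces different tips.

def expected_margin(home_rating, away_rating, hga):
    """Predicted home-team winning margin in points (assumed form)."""
    return (home_rating - away_rating) + hga

def update_ratings(home_rating, away_rating, actual_margin, hga, k=0.1):
    """Move both teams by a fraction of the prediction error."""
    error = actual_margin - expected_margin(home_rating, away_rating, hga)
    return home_rating + k * error, away_rating - k * error

# Same match, two HGA settings: the teams end up in different positions.
h6, a6 = update_ratings(10.0, 8.0, actual_margin=20, hga=6)
h12, a12 = update_ratings(10.0, 8.0, actual_margin=20, hga=12)
print(h6, a6)    # bigger move: error = 20 - (2 + 6) = 12
print(h12, a12)  # smaller move: error = 20 - (2 + 12) = 6
```

Over a season, those small per-match differences compound, which is why the 6-point variant's tips can diverge from the regular squiggle's by more than a simple 6-point shift.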
 
Because home advantage, mostly.

Here is the regular squiggle algorithm, which awards 12 points for home state advantage, plus three variants: one that doesn't award any home advantage, one that awards 6 points, and one that awards 18 points:

[Image 7YsQh1r.jpg: year-by-year accuracy comparison of the four home-advantage variants]

They're very similar! But the yellow regular squiggle is the most successful: It's best (or equal best) in 8 of those years, and worst (or equal worst) only once.

It also has a lower error margin, which is usually a good guide as to whether an algorithm is genuinely accurate or just getting lucky.
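The thread doesn't define "error margin" exactly, but assuming it's something like the mean absolute error between predicted and actual margins, it can be computed like this (with made-up figures):

```python
# Hedged sketch: mean absolute error between predicted and actual home-team
# margins, one plausible reading of the "error margin" mentioned above.
predicted = [12, -5, 30, 8]   # hypothetical predicted home margins
actual    = [20, -1, 4, 10]   # hypothetical actual home margins

mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
print(mae)  # 10.0
```

A model can tip the right winner while still missing margins badly, which is why a margin-based error is a useful second check against luck.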

Out of interest, have you run that only for 0, 6, 12 and 18 points? Is 12 still the most accurate number if you run every number from 0 to 18?
 


Hi Final Siren, Just found a tiny glitch.

On your Tip/Predictions I got down as far as Round 11.

The predictor has Hawthorn beating North but it has given us the win to go back on top of the ladder. Hopefully the error is in our favour :D
 
Hi Final Siren, Just found a tiny glitch.

On your Tip/Predictions I got down as far as Round 11.

The predictor has Hawthorn beating North but it has given us the win to go back on top of the ladder. Hopefully the error is in our favour :D
Look at Norths % to win each game.
Add them all up.
Average them out.
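Spelled out as a sketch (the percentages are made up): summing the per-game win probabilities gives the team's expected wins on the ladder, and averaging them gives its per-game win likelihood.

```python
# Hypothetical win probabilities for North's games
north_win_probs = [0.68, 0.55, 0.30, 0.81, 0.45]

expected_wins = sum(north_win_probs)            # ladder tally
avg_likelihood = expected_wins / len(north_win_probs)
print(round(expected_wins, 2))   # 2.79 expected wins from 5 games
print(round(avg_likelihood, 3))  # 0.558 average chance per game
```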
 
So that ladder for each round isn't worked out on just the predictor results for each round?

Or maybe my first response would be clearer........ "Fuggin' wha......??":D
Final Siren plz
 
Out of interest, have you run that only for 0, 6, 12 and 18 points? Is 12 still the most accurate number if you run every number from 0 to 18?
I've run every possible value. I just give my system sanity ranges, which for home ground advantage might be 0 to 40 points, and let it try all combinations of values, variables and algorithms.
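A minimal sketch of that kind of sweep — the `tip_accuracy` scorer, the 0-40 sanity range, and the match data are all hypothetical stand-ins for whatever the real system does:

```python
# Hedged sketch: brute-force every home-ground advantage value in a sanity
# range and keep whichever tips best on historical results.

def tip_accuracy(hga, matches):
    """Fraction of matches tipped correctly with this HGA (stand-in logic)."""
    correct = 0
    for home_rating, away_rating, actual_margin in matches:
        predicted = (home_rating - away_rating) + hga
        if (predicted > 0) == (actual_margin > 0):
            correct += 1
    return correct / len(matches)

def best_hga(matches, lo=0, hi=40):
    """Return the HGA value with the highest tip accuracy."""
    return max(range(lo, hi + 1), key=lambda hga: tip_accuracy(hga, matches))

# (home rating, away rating, actual home margin) — made-up matches
history = [(10, 8, 15), (5, 12, -2), (0, 0, 9), (3, 20, -30)]
print(best_hga(history))
```

The real search presumably sweeps all variables jointly rather than one at a time, but the principle — try every value in a sanity range, score it, keep the best — is the same.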
 
So that ladder for each round isn't worked out on just the predictor results for each round?

Or maybe my first response would be clearer........ "Fuggin' wha......??":D

No, it's a probabilistic ladder based on the likelihood the squiggle assigns for each team to win each of its matches.

From the 'Info' tab on the squiggle web page:

"This is how the ladder will look if the squiggle has correctly rated every team and nobody gets better or worse.

For the home & away season, it uses a probabilistic ladder, not a simple tally of tips. Both teams are awarded a win probability from each game, so that if the squiggle thinks Hawthorn is 68% likely to beat Collingwood, it will award the Hawks 0.68 wins and the Pies 0.32 wins, increasing both teams' tally of "probable wins" by less than 1.

This is because if a team plays 10 games with 60% likelihood of winning each game, we should expect them to win about 6/10—not, as we would get if we tipped each game and tallied up the tips, 10/10. We know that upsets will happen; we just don't know when. A probabilistic ladder accounts for the likelihood that teams will sometimes unexpectedly win or lose, even though we don't know when.

This can look like a bug in the predictor, if you see a team tipped to win a match that doesn't seem to be credited. For example, a team might be on "15 (14.7)" wins, which means 14.7 "probable wins" rounded off to 15. (Rounding occurs so that teams can be secondarily ranked by their percentage.) And then that team is tipped to win the following week, but it remains on 15 wins, now "15 (15.3)". What has happened is the number of probable wins hasn't risen by enough to be rounded to a higher number. It has earned 0.6 more probable wins, but this still rounds off to 15. The predictor is saying it's still most likely this team will be on 15 wins, after accounting for the likelihood that some of its tips will be wrong."
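The quoted "15 (14.7)" example can be reproduced directly, assuming ordinary rounding of the probable-wins tally:

```python
# A team on 14.7 probable wins displays as "15 (14.7)".
probable_wins = 14.7
print(f"{round(probable_wins)} ({probable_wins})")  # 15 (14.7)

# Tipped to win the next game at 60%: adds 0.6 probable wins,
# but the rounded tally doesn't move.
probable_wins += 0.6
print(f"{round(probable_wins)} ({probable_wins:.1f})")  # 15 (15.3)

# And the 10-games-at-60% point: expect about 6 wins, not 10.
print(round(sum([0.6] * 10), 1))  # 6.0
```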
 
No, it's a probabilistic ladder based on the likelihood the squiggle assigns for each team to win each of its matches.

From the 'Info' tab on the squiggle web page:

"This is how the ladder will look if the squiggle has correctly rated every team and nobody gets better or worse.

For the home & away season, it uses a probabilistic ladder, not a simple tally of tips. Both teams are awarded a win probability from each game, so that if the squiggle thinks Hawthorn is 68% likely to beat Collingwood, it will award the Hawks 0.68 wins and the Pies 0.32 wins, increasing both teams' tally of "probable wins" by less than 1.

This is because if a team plays 10 games with 60% likelihood of winning each game, we should expect them to win about 6/10—not, as we would get if we tipped each game and tallied up the tips, 10/10. We know that upsets will happen; we just don't know when. A probabilistic ladder accounts for the likelihood that teams will sometimes unexpectedly win or lose, even though we don't know when.

This can look like a bug in the predictor, if you see a team tipped to win a match that doesn't seem to be credited. For example, a team might be on "15 (14.7)" wins, which means 14.7 "probable wins" rounded off to 15. (Rounding occurs so that teams can be secondarily ranked by their percentage.) And then that team is tipped to win the following week, but it remains on 15 wins, now "15 (15.3)". What has happened is the number of probable wins hasn't risen by enough to be rounded to a higher number. It has earned 0.6 more probable wins, but this still rounds off to 15. The predictor is saying it's still most likely this team will be on 15 wins, after accounting for the likelihood that some of its tips will be wrong."
Thanks! I understood nearly all of that! Cheers :thumbsu:
 
All the algorithms have gone up by 20-25 correct tips, while there were 11 extra matches in 2011 and another 11 in 2012 (adding up to an increase of 22 from 2010 to 2012). Which means the algorithms are roughly tipping all the extra matches correctly.

Which means it is more accurate now. It isn't tipping the extra games at the same level of accuracy; it's doing so at an increased level of accuracy.

This is because equalisation in the AFL doesn't work as effectively as it used to.
Tipping definitely got easier in 2011. Most of my algorithms suddenly became more accurate, with the regular squiggle jumping from a 20-year average of 67% to 78% in 2011 and 2012.

Tellingly, one algorithm that got worse was HOMER, which simply tips the home team: It fell from a 20-yr average of 60% to 57% in 2011 and 56% in 2012. That suggests to me that the gap between teams widened so that it wasn't as easily overcome by home ground advantage.

I don't think that's all because of GC and GWS, although obviously they were a big factor. The comp spread out a lot in 2011, with Collingwood (20-2), Geelong (19-3), and Hawthorn (18-4) all strong at the top end, while Port (3-19) and Brisbane (4-18) were stinking it up at the bottom alongside Gold Coast (3-19). Then in 2012 and 2013 Melbourne went from bad to historically bad.

So since 2011 we've had an unusually high number of teams who can be counted on to lose almost every match. That does seem to be evening out, but only slowly. The league is still a lot less even than it was.
 


No, it's a probabilistic ladder based on the likelihood the squiggle assigns for each team to win each of its matches.

From the 'Info' tab on the squiggle web page:

"This is how the ladder will look if the squiggle has correctly rated every team and nobody gets better or worse.

For the home & away season, it uses a probabilistic ladder, not a simple tally of tips. Both teams are awarded a win probability from each game, so that if the squiggle thinks Hawthorn is 68% likely to beat Collingwood, it will award the Hawks 0.68 wins and the Pies 0.32 wins, increasing both teams' tally of "probable wins" by less than 1.

This is because if a team plays 10 games with 60% likelihood of winning each game, we should expect them to win about 6/10—not, as we would get if we tipped each game and tallied up the tips, 10/10. We know that upsets will happen; we just don't know when. A probabilistic ladder accounts for the likelihood that teams will sometimes unexpectedly win or lose, even though we don't know when.

This can look like a bug in the predictor, if you see a team tipped to win a match that doesn't seem to be credited. For example, a team might be on "15 (14.7)" wins, which means 14.7 "probable wins" rounded off to 15. (Rounding occurs so that teams can be secondarily ranked by their percentage.) And then that team is tipped to win the following week, but it remains on 15 wins, now "15 (15.3)". What has happened is the number of probable wins hasn't risen by enough to be rounded to a higher number. It has earned 0.6 more probable wins, but this still rounds off to 15. The predictor is saying it's still most likely this team will be on 15 wins, after accounting for the likelihood that some of its tips will be wrong."
You should put this on the squiggle so people don't ask the same question every time.
 
Little bit of a furphy, that one: the difficulty of the draw surely has to be reassessed post-season.

Out of the top 8 sides, we had 3 double ups against other top 8 sides, as did Freo. North/Adelaide/Richmond had 2. Hawthorn/Sydney/Dogs only had 1 return match against other finalists!

Replace our draw with anyone else's in that top 8 and we probably finish on at least as many wins as we had.
Agree here. What might have seemed a hard or easy draw at the beginning of the season is not so now.
 

