Certified Legendary Thread Race for the flag, in squiggly lines


I sat through that no-goal-after-quarter-time effort at Kardinia, and sat through Geelong handing us our lowest score at home the other year as well. I cannot for the life of me understand us being favourites with the bookies.
I agree, I'm putting some coin on Geelong this week purely for the value. The line bet isn't worth it, since you guys seem to either win big or lose. Should be more like an even-money game in my opinion.
 


I sat through that no-goal-after-quarter-time effort at Kardinia, and sat through Geelong handing us our lowest score at home the other year as well. I cannot for the life of me understand us being favourites with the bookies.

I agree, I cannot see you blokes winning. But that is because of Geelong's past form and your blokes' general shitness last year.

I think it will be close, and if you blokes do get up I won't be surprised. Hard to tip this one.
 
Running at ~65% at the present time.
Any idea how it is faring against other "predictors" out there?
It's currently 3-4 tips ahead of the three computer models I know of (Massey, FootyForecaster, FMI). Versus humans, and just eyeballing it, I think it's around top 25% compared to The Age and the Herald-Sun expert tipsters. It's a few behind Roby from the Power Rankings thread, who got off to a flier this year.

65% is a little below the long-term average (69% over the last 20 years), but some years are more predictable than others. The worst year in the last two decades was 1997 (60%) and the best was 2012 (78%). The last five years have been very predictable, historically speaking.

What I hope people do with the squiggle, though, is not blindly take its tips, but use it as a tool! The squiggle only knows who played where and what the score was; it doesn't know who's in and out, who has the mozz on whom, who needs the win more, and so on. So when you know better, tip differently! A human plus a model should always do better than either alone.

The squiggle isn't smarter than you are, it just has a better memory.
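
For anyone wondering what "only knows who played where and what the score was" looks like in practice, here's a minimal Elo-style sketch in Python. To be clear, this is not the squiggle's actual code — the k, home advantage, and 400-point scale are generic placeholders:

```python
# Minimal Elo-style tipping sketch. Not the squiggle's actual code;
# k, home_adv and the 400-point scale are generic placeholder constants.

def expected_win_prob(rating_home, rating_away, home_adv=35.0):
    """Probability the home team wins, logistic in the rating gap."""
    gap = rating_home + home_adv - rating_away
    return 1.0 / (1.0 + 10.0 ** (-gap / 400.0))

def update_ratings(rating_home, rating_away, margin, k=20.0, home_adv=35.0):
    """Nudge both ratings toward the result. margin = home score - away score."""
    p = expected_win_prob(rating_home, rating_away, home_adv)
    result = 1.0 if margin > 0 else (0.0 if margin < 0 else 0.5)
    delta = k * (result - p)
    return rating_home + delta, rating_away - delta

# Usage: start every club at 1500, replay the fixture round by round,
# and tip whichever side has expected_win_prob > 0.5 each week.
```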
 


Final Siren, what's your perspective on using objective player measurements in ranking tools? For example, using SuperCoach or Dream Team points to weight players, creating a combined player score so that ins and outs have greater impact. Gold Coast are missing their best players, and those who are playing aren't good enough, so their team rating should be significantly lower than their opposition's. It would add a lot more overhead, no doubt, but it might add a little more accuracy to some tips.
 
Final Siren, what's your perspective on using objective player measurements in ranking tools? For example, using SuperCoach or Dream Team points to weight players, creating a combined player score so that ins and outs have greater impact. Gold Coast are missing their best players, and those who are playing aren't good enough, so their team rating should be significantly lower than their opposition's. It would add a lot more overhead, no doubt, but it might add a little more accuracy to some tips.
Although I vaguely remember some rumblings earlier in the thread that adding more variables doesn't necessarily make a model more accurate. Still, this is one variable that should be given some serious thought.
 
Final Siren, what's your perspective on using objective player measurements in ranking tools? For example, using SuperCoach or Dream Team points to weight players, creating a combined player score so that ins and outs have greater impact. Gold Coast are missing their best players, and those who are playing aren't good enough, so their team rating should be significantly lower than their opposition's. It would add a lot more overhead, no doubt, but it might add a little more accuracy to some tips.
It's definitely worth exploring. It means tracking a crapload of players, rather than just 18 teams, and I'd want at least 20 years of data, but it's doable.

It would also be interesting to see how players are associated with team performance. Dreamteam stats are okay, but you really want to identify players who lift their team, even if they don't necessarily get a lot of the ball.
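
If anyone wants to tinker with the combined player score idea in the meantime, the simplest version is probably just nudging a team's rating by the rating points of its ins and outs. A rough sketch — the scale factor is a free parameter you'd have to fit against historical margins, and player_avgs is whatever per-player number you trust (SuperCoach averages, AFL Player Ratings, etc.):

```python
# Sketch of the 'combined player score' adjustment: shift a team's
# rating by the rating points of its ins and outs. `scale` is a free
# parameter you'd need to fit against historical margins.

def adjusted_team_rating(base_rating, ins, outs, player_avgs, scale=0.5):
    """base_rating: the team's existing model rating.
    ins, outs: player names changed since the previous selected side.
    player_avgs: dict mapping player name -> average rating points."""
    delta = sum(player_avgs.get(p, 0.0) for p in ins) \
          - sum(player_avgs.get(p, 0.0) for p in outs)
    return base_rating + scale * delta

# e.g. losing two 90-point players for a 40-point debutant:
# delta = 40 - 180 = -140, so the team drops 70 rating points at scale=0.5.
```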
 
It's definitely worth exploring. It means tracking a crapload of players, rather than just 18 teams, and I'd want at least 20 years of data, but it's doable.

It would also be interesting to see how players are associated with team performance. Dreamteam stats are okay, but you really want to identify players who lift their team, even if they don't necessarily get a lot of the ball.
'True player value' > AFL Player Rating points > Champion Data ranking points > Dream Team points
 
'True player value' > AFL Player Rating points > Champion Data ranking points > Dream Team points
At the start of this year I looked at using AFL Player Ratings as a tipping model. I was just using the pre-game averages for all 22 players and "tipping" whichever team had the greater total. The difference in total Rating points between the two teams within a single game (i.e. the players' actual in-game ratings, not their averages) has an R-squared of 0.95 against the final margin; Ranking points (SuperCoach) is 0.80 and Dream Team / AFL Fantasy is 0.66.

It ran between 71% and 73% for 2014, 2013 and 2012. Before then there isn't enough data to be useful, since Ratings can only be calculated back to 2010. Of the 60-odd games where it disagreed with the bookies, the bookies were correct 33-29 (or a similar ratio; I don't have the exact numbers in front of me). After actually tracking it through this season, though, it's going terribly, running at only 63% (45/72).
It might be a bad model that fluked a few years, but this year I'm blaming:
  • Not having the dedication to change tips after final teams are announced. The tip can change significantly when a couple of players are missing.
  • Not having any ability to assess new players or off-season improvement of existing players. I haven't done any research into whether previous seasons were also worse at the start of the year.
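
For reference, the tipping rule described above comes down to a few lines. This is my reading of it, not chunkychicken's actual code, and avg_rating is a hypothetical data feed:

```python
# My reading of the rule above: sum the pre-game average AFL Player
# Rating points for each named 22 and tip the higher total.
# avg_rating is a hypothetical feed of player -> pre-game average.

def tip_game(home_22, away_22, avg_rating):
    """home_22, away_22: lists of 22 player names (final teams, ideally).
    avg_rating: dict mapping player name -> pre-game average Rating points."""
    home_total = sum(avg_rating.get(p, 0.0) for p in home_22)
    away_total = sum(avg_rating.get(p, 0.0) for p in away_22)
    return "home" if home_total >= away_total else "away"

# Re-running this after final teams are announced addresses the first
# drawback above; missing players defaulting to 0.0 is exactly the
# new-player problem in the second.
```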
 
'True player value' > AFL Player Rating points > Champion Data ranking points > Dream Team points
That just defeats the purpose of it being objective, because then you're the one deciding who has more value to the team than others. Might as well just trot off to Roby's thread and his years of trolling nonsense.
 
It's definitely worth exploring. It means tracking a crapload of players, rather than just 18 teams, and I'd want at least 20 years of data, but it's doable.

It would also be interesting to see how players are associated with team performance. Dreamteam stats are okay, but you really want to identify players who lift their team, even if they don't necessarily get a lot of the ball.

No doubt about that, and I don't know anywhere you can actually get that information from, or at least scrape it easily. There are a few stats sites out there, but they have to manually input data for every player, and that's just balls. I guess you could scrape the AFL site after a game. At least the score should only affect the probability once the normal inputs have been taken into account, but it means future forecasting becomes moot because you can't predict who's in or out.

Guess it becomes more like Roby's model.
 
Not having the dedication to change tips after final teams are announced. The tip can change significantly when a couple of players are missing.

chunkychicken, seems like Roby changing his tips is only fair, then.
Not having any ability to assess new players or off-season improvement of existing players. I haven't done any research into whether previous seasons were also worse at the start of the year.
This is the biggest drawback of any model: it can't predict the natural improvement or decline of player groups as their careers evolve. But you could conceivably do it with enough data.

- Determine an average rating for 1st-year players based on their age and draft position. This might help predictions in the first few rounds, where a highly talented 1st-round draft pick or a talented mature recruit might be the edge some sides need to beat others. For the latter case I'm thinking players of the quality of vanDenberg, Podsiadly or Barlow.
- Calculate expected improvements for players under 100 games as they go through their 1st, 2nd and 3rd preseasons, based on previously observed data.

Then you could get a rough approximation of a player's rating without data for that player backing it up. You would need multiple careers' worth of data, though, which is probably difficult to do now. In a decade's time, though, you could.
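
Something like this shape, maybe — every number below is an invented placeholder, and the real values would come from binning those multiple careers' worth of data:

```python
# Sketch of projecting a rating for a player with little or no data.
# Every number here is an invented placeholder; real values would come
# from binning historical careers by recruit type and preseasons.

ROOKIE_BASELINE = {          # average 1st-year rating by recruit type
    "top10_pick": 65.0,
    "later_pick": 52.0,
    "mature_recruit": 70.0,  # the Podsiadly / Barlow case
}
PRESEASON_GAIN = [0.0, 8.0, 6.0, 4.0]  # expected gain per extra preseason

def projected_rating(recruit_type, preseasons, observed_avg=None):
    """Prior from recruit type plus an improvement curve, blended 50/50
    with the observed average once any real data exists."""
    capped = min(preseasons, len(PRESEASON_GAIN) - 1)
    prior = ROOKIE_BASELINE[recruit_type] + sum(PRESEASON_GAIN[:capped + 1])
    if observed_avg is None:
        return prior
    return 0.5 * prior + 0.5 * observed_avg
```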

I'm of the view that most "upsets" are young teams playing closer to their expected potential. I doubt any model picked GWS over Hawthorn, but if you took into account where GWS's young group should be based on draft position, age, and number of preseasons and games played, then maybe a model would have predicted it.

I recall Ron The Bear having stats on how often a younger side beat a much older one. It was tilted towards the older side, something like 4 wins for every one the younger side gets.
 
What about looking at club B&F results instead of (or combined with) something like the AFL Player Ratings? This would give a much better reflection of the worth of someone like Eric MacKenzie or Luke McPharlin.
 
I recall Ron The Bear having stats on how often a younger side beat a much older one. It was tilted towards the older side, something like 4 wins for every one the younger side gets.

The advantage depends on the age difference. It's a near-linear progression.

Diff (yrs) | P     | W    | L    | D   | Win %
< 0.5      | 4620  | 2368 | 2210 | 42  | 51.71
0.5 - 1    | 3938  | 2161 | 1735 | 42  | 55.41
1 - 1.5    | 2732  | 1589 | 1110 | 33  | 58.77
1.5 - 2    | 1702  | 1073 | 612  | 17  | 63.54
2 - 2.5    | 916   | 602  | 303  | 11  | 66.32
2.5 - 3    | 451   | 314  | 133  | 4   | 70.07
3+         | 293   | 231  | 59   | 3   | 79.35
Totals     | 14652 | 8338 | 6162 | 152 | 57.43

(Win % counts a draw as half a win.)
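
To put a number on "near-linear": a quick least-squares fit through the bin midpoints of that table comes out at roughly 8 percentage points of win probability per year of average-age advantage (treating 3.5 as a stand-in midpoint for the open-ended 3+ bin):

```python
# Least-squares fit of Win % against the age-gap bin midpoints above.
# 3.5 is an assumed stand-in midpoint for the open-ended "3+" bin.

midpoints = [0.25, 0.75, 1.25, 1.75, 2.25, 2.75, 3.5]
win_pct   = [51.71, 55.41, 58.77, 63.54, 66.32, 70.07, 79.35]

n = len(midpoints)
mean_x = sum(midpoints) / n
mean_y = sum(win_pct) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(midpoints, win_pct))
         / sum((x - mean_x) ** 2 for x in midpoints))
intercept = mean_y - slope * mean_x
print(f"win% ~= {intercept:.1f} + {slope:.1f} * age_gap")
# prints: win% ~= 49.0 + 8.2 * age_gap
```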
 
