Certified Legendary Thread: Race for the flag, in squiggly lines

Are you talking about Champion Data scores or AFL Player Ratings?

Because CD try to quantify the value of individual possessions and acts to determine a player's worth. Whereas I don't think the AFL Player Ratings are that transparent, and they seem pretty dodgy (IMO).
Didn't someone mention earlier that the player ratings are meant to represent actual points (as in six points = a goal) a player "earned" for their team? It makes sense that players from Fremantle would be underrated, as they will generally set up/score fewer goals, and then fail to pick up points for the team's defensive efforts/structures/gameplan that prevent the opposition from scoring.
 
That's an interesting point. Very difficult to give ranking to players denying space ahead of the ball and forcing a turnover through indecision.
 
Even comparing players in the same areas of the ground has its flaws. How do you compare a rebounding defender to a lockdown KPD, or an inside mid who extracts the ball to an outside mid who runs and carries? I'm sure it could be broken down even further within those sub-types, to the point where every footballer is a beautiful, unique snowflake.
Yeah exactly. It's why I suggested using an overall team score based on metrics rather than subjective measurements. It shouldn't be used as a key determining factor, but it should add some value all the same.
 

Team score is still subjective. A team that has an 80-20 win in wet conditions gets rated as well defensively by a score-based model as one that wins 80-20 under the roof.
 
AFL Player Ratings only change value when a player actually plays. There isn't any decay factor, which is why Brent Harvey sat in the top 5 players in the AFL for a long time, despite the fact that he wasn't even in the top 50 for just about anyone who follows football. He wasn't even among North's 5 best players.

That's not entirely true. It factors in the performance of the 40 most recent games in a rolling window; it doesn't simply continue to accumulate over the course of a career. So your comment about Harvey is completely facetious, as he would need to perform consistently when he plays, otherwise his rating would drop.
 
Not really, if you give a weight to every stat, like DT or SC do. That way you're just leveraging their existing fantasy systems.
If team A has a higher score than team B plus team B's home-ground advantage (where A is the away team), then A should win more than 50% of the time, up to nearly 100%. Obviously you can't guarantee 100%.
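That "more than 50%, up to nearly 100%" behaviour is exactly what a logistic curve over the rating gap gives you. Here's a minimal sketch of the idea; the function name, the 6-point home advantage, and the scale factor are all hypothetical illustrations, not anyone's actual model:

```python
import math

def win_probability(rating_a, rating_b, home_advantage=6.0, scale=25.0):
    """Probability that away team A beats home team B, given aggregate
    team ratings (e.g. summed fantasy-style scores) and a home-ground
    advantage in the same units. All numbers are illustrative."""
    gap = rating_a - (rating_b + home_advantage)
    # Logistic curve: a gap of 0 gives exactly 50%, and large gaps
    # approach (but never reach) 100% -- no result is guaranteed.
    return 1.0 / (1.0 + math.exp(-gap / scale))

# Away team rated 40 points clear of the home team plus its advantage:
p = win_probability(340, 294)  # gap = 340 - (294 + 6) = 40
```

The scale parameter controls how quickly a rating gap translates into confidence; in a real model you'd fit it against historical results rather than picking it by hand.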

You can still apply probability factors for weather and playing conditions if you have a historical record of them, but they become almost moot. For example, Sydney might play once or twice at Etihad, and their record there in recent years is very good, whereas St Kilda might play there 10 or 11 times and win maybe 30% of their games. Is that because St Kilda is worse in those conditions? Not really, no.

That's when you're getting into pretty irrelevant data. The most important piece of data is motivation, which is impossible to predict or measure.
 
His 40 most recent games were spread over 3, nearly 4 seasons. He wasn't even in the top 5 or 10 at his own club in that time. The AFL player ratings are full of crap anyway: they just single out a few of the really good midfielders and assign a pretty ordinary score to everyone else.
 

Nearly 4 seasons? Also not true. Most players play 40 games over the course of 2 seasons; Brent Harvey had his spread over 3, only missing games through suspension, I believe. Try as you might to invalidate his rating, it has nothing to do with longevity or games played; he's on a level playing field with any other player with over 40 games' experience in the league.

As for the AFL player ratings: they're not great, but to be honest they're more of a tool to evaluate players on a position-by-position basis.
The AFL website even does this. There's no point using them to compare a midfielder with a defender, but comparing one defender to another using the ratings is totally valid.
 
"A player's rating is determined by aggregating his points tally based on a rolling window of the previous two seasons. For example, after round six of the 2014 season, a player's rating will be based on matches from round seven of the 2012 season onwards"
 
It's currently 3-4 tips ahead of the three computer models I know of (Massey, FootyForecaster, FMI). Versus humans, and just eyeballing it, I think it's around top 25% compared to The Age and the Herald-Sun expert tipsters. It's a few behind Roby from the Power Rankings thread, who got off to a flier this year.

65% is a little below the long-term average (69% over the last 20 years), but some years are more predictable than others. The worst year in the last two decades was 1997 (60%) and the best was 2012 (78%). The last five years have been very predictable, historically speaking.

What I hope people do with the squiggle, though, is not blindly take its tips, but use it as a tool! The squiggle only knows who played where and what the score was; it doesn't know who's in and out, who has the mozz on whom, who needs the win more, and so on. So when you know better, tip differently! A human plus a model should always do better than either alone.

The squiggle isn't smarter than you are, it just has a better memory.

It's more than a few. It's actually four. But don't worry mate, this week you should be back almost even: your Tigers are on a roll and should get up, there's one. Dr Jekyll North should turn up this week, there's two. And of course not even I would tip the Dees, but the Power Rankings say otherwise.
 

Just something completely random: are there records of which team won the coin toss in each game?
I wonder whether the team that wins the coin toss wins more or less often, i.e. does it really matter?
I hope you aren't comparing the toss of a coin to form and ladder position.
 
What Roby offers isn't an actual model, though. Roby is a person plus a spreadsheet. For example, here is the author of FMI saying their model is tipping Carlton, but they personally think GWS will win. Then, sure enough, the Giants get up, and FMI doesn't count that as a correct tip, because the model got it wrong...

I wouldn't say I 'personally thought GWS would win', but the form indicators for the two teams were so diametrically opposed that the Carlton tip was all but expected/known to be wrong.
And it's certainly not counted as correct when the model said otherwise. What it does indicate is that there needs to be a more accelerated adjustment for teams that get on a good run. And perhaps that is the hardest part of any 'non-emotive' tipping predictor: just when is a team really doing well enough, versus just having a smaller purple patch (e.g. the Bulldogs, who went 4-1, then 0-3)?
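One common way to get that "more accelerated adjustment" is an exponentially weighted average of recent margins, which reacts to a run of results faster than a plain rolling window does. A minimal sketch, with an assumed smoothing factor (nothing here is any particular model's actual formula):

```python
def form_rating(margins, alpha=0.3):
    """Exponentially weighted form: each new result pulls the rating
    toward that game's margin, so a hot (or cold) streak shows up
    quickly. alpha is a hypothetical smoothing factor in (0, 1];
    higher alpha means faster adjustment, at the cost of being
    fooled by short purple patches."""
    rating = 0.0
    for margin in margins:
        rating += alpha * (margin - rating)
    return rating
```

The streak-versus-purple-patch problem in the post above is exactly the tuning of alpha: set it high and a 4-1 run rockets a team up the ratings, only for an 0-3 slump to drag them straight back down.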
 

You need a form factor affecting the probability of results. Hawthorn is still at the very top of your ratings, and it'll take a sea change to drop them, even though they're 4-4 on the ladder and will likely finish 3rd at best. Massey has the Hawks 4th, you have them 1st, Squiggle 2nd, Power Rankings 1st and FootyForecaster 1st.

It's a shame you don't have your own thread for this but you use your blog well.
 
Is the residue of 2013 and 2014 coming out of the Hawks' squiggle, and are they now in 2011-2012 territory?
I wouldn't put much weight on this round's results and the Squiggle. Both Freo's and Hawthorn's games were played in pretty awful conditions, which accentuated Freo's defensive strengths and limited Hawthorn's attacking strengths. Play both games in dry conditions and Hawthorn win by 100+, while Freo get an improved result against Adelaide but improve little defensively.
 
Liking the squiggle's thinking that the Eagles will beat the Dockers in a grand final, even though it rates Freo higher with the home Subiaco crowd.

In fact it only has the Eagles losing to Freo, and Freo losing to Hawthorn in Tassie, for the rest of the year.

If it eventuates, we may feel a little left out in Victoria, but we'll get a huge financial input when they all come over here.
 