
Mega Thread Nick Daicos - Can he be the GTWEB? Part 2

  • Thread starter: Fadge


In terms of the laughing emoji - go for your life - I didn't word it well. "It's" was referring to the Champion Data system, which is designed to evaluate teams. I was ridiculing the PhD - a PhD on the waste product of a system that doesn't work for its purpose, evaluating teams. Someone has written a PhD on its evaluation of players - which isn't actually what it's doing - it's just something they sell off, because it creates a talking point for the media and some buy it.

"And if you look at that PhD - it's not about evaluating individual players from different teams - that's not what the system is built for. It's built as an attempt to evaluate teams. The individual player stuff is just spin-off sales from a waste product."
A logical person should have understood it was Champion Data you were referring to in the first instance.

You shouldn't have had to clarify it.
 
I'm glad it says 'new method' and not 'accurate method', because when in 2025 a player kicks more goals than any other player has since 2009, and that player is only ranked the 52nd best player in 2025, it doesn't do a very good job of 'assessing player performance'.
Cameron was still the highest rated permanent key forward. Darcy and Thilthorpe got the bonus ruck points (mobile rucks always rack up points). Grouping players by position is actually really simple if your IQ is at least 70.
 
?? That isn't what you were blabbering about.

That article is about confirming the validity of the ratings in relation to predicting match outcome and margin.

The data confirmed the relationship between score margin and team rating differential. Not sure why you are posting that, as that isn't being disputed.

Even just looking at early results in 2026 that is holding true.

TRD = team rating differential

The close games have close TRD.
  • Rd 0: Dogs beat Brisbane by 5, TRD was 7
  • Rd 1: Carlton beat Richmond by 4, TRD was 7
  • Rd 0: Saints lost to Pies by 12, TRD of -4
  • Rd 1: Saints lost to Dees by 13, TRD of -11
And the beltings have similarly large TRDs
  • GC beat Cats by 56 TRD was 51
  • GC beat WC by 59, TRD was 58
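Just as a sanity check on the claim above, you can correlate the two columns. A minimal sketch, using only the six 2026 results quoted in this post (nothing official):

```python
# Margin vs team rating differential (TRD) for the six games quoted above.
games = [
    ("Dogs v Brisbane", 5, 7),
    ("Carlton v Richmond", 4, 7),
    ("Saints v Pies", -12, -4),
    ("Saints v Dees", -13, -11),
    ("GC v Cats", 56, 51),
    ("GC v WC", 59, 58),
]

def pearson(xs, ys):
    """Plain Pearson correlation, no third-party libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

margins = [m for _, m, _ in games]
trds = [t for _, _, t in games]
r = pearson(margins, trds)
print(round(r, 3))  # very close to 1: tight games have tight TRDs, beltings big TRDs
```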
You were butchering your numbers by creating ridiculous ratios that were bunkum.

The TRD isn't being questioned. What is being noted is the link with actual game style - play in an attacking, high scoring game and BOTH teams' players have more scoreboard impact, so BOTH teams get high TRs.

Back to the close games, TRD 4-11

Rd 0 - Dogs and Lions - BOTH teams scored above 100, BOTH teams had a TR above 220.
Rd 0 - Pies v Saints - BOTH teams scored under 80, BOTH teams had TRs in the 190s.
Rd 1 - Richmond and Carlton - BOTH scored under 80, BOTH teams had TRs below 190.
Rd1 - Melb v StK, BOTH teams scored above 100, BOTH teams had a TR above 230.

Saints, in a low scoring scrap where they lost by 12 with a TRD of -4, managed a TR of 190; Saints, in a high scoring game where they lost by 13 with a TRD of -11, managed a TR of 231.

It should start becoming clear that it is easier to accumulate actions that have a positive scoreboard contribution in a game where both teams are hitting the scoreboard.

The Saints v Dees game had a combined 474 player rating points, the Saints v Pies game just 384 player rating points. Playing in a high scoring game meant players shared an extra 90 player rating points amongst themselves...for a game that finished with a near-identical score margin and a similar TR differential.

Because the Saints have played 1 high scoring game and 1 low scoring game, their current 2026 avg TR is 211, but they have a % of just 87%.

The Pies, as we have played two low scoring games, have an avg TR of 195 and a % of 98...it'd be a weird stretch to try and claim the Saints have been better than the Pies so far in 2026 (but pretty easy to claim the Saints have scored more than us, backed up by their avg team ratings).
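The arithmetic in the last couple of posts checks out in a few lines. A sketch using only the figures quoted above (combined rating points of 474 and 384, and the Saints' game-by-game TRs of 190 and 231):

```python
# Extra player rating points shared in the high scoring game:
saints_v_dees = 474   # combined player rating points, high scoring game
saints_v_pies = 384   # combined player rating points, low scoring game
extra_points = saints_v_dees - saints_v_pies
print(extra_points)   # 90 extra rating points shared in the shootout

# Saints' 2026 average TR from one low and one high scoring game:
saints_trs = [190, 231]
avg_tr = sum(saints_trs) / 2
print(avg_tr)         # 210.5, i.e. the ~211 average cited
```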

You can create some more redundant ratios where you take percentage difference between averages and compare that to % of and find % difference in avg TR if you want again...
I'm not going to give an intro to stats lecture on a Nick Daicos thread, but you're approaching this the wrong way. You're isolating a one-game sample - and all of the issues it presents (such as the nature of the opponents you play against) - when the principle of this system is that it makes sense when you play all opposition teams once, as a collective population.

You're doing the statistical equivalent of claiming that Brisbane are currently one of the worst teams in football because they are 0-2. No, they're 0-2 because they've played two strong opponents. Collingwood will play games against opponents who play in matches that are high scoring in coming weeks, and get more opportunity to score more points. It doesn't mean that for that week Collingwood played a better game, but when you average out that game with all the others Pies play, it then creates an accurate measurement of the team rating.

And because every Pies fan seems incapable of following logic that flows from one point to the next:

Because the opposition play in a low scoring game, it means that you prevent opposition players from getting lots of points, meaning that the opposition players rank lower than you.

For example, take the North vs Port game. You claim it's "low scoring" and therefore unfair to, say, the North players. But we're talking about ranking here - in a low scoring game where players on both sides rank lowly, the North players still rank above the Port players. So while North players may rank below the high-scoring Brisbane players that racked up lots of points despite losing in a high scoring game to the Dogs... next week when (say) North play the Bulldogs and Brisbane play Port, the benefit is flipped, and the fairness is evened out. North players get opportunities to rack up points in a high scoring loss, and Brisbane players won't rack up many points despite winning against a low-scoring Port team. So this logic, applied over the course of a season, evens out, and applies to the rankings fairly.

You went on a very long rant to be wrong again.
 
I've said numerous times that I rate Daicos as the 4th best player despite the ratings system having him 16th. The obvious corollary of such logic of how I rate Daicos is that there would be players, like Richards, that the ratings system rates 2nd but I rate outside the top 10.

Your inability to understand what I would think as a very obvious corollary here speaks volumes about how you understand and apply the thought process of logic here.
Yes, I am unable to understand someone so adamantly using the player ratings system as definitive evidence that one player is better than another, and then dismissing it for a comparison between other players.


"It doesn't suit my argument to suggest that winning scores by big margins is good and suggests that the team doing so is actually good at the sport, because it might suggest that Bontempelli is a good player, so I'm just going to say it's not actually good."
I didn't come close to saying that. Of course it can be good. But some teams in all sports use end game strategies whereby they reduce the likelihood of a big win. I've played and coached basketball my whole life and it's still the standard strategy in any decent standard league to take time off the clock with slow offence when you've got a decent lead. TBH, I don't really like it on a basketball court or when Collingwood do it, but it's pretty clear that in the AFL, some teams do it and some don't. It also seems to me that how much you blow out teams by doesn't correlate particularly well with victory in competitive games. I haven't written a PhD on it though.
But I should take your word for it, forum dude. Do you have a PhD in statistics that gives you credentials to disagree?
You're literally just telling yourself this to make yourself feel better, rather than taking the thesis for what it is... that is, building upon the scientific exploration of others... which is what academia and its applications are. Isn't all academia that cites other work just the "waste product" of that work? How is any research that builds upon the research of others not just "waste product" according to your logic here? My god. This is incredible. I don't understand it, so it must be bad and therefore worthy of illogical and misguided derision. All because I just want to say Daicos good Bont bad like some caveman.

You need to read it again. It's not saying what you think it is. Statistics are used to show that the accumulated player ratings for a team correlated with game outcomes during the studied period. It's looking at individual games and comparisons with the direct opponent - not player averages for the season. Your thrashing stuff is not relevant at all. It's looking at outcomes of individual games. The stuff about individual players is used to show how the ratings are formed. It suggests that a team having a higher sum of player ratings than their opponent on the day correlates well with victory. That's the thesis. So it's relevant for individual games in terms of who contributed the most in that game. Its relevance to comparing players across games wouldn't be through the average, as it's only looking at win/loss correlation. Its relevance would be using it to build a Brownlow/Coaches votes style system, to see who impacts their games the most often by consistently having the highest ratings in their games.
 


For example, take the North vs Port game. You claim it's "low scoring" and therefore unfair to, say, the North players. But we're talking about ranking here - in a low scoring game where players on both sides rank lowly, the North players still rank above the Port players.

Yes!

So can you now see that if you go with average player ratings, a more attacking higher scoring team's players will average more over the course of the season than a lower scoring team's players.

Let's ignore the Bont comparison - but it's why Richards averages more than Nick Daicos. He plays in a higher scoring team and thus averages higher scores.
 
Yes!

So can you now see that if you go with average player ratings, a more attacking higher scoring team's players will average more over the course of the season than a lower scoring team's players.

Let's ignore the Bont comparison - but it's why Richards averages more than Nick Daicos. He plays in a higher scoring team and thus averages higher scores.
I can't believe you are incapable of following the logic I clearly laid out here.

We're measuring how teams and players rank against each other.

There is no advantage to playing in a team whose matches are higher scoring, because you're giving every team you come up against an opportunity for their players to get a boost in ratings points over their typical game, even if they lose. You do that for every other team, every week. Therefore, with the boost they get, they rise up the rankings, even if they lose.

There is no disadvantage to playing for a team that plays low scoring games. Because when your opposition's high points scoring players come up against you, they still might put in a match winning performance, but it's reflected in a lower player rating points game, because all of that low scoring team's games nuke the point scoring ability of the opposition. Hence an opponent's season-average ratings points might drop, even if they won.

Over the course of a season, it averages out.

About the 55th time in this thread I've had to repeat myself because you've literally just obtusely ignored what I've already said as a correction to just bleat you being wrong again 🤷
 
I'm not going to give an intro to stats lecture on a Nick Daicos thread, but you're approaching this the wrong way. You're isolating a one-game sample - and all of the issues it presents (such as the nature of the opponents you play against) - when the principle of this system is that it makes sense when you play all opposition teams once, as a collective population.
What nonsense.
You're doing the statistical equivalent of claiming that Brisbane are currently one of the worst teams in football because they are 0-2. No, they're 0-2 because they've played two strong opponents.
Completely irrelevant and nothing to do with your inane rambling about team ratings.
Collingwood will play games against opponents who play in matches that are high scoring in coming weeks, and get more opportunity to score more points.
Will we? What if it is our chosen style to play a defensive brand of football?

Last year during the H&A season, Pies games totalled almost 700 fewer points than Dogs games. Basically 5 extra goals were scored in every Dogs game compared to Pies games.
It doesn't mean that for that week Collingwood played a better game, but when you average out that game with all the others Pies play, it then creates an accurate measurement of the team rating.
Team A play a game style where they try and win 80-65 style games, and are able to do this successfully and win 16 games.

Team B are happy to try and outscore their opponent and shoot for 114-105 style wins, and are only able to win 12 games for the year.

Which team do you think would end the season with higher avg team rating?
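The Team A / Team B question above can be illustrated with a toy model. The scaling factor and defensive base below are my assumptions for illustration, not Champion Data's actual formula; the only claim carried over from the post is that total rating points scale with a team's own scoring:

```python
# Toy model: a team's average total rating points per game grow with its
# own score. Both constants here are hypothetical, chosen only to show
# the direction of the effect, not to match real Champion Data numbers.
POINTS_PER_SCORE = 2.0   # hypothetical rating points per scoreboard point
DEFENSIVE_BASE = 50.0    # hypothetical floor from pressure/intercept acts

def avg_team_rating(own_score):
    return DEFENSIVE_BASE + POINTS_PER_SCORE * own_score

team_a = avg_team_rating(80)    # wins 80-65 style games, 16 wins
team_b = avg_team_rating(114)   # wins 114-105 style games, 12 wins

print(team_a, team_b)
# Under this toy model the 12-win shootout team ends the season with the
# higher average team rating, which is the poster's point about game style.
```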
And because every Pies fan seems incapable of using logic that flows on from another:

Because the opposition play in a low scoring game, it means that you prevent opposition players from getting lots of points, meaning that the opposition players rank lower than you.
No shit.
That is OUR point.
For example, take the North vs Port game. You claim it's "low scoring" and therefore unfair to, say, the North players. But we're talking about ranking here - in a low scoring game where players on both sides rank lowly, the North players still rank above the Port players.
No you aren't talking ranking, you are talking RATINGS.

You can have a rating of 16 and be the 1st RANKED player on the ground in a low scoring struggle where ZERO players get a rating above 20.

Or you can get a rating of 16 and be the 9th RANKED player on the ground in a high scoring shootout.
So while North players may rank below the high-scoring Brisbane players that racked up lots of points despite losing in a high scoring game to the Dogs... next week when (say) North play the Bulldogs and Brisbane play Port, the benefit is flipped, and the fairness is evened out. North players get opportunities to rack up points in a high scoring loss, and Brisbane players won't rack up many points despite winning against a low-scoring Port team. So this logic, applied over the course of a season, evens out, and applies to the rankings fairly.
??
This is where you fall to pieces.
Teams don't all play the same game style, so it doesn't "balance" out.

Some teams are negative, defensively orientated teams that play low scoring games, trying to win 80-65.

Other teams try to just outscore their opponent and are happy to win 115-105.

So ALL season long, certain teams players get opportunities to rack up points in the high scoring free flowing games.

In 2025, the high scoring free flowing team who was racking up rating points was the Dogs.

Then at the end of the season the Dogs had 5 of the top 29 RANKED players according to average player rating points.

You are then using the really strong rating points to try and argue how your Dogs player is better than players from other teams who don't play the higher scoring game style that helps rack up ratings points.
 
Team A play a game style where they try and win 80-65 style games, and are able to do this successfully and win 16 games.

Team B are happy to try and outscore their opponent and shoot for 114-105 style wins, and are only able to win 12 games for the year.

Which team do you think would end the season with higher avg team rating?
You're completely misunderstanding how the player ratings system works.

You do realise that contested possessions, intercepts, pressure acts (including tackles), spoils and frees for (when they occur at the point that an opponent has possession) are all awarded points?

Therefore, a low scoring game usually occurs because players from both teams are intercepting the ball often, applying lots of pressure, and effecting spoils in marking contests. All of this gets credited in ratings points.

This is why Collingwood's defenders are all so highly rated. 8 of them in the top 100 of all defenders (when, given that there are 18 teams in total, an "average" team should only have 5-6). Because Collingwood as a team get ratings points when they do defensive actions (which generally lead to the opposition failing to score), it's just that it's the Collingwood defenders, not Nick Daicos, who are doing those actions.
You can have a rating of 16 and be the 1st RANKED player on the ground in a low scoring struggle where ZERO players get a rating above 20.

Or you can get a rating of 16 and be the 9th RANKED player on the ground in a high scoring shootout.
I understand the point you're making. Yes, you are correct. But the point you're missing is that it becomes irrelevant over the course of the season. You would be correct if I was making a point of saying that Bontempelli was the better player because he's played against high-scoring opponents in 2 games so far and Daicos low-scoring opponents in 2 games so far. Others have (incorrectly) made that point. I haven't. I've compared the whole season of last year between the two players, where both Daicos and Bont generally played opposition that, on average, did not have high scoring or low scoring games.

Some teams are negative, defensively orientated teams that play low scoring games, trying to win 80-65.
You are missing two points:

1. Opposition players get fewer points, therefore they drop in ratings compared to the rest of the league (including your own players) when they lose a game to you by not scoring many points

2. In a low scoring game, players from both teams get defensive points.

The fact that Collingwood might have played a game where their opponents only score 65 points ... does provide points, in the form of the intercepts, pressure acts and spoils that the Pies players did to prevent the opposition from scoring more than 65 points.

Have you actually considered, for a moment, that Nick Daicos's lack of defensive work is being covered by the even more outstanding defensive work of the rest of the Pies team? That work gets assigned ratings points to the Pies players doing it.
So ALL season long, certain teams players get opportunities to rack up points in the high scoring free flowing games.
And so do their opponents.

In 2025, the high scoring free flowing team who was racking up rating points was the Dogs.
The Dogs also racked up the ratings points, to be rank 1 ... because they actually were rank 1 in average margin across all games (despite missing finals), believe it or not.

Then at the end of the season the Dogs had 5 of the top 29 RANKED players according to average player rating points.

Yes.

A) We were rank 1 for average margin. On average, we scored 29.3 more points than our opponents. Ironically (given the Dogs didn't play finals), that was actually rank 1 in the 2025 season.

B) It has been pointed out very many times that the Dogs were a very top heavy team last year. 5 outstanding footballers, but lots of very poor defenders. We only had 2 defenders that ranked in the top 100 of all defenders last year, I will point out for the millionth time this thread.

If the Dogs had the Pies' defenders last year, we would have won the flag.
 
I can't believe you are incapable of following the logic I clearly laid out here.

We're measuring how teams and players rank against each other.

There is no advantage to playing in a team whose matches are higher scoring, because you're giving every team you come up against an opportunity for their players to get a boost in ratings points over their typical game, even if they lose. You do that for every other team, every week. Therefore, with the boost they get, they rise up the rankings, even if they lose.
You do realise that by playing in a high scoring team your players get the boost in rating points EVERY GAME.

Their typical game is the boosted ratings points game.

That is how Bulldogs had 5 of 29 RANKED players based on avg ratings points in 2025...
 
If anyone else is still reading this thread.

Pies fans have consistently misunderstood AFL Player Ratings points and have claimed that, because Pies games are low scoring and Western Bulldogs games are high scoring, the system overrates Bontempelli and underrates Daicos.

I'm going to prove them wrong with this simple screenshot:
[Screenshot: per-team average player rating points, average margin, linear forecast and percentage adjustment for 2025, with scatter plot]


In this picture, there are four columns. The first one is the amount of team player ratings points per game for 2025. Not differential, simple average.

The second column is, on average per game, how many more points each team scored than its opponent.

You can create a scatter plot with these two numbers, which is the graph on the right, because points per game more than your opponent clearly has a linear relationship with generating AFL Player Ratings points.

Teams to the right of and below the trend line are the ones who tend to get more points for defensive actions as a team, but this didn't mean that the opposition failed to score.

The teams above and to the left of the line are the ones that didn't register lots of ratings points for defensive actions, but in any case, opposition teams failed to score.

As you can see, Adelaide were a major outlier here.

The third column is a linear forecast of how many player ratings points that each team should have gotten given their average margin. That is, what the trend line says should have been your ratings points given what your average margin was.

For instance, Adelaide as a team only got 205.7 average ratings points per game last year. Given that they, on average, scored 23.4 more points than their opponent, they should have actually gotten 217.5 average ratings points per game.

The fourth column is that percentage adjustment.

For instance, the Western Bulldogs generated 3.8% more ratings points as a team than their margins would have suggested. It is quite possible that for all of Bontempelli's defensive acts, such as laying tackles, this didn't necessarily prevent Dogs' opposition from scoring. Collingwood, by contrast, undershot by 0.8%. For all of Daicos's lack of tackle laying, Pies' opponents in any case marginally failed to score.
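The method described above can be sketched in a few lines: fit a straight line of team rating points against average margin, forecast each team's "expected" rating points from its margin, and turn the gap into a percentage. The three (margin, rating points) pairs below are illustrative stand-ins, NOT the real 18-team table from the screenshot; only Adelaide's two quoted figures (205.7 actual, 217.5 forecast) come from the post itself:

```python
def fit_line(xs, ys):
    """Ordinary least squares slope and intercept, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def pct_adjustment(actual, forecast):
    """Positive when the trend line says you 'should' have rated higher."""
    return (forecast - actual) / actual * 100

# Illustrative fit: higher average margins go with more rating points.
slope, intercept = fit_line([-20.0, 0.0, 25.0], [192.0, 206.0, 224.0])

# Adelaide check against the figures quoted in the post:
adel = pct_adjustment(205.7, 217.5)
print(round(adel, 1))  # 5.7 — matching the 5.7% adjustment cited later
```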

Okay, so let's apply that -3.8% and +0.8% to Bontempelli and Daicos.

Bontempelli - 19.79 -> 19.06
Daicos - 15.61 -> 15.73

Earlier, Pies fans laughed at the system because players like Dawson were underrated for Adelaide. And they're probably right! With Adelaide's 5.7% adjustment, and adjusting all other players for all teams (including the Dogs), Dawson increases from 28th to about the 21st best player. So it underrated Crows players.

Have a good night Pies fans!
 
You do realise that by playing in a high scoring team your players get the boost in rating points EVERY GAME.

Their typical game is the boosted ratings points game.

That is how Bulldogs had 5 of 29 RANKED players based on avg ratings points in 2025...
Aaaaaaaaaaaaaaaaaaand I've just proved you wrong with my above post.

You are so in over your head it's ceased to be funny anymore.

Have a great night!
 
Earlier in this thread, I conceded that the Western Bulldogs were only about the 5th best team in the league last year - because piling up scoreboard points (and therefore player ratings points, which scale linearly with scoreboard points) against rank non-finals teams doesn't count for much.

The 5-7thish best team in the league averaged about 210 player ratings points, much below the Dogs' 230 per game. Approximately 10% worse.

So let's adjust Bontempelli down by 10%. Let's see what his ratings points would have been if we accept that the Dogs were only about the 5th best team in the league.

Bontempelli ratings points 19.79 -> 17.81.
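The haircut above is straightforward to verify, using only the numbers in the post:

```python
# A 10% downward adjustment to Bontempelli's quoted ratings points.
rating = 19.79
adjusted = round(rating * 0.9, 2)
print(adjusted)  # 17.81
```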

What does 17.81 rank?

That number, 17.81, is still rank 1 in the league.

Marcus Bontempelli is the greatest player in the league.
 
 


If anyone else is still reading this thread.

Pies fans have consistently misunderstood AFL Player Ratings points and have claimed that, because Pies games are low scoring and Western Bulldogs games are high scoring, the system overrates Bontempelli and underrates Daicos.
This is correct, your poor analysis even confirms this.

All you are arguing is that it doesn't overrate Bont too much🤣
You can create a scatter plot with these two numbers, which is the graph on the right, because points per game more than your opponent clearly has a linear relationship with generating AFL Player Ratings points.
You are confusing the relationship, that your own article stated.

The linear relationship is between match score margin and total team rating differential.

If that relationship wasn't there, then it would mean massive flaws in the algorithm. All it is saying is that if you win a game by 50, then yes, there is a large team rating differential.

Win a close game and the team rating differential is close...the ratings align with result. Phew!

The relationship is not between margin and total rating points as you are incorrectly trying to claim.
Teams to the right and below of the trend line are the ones who tend to get more points for defensive actions as a team, but this didn't meant that the opposition failed to score.
This is hilarious - Ess, NM and WB are to the right and below...WB were bottom 3 for tackles, spoils and defensive pressure acts.

Haven't you been arguing the Dogs were poor defensively?
The teams above and to the left of the line are the ones that didn't register lots of ratings points for defensive actions, but in any case, opposition teams failed to score
Pies and Adelaide are above and to the left, two of the strongest defensive teams.

And again you were arguing that Pies get most of our points from defensive actions earlier.

Something doesn't add up with your "analysis".
The third column is a linear forecast of how many player ratings points that each team should have gotten given their average margin. That is, what the trend line says should have been your ratings points given what your average margin was.
For instance, Adelaide as a team only got 205.7 average ratings points per game last year. Given that they, on average, scored 23.4 more points than their opponent, they should have actually gotten 217.5 average ratings points per game.
Why?

Again it is the team ratings differential that has the relationship with margin.

Adelaide in 2025 had an avg margin of 23.4 and a total team rating differential of 23.6.

Adelaide were another strong defensive team, so over the course of the season they nuked the avg player rating points both FOR themselves and for their OPPONENTS...this is what you keep pretending doesn't happen, or incorrectly claiming all evens out.

Adelaide had their total team ratings avg of 205.7 and their opponents average was down at 182.1.

Adelaide's total rating differential aligned almost exactly with their scoring margin (as your paper confirmed) which makes logical sense.
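A quick check of the Adelaide figures quoted above (sketch using only the numbers in this post):

```python
# Adelaide 2025: team rating differential should sit close to average margin.
adelaide_tr = 205.7   # Adelaide's average total team rating
opponent_tr = 182.1   # their opponents' average total team rating
trd = round(adelaide_tr - opponent_tr, 1)
print(trd)  # 23.6, against an average score margin of 23.4
```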

But being a defensive team, Adelaide nuked their own total player rating average as a result of their game style...you keep pretending this doesn't happen and everyone should be equal 🧐

Guess what, just like you have proven with Adelaide nuking their rating points to keep the differential and margin relationship...it works the other way too with high scoring attacking teams. They can boost their total player rating average as a result of game style.

Okay, so let's apply that -3.8% and +0.8% to Bontempelli and Daicos.
Thanks for confirming again that the Pies fans were correct, player ratings overrates Bontempelli and underrates Daicos.
 
Cameron was still the highest rated permanent key forward. Darcy and Thilthorpe got the bonus ruck points (mobile rucks always rack up points). Grouping players by position is actually really simple if your IQ is at least 70.
So you're happy that Jezza was considered outside the top 50 players in the competition in 2025?

And you're supporting an algorithm that concludes that?

Gotcha.
 
So you're happy that Jezza was considered outside the top 50 players in the competition in 2025?

And you're supporting an algorithm that concludes that?

Gotcha.
I'll applaud you when you make a single coherent point as well.

Cameron as the highest ranked permanent KPF sounds right, and unlike you I don't have a meltdown when Geelong players aren't number 1 in every ranking system.

When people talk about Ablett as the best midfielder so far this century, it's consensus opinion that is supported by a wide variety of metrics (stats, accolades, votes etc). It isn't some Geelong supporter assertion that we had to push down everybody's throats while they disagreed and could use evidence to support alternatives.

I factor in many things when assessing players. It would be just as foolish to live and die by the game-by-game voting systems as it would be to only use statistics/algorithms. Then it would be foolish to completely dismiss the merit of either. Balance is key. You're an extremist.
 
I'll applaud you when you make a single coherent point as well.

Cameron as the highest ranked permanent KPF sounds right, and unlike you I don't have a meltdown when Geelong supporters aren't number 1 for any ranking system.

When people talk about Ablett as the best midfielder so far this century, it's consensus opinion that is supported by a wide variety of metrics (stats, accolades, votes etc). It isn't some Geelong supporter assertion that we had to push down everybody's throats while they disagreed and could use evidence to support alternatives.

I factor in many things when assessing players. It would be just as foolish to live and die by the game-by-game voting systems as it would be to only use statistics/algorithms. Then it would be foolish to completely dismiss the merit of either. Balance is key. You're an extremist.
So what you're saying is you disregard PLaYeR RaTiNGZ when determining Jezza's standing in the competition, aside from the fact he was the highest rated forward in the competition (but that's only if you exclude Darcy and Thilthorpe, because they do ruck work as well, and the algorithm overrates ruck work).

Gotcha.
 
I can't believe you aren't capable of following the logic I clearly laid out here.

We're measuring how teams and players rank against each other.

There is no advantage to playing in a team whose matches are higher scoring, because you're giving every team you come up against an opportunity for their players to get a boost in ratings points over their typical game, even if they lose. You do that for every other team, every week. Therefore, for whatever boost they get, they get to rise up the rankings, even if they lose.

There is no disadvantage to playing for a team that plays low-scoring games. When your opposition's high-scoring players come up against you, they might still put in a match-winning performance, but it's reflected in a lower ratings points game, because all of that low-scoring team's games nuke the opposition's point-scoring ability. Hence an opponent's season average ratings points might drop, even if they won.

Over the course of a season, it averages out.

That's about the 55th time in this thread I've had to repeat myself, because you've obtusely ignored the correction I already gave, just to bleat out the same wrong claim again 🤷
It's about the 55th time you've been astonishingly off the mark.

Do you know the metric you're using for player ratings is a season average?

You've conceded that playing in a high scoring game is likely to increase your rating in that game.

And you're now suggesting that playing in one high-scoring game in a year will have the same impact on your season average as playing in 23 high-scoring games in a year...

Wow.
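The disagreement above comes down to arithmetic on a season average. A toy sketch with invented numbers (a flat per-game rating and a hypothetical game-style boost, not real ratings data) shows why a boost applied in every game shifts the season average by the full boost, while a boost in one game shifts it by only boost/n:

```python
# Toy numbers only (not real ratings): a 23-game season at a flat 15.0
# rating per game, with a hypothetical +1.0 game-style boost.
games = [15.0] * 23
boost = 1.0

# Boost in every game: the season average moves by the full boost.
avg_every_game = sum(r + boost for r in games) / len(games)

# Boost in a single game: the season average moves by only boost / 23.
avg_one_game = (sum(games) + boost) / len(games)

print(avg_every_game)          # 16.0
print(round(avg_one_game, 2))  # 15.04
```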
 


So what you're saying is you disregard PLaYeR RaTiNGZ when determining Jezza's standing in the competition, aside from the fact he was the highest rated forward in the competition (but that's only if you exclude Darcy and Thilthorpe, because they do ruck work as well, and the algorithm overrates ruck work).

Gotcha.
Any time you use "so what you're saying" you get it completely wrong. It's a really lazy way to use a strawman argument.

What I've been consistent on is that Player Ratings should be used to compare players of the same primary position. It's much the same with any metric, as the same biases against certain positions exist with coaches and Brownlow votes too.

It wouldn't make sense to compare a back pocket's clearance numbers with a midfielder's, so it doesn't make sense to compare the Player Ratings of midfielders vs defenders, or rucks vs forwards etc.

If you wanted to assess Cameron in 2025 you'd look at his Player Ratings, coaches votes, Brownlow votes etc among permanent key forwards, then scoring metrics like SIs, goals and goal assists. There can be other intricacies but it's a solid start. You've never really presented a system to assess and compare forwards.
 
Now let's turn our attention to Tom Stewart in 2019, a year when he was awarded his second All-Australian guernsey, and came third in Geelong's best and fairest.

There were 222 defenders in total according to PLaYeR RaTiNGZ, and Tom Stewart was rated in 160th place.

The highest rated defender that year was Trent McKenzie, who played 1 game.

And here are some players who can consider themselves very unlucky to have missed out on an AA guernsey in 2019, given they were comfortably rated higher than Stewart:
Colin O'Riordan
Lewis Young
Marty Hore
Luke Brown
Conor Glass
Daniel McKenzie
Ben McNiece

Thoughts?
 
Any time you use "so what you're saying" you get it completely wrong. It's a really lazy way to use a strawman argument.

What I've been consistent on is that Player Ratings should be used to compare players of the same primary position. It's much the same with any metric, as the same biases against certain positions exist with coaches and Brownlow votes too.

It wouldn't make sense to compare a back pocket's clearance numbers with a midfielder's, so it doesn't make sense to compare the Player Ratings of midfielders vs defenders, or rucks vs forwards etc.

If you wanted to assess Cameron in 2025 you'd look at his Player Ratings, coaches votes, Brownlow votes etc among permanent key forwards, then scoring metrics like SIs, goals and goal assists. There can be other intricacies but it's a solid start. You've never really presented a system to assess and compare forwards.
OK.

We're getting there.

You're now telling us we shouldn't use Player Ratings to rate players.

👏 👏 👏
 
Now let's turn our attention to Tom Stewart in 2019, a year when he was awarded his second All-Australian guernsey, and came third in Geelong's best and fairest.

There were 222 defenders in total according to PLaYeR RaTiNGZ, and Tom Stewart was rated in 160th place.

The highest rated defender that year was Trent McKenzie, who played 1 game.

And here are some players who can consider themselves very unlucky to have missed out on an AA guernsey in 2019, given they were comfortably rated higher than Stewart:
Colin O'Riordan
Lewis Young
Marty Hore
Luke Brown
Conor Glass
Daniel McKenzie
Ben McNiece
Stewart's 2019 season was overrated.

He did well with coaches votes frequency (10 times, equal most among defenders), club BnF voting, Supercoach points and some key statistical metrics for rebounding defenders (disposals, rebound 50s, defensive 1v1 loss %) but was middling or poor for others (intercepts, score involvements). He wasn't in the top 8 defenders for total coaches votes.

It wasn't a bad season by any means and sure, the Player Rating ranking seems harsh, but you can find hundreds of examples where Player Ratings basically agree with any other metric you look at in isolation or collectively.

You're just trying to find the most outrageous anomalies to discredit them when the other guy has completely ripped you to shreds on this topic.
 
If anyone else is still reading this thread.

Pies fans have consistently misunderstood AFL Player Ratings points, claiming that because Pies games are low-scoring and Western Bulldogs games are high-scoring, the system overrates Bontempelli and underrates Daicos.

I'm going to prove them wrong with this simple screenshot:
View attachment 2556200


In this picture, there are four columns. The first is each team's average player ratings points per game for 2025. Not a differential, a simple average.

The second column is how many more points, on average per game, each team scored than its opponent.

Plot these two numbers against each other and you get the scatter graph on the right: average scoring margin clearly has a linear relationship with generating AFL Player Ratings points.

Teams below and to the right of the trend line tend to get more points for defensive actions as a team, but that didn't mean the opposition failed to score.

Teams above and to the left of the line didn't register many ratings points for defensive actions, but their opponents failed to score anyway.

As you can see, Adelaide were a major outlier here.

The third column is a linear forecast of how many player ratings points each team should have got given their average margin; that is, what the trend line says your ratings points should have been.

For instance, Adelaide as a team averaged only 205.7 ratings points per game last year. Given that they, on average, scored 23.4 more points than their opponent, they should actually have got 217.5 average ratings points per game.

The fourth column expresses that gap as a percentage adjustment.

For instance, the Western Bulldogs generated 3.8% more ratings points as a team than their margins would have suggested. It is quite possible that, for all of Bontempelli's defensive acts such as laying tackles, the Dogs' opposition still scored. Collingwood, meanwhile, came in 0.8% under: for all of Daicos' lack of tackle-laying, the Pies' opponents marginally failed to score anyway.

Okay, so let's adjust that -3.8% and 0.8% to Bontempelli and Daicos.

Bontempelli - 19.79 -> 19.06
Daicos - 15.61 -> 15.73
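The rescaling above is just a division by the team's deviation from the trend line. A minimal sketch, using only the percentages and season averages quoted in this post (the exact Champion Data method is not public, so treat this as an approximation of the arithmetic, not the real system):

```python
def adjust(player_avg, team_pct_deviation):
    """Rescale a player's season ratings average by the team's percentage
    deviation from the margin-vs-ratings trend line (illustrative only)."""
    return player_avg / (1 + team_pct_deviation / 100)

# Bulldogs generated ~3.8% more ratings points than their margins implied;
# Collingwood generated ~0.8% fewer. Figures are from this post, not Champion Data.
bontempelli = adjust(19.79, 3.8)   # close to the 19.06 quoted above
daicos = adjust(15.61, -0.8)       # close to the 15.73 quoted above
```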

Earlier, Pies fans laughed at the system because players like Dawson were underrated for Adelaide. And they're probably right! With Adelaide's 5.7% adjustment, and with all other players adjusted for all teams (including the Dogs), Dawson rises from 28th to about the 21st best player. So it did underrate Crows players.

Have a good night Pies fans!
You're using the wrong measures against the argument. You're showing that accumulated rankings within individual games are pretty accurate at measuring the final scoreboard. But still off. If it were totally accurate it would align with a team's percentage, and the Cats would have the highest average, followed by the Crows, with the Dogs in 3rd. But instead it aligns more closely with which teams scored the most.

It's the nature of the system. It rewards an attacking game plan much more highly than a defensive game plan. Defence is about positioning. Good defensive teams block off outlets and force teams to play through contest. You don't get points for positioning, only for direct defensive involvement. So on a defensive play where you cut off the outlets and force a team into contest, one bloke gets points for the defensive action. Meanwhile in a successful attacking play, the ball gets chipped around with multiple players getting points.

Basically, the team that gets more numbers back just isn't going to score as highly as the team with fewer. Teams with fewer numbers back score more, either for pressure actions around the ball if they're stacking the midfield, or for successful attacks if they're running harder forward. They also thump shit teams that way.

The Dogs' defensive issues aren't just player related. Their philosophy is to kick more goals than the opposition by kicking as many goals as they can, whereas other competitive teams do it by more effectively reducing the number of goals the opposition kicks.
 
