Champion Data’s List & Player Ratings - An Accurate Tool or Total Fabrication ??


Gets even better when you see the players CD rates as "Elite"
[screenshots of the CD ratings]


Sorry Hawk fans, Tex Walker, Conor McKenna, Hugh Greenwood and Majak Daw are Elite talent and Tom Mitchell isn't
Majak Daw is elite!
 
The West Australian ran an article yesterday claiming Champion Data ranked their pin-up boy Nic Naitanui as the best player in the AFL.

I wouldn’t buy that toe rag if you paid me all the oil in Saudi Arabia, so I’m a bit sketchy on the details, but has anyone seen the data The West is referencing? To be honest I wouldn’t put it past CD to conclude that an injury-prone ruckman who hasn’t made an AA since 2012 is the league’s best player, but then again I definitely wouldn’t put it past The West to ‘interpret’ the data that way so they could manufacture yet another Nic Nat headline.

Either way it seems a bit baffling to me.

It’s another Freo conspiracy
Embarrassing melt
 


Statistical output and actual performance are two different things though. It's the stats vs watching football test.
There are plenty of players who are statistically better relative to others; stats are often not the best measure.

Below is Dusty's 2017 season vs Mitchell's. Statistically you could make a case that Mitchell's looks better: more disposals, better efficiency, fewer turnovers, more tackles; that all looks pretty good. Watching football tells you Dusty was the better player on the field in 2017.



Stat                      Dusty 2017   Mitchell 2017
Kicks                     19.2         13.9
Handballs                 10.6         21.8
Disposals                 29.8         35.8
Marks                     4.1          5.3
Goals                     1.5          0.5
Behinds                   1.2          0.5
Tackles                   3.5          6.5
Hitouts                   0            0
Inside 50s                6.0          3.5
Goal Assists              1.2          0.4
Frees For                 1.5          1.4
Frees Against             1.8          1.6
Contested Possessions     14.5         14.8
Uncontested Possessions   14.4         21.5
Effective Disposals       19.3         26.1
Disposal Efficiency %     64.8%        72.9%
Clangers                  5.0          4.5
Contested Marks           1.0          0.1
Marks Inside 50           1.0          0.3
Clearances                6.4          6.3
Rebound 50s               0.9          2.1
One Percenters            1.0          0.9
Bounces                   1.0          0.3
Time On Ground %          85.2         87.5
Centre Clearances         3.4          2.3
Stoppage Clearances       3.0          4.0
Score Involvements        9.0          6.9
Metres Gained             482.9        308.4
Turnovers                 6.2          4.8
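
For fun, here's how easily a naive tally model can spit out "Mitchell rates higher". The weights below are completely made up (nothing to do with CD's actual methodology), plugged into the per-game numbers above:

```python
# Naive tally rating with invented weights, applied to the stat lines above.
WEIGHTS = {"disposals": 1.0, "goals": 6.0, "tackles": 4.0,
           "clangers": -3.0, "turnovers": -2.0}

dusty_2017    = {"disposals": 29.8, "goals": 1.5, "tackles": 3.5,
                 "clangers": 5.0, "turnovers": 6.2}
mitchell_2017 = {"disposals": 35.8, "goals": 0.5, "tackles": 6.5,
                 "clangers": 4.5, "turnovers": 4.8}

def rating(stats):
    # weighted sum of per-game averages
    return sum(WEIGHTS[k] * v for k, v in stats.items())

print(round(rating(dusty_2017), 1))     # 25.4
print(round(rating(mitchell_2017), 1))  # 41.7 -> "better", per these weights
```

Change the weights (say, bump up goals and inside 50s) and Dusty wins. That's the problem with reading a single number off a model.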


Obviously. We are talking about a tool that measures statistical output though.

*Edit: I would not be surprised if CD rated Dusty's 2017 season well above Mitchell's as they use metrics way more advanced than what you find on footywire.
 
Obviously. We are talking about a tool that measures statistical output though.
I think the biggest value in stats is that they show trends and patterns, and highlight things that might be missed. They don't necessarily show which player is more impactful, or a better kick, or better when the game is on the line, or prone to go missing, etc.

What they might show, though, is that player X never tackles, or that player Y needs to hit the scoreboard more, or that teams generally win when they lead in X, Y and Z in a game.

The challenge is that when we see rankings, it's logical to expect them to roughly match observed performance. When they don't, it makes the statistical tool appear less valid.
 
Mmm I kind of agree, but I think you're being a bit generous to CD.

For one thing, a lot of their numbers are definitely more than simple tallies of stats. For example, "Pressure Rating" is based on a model of what CD think pressure is. "Expected Score" is from a CD model of what they think a typical team would have scored from the same opportunities. Except for very basic stats, CD numbers carry with them a subjective opinion about what they think good football is.
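
To make that concrete, an "Expected Score" number has to come from something shaped roughly like this. This is a toy sketch with invented conversion values, not CD's actual model (theirs reportedly uses tracked shot location and pressure data):

```python
# Toy "expected score": average points a typical team scores from each
# kind of shot. The buckets and values here are invented for illustration.
AVG_POINTS = {
    "set_shot_close": 4.8,
    "set_shot_long": 3.4,
    "general_play": 2.9,
    "snap_in_congestion": 2.1,
}

def expected_score(shots):
    """Points a 'typical' team would have scored from the same chances."""
    return sum(AVG_POINTS[shot] for shot in shots)

shots_taken = ["set_shot_close", "general_play", "snap_in_congestion",
               "set_shot_long", "general_play"]
print(round(expected_score(shots_taken), 1))  # 16.1 "expected" points
```

The choice of buckets and the values baked into them are exactly where the opinion about what good football is sneaks in.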
I know they use more than just stats. That's what makes it good (or at least an improvement on traditional stats). They are actually attempting to incorporate all the things we see with our eyes (not perfect).
Their models are tested against actual outputs though; it's a constant feedback loop based on the data. Additionally, the models are only getting more accurate with time as the tracking technology improves. From what I remember when I looked into it, the expected score model is based on actual data they have tracked over time. They don't just bum pluck something based on nothing.


Another thing is that they resist publishing clear predictions, but must know that each year they produce a list like this, a bunch of articles will be written by the same journalists they work closely with all year, like "CD predict a rise for the Bulldogs." And the general public naturally has an appetite for predictions made by the AFL's official stats company. So I do feel like they have their cake and eat it too, to a degree, by facilitating a bunch of predictions, but not being held to them.


Also just FYI "infer" is when you use what you know to figure out something else - like using handball and tackle numbers to calculate a pressure rating. Which is indeed what CD do. You mean "imply," which is when you suggest more information than is immediately apparent.

This para is what I was referring to about inferring something from these lists. It was about what the public do, not CD. When people (either media or people on here) say that x team will do y, or x player is not better than y player, or whatever. Your example, "CD predict a rise for the Bulldogs"... well actually they did nothing of the sort. It should read "CD ranks the Bulldogs 2nd of 18 teams based on these statistical outputs/measures under these assumptions". People are thinking the system does something that it doesn't do.
 
It indicates that CD KPIs aren't worth much, and they also totally ignore preseason injuries and surgeries.

It's a 'who's the best if they're fit and don't drop form' list.
Incorrect. It indicates exactly what I said it does re: Melbourne. The system does not try to give you a reason for why that has happened because that is getting into the realm of subjective analysis.

Once again, it's a tool. To use it appropriately you need to understand its basis.
 
But their model doesn’t represent what is happening on the field. Geelong had the best defence last year and the numbers clearly back it up: everybody struggled to score against them. When essentially the same defence rates as 16th, you can defend your model all you want, but the model is at best poor, and I am leaning towards completely useless, as it’s misleading and demonstrably wrong.

They need to tweak the context and methodology, otherwise putting out this information just reflects poorly on them. It seems to me they are just aggregating player ratings, whereas they probably need to add more data to the equation so they don’t look like idiots.

Predicting the future is a difficult beast, but if you are using stats as the basis for your predictions, at least use a methodology that is arguable.
Is the ranking saying Geelong's team defence is 16th best, or is it saying that their players classified as defenders are ranked 16th when comparing their output to other teams? I don't think you know, which proves my point exactly. How can you draw your own conclusions when you don't know the basis for the ranking?

If all you wanted to know was "scores against" to rank the best defence, that info is easily found. Obviously that's not what this is presenting.

It's a shame that the link explaining the player ratings system seems to be broken atm on the AFL site...
 
I think the biggest value in stats is that they show trends and patterns, and highlight things that might be missed. They don't necessarily show which player is more impactful, or a better kick, or better when the game is on the line, or prone to go missing, etc.

What they might show, though, is that player X never tackles, or that player Y needs to hit the scoreboard more, or that teams generally win when they lead in X, Y and Z in a game.

The challenge is that when we see rankings, it's logical to expect them to roughly match observed performance. When they don't, it makes the statistical tool appear less valid.
But it does match observed (or rather, recorded) performance, based on the measures they are using. It is only reporting what has happened on the field.
You might not agree with the measures they are using (or the assumptions) and hence disagree with the ranking output, that's totally valid.
Someone else may think that 1 kick is worth 5 handballs and create a model based on that which then spits out rankings. Totally fine, however you'd need to understand that basis before concluding anything other than "a ranking system set up this way creates this list".
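
Using the Dusty vs Mitchell kick/handball numbers from earlier in the thread, here's that exact idea as a toy sketch (both weighting schemes are made up):

```python
# Two invented weighting schemes over the same two stat lines.
# The ranking flips depending on the assumption, which is the point.
dusty    = {"kicks": 19.2, "handballs": 10.6}
mitchell = {"kicks": 13.9, "handballs": 21.8}

def score(player, kick_w, handball_w):
    return player["kicks"] * kick_w + player["handballs"] * handball_w

# Scheme A: a kick and a handball are worth the same
print(round(score(dusty, 1, 1), 1),
      round(score(mitchell, 1, 1), 1))   # 29.8 vs 35.7 -> Mitchell
# Scheme B: one kick is worth five handballs
print(round(score(dusty, 5, 1), 1),
      round(score(mitchell, 5, 1), 1))   # 106.6 vs 91.3 -> Dusty
```

Both lists are "valid"; they just encode different assumptions.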
 
Incorrect. It indicates exactly what I said it does re: Melbourne. The system does not try to give you a reason for why that has happened because that is getting into the realm of subjective analysis.

Once again, it's a tool. To use it appropriately you need to understand its basis.
It's subjective already, because it's called a List Rating. That's where it crosses the line from being a stat to a model.

Your point would hold if this were something like "Fewest Goals Per Inside 50s." With a stat like that, the number is a fact, and people can debate how relevant it is to team defence, or overall strength, or whatever.

But this is literally "How Good The Players Are." If it's inaccurate, it can't be excused by saying, "Actually, it is accurate at measuring whatever input stats they chose to use." That doesn't matter, because it's failing at what it's supposed to model.
 
This para is what I was referring to about inferring something from these lists. It was about what the public do, not CD.
Yep, both "imply" and "infer" are about suggesting extra information than is superficially present, but "imply" is when the speaker does it and "infer" is when the listener does it.
 
Yep, both "imply" and "infer" are about suggesting extra information than is superficially present, but "imply" is when the speaker does it and "infer" is when the listener does it.
Right, so the clarification was unnecessary as CD are acting as the speaker and the public the listener in the context of my original comment.
 


It's subjective already, because it's called a List Rating. That's where it crosses the line from being a stat to a model.

Your point would hold if this were something like "Fewest Goals Per Inside 50s." With a stat like that, the number is a fact, and people can debate how relevant it is to team defence, or overall strength, or whatever.

But this is literally "How Good The Players Are." If it's inaccurate, it can't be excused by saying, "Actually, it is accurate at measuring whatever input stats they chose to use." That doesn't matter, because it's failing at what it's supposed to model.
If the issue is what it's called, let's just refer to it as a List Assessment rather than a Rating then. Obviously it's based on a model, but that doesn't necessarily make it subjective. Models that not only use statistical data but assign value to that data based on historical results are not subjective. That is a purely objective model (see the sketch below).

Actually, it's literally "How good players are based on x model/methodology". My whole point is that people are taking it as some definitive list, when realistically there could be 100 different models that spit out different rankings and they are all going to be valid if you consider their basis. No model is perfect.
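
And on "assigning value based on historical results", the objective version of that is something like fitting weights so that stat totals best explain past outcomes. A minimal sketch with made-up numbers (not CD's method, just the general idea):

```python
# Fit per-stat weights from historical results via least squares.
# Rows are past games; columns are a team's (kicks, handballs, tackles).
import numpy as np

X = np.array([[210.0, 150.0, 60.0],
              [190.0, 170.0, 72.0],
              [225.0, 140.0, 55.0],
              [200.0, 160.0, 68.0]])
margins = np.array([25.0, -10.0, 40.0, 5.0])  # winning margin in each game

# "How much is a kick historically worth?" The data answers, not a human.
weights, *_ = np.linalg.lstsq(X, margins, rcond=None)
print(weights)
```

No one bum plucks the weights; they fall out of the data. You can still argue with the choice of inputs, but not with the fitting.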
 
Right, so the clarification was unnecessary as CD are acting as the speaker and the public the listener in the context of my original comment.
Well no, because you said "CD are inferring," when CD is the speaker. But I honestly didn't mean to be a grammar nazi about it! People often mix them up.
 
If the issue is what it's called, let's just refer to it as a List Assessment rather than a Rating then. Obviously it's based on a model, but that doesn't necessarily make it subjective. Models that not only use statistical data but assign value to that data based on historical results are not subjective. That is a purely objective model.

Actually, it's literally "How good players are based on x model/methodology". My whole point is that people are taking it as some definitive list, when realistically there could be 100 different models that spit out different rankings and they are all going to be valid if you consider their basis. No model is perfect.
I agree with your conclusion; I just think it's a bit disingenuous for anyone (including me) to publish a set of numbers from a model under a headline like "List Ratings" and not expect people to judge it accordingly.
 
I agree with your conclusion; I just think it's a bit disingenuous for anyone (including me) to publish a set of numbers from a model under a headline like "List Ratings" and not expect people to judge it accordingly.
Yeah I get that, and do agree. It sucks that a proper explanation of the model is not given along with these things, as it could actually promote good discussion and improve understanding of how it all works. The presentation (as well as how the findings are reported by the media) could definitely be improved.
 
I don’t even know how this whole Champion Data ranking system works or what criteria they use, but in 2019 Hawthorn ranked 1st in contested marks, 1st in intercept marks and 3rd in points conceded. Yet the Hawthorn back six is ranked 11th in the competition? That makes a lot of sense.
A lot of our defence was further up the ground, preventing the ball getting inside 50. I'm guessing that is the same as Geelong. Maybe once a team gets through a strong midfield defence the defenders are more exposed, so that lowers their ranking.
 
Last year they ranked Melbourne's list as number 1.

Twelve months later pretty much the same list is ranked 13th.

That basically sums up the accuracy of CD rankings IMO.
You don’t think that it’s evidence that additional data is being used to support their opinions - or that the gap between the team ranked 1st and 13th may have been exceptionally marginal?
 
A lot of our defence was further up the ground, preventing the ball getting inside 50. I'm guessing that is the same as Geelong. Maybe once a team gets through a strong midfield defence the defenders are more exposed, so that lowers their ranking.
Actually, Geelong and the Hawks stopped teams getting it inside their defensive 50, and once it was in there they didn't let them score much either, so the defence being exposed isn't what happened. Look at the table below for example.

The Champion Data ranking for defenders seems to be all about possession numbers and metres gained: rack up lots of both and you're good; rack up little to none (even if the oppo don't score) and you're bad. See the toy sketch below.

[table image]
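
As a caricature of that complaint, a defender rating built only on possessions and metres gained would look something like this (formula and numbers are made up, not CD's):

```python
# Invented defender rating driven purely by possessions + metres gained.
# Note that scores conceded never enters the formula at all.
def defender_rating(possessions, metres_gained):
    return possessions * 2 + metres_gained / 50

# rebounding defender: lots of ball and metres, but concedes plenty
print(defender_rating(possessions=22, metres_gained=450))  # 53.0
# lockdown defender: barely touches it, but his man barely scores
print(defender_rating(possessions=8, metres_gained=60))    # 17.2
```

Under a formula like that, the lockdown defender rates at a third of the rebounder no matter what the scoreboard says.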
 
Yeah I get that, and do agree. It sucks that a proper explanation of the model is not given along with these things, as it could actually promote good discussion and improve understanding of how it all works. The presentation (as well as how the findings are reported by the media) could definitely be improved.
I've heard Champion Data say they consider themselves a "data wholesaler," where they produce these kinds of things for the media (or clubs, or the AFL, or whoever) and Joe Public only sees it once it's been filtered through those layers.

That often puts them in a weird spot, especially when they tweet to the public like this, because we get the headlines and topline numbers (Cats 16th best defence! Brisbane 2nd worst attack!) but no explanation.

It turns people off. It's the worst kind of stats, when there's a mysterious black box saying, "Team X is bad" for unknown reasons.
 
Incorrect. It indicates exactly what I said it does re: Melbourne. The system does not try to give you a reason for why that has happened because that is getting into the realm of subjective analysis.

Once again, it's a tool. To use it appropriately you need to understand its basis.

Whatever.

It indicates who performed well under certain KPIs in the past, with a weighting towards younger unproven kids picked early or who debut early.

Its accuracy in predicting the future is flawed, as it oversells some KPIs and doesn't know how to factor in the efficiency of game plans.

It's a very rough tool.
 
If I just cut and pasted last year's ladder and said "this is my ranking of the lists", it would be no less sensible than CD's "data-driven" account. So what are they adding?

It's not clear that adding extra layers of data reveals anything useful, accurate or insightful.

I like their bucket loads of data. Paints a picture for me whether it's team related or at a player level.

The CD interpretation of that data is dubious at best. I would prefer to analyse the data myself as opposed to having some "CD analyst" do it for me.
 
