Well I'm confused. I was under the impression the data said we have the shitest list.
No, the person interpreting the data has made that conclusion. Doesn't mean they're right.
You could rank teams on any number of data points, and pre-existing form is about the simplest approach you could take for a junk article. In-depth analysis and forecasting doesn't sell papers though, while "Carlton has the worst list" riles up Blues fans and tickles the fancy of everyone else.
The Champion Data numbers are what the clubs and media outlets use. They're good. BB's comment is also very true. It's an article; it's meant to get some kind of a reaction. They want people to talk about it and share it across social media platforms.
The guy who wrote the article for afl.com.au, Nathan Schmook, was probably given a basic table from Champion Data that's been pulled from their upcoming 2018 Prospectus (which is mentioned at the bottom) and instructed to make something out of it. He's decided that the sum of a list's worth of individual in-game output is a measure of talent.
He also talks about grouping players into elite, above average, average and below average categories. That stuff is fine, as the players are grouped by a ridiculous number of measurable events. A possession/disposal is rated on being contested/uncontested, whether it was effective (hit the target/to advantage), the distance and direction of the disposal (long kicks forward rated much higher than short handballs back), plus added factors like whether it was a clearance, etc. It's far from the basic stats you see on the AFL app. Find the little Vice mini-documentary on the stats Champion Data collects; it's actually pretty interesting.
As for a little explanation of how you can take a basic form of the CD numbers and extrapolate them different ways...
If you take the average of every 2017 Carlton and Adelaide player's SuperCoach score (which uses a basic CD number set), Carlton actually comes out higher than Adelaide: 54 to 52.
If you consider that Adelaide used substantially fewer players than we did, and deduct the 'did not play' scores of 0 from those averages, all of a sudden it's Carlton 64, Adelaide 73.
So do we have the better full list, because when factoring in every player we had a higher average contribution? Or is it Adelaide, because the players that didn't contribute aren't measurable and should be deducted from the comparison?
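To make the flip concrete, here's a small sketch of the two averaging methods. The score lists are made up for illustration (not the real 2017 numbers): the assumption is simply that Carlton fielded more of its list while Adelaide left more players on 0, which is all it takes for the rankings to swap.

```python
def avg(scores, include_dnp=True):
    """Average a list of SuperCoach-style season scores.

    If include_dnp is False, 'did not play' scores of 0 are
    dropped before averaging.
    """
    vals = scores if include_dnp else [s for s in scores if s > 0]
    return sum(vals) / len(vals)

# Hypothetical 10-man lists: Carlton used 9 players,
# Adelaide only used 7 (three DNP zeros).
carlton = [60] * 9 + [0]
adelaide = [73] * 7 + [0] * 3

# Including DNP zeros, Carlton looks better: 54.0 vs 51.1
print(avg(carlton), avg(adelaide))
# Drop the zeros and Adelaide comes out on top: 60.0 vs 73.0
print(avg(carlton, include_dnp=False), avg(adelaide, include_dnp=False))
```

Same raw numbers, two "factually accurate" conclusions, depending entirely on what you decide a non-playing player is worth.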
Anyone with Excel, some free time and a basic understanding of stats could produce a set of factually accurate results "proving" that any of the teams has the least talented list, or vice versa.