Review: Port overcome Dogs 86-69


The hours and hours dedicated to talk shows, and not one so-called analyst has mentioned it. Do these blokes seriously watch footy? The only one who brought it up was Fagan, and he totally trolled Bevo on it.

Luke Hodge has brought it up many times; he thinks it's crazy that we allow easy entries. 14th in the comp for opposition marks inside 50?


 
If you take the player ratings, dogwatch, it's more accurate than other statistical methods of rating players' ability, measured by the fact that it's provably better at predicting future results. As in the Squiggle thread on the main board, the reason AFL Player Ratings is used to predict the value of player ins and outs is that over the literally thousands of games played since Player Ratings began, it's, as a whole, marginally more accurate than using Supercoach points. And I'm using Supercoach points as a proxy for the best "box score" system, as opposed to a position/possession equity-based system.

For example, say only Bontempelli is returning this week. AFL Player Ratings might say it's a more valuable return than Supercoach points would, because he rates more highly in PRP than in SCP. As such, a rating system that integrates AFL Player Rating points would predict us to win by a bigger margin than one that integrates SC points: say the former predicts 11 points and the latter 10. If we then end up winning by 30 points, the PRP system was the better predictor for that game. It's not always that obvious, and we're talking about margins like 51% vs 49%, but it's a scoring system that's been in place since 2010, so we have something like almost 100,000 player-games worth of individual points to use. So while it's a small sample for a Buku Khamis, the 51%-to-49% accuracy edge rests on a big enough sample to give us confidence.
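To make that concrete, here's a toy sketch, with entirely invented numbers (nothing to do with the real models), of how you'd tally which system's predicted margins land closer to the actual results over a pile of games:

```python
# Toy sketch, not the real model: score two rating systems by how often
# their predicted margins land closer to the actual result. All invented.

games = [
    # (PRP-based predicted margin, SC-based predicted margin, actual margin)
    (11, 10, 30),
    (-5, -2, -8),
    (20, 25, 18),
]

prp_closer = sum(abs(act - prp) < abs(act - sc) for prp, sc, act in games)
print(f"PRP closer in {prp_closer}/{len(games)} games "
      f"({prp_closer / len(games):.0%})")
```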

Obviously it doesn't measure everything. It makes some assumptions, which not everyone would agree with, about what counts as a contested versus an uncontested possession, and it divides credit among the players in a chain of possession on the basis of those assumptions. But as a whole it's a very good system, or at the very least can be assumed to be as accurate in suggesting who played a "good" or "bad" game as Supercoach points are as an all-in-one summary of the collection of stats we look at more generally.

More specifically to this quote:
"Percentage agreement and Pearson correlational analyses revealed that winning teams displayed a higher total team rating in 94.2% of matches, and an association of r = 0.96 (95% confidence interval of 0.95-0.96) between match score margin and total team rating differential, respectively. A PART analysis resulted in seven rules capable of determining the extent to which relative contributions of rating subcategories explain Win/Loss at an accuracy of 79.3%. These models support the validity of the AFL Player Ratings system, and its use as a pertinent system for objective player analyses in the AFL."

I do agree with you this is kinda bullshit, and it's bastardising the intention of AFL Player Rating points. AFL Player Ratings is an equity-based system, essentially giving points to players when they move the ball into a position-and-possession state from which their team is more likely to kick the next goal. For example, a team with possession is more likely to kick the next goal than a team without it, and a team with the ball closer to its attacking goal is more likely to kick the next goal than the opposition. So if a team wins the ball out of a centre clearance (equity state = 0) and kicks a goal without the opposition touching it (equity state = 6, because 6 scoreboard points were scored), the 6 points are subdivided among the players in the possession chain that produced the goal.
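To spell out the mechanics, here's a minimal sketch of my reading of an equity-style system. The state values and the possession chain are made up, and the real Champion Data model has far more states (field position, possession type, pressure and so on):

```python
# Minimal sketch of equity-style credit, per my reading of the system above
# (not Champion Data's actual model). Hypothetical equity values per state.
EQUITY = {
    "centre_bounce": 0.0,          # neutral stoppage
    "midfield_possession": 1.5,    # won the ball around the middle
    "inside_50_possession": 3.5,   # ball held inside attacking 50
    "goal": 6.0,                   # 6 scoreboard points realised
}

# Hypothetical chain: clearance -> inside-50 entry -> goal.
chain = [
    ("Player A", "midfield_possession"),
    ("Player B", "inside_50_possession"),
    ("Player C", "goal"),
]

points = {}
prev = EQUITY["centre_bounce"]
for player, state in chain:
    gain = EQUITY[state] - prev            # equity this action added
    points[player] = points.get(player, 0.0) + gain
    prev = EQUITY[state]

print(points)                # {'Player A': 1.5, 'Player B': 2.0, 'Player C': 2.5}
print(sum(points.values()))  # 6.0 -- the chain hands out exactly the goal's value
```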

Suggesting that correlates with the final score is kinda "duh", because you could take that reductive thinking about equity points to its extreme and only give points to players who literally hit the scoreboard, i.e. give 6 points to the player who scored 1.0 and 0 points to the player who scored 0.0. (Award 1 point per behind as well and each team's total equals its score exactly, so the differential matches the final margin with a perfect r = 1.0.) It would have an even greater correlation to the final margin, but it would hardly tell you who played well and who didn't.

Of course it can't measure everything, because it's fundamentally blind to the players on the field who aren't physically imposing themselves on the ball at any given time through possession or pressure or spoils or whatever. For example, Buku was given the same number of points for his very difficult, well-executed, game-breaking kick, weighted perfectly to a bloke running in space, as he would have been had the opposition structure broken down and left him a paddock to aim at. But at the same time, quantifying that by any method is difficult, and we're talking about the merits of one quantitative assessment over another, not a qualitative assessment vs. a quantitative one (which is a different debate).

FWIW, going back to the original point, the ratings system always used to come up with funky ratings for players that made you go "aha" and realise a certain player was worth more than you thought at first glance. Cyril Rioli's mixture of forward pressure and effective disposal, combined with gaining metres in the forward half (always worth a lot, as moving the ball into a retained-possession state in your own F50 is worth more than most box score stats can give credit for). Nic Naitanui's ability over his career to win groundballs at a rate that was basically incomparable to other rucks - many rucks won possessions, but they were marks, handball receives or pre-stoppage contested possessions; Naitanui used to win post-stoppage groundballs more often than many midfielders, when other rucks would win like 1 or 2 a game. And we were all championing the cause of Bontempelli in 2015 and how he should have been a smokey for All-Australian that year, despite literally starting the season as a 16-gamer or whatever, because AFL Player Rating points demonstrated how he combined contested possession and metres gained at a rate no other player in the competition was matching.
 


I didn't realise how bad it was. Our 4th lowest rated match for pressure since 2019.
I thought Essendon were bad last week, but it turns out, despite winning, we were worse and again below average.

Match
WB: 177 (Below Average)
ES: 180 (Average)

Over the accumulated 8 rounds played so far, how do we compare/rank for pressure? Would a running total of pressure over a season be a thing? Is the pressure gauge valued/rated as an indicator internally?
 
I’m not sure how accurate the pressure ratings are but the failure to apply strong and consistent pressure is a huge factor in deciding who wins IMO. It’s not only that lack of pressure allows easy entry and pinpoint passes into the 50 as we saw against Port, it’s also a good indicator of whether a side is “up” in terms of energy and determination.

FWIW I place no trust at all in the pressure ratings reported on AFLW matches. They are always overstated for some reason. However to my eye the ratings seem pretty close for AFLM matches I’ve watched.
 
You trotted this out last week and I critiqued it.

It hasn't become any more accurate with your posting it again this week.

In summary, the paper largely correlates a team's winning percentages with higher ratings for the team collectively. Well you don't need a lengthy academic paper to tell you that. Any one of us here could have told you that's what happens in nearly any sport. At least 94.2% of the time.

What it fails to do (despite its conclusions) is validate the accuracy of ratings given to individual players and their use as a basis of comparing one player against another or against the cohort. And yet this is exactly what so many people want to use the ratings for, understandably. In fact I don't think I've ever seen anyone discuss the collective ratings. It has always been about individual players.

It also fails to demonstrate that the ratings take into account enough of what happens in a game to be reliable all the time. As I said last week that's a difficult and very ambitious objective.

I pointed out the Buku case this week as just another example of where the ratings system contradicts just about everybody's eye test (and even that of his teammates). He wasn't great, but he clearly wasn't the worst on the ground by a big margin, which is the impression you'd get from looking at his score. I'd argue he wasn't the worst on the ground at all.

I don't hate the ratings. They are another useful way to look at the footy. They give you a good general idea and they mean well, but they are clearly not 100% reliable. I just dislike the slavish faith that some people put in them. By all means use them in discussions but let's not hold them up as the gold standard for assessing a player's performance.
i 'trotted it out' in reply to someone else.
if you read the article and are able to understand it, it does allow for all the things you say.
didn't say it was perfect, but the nonsense on here from you and others that it's invalid is unsubstantiated.
empirically validated is empirically validated, no need to say any more
 
Buckley said something interesting on the Saints/Dees broadcast yesterday - Melbourne were winning by 7 or 8 goals in the second quarter (I think) and he was saying not to ascribe this to the Saints not turning up. They were actually applying very good pressure it's just that Melbourne were very composed and still executing effectively under that pressure, while still shutting down the Saints offense effectively.

The reason why Melbourne are a good few rungs ahead of the others is not only that they are applying strong pressure and implementing an incredibly effective defensive system; it's also that even when good teams bring a similar level of pressure while Melbourne have the ball, they still come out on top by not rushing, making the right decisions and then executing them well. Whereas I would argue that even when many of the current Dogs team are not under elite pressure, they're not composed, are making poor decisions and then executing badly as well. Difficult to overcome that.
 
in other words, it gives a good crude indication of player impact, that's all i ever said. if it differentiates at an aggregate level, then the inference is it will also do it at an individual level; that is the assumption of the validation article, which is generally accepted. i post the data weekly for this reason, not to claim it's the only stat to look at. but it does have merit, and those who keep saying it has no merit are simply wrong!

geez, i can see why a lot of the more balanced posters don't come on here anymore
 
I'm happy to see it each week and appreciate your posting it.

BTW who said it had no merit?
 
sorry to harp on it but to put this Player Rating stuff to bed
"findings support the validity of the AFL Player Ratings system, and its use as a pertinent system for objective player analyses in the AFL"
this claim is scientific; it means it's more than OK as a crude player comparison. it was peer-reviewed by two independent academics with PhDs, and an editor, before being published. so i'm sorry, i'm going to accept that evidence over someone's opinion on BF, keep commenting on the ratings, keep pulling up those who want to completely discredit them with a swipe of the keyboard, and keep using them as part of my data triangulation to better understand team and individual performance.
 
I just watch the game and throw spaghetti at the players on my TV screen if I think they aren't trying hard enough

My ratings are based on screen visibility by the end of the 4th quarter

The type of sauce I am eating does make a difference so you have to account for that

Creamy carbonara has more stickability
 
Most likely due to the higher congestion, the pressure ratings used for AFLM wouldn't really translate to AFLW.
 
Good discussion 3NP. :thumbsu:

I place even less store in Supercoach, but you probably guessed that anyway.
Why use any stats at all (or try to summarise a player's game using stats) if you believe they're worthless?

Of course the eye picks up on many things. But following 22 or 44 players, and the value of 350 disposals and hundreds of pressure acts and one-percenters a game, is very hard. Remembering the value of all those actions after the fact is hard too, which is why it's good to use stats. There may be many things they don't pick up on, but the stats haven't forgotten any of the disposals, and I can tell you I certainly have. All you have to do is remember any time there was confusion about which player actually won the ball, for example. I certainly don't ID players correctly 99+% of the time.
 


You are actually in a direct discussion with one of the more balanced posters this board has had in all my years on here!

Dogwatch is always respectful, balanced and not prone to hyperbole.

There are also many other balanced posters on here. Some I disagree with, some I agree with. But in between the occasional sniping and insults, which still occurred years ago btw, there are still plenty of quality discussions.
 
This discussion demonstrates the perils of trying to debate these things by posts. That's not intended as a criticism of you or Al Dente or anyone else. It's the limitation of the medium we're communicating through.

Unless you're being sarcastic (and I don't think you are, at least not in a big way) you seem to have formed the view that I think statistics are worthless. That's not the case at all. On the contrary, I'm a great believer in them, which is probably why the use (and potential misuse) of them is something I feel strongly about.

I really don't think there's a huge difference of opinion between any of us - even Al Dente, although he/she might see it differently.

If we were all in the one room exchanging our views the different shades of interpretation and misunderstanding would quickly become obvious and put us back in sync. We'd probably have a very productive and enjoyable discussion.

When it's done by messaging, nuances get lost, misunderstandings easily escalate and that's fertile ground for offence to be taken. Once personal offence is taken it's almost impossible to get it back to an objective discussion so I usually just end it there.

Anyway I thought I'd try to explain the misunderstanding on this occasion. You've always seemed like a clear-headed sort of poster so I'm thinking you'll get it.
 
What's the correct number? 97?
No, the correct number of climate scientists who agree that climate change is happening and it's caused by human action over the past 2-3 centuries is 99.999%

There are some geologists who disagree, but really they're more like engineers than scientists.....
 
My BF Data suggests that the number of posters who would disagree with you is statistically insignificant (1%) with a margin of error of +/-1%
 

Pressure can change depending on conditions; rain, for example, usually increases pressure (more tackles). So it's probably wise to look at the overall pressure gauge match by match or quarter by quarter rather than across the season, but for what it's worth we are currently ranked 6th for overall pressure.

However, if broken down by zone it's a different story...

D50: 17th
MID: 2nd
F50: 16th

As for the AFLW dogwatch, much like with ranking points, pressure is scaled due to the shorter matches so it will always be higher than AFLM and therefore have different benchmarks. For the next AFLW season I'll try and get those and add them to the breakdowns.
 
Player      Coaches Votes   CD Rating   AFL Player Rating
Boak        (8) 1st=        1st         4th
Naughton    (8) 1st=        4th         3rd
Marshall    (8) 1st=        3rd         5th
Macrae      (2) 4th=        2nd         1st
Finlayson   (2) 4th=        10th        11th
Rozee       (2) 4th=        20th=       10th
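
One way to put a number on how closely any two of those columns agree is a Spearman rank correlation. A quick hand-rolled sketch using the CD Rating and AFL Player Rating columns above (positions re-ranked within just these six players; ties like "1st=" simplified away):

```python
# Sketch: Spearman rank correlation between two of the columns above,
# computed by hand (no scipy needed). Field-wide positions are first
# re-ranked within this group of six; ties are ignored for simplicity.
players = ["Boak", "Naughton", "Marshall", "Macrae", "Finlayson", "Rozee"]
cd_pos  = [1, 4, 3, 2, 10, 20]    # CD Rating column
afl_pos = [4, 3, 5, 1, 11, 10]    # AFL Player Rating column

def to_ranks(positions):
    """Re-rank field-wide positions within this group (1 = best)."""
    order = sorted(positions)
    return [order.index(p) + 1 for p in positions]

cd_r, afl_r = to_ranks(cd_pos), to_ranks(afl_pos)
n = len(players)
d2 = sum((a - b) ** 2 for a, b in zip(cd_r, afl_r))
rho = 1 - 6 * d2 / (n * (n * n - 1))   # Spearman's formula (no ties)

print(f"rho = {rho:.2f}")              # 0.66: decent agreement, far from lock-step
```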
 
Well that's pretty stark. I know where I'd be starting....and it's not fwd.

With all the discussion about statistics and their worth, I wonder how much key indicators might change in value (if at all) if you reframed the premise from which key indicators suggest the best statistical chance of winning matches to which suggest the best statistical chance of not losing matches.
 
Thanks OG. My beef is that the category thresholds seem much the same (or identical?) despite the scaling up. In other words in every game the pressure is categorised as really, really high and much higher than "average". At least that's my recollection from the last AFLW season, and maybe the one before.

Now it stands to reason that roughly half the performances must be at or below average and roughly half must be at or above average (unless there is some extraordinary skewing in a handful of matches), but this seemed not to be the case: nearly all games seemed to have extremely high pressure.

Is this consistent with your understanding?

Or to put it another way, I don't care what the raw numbers are or whether they have any relationship to the numbers used for AFLM games just as long as the thresholds for the categories of Poor/Below Average/Average/Above Average/Exceptional/whatever in AFLW games are set at data points that reflect the distribution of performances that we are actually seeing.
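
That's essentially an argument for quantile-based thresholds. A minimal sketch, with invented scores, of deriving the category cut-offs from the observed AFLW distribution itself rather than borrowing the AFLM numbers:

```python
# Sketch of quantile-based category thresholds: derive the cut-offs from the
# observed AFLW pressure scores themselves, so roughly a fifth of games land
# in each band by construction. Scores below are invented for illustration.
import statistics

aflw_pressure_scores = [182, 190, 175, 201, 188, 195, 179, 207, 192, 185]

cuts = statistics.quantiles(aflw_pressure_scores, n=5)   # 4 cut points, 5 bands
labels = ["Poor", "Below Average", "Average", "Above Average", "Exceptional"]

def categorise(score: float) -> str:
    for cut, label in zip(cuts, labels):
        if score < cut:
            return label
    return labels[-1]

print(cuts)
print(categorise(190))   # mid-distribution score -> "Average"
```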
 
