
Opinion "Help me out where I need faith!" - The Statistical Data Thread


You have talked about recent changes compared to historical data. Historical data includes our best games and our worst games.
* Wingard was in a few weeks ago and so was Impey, so it's not that big a move over the whole season.
* SPP's output has gone down in recent weeks. Yes, Boak has played one great game, but that's it.
* Neade is playing about the same as he does in his good patches.

I'm looking at things in terms of our current form and moving forward from that.

Using Neade as an example, he may be playing the same but his impact is much higher because our team is now structured to benefit from it.
 
What???

The only skewing might relate to Essendon being in or out of the top 9, and it's not massive if you look at the numbers, i.e. conversions from inside 50s. Did you actually look at the criteria that HPN use? It's about relative performance. The umpire's decision against Dixon didn't affect how many inside 50s we have had for the year, and if he goals it makes a minor change to the offensive figure used.

Just on this, the data in the top half vs bottom half isn't reflective of relative performance. That's the original analysis. When a data set is arbitrarily split and re-analysed, you're effectively performing two analyses: the original relative-performance analysis with the added step of a sub-group analysis using already-analysed data. When you start to look at data this way, i.e. an analysis within an analysis, the risk of type I and type II errors skyrockets. In simplest terms, the more complicated the analysis, the more unreliable it becomes.

Most importantly though, any analysis model that doesn't accurately predict the past, in this case alignment with current ladder position for the top 9 vs bottom 9 differential, is useless. Overall relative performance is informative to a point because it largely aligns with the current ladder.
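For what it's worth, the type I inflation is easy to demonstrate. Here's a minimal simulation (invented numbers, nothing to do with any of these ratings) of the point above: re-test arbitrary subgroups of one data set and the chance of a spurious "significant" finding rises well above the nominal 5%.

```python
# Minimal sketch: subgroup re-analysis inflates the type I error rate.
# Assumes numpy and scipy; margins and group counts are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ALPHA, N_TRIALS, N_GAMES, N_SUBGROUPS = 0.05, 10_000, 22, 4

whole_fp = 0   # false positives testing the season as one sample
split_fp = 0   # false positives in at least one arbitrary subgroup

for _ in range(N_TRIALS):
    # Null is true: match margins are pure noise around zero.
    margins = rng.normal(loc=0.0, scale=30.0, size=N_GAMES)

    # One test on the whole sample holds the 5% error rate.
    if stats.ttest_1samp(margins, 0.0).pvalue < ALPHA:
        whole_fp += 1

    # Re-testing each arbitrary split of the same data does not.
    groups = np.array_split(rng.permutation(margins), N_SUBGROUPS)
    if any(stats.ttest_1samp(g, 0.0).pvalue < ALPHA for g in groups):
        split_fp += 1

print(f"false-positive rate, whole sample: {whole_fp / N_TRIALS:.3f}")  # ~0.05
print(f"false-positive rate, any subgroup: {split_fp / N_TRIALS:.3f}")  # well above 0.05
```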
 
The Hurling People Now folks reckon Port have made liars of them. They've realised that raw stats are meaningless and you need to dig - just like PortWTF did with his different ladders for the top 8/bottom 10, and then for the top 4/middle class/cellar dwellers when I asked him, posted on the previous page.

Port Adelaide have made liars of us

After round 21 there is little movement in relative rankings, but Sydney and GWS rise into our informally-defined historical “premiership” frame.
[Image: round-21-ratings.png]



However, it’s the increasingly anomalous Port Adelaide, theoretically a contender, which we want to focus on here. The popular opinion of Port Adelaide being unable to match it with other good sides is well and truly borne out when we dig into their performance on our strength ratings by opponent. We have in the past broken up statistics by top 8 and bottom 10......... Simply put, Port Adelaide are the best side in the competition against weak opponents and they’re about as good as North Melbourne against the good teams. Below is a chart where we have calculated strength ratings through the same method as we always do using whole-of-season data, but separate ratings are derived for matches against the top half and bottom half of the competition as determined by our ratings above.

[Image: top-and-bottom1.png]


Most clubs, predictably, have done better against the bad sides than the good ones. Port Adelaide, however, take this to extremes. They rate as 120% of league average in their performance against the bottom nine sides. Not even Adelaide or Sydney look that good, over the year, in beating up on the weaker teams.

That’s why we’ve been rating Port so highly this year – their performance, even allowing for the scaling we apply for opponent sets, has been abnormally, bizarrely good, to the extent that it’s actually outweighed and masked their weaknesses against quality teams. Their sub-97% rating against top sides is 13th in the league, ahead of only North, Carlton, Fremantle and the Queensland sides. This divergence is more than double the size of the variance for any other team. It appears that the problem mostly strikes the Power in between the arcs. Against bottom sides, their midfield strength is streets ahead of any other side at 141% of the league average, meaning they get nearly three inside-50s for every two conceded. This opportunity imbalance makes their decent defence look better and papers over a struggling forward line. Against quality sides, that falls apart and they get fewer inside-50s than their opponents.
..........................

But it really is Port Adelaide who stand out here. Their output against weaker sides is really good and shouldn’t be written off. There’s obviously quality there, and they sit in striking distance of the top 4 with a healthy percentage. However, it wouldn’t be a stretch to call their overall strength rating fraudulent given its composition and we will be regarding them with a bit of an asterisk from here. Unless they can bridge the gap and produce something against their finals peers, even a top 4 berth is likely to end in ashes.


I am not sure they are asking the right questions...
 
Just on this, the data in the top half vs bottom half isn't reflective of relative performance. [...]

Exactly. Analysis needs to be objective, not subjective.

The real issue is that two sides in the eight have flogged us by 80+, which no other side in the eight has suffered, because all of them simply lost to a side OUTSIDE the eight (sometimes by a large margin). That makes Port seem worse than they actually are when you split the data arbitrarily.

My ratings have the following:

1. Adelaide
2. Sydney
3. GWS
4. Port
5. Geelong
6. Richmond
7. Essendon
8. West Coast

9. Melbourne
10. St Kilda

And so on. Which is a more accurate reflection of a) performance, b) the ladder and c) each side's prospects of winning the flag.

In conclusion - just another site to ignore. They should have stuck with their original model instead of pandering to the consensus of the suckface football public.
 


Just on this, the data in the top half vs bottom half isn't reflective of relative performance. [...] Overall relative performance is informative to a point because it largely aligns with the current ladder.
:eek::huh:
 
Just on this, the data in the top half vs bottom half isn't reflective of relative performance. [...] Overall relative performance is informative to a point because it largely aligns with the current ladder.
That's mumbo jumbo. There is nothing arbitrary about the split. The top sides play in finals; that's not arbitrary. They are the sides we are going to have to beat in September. We are not taking a sample of a population and trying to apply the results of that sample to the whole population. We have the whole population surveyed. I will explain what the stats mean.

The first table shows Port get 140% of the AFL-average inside 50s per team when they play the bottom 9 sides. When they play the top 9, i.e. the other 8 sides, they only get 96.2% of the AFL-average inside 50s. That's a fact, not subject to arbitrariness or type I or type II errors. Did a fire alarm go off when it shouldn't, or fail to go off when it should have, from those revelations?

When we get the ball inside 50 and try to convert those offensive inside 50s into goals and scoring shots, we are below average against the top 9 and virtually AFL average against the bottom 9 sides. Once again it's just deeper analysis of what we already know about our team. Between Rd 3 and the recent Showdown we were clear leaders for inside 50s (now 2nd) but were anywhere between 2nd and 5th for goals kicked. Bombing long to Snake, er sorry, the boundary line to get a throw-in doesn't help our cause in the efficiency stakes. Our attack isn't that great against the best defences, and even though we smacked a lot of the bottom teams, 18-20 goals isn't a great return for 70 inside 50s. Why are Adelaide so good against the top 9 teams? If you've seen them play those teams, especially at Adelaide Oval, they kick so many goals from the goal square with their slingshot counter-attacking method. We have Sam Gray missing, or not even scoring, from 1 metre out. That affects the numbers produced by the HPN team.

Same sort of story for defence. We are mean and tough against the cellar dwellers, but our defence isn't so good against the best sides; it's only average. Probably because their big KPFs either get hold of us, or we play team defence on the big blokes, don't kill the ball enough, and their free little blokes take advantage. Plus we also struggle to defend against really skilful, fast little blokes: the HIMs as opposed to the LIMs.

I will give you an example of why your theory might work when you survey a population but doesn't work for facts, i.e. actual events. A Test cricket batsman is considered a great player if he averages 50. If your theory is right, it says: OK, he averages 50, we have to accept he is great, and no more analysis is necessary as it might cause errors. That's crap. Just like in footy, you don't play each opponent equally home and away, and you don't play the best and worst teams equally. A player with a Test average of 50 could be averaging 150 against the cellar dwellers of the Bangers, Zimbos and Windies, 40 against the Kiwis, Sri Lanka and Pakistan, and 23 against the best sides of England, South Africa, India and Oz. Obviously he plays for one of those 10 nations and I didn't pick which one, but let's say he is an Aussie player.

Is it arbitrary to do that breakdown of his stats? What about breaking down his results and average in each country, away from home vs at home? What about breaking them down by whether his country wins, loses or draws? What about at different batting positions? What about in different innings of the game, i.e. 1st, 2nd, 3rd or 4th, which is usually the toughest, as the wicket is worn and there is the added pressure of chasing runs for victory? Is going to that deeper level of analysis going to cause type I or type II errors? I don't think so.

Here is an example. Marvan Atapattu scored 6 double centuries. When he retired 10 years ago, his 6 double centuries were equal 4th behind Bradman, Lara and Hammond, suggesting greatness. His final average was 39. So prima facie he is a decent player, and by your concept we shouldn't do any further analysis when we look at a list of batting stats for world cricket or Sri Lankan cricket, because we might make a type I or type II error? Is that your theory? Once you split his data further, you find he made 4 of the 6 double centuries against the cellar dwellers of Bangladesh and Zimbabwe. He made 22 ducks from 90 innings, which as a percentage is very high for a top-6 batsman. He averaged 19.6 in the fourth innings of the game, 25.1 in Sri Lanka's second innings, and 26.2 when they lost matches. So how does splitting Atapattu's data into subgroups produce type I or type II errors, or does it reveal the truth of how good he really was?
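To make that split concrete, here's a minimal sketch, with invented innings data, of exactly the kind of descriptive breakdown I'm talking about: the same innings grouped by opponent tier. No sampling, no inference, just the whole record cut a different way.

```python
# Sketch: a batting average split by opponent tier (illustrative data only).
from collections import defaultdict

# (runs, dismissed, opponent_tier) per innings -- hypothetical numbers
innings = [
    (201, True, "bottom"), (150, False, "bottom"), (220, True, "bottom"),
    (12, True, "middle"), (45, True, "middle"), (38, True, "middle"),
    (0, True, "top"), (23, True, "top"), (8, True, "top"), (31, True, "top"),
]

def batting_average(rows):
    """Batting average = total runs / completed (dismissed) innings."""
    runs = sum(r for r, _, _ in rows)
    outs = sum(1 for _, out, _ in rows if out)
    return runs / outs if outs else float("inf")

print(f"overall average: {batting_average(innings):.1f}")

by_tier = defaultdict(list)
for row in innings:
    by_tier[row[2]].append(row)
for tier, rows in by_tier.items():
    # The overall figure hides a heavy skew toward weak opponents.
    print(f"vs {tier:>6} sides: {batting_average(rows):.1f}")
```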
 
It's arbitrary crap because it doesn't take into account that Port has only played two top-eight sides at home (Adelaide and Richmond), compared to 7 for Adelaide, 6 for GWS and Geelong, 4 for Richmond and Essendon, 3 for Sydney and 2.5 for Melbourne (Darwin is neutral).

Of course everyone plays statistically better at home against top-eight sides. If we got to play Essendon, GWS and Geelong at home like Adelaide did, I can guarantee our performance against top-eight sides would be much better.
 
Yes, I do recall noting when we got this season's fixture that we had some really tough away games this year. That surely has an effect on results too.
 
It's arbitrary crap because it doesn't take into account that Port has only played two top-eight sides at home (Adelaide and Richmond). [...]
And we lost all 3 of those games at home. What does that say? We are likely to play only 1 top-8 side at home come September and 2 or 3 away, depending on who we play and how far we go, so September will be reflective of the season. If HPN do the further home-and-away analysis for the top 9 vs bottom 9 across the 3 categories, I predict the fundamental results won't change much. Adelaide lost 2 top-8 games at home (Melbourne and Sydney) and only one on the road (Geelong). What does that mean?

It's not arbitrary; it's the result of a FIXture. It's the result of what we produced on the field. You get the benefit of playing more home games during September if you win more games. We dropped games against WCE and Richmond that the stats said we should have won, because we were inefficient. We didn't win those games, and that had nothing to do with playing away from home.
 
And we lost all 3 of those games at home. What does that say? [...] We didn't win those games, and that had nothing to do with playing away from home.

It means that Adelaide had a heap of games against top-eight opponents at home, where they took advantage of umpiring decisions they normally wouldn't get. It means that our style of play generates high inside-50 numbers, which skews our efficiency rating against sides that drop back and defend hard.

That's why, if they don't like the figures their model is putting up, they should change it. Don't introduce some new caveat to the data which basically rates Carlton and North as more consistent sides, because it just looks stupid.

I haven't changed my rating system all year, and it reflects the popular consensus: Adelaide is flag favourite, Sydney second, GWS third... but any one of Port, Geelong or Richmond could win if they have a good finals campaign. These guys just picked the wrong data to measure, or not enough of it, IMO.
 
That's mumbo jumbo. There is nothing arbitrary about the split. The top sides play in finals. [...] So how does splitting Atapattu's data into subgroups produce type I or type II errors, or does it reveal the truth of how good he really was?

It's a top 9 vs bottom 9 split. I'm fairly sure the top 8 sides play finals, not the top 9. I do note that Essendon is the 9th-ranked team. A top 8 vs bottom 10 split would see a nice chunk of our bad stats swing the other way. It's cherry-picking the data to magnify a desired outcome.

Edit: I don't have the time or the energy to explain why I'm right re statistical analysis, but looking at the cricket examples you used, it's not an apples-to-apples comparison. Using batting averages and looking at where runs were scored is totally different to creating a ranking system and then using values from that ranking system to conduct an additional sub-group analysis.
 
It's arbitrary crap because it doesn't take into account that Port has only played two top-eight sides at home (Adelaide and Richmond). [...]
I'm not sure REH is basing his argument on home team advantage, as you seem to be.
 


I'm not sure REH is basing his argument on home team advantage, as you seem to be.
No, I'm just looking at the raw data and agreeing that we have the biggest swing between playing the top 9 sides and the bottom 9 sides, and that the numbers we have produced against the top 9 have only been around AFL average. The numbers are the numbers. They show what we have done given the fixture we were given. In September, if we are any good, we will have to play more finals away from home than at home, just as we played more finals sides away from home than at home.
 
It's a top 9 vs bottom 9 split. I'm fairly sure the top 8 sides play finals, not the top 9. [...] Using batting averages and looking at where runs were scored is totally different to creating a ranking system and then using values from that ranking system to conduct an additional sub-group analysis.
Given that we have 1 game to go, and WC are only about 5 goals of scoring behind Essendon, with the Saints and Bulldogs about 20 goals behind them on percentage, looking at the top 9 is relevant.

Your comment that if we remove Essendon from the top 9 and go top 8, a big chunk of our bad stats will swing the other way shows you don't understand what the data is measuring; you are just looking at the adjusted ladder position rather than thinking about what it says. The result against Essendon makes minimal impact if moved from the top half to the bottom half, as it's only one game and the stats differentials weren't that great, unlike the scoreboard. The recent Showdown is where moving the results of one game makes a significant difference to those stats.

Re the cricket example: if you had a list of Sri Lankan batsmen and ranked them, Atapattu would be high up on that ranking, just like a footy ladder ranking. But if you split the data and looked at results against the top 5 and bottom 5 Test sides, his ranking on that ladder of batsmen would change, as he has a heavy skew towards making runs against cellar dwellers. It would be exactly the same sort of exercise as what the HPN people have done.
 
To me, all I know is that if our skills were better than we have shown this year, we would have achieved much more and would not be depending on Richmond losing so we can make the top 4.
When we play well as a team and reduce the number of basic skill errors in a game, we will beat the sides we have to face in the finals.

This is what seems to me to be problematic at Port, and has been for some time.

Another genuine tall forward who can slide into the ruck without us losing too much would go a long way to fixing our accuracy issues.
Example: Johnston - Hynes.
 
This seems like as relevant a place to put this as anywhere. I had been pondering the question of the draw, and how our draw compared to other teams. I decided to look at some statistical analysis of the draw, and I've provided details and assumptions in spoiler tags below to save some of the TL;DR folks.
My first step at assessing the draw was to remove the influence of games played against an opponent. Because the number of games played against each club is uneven, if there were hypothetically a team that won 100% of games, those teams that played them twice would be 1 game lower than those that played them once. The opposite would be true of a team that won 0% of games.

The net result of this was I produced a top 17 for every team that removed the influence of any games that team had played against their opposition. I then looked at the average winning percentage of each team's opposition, and ranked the teams using this (NB: the average of all averages is 49.2%, not 50% due to the draws this season. Therefore a team with a 49.2% Raw Draw rating had an exactly even draw).

This showed some interesting data. GWS had the hardest draw in the competition, with 53.1%, while Gold Coast had the easiest with 45.8%. Overall, nothing too surprising given their finishes last year. I used this data to determine an expected number of games that each team should have won, based on the draws. As expected, higher teams beat the draw by more, and lower teams by less. However, there were a few spots where positions flipped. Notably, GWS climbed over Richmond, and Sydney jumped over Port.

However, I realised that this only told part of the story. An important consideration is how teams perform at home and away. I then set up three categories of matches for each team:
  1. Home matches - played at a home venue, against a travelling opponent. For this purpose, I assumed that Ballarat was a home venue for the Western Bulldogs, and Skilled Stadium was a home venue for Geelong against the Bulldogs and Richmond.
  2. Neutral matches - matches played at a venue shared by both teams (e.g. the Showdown, or matches at Etihad or the MCG between Victorian-based teams), or another neutral ground (e.g. Tasmania, Shanghai, Alice Springs)
  3. Away matches - the opposite of the home matches - travelling to your opponent's home matches.
The end result of this was that I was able to calculate the relative difficulty of each team's Home, Away and Neutral games.

When I looked at this, I found that home matches were won by the home team at a 5:4 ratio. Assuming that in neutral games neither team had an advantage (at least as a product of the draw), and that this ratio of home victories applied across all games, I adjusted the draw difficulty to account for where teams played, as sketched below.
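Here's a minimal sketch of that adjustment (illustrative only, not my actual workings: the team strengths are invented, the only number carried over is the 5:4 home-win ratio, and exactly how the venue modifier gets applied here is an assumption):

```python
# Sketch: venue-adjusted draw difficulty from opponent strength.
# HOME_WIN_RATE comes from the observed 5:4 ratio; everything else is invented.
HOME_WIN_RATE = 5 / 9   # home sides win 5 of every 9 decided games

# Hypothetical opponent winning percentages (vs everyone else).
opponent_strength = {"GWS": 0.70, "Gold Coast": 0.25, "Adelaide": 0.68}

# One team's fixture: (opponent, venue), venue in {"home", "away", "neutral"}.
fixture = [("GWS", "away"), ("Gold Coast", "home"), ("Adelaide", "neutral")]

def game_difficulty(opponent: str, venue: str) -> float:
    """Opponent strength scaled by who holds the home-ground edge."""
    base = opponent_strength[opponent]
    if venue == "home":      # we hold the edge: the game counts as easier
        return base * 2 * (1 - HOME_WIN_RATE)   # scale by 8/9
    if venue == "away":      # they hold the edge: the game counts as harder
        return base * 2 * HOME_WIN_RATE         # scale by 10/9
    return base              # neutral: no adjustment

difficulty = sum(game_difficulty(o, v) for o, v in fixture) / len(fixture)
print(f"venue-adjusted draw difficulty: {difficulty:.3f}")
```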

The net result of this adjustment was:
  • Teams that tended to play more highly ranked teams at home, and easier teams away saw their ratings decrease
  • Teams that played higher ranked teams away, and easier teams at home had difficulty ratings increase
The greatest upward mover in difficulty was Fremantle, followed by Sydney and Port. Victorian teams generally saw the least movement, because they played fewer games classified as home or away. The biggest downward mover was Geelong, predominantly due to the home matches played at Skilled against Adelaide, Port and Sydney.

Adelaide also moved sharply downwards, with the largest difference between home and away difficulty of any non-Victorian side.

Finally, using this, I compared the ratings to each team's position from last year. The team that won the premiership should have the hardest draw; Essendon, as wooden spooner, should have one of the easiest. I used this comparison to look for teams where there was a large gap between the draw they got and the draw they deserved.
Cliffs Notes of the TL;DR:
  • Port's draw was in line with an expected draw given our finishing position last year.
  • Fremantle, Brisbane and Collingwood ended up with draws that were far harsher than they deserved, given their finishing positions last year
  • Western Bulldogs and North Melbourne ended up with draws far easier than they deserved given their finishes last year.
  • Port had the easiest home draw in the league - the teams we played at Adelaide Oval were extremely squishy, and we were expected to win 7.3 games out of 10 (we won 8)
  • We didn't achieve as well away from home as we would have liked, but we had one of the hardest away draws in the league, with Fremantle the only non-Victorian team having a similar away draw.
  • Adelaide had the largest differential in home and away difficulty of any side in the league, with the second hardest home draw, and the second easiest away draw. Based on their draw, they were expected to win 4.2 away games (they won 5). Only North Melbourne had an easier away draw, but as a Victorian side, they only played 4 away matches.
  • Adelaide and Geelong were the best performers vs their draw, with GWS very close behind.
[Image: draw.PNG]
 
Hey data nuts and science nerds, some of you may find some of these useful. Support a charity while you're at it.

Disclaimer: I've got no involvement with either the retailer or the charity.

https://www.humblebundle.com/books/data-science-books


 
I've been crunching some numbers.

Did you know that our midfield (Ebert, Polec, Wines, Wingard, Boak, S.Gray) has spent the equivalent of an extra three quarters on the field (75 minutes) compared to Adelaide's (Sloane, M Crouch, B Crouch, Douglas, MacKay, Atkins) over the course of the season? And that's taking into account the games missed by players on both sides.

[Image: time-on-ground comparison table]


Add that fact to the early bye, and it's no surprise our performance faded in the back half of the season.

It's also why I expect us to **** shit up this week. We've been loading our midfield up not just for a four-week block, but all season. It's like we've been training with a 20kg weight vest and we're about to throw it off... hence the reason that in the past few weeks Powell-Pepper has gone from a TOG percentage in the high 60s up until his rest, to 83% against the Bulldogs and 88% against the Suns. We've been managing him so he doesn't burn out before finals, because we need him... and now it's time for him and Wines to go to work.
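The midfield-minutes claim is just a sum. A minimal sketch below, with invented per-player minutes chosen so the difference lands on the 75-minute figure above (the real numbers came from season-long TOG data):

```python
# Sketch: total time-on-ground minutes for each club's midfield group.
# Per-player minutes are invented for illustration.
port_mid = {"Ebert": 1890, "Polec": 1850, "Wines": 1910,
            "Wingard": 1800, "Boak": 1875, "S.Gray": 1820}
adel_mid = {"Sloane": 1880, "M.Crouch": 1860, "B.Crouch": 1820,
            "Douglas": 1850, "MacKay": 1840, "Atkins": 1820}

port_total = sum(port_mid.values())
adel_total = sum(adel_mid.values())
diff = port_total - adel_total
print(f"Port midfield TOG:     {port_total} min")
print(f"Adelaide midfield TOG: {adel_total} min")
print(f"difference: {diff} min (~{diff / 25:.0f} quarters at 25 min each)")
```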
 


The Hurling People Now applying their new Player Approximate Value (PAV) index calculation, which is explained at the link below, before the table.

https://hurlingpeoplenow.wordpress.com/2017/09/02/the-2017-pav-all-australian-team/
The 2017 PAV All Australian Team
SEPTEMBER 2, 2017 ~ EDITOR
Selection Rules
Instead of picking the top 40 players from 2017, and picking a team from there, we have decided to go down a slightly different path instead. Out of interest, here are the top 40 players according to Player Approximate Value (PAV):

[Image: aa40-2017.jpg]


As you can see, there just aren’t enough defenders available to fill a team in this manner, and the number of small forwards is also a little lacking. Similar to the AFL Coaches Association All Australian Team of 2015, we have implemented several selection rules to guide us. Firstly, we wanted to pick as versatile a team as possible, with a hybrid attack, leaning to the shorter side. We have instituted a limit of 15 PAVs in order to make the side. That covers the top 96 players this year, with Dan Hannebery falling just on the wrong side.

.........

[Image: aapav-2017.jpg]


The PAV AA side shares a lot of players with the true All Australian side, with 16 common members and six changes. Of those changes, a different structure or rules for selection would have put several of the official All Australians in our team.

[Image: 2017-aa-team.jpg]


.........
Ollie in the guts is an interesting choice. Maybe everyone, including Port fans, has undervalued Ollie's work. I had him high in my votes before Rd 11, at 2nd or 3rd, but after that I didn't give him many votes.
 
The Hurling People Now applying their new Player Approximate Value (PAV) index calculation. [...] Ollie in the guts is an interesting choice. Maybe everyone, including Port fans, has undervalued Ollie's work. [...]
I am always a little torn when considering player rating systems. I find them very interesting, but fundamentally flawed in that they generally only consider the offensive side of the game, which is why the only defenders who manage to rank well are line-breaking half-backs.

Unfortunately, defensive impact is hard to measure statistically.

One idea I had to rectify this would be to consider the impact a player has on the average offensive output of their opponent.

Take Bobby's game against Roughead in R4 2015, where he kept Roughy to 1 behind (and popped his Brownlow vote cherry - sometimes the umps do have a clue).

Bobby got 47 DT points and was the 35th-ranked player on the ground. Roughy got 41 DT points, about 40% of his average for the season. Add the other 60% of Roughy's average impact to Bobby's total and he jumps to about 110 DT points and up to 6th-ranked player on the ground - much more in line with his actual impact on the game.

Not sure how to implement this for players with no, partial or multiple direct opponents, but it would be an interesting exercise.
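The adjustment is simple to express. A direct sketch below: the R4 2015 numbers are from the post above, and the 102.5 season average is back-solved from "41 was about 40% of his average".

```python
# Sketch: credit a defender with the fantasy output his direct opponent
# fell short of that opponent's season average.
def adjusted_rating(defender_dt: float, opp_dt: float, opp_avg: float) -> float:
    """Defender's DT score plus the opponent's suppressed output."""
    suppression = max(0.0, opp_avg - opp_dt)  # never penalise a defender further
    return defender_dt + suppression

# Bobby vs Roughead, R4 2015: 47 DT, opponent held to 41 DT vs a ~102.5 average.
print(f"adjusted: {adjusted_rating(47, 41, 102.5):.1f}")  # ~108.5, i.e. "about 110"
```

One way to handle the no/partial/multiple-opponent problem (purely an assumption on my part) would be to weight the suppression term by the share of game time each matchup actually ran for.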
 
The club is putting up some stats from the 2017 season every week. I don't think they will be putting up any poor stats, but at least we have some numbers.

Port Adelaide's 2017 season by the numbers: Stoppage
1 – Where Port Adelaide ranked in the AFL for scoring from stoppages.

378.8 – The number of possessions the Power averaged per game in 2017.

831 – The number of hitouts Paddy Ryder had this year – ranked second in the AFL.

253 – The number of hitouts to advantage Paddy Ryder had in 2017.

891 – The number of clearances the Power had in 2017 – ranked third in the AFL during the home-and-away season.

38.7 – The number of clearances Port averaged in their 23 games in 2017 – also ranked third in the AFL per game.

6 – The number of clearances Ollie Wines averaged per game in 2017 – ranked 16th in the league.

317 – The number of contested possessions Ollie Wines had this season – ranked first at the Power and seventh overall.
http://www.portadelaidefc.com.au/news/2017-11-14/by-the-numbers-stoppage


Port Adelaide's 2017 season by the numbers: Offence
2168 – The number of points the Power scored in their 22 home-and-away games this year. They were second in the AFL for points scored, with their highest score of 150 coming in Round 6 against the Lions at the Gabba.

30 – The number of goal assists Robbie Gray had in 2017, ranked second in the AFL. Gray also kicked 47 goals of his own.

428.5 – The number of metres Jared Polec gained per game in 2017. Polec recorded the most metres gained throughout the season for the Power – 9427. Port were ranked second in the AFL for total metres gained in 2017.

1 – Where Port Adelaide ranked in the AFL for time in forward half this season.

101 – The total number of inside 50s by Brad Ebert this year – the most at Port Adelaide.

1363 – The number of times Port Adelaide went inside 50 for the year. The Power averaged 59.3 inside 50s per game, ranked no.1 in the AFL.

5 – The number of inside 50s Chad Wingard averaged per game in 2017, the highest for the Power.
http://www.portadelaidefc.com.au/news/2017-11-10/by-the-numbers-offence
 
http://www.portadelaidefc.com.au/news/2017-11-16/by-the-numbers-defense
Port Adelaide's 2017 season by the numbers: Defense
48.9 – The number of inside 50s Port Adelaide conceded per game in 2017 – the fewest in the AFL

1 – Where Ken Hinkley’s men ranked for forward-half turnovers last season.

7.33 – The number of intercept possessions Hamish Hartlett averaged per game in 2017 – ranked first at Port and 12th in the AFL.

176 – The number of tackles Brad Ebert laid last year – ranked second in the competition.

22 – The number of one-percenters Dougall Howard had in last year’s Elimination Final – the most of any player in a single game in 2017.

168 – The number of one-percenters Tom Jonas had in 2017 – ranked first at Port Adelaide and fifth in the competition.
http://www.portadelaidefc.com.au/news/2017-11-16/by-the-numbers-defense
 
The Power averaged 59.3 inside 50s per game, ranked No. 1 in the AFL, and conceded 48.9 inside 50s per game, the fewest in the AFL.

Yet we finished 7th. That shows we have a lot of work to do on our skills to take advantage of these two stats.
 
