Certified Legendary Thread: The Squiggle is back in 2023 (and other analytics)


Is it just me or does this total 199 wins?
It does!

The short answer as to why is "rounding." But the real answer is more interesting.

Most people think if you're going to predict a ladder, it should at least pass the sniff test of adding up to the right number of wins. Which makes intuitive sense. If the ladder can't even get that right, why should it be right about anything else?

But this is actually wrong: Using the same data, a logically impossible ladder will be more accurate than a possible one.

What people imagine is that there are lots of possible ladders and you should pick the most likely one. The problem here is there are way more possible ladders than you think. There are actually 6,402,373,705,728,000 of them, which you can calculate with 18 x 17 x 16 ... (also known as 18!, or 18 factorial). The way that works is if you're making a ladder, you can put any of 18 teams in the first slot, then any of 17 remaining teams in the next slot, and so on.

6.4 quadrillion is a lot of ladders! And we're just getting started, because a ladder with Melbourne first on 17 wins is different to a ladder with Melbourne first on 16 wins. Even if we're really conservative and say the number of wins at each ladder position can only be one of two possible values (e.g. 1st must have 17 or 16 wins, 2nd-3rd must have 16 or 15, and so on), we are now dealing with 1,678,343,853,000,000,000,000 possible ladders, or 1.67 sextillion (36 x 34 x 32...).

There are so many possible ladders that it's mind-bogglingly improbable that anyone will guess it just right. And by that, I mean even in our conservative scenario, it's less likely than you buying three Tatts tickets on three consecutive weeks and winning the jackpot each time on your first try. Even though some ladders are clearly more likely to occur than others, there are still so many combinations that for all practical purposes, it's certain that the most likely one WON'T occur.
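
If you want to check those numbers, the arithmetic is easy to reproduce -- here's a quick Python sketch. (The lottery figure is my own back-of-envelope for a 6-from-45 draw, not an official Tatts number.)

Code:
import math

# Orderings of 18 teams: 18 x 17 x 16 x ... x 1.
orderings = math.factorial(18)
print(orderings)          # 6402373705728000, i.e. ~6.4 quadrillion

# Conservative case: two possible win totals at each ladder position
# doubles every factor (36 x 34 x 32 ...), i.e. multiplies by 2**18.
print(orderings * 2**18)  # ~1.68 sextillion

# For scale: three straight first-division wins in a 6-from-45 lottery
# (1 in 8,145,060 per draw) is about 1 in 5.4e20.
print(math.comb(45, 6) ** 3)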

Here's an example in practice: Tony Corke from Matter of Stats was dealing with a relatively tiny number of possible ladders -- 24,088 distinct ladders from 50,000 simulations, with only 4 rounds left to play and looking at ranks only, not number of wins. Even then, the most common ladder occurred only 0.16% of the time. That is, it was 99.84% likely to be wrong in some way.
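
To get a feel for how this happens, here's a toy version of the counting step. To be clear, this is not Tony's model -- just 18 equal teams flipping coins for 22 games -- but it shows how thinly the sims spread across distinct ladders:

Code:
import random
from collections import Counter

teams = list(range(18))

def simulated_ladder():
    # Toy stand-in for a real simulator: every one of 22 games is a coin flip.
    wins = [sum(random.random() < 0.5 for _ in range(22)) for _ in teams]
    # Rank teams by wins, ties broken by team index -- ranks only,
    # not win totals, like Tony's table.
    return tuple(sorted(teams, key=lambda t: -wins[t]))

ladders = Counter(simulated_ladder() for _ in range(50_000))
mode_ladder, count = ladders.most_common(1)[0]
print(len(ladders), "distinct ladders from 50,000 sims")
print(f"mode ladder frequency: {count / 50_000:.2%}")

The exact numbers vary run to run, but the mode ladder's frequency is always tiny -- and Tony's real case was more concentrated than this toy one, since his sims had only 4 rounds of uncertainty left.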


So if the mode ladder -- that is, the ladder that's most likely to occur, according to your calculations -- is definitely going to be wrong... why use a mode ladder?

There's a reason mode is used far less often than mean or median when people want to average a range of data (e.g. many possible ladders) down into a single representative value (e.g. one ladder): Mode is insensitive to the way the data is distributed.

[Image: PTN2p5e.png]


For example, maybe GWS finish 4th in your mode ladder. But that might be their highest plausible position in your sims, and they otherwise finish 5th-8th. Or maybe it's skewed the opposite way, and they rarely finish any lower than 4th. A mode ladder can't reflect this, so taking a mean instead better represents what your probabilities are saying.

We can see this with a simplified example where the league has 3 teams -- West Coast, North Melbourne, and Carlton -- and each team plays the other two once, making 3 games in a season. Let's simulate this season three times:

West Coast: 2 wins, 2 wins, 1 win
North Melbourne: 1, 0, 1
Carlton: 0, 1, 1

In reality, we'd run more than three simulations, to reduce the effect of luck -- I reckon Carlton fluked a game or two here. But anyway, with this data set, our mode ladder would look like this:

1. West Coast: 2 wins
=2. North Melbourne: 1 win
=2. Carlton: 1 win

This adds up to 4 wins, but there are only 3 games! The mode ladder is ignoring how each team's number of wins has a downward skew -- that is, how there are sims in which each team won fewer games than its mode, but no sims in which a team won more games than its mode.

A mean ladder doesn't have this problem:

1. West Coast: 1.667 wins
=2. North Melbourne: 0.667 wins
=2. Carlton: 0.667 wins

... until you round it! If you do, you will round every team up, and get the same result as the mode ladder.

Now, you don't want to round it, because that's just throwing away accuracy. You should really post your ladder with decimal places in it, and hope that people understand that 1.667 wins means "2 is more likely than 1." But most people expect their predicted ladders to have whole numbers of wins, so you might do it anyway. And at that point, someone might observe that your ladder doesn't add up right.

There's only one way to make it add up right: round a team down instead of up. But which one? There's no reason to round Carlton down and North Melbourne up: They have the exact same value. If you do round down a team, your ladder now adds up, but has become less accurate, because one team's predicted wins is 0.667 away from what your model said, instead of 0.333.
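
If you'd like to check all this for yourself, here's the whole toy example in a few lines of Python (statistics.mode and statistics.mean do the work):

Code:
from statistics import mean, mode

# Three simulated seasons from above: each team's win count per sim.
sims = {
    "West Coast":      [2, 2, 1],
    "North Melbourne": [1, 0, 1],
    "Carlton":         [0, 1, 1],
}

mode_ladder = {team: mode(wins) for team, wins in sims.items()}
mean_ladder = {team: mean(wins) for team, wins in sims.items()}

print(sum(mode_ladder.values()))  # 4 -- impossible, only 3 games were played
print(sum(mean_ladder.values()))  # 3.0 (give or take float error) -- it adds up
print({t: round(w) for t, w in mean_ladder.items()})  # rounding sends it back to 4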

Fundamentally, this problem stems from the fact that you're trying to squash a range of values (many possibilities) down to a single value ("most likely"). There's just no way to do that without some loss of information. All you can do is choose which information you care about and prioritize that.

If you can see that the numbers in these kinds of predictions are average values representing a range of plausible ones, and that there's no point in trying to predict a ladder just right because that's impossible, then you can see, I hope, that it's better not to force the ladder to be logically possible, and to look at the decimal numbers in the far-right column instead.
 
Well yeah, sure.

:eek: since you put it that way.
 


my head hurts.
 


That's a super impressive post. So much info, and it seems so complicated until you spell it out.
 
Surely the bigger fallacy is having Carlton on 6 wins :D
Carlton are easing toward 5 wins in my more recent forecasts.

[Image: ScCNp3O.png]


There is a quirk where the forecast always implies the coming season will be very tight, with 1st on ~15 wins while last wins 4 or 5, even though this is unlikely. That's because someone will suffer a bad run with injury + lose a couple of close games + not be much chop to start with + begin to look toward next year and wind up winning only 4 games -- like St Kilda did this year -- or, conversely, be good + have a fortunate run with injury and win 18, like Richmond.

We can't know who will be lucky or unlucky, though, so the prediction must assume an average season for everybody.

Therefore although Carlton are predicted to finish 17th on 5-6 wins, in reality, if they finish 17th, they probably won't win 5 or 6 games -- because there will be other teams also having unlucky seasons, and Carlton must finish below all but one of them. And if Carlton do win 5 or 6 games, they'll almost certainly finish higher than 17th!
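
If you want to see that smoothing effect in isolation, here's a toy sketch in Python. Note the assumptions: 18 hypothetical equal-strength teams playing 22 coin-flip games each, with no real fixture, so it only illustrates the spread:

Code:
import random
from statistics import mean

SEASONS, TEAMS, GAMES = 10_000, 18, 22
tops, bottoms = [], []
team_wins = [[] for _ in range(TEAMS)]

for _ in range(SEASONS):
    wins = [sum(random.random() < 0.5 for _ in range(GAMES)) for _ in range(TEAMS)]
    tops.append(max(wins))
    bottoms.append(min(wins))
    for i, w in enumerate(wins):
        team_wins[i].append(w)

# Any single season is wide: someone runs hot and someone runs cold.
print(mean(tops), mean(bottoms))               # roughly 15 down to 7
# But the averaged ladder is tight: every team sits near 11 wins.
print([round(mean(w), 1) for w in team_wins])

Every individual season has a clear top and bottom, but since we can't know in advance who it will be, averaging across seasons squashes everyone toward the middle -- which is exactly what the forecast table does.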

P.S. In exactly one of these sims, Carlton win 18 games.
 
I would love to see this sim
 
This is cool. The world doesn't have enough histograms.

Is Freo still stuck on 8 wins? Squiggle hates us I know, but surely we can eke out 9 wins this year?
 
Still a work in progress, but this is how the 2019 prediction is shaping up, factoring in trades & players returning from injury.

[Image: ZhNYdcH.png]


The number of predicted wins is smooshed in toward the middle -- in reality, some teams will certainly win more than 14 games -- since we can't really know who will break away from the pack. The number in the far-right column is the best indicator of what Squiggle thinks.
freo with 8 wins for the 3rd season in a row?
 
Can it predict if Carlton will ever score 100 points again ?
 


There's never been a better time to be bearish on the Swans!

<snip>

2. Lack of Injury Upside

Every team will welcome back important players next year who weren't able to get on the park by the end of 2018. In most cases, this should make a real difference, because they were missing a lot of players, or a couple of very good ones. But three clubs were able to name sides late in 2018 that are basically identical to their best 22 today: North Melbourne, Sydney, and Richmond.

To be fair, the Swans had a couple of key players running around in the final who were clearly hampered by injury. (Likewise Richmond.) But still, it means the club was able to put out something like its best complement of soldiers even late in the season, and so was operating near its peak.

<snip>

Sydney don't have injured players to bring back.

That's rather harsh on Callum Mills; I would've loved to have had him helping in the midfield. Nick Smith is a pretty useful defender who was out for the finals. Naismith would have been very useful as a ruck, giving Sinclair a chance to swing forward to support Buddy. I've given up on Reid ever contributing, and Hannebery/Jack were addition by subtraction last year, so it's hard to consider them as a loss; that said, just having different, healthy players who aren't complicating selection meetings and can run out each game is a step up.
 

Freo for Bottom 4? That's a massive call from the Squig! Ross Lyon's headless corpse would be strung from the highest point at Optus Stadium if that happened. Don't get me wrong, I live to see West Coast win flags and Freo eat sh*t, but I can't see this happening. Surely they'll finish in the 9-12 range?
 
I mean, statistically speaking, it's more likely that a team will win about the same number of games next year as they did this year, compared to winning a lot more or a lot less. Ladders tend to be reasonably static... you notice the bolters, but most teams stay roughly where they were.
 
This is how I look at Sydney's 2018 -> 2019 player movements using AFL Ratings Points:
Code:
SYDNEY
         Daniel Menzel    +338.2 *
           Ryan Clarke    +222.2
       Jackson Thurlow    +129.3
          Alex Johnson      -8.1
          Jordan Foote     -21.9
          Kurt Tippett     -75.3
       Daniel Robinson     -81.4
           Harry Marsh    -123.5
           Dean Towers    -213.3
            Gary Rohan    -237.9 *
            Nic Newman    -300.8 *
         Dan Hannebery    -324.1 *
                       ---------
                          -696.6 Trade gain
                          -131.1 Best 22
* Best 22 player

So if Sydney field their Best 22 in Round 1, 2019 by this measure, then the difference to their lineup in their last game this year looks like:
Code:
INS::::
         338.2 Daniel Menzel (NEW)
         285.8 Callum Mills
         237.6 Nick Smith
         223.5 Sam Reid
         222.2 Ryan Clarke (NEW)
         171.3 Lewis Melican
     ------------------------------
         327.3 Dan Hannebery (GONE)
         300.8 Nic Newman (GONE)
         163.7 Aliir Aliir
         150.0 Ben Ronke
          85.0 Tom McCartin
          80.5 Daniel Robinson (GONE)
OUTS::::
        +371.3 Net (Ins: +1478.6, Outs: +1107.3)

So this shouldn't be taken too literally -- e.g. Naismith is on 147.7 AFL Ratings points so doesn't quite make the cut, but Sydney will very likely use him as ruck, and of course you can also argue that various players are under- or over-valued. But it provides a nice objective sense of how much proven talent will be entering and leaving the side.

The "+371.3 Net" figure at the bottom means Sydney's potential Round 1, 2019 lineup is better than their 2018 EF side by 371 AFL Ratings points. Which sounds pretty good! The problem is every club's R1 2019 lineup is better than its final 2018 game, and most are a lot better.

For example, here's Carlton:
Code:
CARLTON
            Nic Newman    +300.8 *
        Mitch McGovern    +206.2 *
           Alex Fasolo    +147.6
      Will Setterfield      +0.9
        Cameron O'Shea     -21.3
          Ciaran Byrne     -60.4
         Alex Silvagni     -71.3
           Nick Graham    -115.7
              Jed Lamb    -170.7 *
          Sam Kerridge    -173.5 *
              Sam Rowe    -204.5 *
         Aaron Mullett    -231.1 *
        Matthew Wright    -323.8 *
                       ---------
                          -716.8 Trade gain
                          -176.6 Best 22
                         +1193.1 vs last game
INS::::
         397.4 Matthew Kreuzer
         316.3 Liam Jones
         301.3 Lachie Plowman
         300.8 Nic Newman (NEW)
         275.0 Zac Fisher
         229.4 Levi Casboult
         206.2 Mitch McGovern (NEW)
         197.4 Matthew Kennedy
         165.0 Paddy Dow
         147.6 Alex Fasolo (NEW)
     ------------------------------
         323.8 Matthew Wright (GONE)
         235.1 Aaron Mullett (GONE)
         204.5 Sam Rowe (GONE)
         173.5 Sam Kerridge (GONE)
         170.7 Jed Lamb (GONE)
          67.0 Lochie O'Brien
          60.4 Ciaran Byrne (GONE)
          54.7 Cameron Polson
          46.6 Matthew Lobbe
           7.0 Tom De Koning
OUTS::::
       +1193.1 Net (Ins: +2536.4, Outs: +1343.3)
Carlton were playing a lot of kids late in 2018, so we should expect them to be substantially better just on the fact that they'll replace many of them with seasoned players.

Likewise another of Sydney's trade partners, St Kilda:
Code:
ST KILDA
         Dan Hannebery    +324.1 *
             Dean Kent     +77.2
          Hugh Goddard      -5.6
        Nathan Freeman     -18.0
         Nathan Wright     -22.9
    Darren Minchington     -41.7
          Koby Stevens    -132.5
       Maverick Weller    -218.8 *
            Tom Hickey    -229.3 *
           Sam Gilbert    -238.2 *
                       ---------
                          -505.7 Trade gain
                           -36.7 Best 22
                         +1262.2 vs last game
INS::::
         351.4 Luke Dunstan
         324.1 Dan Hannebery (NEW)
         303.5 Shane Savage
         241.2 Dylan Roberton
         223.5 Billy Longer
         199.6 Nathan Brown
         190.4 Josh Bruce
     ------------------------------
         238.2 Sam Gilbert (GONE)
          85.8 Ben Long
          76.5 Rowan Marshall
          63.3 Logan Austin
          56.5 Bailey Rice
          30.3 Ben Paton
          20.9 Lewis Pierce
OUTS::::
       +1262.2 Net (Ins: +1833.7, Outs: +571.5)

Most teams had about 3 kids playing by the end of the season, whom they would hope to replace or see appropriate improvement from next year -- even finals sides like Melbourne:
Code:
MELBOURNE
            Steven May    +353.1 *
     Kade Kolodjashnij    +138.0
        Braydon Preuss     +62.4
             Dean Kent     -77.2
      Cameron Pedersen    -218.2
          Bernie Vince    -231.5
           Jesse Hogan    -302.8 *
             Dom Tyson    -331.8 *
                       ---------
                          -608.0 Trade gain
                           -46.7 Best 22
                          +873.4 vs last game
INS::::
         354.2 Jake Lever
         353.1 Steven May (NEW)
         311.3 Jeff Garlett
         297.1 Jayden Hunt
         234.8 Bayley Fritsch
     ------------------------------
         331.8 Dom Tyson (GONE)
         124.8 Charlie Spargo
          89.0 Sam Weideman
          66.8 Aaron vandenBerg
          64.7 Joel Smith
OUTS::::
        +873.4 Net (Ins: +1550.5, Outs: +677.1)

An exception, interestingly, is West Coast, who, despite missing two very big names late last year, didn't actually have to draw too heavily on their depth. So they aren't rated as having much upside, either:
Code:
WEST COAST
            Tom Hickey    +229.3 *
       Luke Partington     -38.9
       Malcolm Karpany     -57.3
        Eric Mackenzie     -93.2
          Scott Lycett    -247.8 *
           Mark LeCras    -328.1 *
                       ---------
                          -536.0 Trade gain
                          -191.4 Best 22
                          +302.1 vs last game
INS::::
         380.4 Andrew Gaff
         237.3 Brad Sheppard
         233.5 Nic Naitanui
         229.3 Tom Hickey (NEW)
     ------------------------------
         329.4 Mark LeCras (GONE)
         239.8 Scott Lycett (GONE)
         121.5 Liam Ryan
          87.7 Daniel Venables
OUTS::::
        +302.1 Net (Ins: +1080.5, Outs: +778.4)

Of course, all this may turn out to be of no consequence at all. The Eagles dropped a truckload of experienced talent at the end of 2017, which you might logically have expected to send them down the ladder. Instead they went in the opposite direction!
Code:
WEST COAST 2017 -> 2018 ***
       Brendon Ah Chee    +156.7
              Tom Lamb      -0.0
       Simon Tunbridge     -19.0
        Jonathan Giles     -76.1
            Sam Butler    -176.4
    Sharrod Wellingham    -219.0 *
           Drew Petrie    -317.3 *
             Josh Hill    -331.9 *
          Matt Priddis    -469.4 *
          Sam Mitchell    -471.3 *
                       ---------
                         -1923.7 Trade gain
                          -849.7 Best 22
                          -656.0 vs last game
INS::::
         273.1 Chris Masten
         204.6 Scott Lycett
         201.6 Will Schofield
         156.7 Brendon Ah Chee (NEW)
     ------------------------------
         470.5 Sam Mitchell (GONE)
         469.3 Matt Priddis (GONE)
         317.6 Drew Petrie (GONE)
         234.6 Sharrod Wellingham (GONE)
OUTS::::
        -656.0 Net (Ins: +836.0, Outs: +1492.0)
 
I know this is a generic type measurement, but Weideman and vandenBerg will most definitely not be "outs" in round one.

But yes, I suppose I'm missing the point.
 
Final Siren I was playing around with the ladder predictor and noticed the autotip doesn't work. Is this because you're still inputting the maths, or do you need to wait for the season to start for data?

Also, for the margin range, I reckon it should go up to around 100-150 (manually entering the scores gets rather tedious). But that's just my opinion anyway.
 
I don't think anyone has published game-by-game tips for 2019 yet (not even Squiggle), which is what AutoTip relies on. They'll arrive before the start of the season, but I'm not sure exactly when.

Thanks for the feedback!
 
It does seem like AFL Ratings points overweight aging stars on the decline and underweight rising talent who will probably get more games. But this may be wrong; I could believe that Champion Data gets it right more often because humans tend to overrate the chances of the exciting new kids and underrate how much value there is in long-term stalwarts.

Either way, though, you are right that teams will play a few kids even if they're objectively less productive than older players, just because you need to do that to renew the team. That shouldn't matter too much to this analysis unless some teams do it a lot more than others.
 
[Image: squiggspre19.png]
I think it's time to start talking about this again. This year's starting formation seems to be a cluster of teams at the top, with a tail formed by the bottom teams. Richmond, Melbourne and GWS seem to be the front runners, with West Coast, Collingwood and Geelong basically there too.
Perhaps of greatest interest is the awfully low ranking of Sydney, reminiscent of the 2016 prediction for Fremantle (which, in my opinion, is the Squiggle's greatest prediction).
 
