Team Rating System, update after Finals, Week Three

Hobbes, the second ladder would be more interesting if it covered teams' last 13 home-and-away matches (I'm assuming finals are included, as you just said 'each team's last 13 matches'). Finals matches will obviously skew results, as there would be a higher incidence of positively rated teams.
 
Could you explain how Adelaide is in front of an undefeated North Melbourne, having lost to them only 3 weeks ago? :drunk:
He is clearly trying to account for multiple factors and come up with a ranking which indicates how well a team is travelling. E.g. Port have beaten the Saints and the Dons at home, while another team with 2 wins may have beaten the Hawks and the Dogs away. Which is more impressive?

Perhaps you should turn to the sports page of yesterday's paper and refer to the official AFL ladder if that makes you feel better.

Stop being so negative... Thanks OP for trying something really interesting... keep it up.
 
North might not be as highly rated as if they had bigger wins or played better teams, but they still have a significant positive rating.

Don't get me wrong, I'm not offended as such; it just seemed a weird contradiction. Way too early to tell after only 3 games with any system. Looking at previous years can provide context, but that is fraught with danger too; the biggest of changes can happen to a team over a pre-season. Based on right this minute, there is no evidence that either Richmond or Port Adelaide are a particularly hard task this year, even with Richmond away.
 
Currently, Richmond are rated at a small minus and Port at a small plus (and Adelaide's wins were quite big). So far, they look like tougher draws than Brisbane and Melbourne. The model will re-evaluate results retroactively, so the value of these wins will be reconsidered if any team's form shifts.
 

Will be interesting to see how things go over the year - you have more patience than me that's for sure. We all look back at the end of the year and rate certain wins / losses based on how teams ended up, but even that isn't truly fair. Even lower teams can have periods where they are flying while ending up towards the bottom of the ladder. Injury, form, the whims of the universe all play a part.
 
For a variation, I tried using the same method but incorporating each team's last 13 matches, going back to the middle of last year's minor round.

(If I continue this calculation through the year, it will eventually converge with the other rating.)

This left the following rankings:

1. Hawthorn
2. Adelaide
3. Western Bulldogs
4. Sydney
5. West Coast
6. North Melbourne
7. Geelong
8. Port Adelaide
9. Richmond (!!)
10. Gold Coast
11. Fremantle
12. GWS
13. Collingwood
14. Melbourne
15. Brisbane
16. St Kilda
17. Essendon
18. Carlton

And the following tips for this weekend:

West Coast +27 v Richmond
Geelong +40 v Essendon
Hawthorn +52 v St Kilda
Brisbane v Gold Coast +17
Carlton v Western Bulldogs +58
Adelaide +17 v Sydney
GWS v Port Adelaide +10
Melbourne v Collingwood DRAW (I calculate an advantage to Collingwood of 0.2)
North Melbourne +33 v Fremantle
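The margins above read like rating gaps plus a venue adjustment. A minimal sketch of that idea in code, assuming a flat home-ground bonus (the 10-point bonus and the exact formula are my guesses, not the OP's published numbers):

```python
def predict_margin(home_rating, away_rating, home_bonus=10.0):
    """Tip a margin as the rating gap plus a flat home-ground bonus.
    The bonus size is a placeholder assumption, not the OP's value."""
    return home_rating - away_rating + home_bonus

# A hypothetical +20 home side hosting a +5 visitor would be tipped by 25
print(predict_margin(20, 5))  # 25.0
```

A near-zero output (like the 0.2 edge in the Melbourne v Collingwood tip) would read as a draw under this scheme.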

That seems to suit the eye test to a much better standard. I feel comfortable in those tips, except for Adelaide beating Sydney.
 
In the initial post, no. About halfway down the page there's a second ladder and set of tips which does take into account the last 13 matches played by every team, which does extend back to the middle of last year.

I can post both versions as we go to compare them.

Yeah, I saw that, but then what do you base the ratings of each team on at the very start for those 13 games?

If you start all teams as even, then the teams that get the easy games first up will always be overrated, and the reverse for the teams that start with a hard draw. That will then skew the rankings for all time, essentially.

Using last year's ladder is also flawed, as lists have changed a lot; maybe somehow use the premiership odds at the start of the year as a baseline; at least they are unbiased. How you rate each side prior to starting is vital, though.
 
Except his multiple factors combined to overrate the team he just happens to support; call me suspicious, but I call bullshit.

For example, even among the two-win teams, beating West Coast and the Bulldogs is surely better than beating Port and Richmond.

The problem is his system seems to favour beltings over the quality of the opposition: beating Richmond by 36 away rates better than beating the Bulldogs by 3 at Etihad, but does anybody actually think that?

The sample size is also way too small; it probably rates Port as an impressive win, but they really haven't beaten anyone of note at this point.

Perhaps you should return to the Adelaide boards if you don't like people questioning just how an Adelaide poster managed to get his team so high in the rankings.
 

The algorithm will continue to re-evaluate results based on the teams' future performances. So, the results against Port and Richmond will be evaluated with more accuracy when there's more data.

The method deals quite effectively with beltings. If a typical top-4 team (roughly +25) plays a typical bottom-4 team (roughly -25), then their baseline result to keep their rankings intact is +50. And wins of more than 50 are shrunk by half of the excess (so Adelaide's 58-point win over Port was actually reduced to +54, while a 120-point massacre would be recorded as +85).
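The shrinkage rule is simple enough to pin down in a few lines. A minimal sketch (the function name and parameters are my own labels, not the OP's):

```python
def shrink_margin(margin, cap=50, factor=0.5):
    """Shrink the portion of a winning margin above `cap` by `factor`.

    Matches the rule described above: wins of more than 50 points
    keep only half of the excess over 50."""
    if margin <= cap:
        return margin
    return cap + (margin - cap) * factor

# The two worked examples from the post:
print(shrink_margin(58))   # 54.0
print(shrink_margin(120))  # 85.0
```

Margins at or under the cap pass through unchanged, so only blowouts are dampened.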

Since I've posted virtually all of my methods in advance, I'll have nowhere to hide if Adelaide's ratings are affected less favourably by future results. So I'll just keep following the numbers. Note that Adelaide has a well-known killer draw until round 8, so it's likely that they'll rate higher than their ladder placing for a little while.

Even the sainted Squiggle has Adelaide rated as the best attack in the league, and third on their flagpole measure. It's not far-fetched to suggest that Adelaide are a serious contender. We'll know more in the next month or so, with key matches against teams like Sydney, Geelong and Hawthorn.
 

All of the ratings are calculated and recalculated every round, including modifiers for past results. So, if the data is insufficient, the results will be placed into a more accurate context as more scores get entered later.
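That round-by-round recalculation can be illustrated with a small fixed-point sketch. This is my reconstruction of the general idea, not the OP's actual formula: each team's rating is its average margin adjusted by opponent strength, and the whole table is recomputed until it settles, so an early result is re-valued whenever the opponent's rating moves with later data.

```python
from collections import defaultdict

def rate_teams(results, iterations=100):
    """Iteratively re-rate teams. `results` holds (home, away, margin)
    tuples, margin = home score - away score. Illustrative only."""
    games = defaultdict(list)            # team -> [(signed margin, opponent)]
    for home, away, margin in results:
        games[home].append((margin, away))
        games[away].append((-margin, home))

    ratings = {team: 0.0 for team in games}
    for _ in range(iterations):
        # Each rating = average of (own margin + opponent's current rating)
        ratings = {team: sum(m + ratings[opp] for m, opp in gs) / len(gs)
                   for team, gs in games.items()}
        # Re-centre so the table averages to zero each pass
        mean = sum(ratings.values()) / len(ratings)
        ratings = {t: r - mean for t, r in ratings.items()}
    return ratings

# Hypothetical margins: A's big win over C gains value because C later beats B
table = rate_teams([("A", "C", 40), ("C", "B", 10), ("A", "B", 30)])
```

On this toy input the table settles with A on top, C ahead of B, which shows the "context added later" effect the post describes.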
 
Just on that last post: look at what Final Siren did with his squiggles, and chuck yours up on the net. Clearly and concisely describe how your system works and what the input data is, and when people offer feedback, create new models. It doesn't matter if the feedback is slanderous; it's feedback.

Learn from Roby: don't be Roby.
 
Have you used your algorithm to run through last year's results, starting at round 1?

It seems to me that, knowing what the final results are in advance, it will show you where the multiplying factors might need a tweak when applied to a new and unknown season.
 


Sure, will do.

Adelaide's three results (weighted 20-18-16) are rated at a gross of +42, +54 and -5. This takes into account the match result and away bonuses. The net results, taking into account the quality of the opposition, are +34, +57 and +2.

North Melbourne's three results are worth a gross of +8.5 (a small bonus for falling over the line in a close match), +40 and +5. The net results are +5, +21 and +19.

In layman's terms, North's failure to beat Melbourne by more than Essendon managed did not really impress, and little was proved by beating Brisbane by a smaller margin than everybody else. In comparison, Adelaide had a small away loss to a strong team, and two large wins against teams rated by many as contenders. (OK, Richmond were regarded as a contender, but many are now discounting them.)
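Reading those figures as a recency-weighted average, the arithmetic can be sketched as follows. The 20-18-16 weights and the net results are quoted from the post; combining the net results by plain weighted average (and the weight ordering) is my assumption:

```python
def weighted_rating(net_results, weights):
    """Combine per-match net results into one rating as a weighted
    average. Assumes a simple weighted mean, which may differ from
    the OP's exact combination."""
    return sum(w * r for w, r in zip(weights, net_results)) / sum(weights)

adelaide = weighted_rating([34, 57, 2], [20, 18, 16])   # ~32.2
north = weighted_rating([5, 21, 19], [20, 18, 16])      # ~14.5
```

Under that reading, Adelaide's two big net results outweigh North's three modest ones, matching the gap between the two sides in the ladder above.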

This is where the ridiculousness of math gets in the way of logic. Melbourne played significantly better than they did the week before, as did the Lions after being humiliated by the Eagles in round one. It isn't our fault that teams played worse prior to facing us; it is coincidence.

The only thing your algorithm hopes for is that form fluctuates and teams overall balance out between catching opponents in good form and bad. However, that isn't guaranteed. Unless you build in a factor that designates how close to their capacity a team is performing, you have a very flawed grading system for comparing results.
 
Doesn't really sound right to me.

Because then aren't we only rating teams based on their ceiling? That would put flat-track bullies artificially higher.
 

I started doing this midway through last season, using all of last year's results. I can report that it works reasonably well. Some tweaks, like shrinking the biggest margins, are based on earlier models.

Applying the algorithm publicly like this is sort of a test for myself: I can see how the existing model works without any temptation to tweak as I go. Whether the ratings are good or bad should be demonstrated by the predictive aspect. I'm ambivalent about including last year's results, since some teams change heavily from year to year, but I've included a version which does that as a Mark-2 model. The version which uses only this year's results will be of reduced value until there are about 8 rounds of data, I'm guessing; that would be over 85% of the weighting covered.
 

If every team is playing better against you, maybe you're doing something wrong which you're not recognizing. Making the opponents play badly can be a mark of a good team.
 

I am not sure how a flat-track bully can be over-inflated. It should discount the impact of their performance if they are bullying a team that is playing shit; if they are playing a team that is playing well, then it shouldn't be a flat-track performance.
 

Nobody played better than us; that is why we are undefeated. There are factors that resulted in the margins being closer in Rounds 2 and 3 that were beyond our control.

Take my side, for example: it was not within our control that Brisbane got rogered by the Eagles in Round 1 and lifted their performance the following week. Similarly, Melbourne losing to Essendon was a significant catalyst for them playing closer to their potential than they did the week before.

It was just a coincidence that we met both sides coming off a bad loss, and ironically we now face Fremantle coming off another bad loss.

Things like the capacity to play to your potential are a factor: Sandilands going down and Mundy not being available will have a significant impact on this game, and ignoring availability just makes it a poor guess.

Some teams match up well against some opponents and poorly against others, which produces results that statistically make no sense. If you can't factor in what are predictable outcomes, then the algorithm resorts to even more guesswork.

How do you factor in a team's ability to score? It is much easier to score at Docklands because of the roof; it was harder to score in hot and humid conditions in Brisbane, and even harder to score in the windy conditions in Tasmania.

To have a somewhat reliable algorithm, it has to at least attempt to factor in all the variables you can possibly predict; otherwise you are banking on all the elements you do not take into consideration evening each other out, and they will not always even each other out.

There are always going to be random elements of chance; however, if you ignore predictable elements, it will be a flawed model. How flawed will largely be determined by how accurately it can process known and significant elements about results. With most models we have seen to date, you could emulate the results, or do better, by throwing darts at a dartboard.
 



I'm not even attempting this sort of subjective variable. I don't care if your three best players are injured, the umpires were on a retainer from the other team, or the wind changed direction every quarter. If every team lifts against you, maybe you need to work on your pressure. If you suffer from injuries, maybe you need more depth in the squad.
 

Run your algorithm through last year; if it can't produce reliable results, you know it's worth sticking in the bin... or on the dartboard.
 
Sounds a lot like the Squiggle.


An easy way to test an algorithm is to see how well it works on past seasons and tweak till it's predicting correctly.

Obviously pretty hard to do.
 

I did run it last year, and it worked fine. I don't know what you're calling "reliable results". If you like, I can tally the win-loss predictions and compare them with the bookies' favourites, and compare the margin predictions with the bookies' lines.
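A tally like that only needs the predicted and actual margins side by side. A minimal sketch with hypothetical numbers (the helper and its names are mine, not the model's code):

```python
def tally_tips(predicted, actual):
    """Count correct winner picks and the mean absolute margin error.
    Both lists hold margins from the same side's perspective; draws
    (actual margin 0) are not counted as correct picks here."""
    correct = sum(1 for p, a in zip(predicted, actual)
                  if a != 0 and (p > 0) == (a > 0))
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)
    return correct, mae

# Hypothetical margins for three tipped games
correct, mae = tally_tips([27, 40, -17], [12, 55, 3])
```

Comparing that pair of numbers against the bookies' favourites and lines over a season would be a fair "reliable results" test.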
 
