At the risk of defending Meteoric Rise, they are talking about maximum v minimum discrepancy, rather than the absolute or actual discrepancy. The GWS v Pies example is a good one because the maximum discrepancy is +4/-4 (note it can go either way; an educated guess using team bias from the coach can be helpful, but lacks reliability and repeatability), while the minimum discrepancy is 0 (a 2&2 split).
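To make those bounds concrete, here's a minimal sketch, assuming the GWS v Pies case comes down to four decisions that could each have gone to either side (the team names and the count of four are taken from the example above; everything else is illustrative):

```python
from itertools import product

# Assumption: 4 contentious decisions, each could go to either team.
DECISIONS = 4

# Enumerate every way the 4 decisions could split between the two teams.
discrepancies = set()
for outcome in product(("GWS", "Pies"), repeat=DECISIONS):
    gws = outcome.count("GWS")
    pies = outcome.count("Pies")
    discrepancies.add(gws - pies)  # positive favours GWS, negative favours Pies

print(sorted(discrepancies))  # [-4, -2, 0, 2, 4]
```

The maximum (+4/-4) only happens when every decision goes the same way; the minimum (0) is the 2&2 split, and neither figure on its own tells you which split actually occurred.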
So while it can express the maximum possible discrepancy over a sample the size of a season, the discussion of the maximum upper discrepancy should be balanced by the maximum lower bound and the minimum possible discrepancy.
It would also need to be applied at scale across the competition, then standardised, to generate any useful comparison. It is then possible to estimate the probability of funny buggers using variance against the mean, and then draw an inference from the data against the environmental context.
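As a rough sketch of that standardise-and-compare step (the club names and discrepancy figures below are made up for illustration, and the normal approximation is my assumption, not anything from the original discussion):

```python
import math
import statistics

# Hypothetical season-total discrepancies per club (made-up numbers).
season_discrepancy = {
    "GWS": 12, "Pies": -9, "Cats": 3, "Swans": -2,
    "Blues": 7, "Dons": -11, "Tigers": 1, "Roos": -1,
}

values = list(season_discrepancy.values())
mean = statistics.mean(values)
stdev = statistics.stdev(values)

for club, d in season_discrepancy.items():
    z = (d - mean) / stdev  # standardised score against the competition mean
    # Two-tailed probability of a value at least this extreme under a
    # normal approximation: a small p flags possible funny buggers,
    # but it's only a flag, not proof.
    p = math.erfc(abs(z) / math.sqrt(2))
    print(f"{club:7s} discrepancy={d:+3d} z={z:+.2f} p={p:.3f}")
```

A low p-value only says a club is an outlier relative to the rest of the competition; the inference against environmental context is the part you still have to argue.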
EDIT: I should add that the lack of transparency (from what I've bothered to read) in generating the maximum upper bound indicates expectation bias, while the lack of consideration for the minimum and for the maximum lower bound indicates a likely confirmation bias.