Chief
~ Shmalpha ~
- Admin
- #1,251
But then what will people argue about on my forum??? I think there's enough evidence that human assessment is definitely worse than a model's, unless the human gets lucky for a while or is quite exceptional. Even the experts, who spend all day applying their human brains to the task, usually perform worse, as evidenced by your local newspaper's tips.
But I'm sure a human assessment often LOOKS better! I've seen lots of this on BigFooty, where people post rankings that the hive mind approves at the time, but when you look at them later they were wildly wrong -- and not just wrong in the way where everyone was wrong (like how no one tipped the last three premiers before the start of the season), which is understandable, but in the way where everything they claimed to have special insight about turned out to be off. That's fun to read about, and attracts attention, but it's kind of unforgivable for a forecaster, since you're taking common wisdom and making it worse.
On the flip side, there is also a kind of selective memory when people make big calls and get them right! At first, a big call attracts ridicule, but if it turns out to be correct, popular sentiment seems to switch around to, "Yeah, but was that really surprising?" Everything seems obvious in retrospect, once it's already happened. So people underestimate what a great call it was, and forget that not many others made it.
Anyway, I guess I'm saying that if you wanted to compare the accuracy of human & computer forecasters, you should do so with real evidence, not just try to remember who posted what when.
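For what it's worth, a comparison like that is easy to do once the predictions are actually logged. Here's a minimal sketch (all names and numbers are hypothetical) of scoring a human tipster and a model on the same set of matches using the Brier score, which penalises stated win probabilities by how far they land from the actual 0/1 outcome (lower is better):

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted win probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical logged forecasts for the same six matches (1 = home win):
actual = [1, 0, 1, 1, 0, 1]
human_tips = [0.9, 0.2, 0.4, 0.8, 0.6, 0.5]
model_tips = [0.7, 0.3, 0.6, 0.7, 0.4, 0.6]

print("human:", round(brier_score(human_tips, actual), 3))  # prints human: 0.177
print("model:", round(brier_score(model_tips, actual), 3))  # prints model: 0.125
```

The point of scoring probabilities rather than tallying tips is that it also punishes the confident-but-wrong calls, which is exactly the thing selective memory tends to bury.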