January 2, 2023
I’m not grading individual forecasts this time. Mostly I want to take a look at the forecasting ecosystem as a whole.
The big thing this year in forecasting was the Ukraine Conflict tournament. How’s that going?
Such a shame I didn’t make it to the top 10. This isabel person is interesting, so I downloaded their track record and I’ll probably do a review of that at some point.
The tournament has involved some … questionable choices.
There was a fake nuclear alert. Some drama involving an ambiguous resolution, since reversed. At least one of the questions was obviously resolved wrong. Overall I haven’t been too impressed with the tournament. I’ve basically checked out and am now just updating my predictions to keep up with the community, in the hope of increasing my prize. I’ve managed to do so well mainly because I (1) got in early and (2) updated more often than most people.
It’s been a rough year for Kalshi. They weren’t getting much love to begin with, partly because their markets are boring and partly because of their willingness to cooperate with regulators. Then over the summer the CFTC decided to kill PredictIt, and suddenly everyone was very confident that it was somehow Kalshi’s fault. Nuño Sempere even gave it an explicit 60% chance.

To justify his reckless overconfidence, he made a statistical model in Squiggle, which is just Guesstimate-but-worse. But there’s a problem: his prior for Kalshi being, in his words, “ruthless assholes” is 33%! That’s the fucking prior here, not the posterior. It’s hidden because he writes it as “0.2 to 0.5,” but take the expected value of that distribution and you get 33%. Really, the model is very superficial. When your only observation is the number of years PredictIt survived, which you could treat as a single draw from a geometric distribution, the prior dominates the posterior. And Nuño had, or is claiming to have had, a stupid prior.
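Here’s the arithmetic on that prior, as a quick sketch. I’m assuming Squiggle’s `a to b` syntax means a lognormal distribution whose 90% confidence interval runs from a to b, which is how I understand the docs:

```python
import math

# Recover the lognormal implied by "0.2 to 0.5", assuming that syntax
# pins the 5th and 95th percentiles, then take its mean.
lo, hi = 0.2, 0.5
z = 1.6449  # 95th-percentile z-score of the standard normal
mu = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * z)
print(f"implied prior: {math.exp(mu + sigma ** 2 / 2):.2f}")  # ≈ 0.33
```

So the headline “uncertainty” is cosmetic: the distribution’s center of mass sits at one in three.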
This was supposed to be the year Kalshi started offering election markets, like PredictIt used to. And while the CFTC was getting dogpiled with FOIA requests and lawsuits, many of the same people denouncing Kalshi were writing letters of support for them. The result was disappointing but not surprising. The CFTC never approved Kalshi’s election markets. As far as I can tell, their request is in limbo, and there’s apparently a lot of drama going on behind the scenes at the CFTC.
Does Kalshi have a future? I found this comment letter by an NGO urging the CFTC to reject Kalshi’s election markets request. The letter gives many reasons, some better than others. Part of the problem is that election markets would violate various state laws, which would lead to boring legal issues. Sadly, if Kalshi can’t come up with more exciting markets, they may not survive for very long. I glanced at their site just now, and a lot of their markets, even the “popular” ones, have wide bid-ask spreads. I myself only invested play money, and lately I haven’t done much with it. Maybe Kalshi can still turn things around, but I’m pessimistic.
Metaculus recently hosted a series of talks on forecasting-related subjects, and one of the talks mentioned a paper that suggests various ways of testing someone’s “Bayesianess.” The tests all hinge on the fact that, for the situations we care about, beliefs are martingales: your current probability should equal the expected value of your future probability. I tried out the authors’ test from section 2.4 and got a personal value of roughly 7. Terrible! (Is a hypothesis test for Bayesianess self-defeating?) I’ll have to think some more about the paper and how it can be improved, but already I can see some issues.
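To make the martingale point concrete, here’s a minimal sketch of one such check. I’m not certain this is the paper’s section 2.4 statistic, but it rests on the same property: for a Bayesian, expected belief movement (the quadratic variation) equals expected uncertainty reduction, so their ratio should hover around 1.

```python
# For a martingale p_0, ..., p_T on a binary question,
#   E[ sum_t (p_t - p_{t-1})^2 ] = E[ p_0(1 - p_0) - p_T(1 - p_T) ],
# i.e. expected movement equals expected uncertainty reduction.
# A ratio well above 1 suggests over-updating. In practice you would
# average over many questions, since any single ratio is noisy.

def excess_movement(probs: list[float]) -> float:
    movement = sum((b - a) ** 2 for a, b in zip(probs, probs[1:]))
    reduction = probs[0] * (1 - probs[0]) - probs[-1] * (1 - probs[-1])
    return movement / reduction

# A forecaster who zigzags on the way to near-certainty:
print(excess_movement([0.5, 0.8, 0.3, 0.7, 0.2, 0.9, 0.95]))  # ≈ 6.1
```

If the paper’s statistic is anything like this ratio, my 7 means I move around far more than the uncertainty I actually resolve.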
First of all, there are many questions where I would need to update my predictions every day due to time decay, but I don’t. For example, if the question is “Will there be a regime change in Russia in 2023?” then obviously I should have some belief about when such a regime change might occur, and the headline probability should drift downward every day the event fails to happen. Ideally Metaculus would give me a way to tell them this underlying belief, and then they could do the mechanical updating for me. But that’s not possible right now, so my predictions on such a question will always be stale, dropping in occasional lumps instead of decaying smoothly. This means the quadratic variation of my posted predictions will generally (but not necessarily) overestimate the quadratic variation of my true beliefs.
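Here’s the kind of mechanical updating I mean, as a toy sketch with the simplest possible belief, a constant daily hazard (the numbers are made up for illustration):

```python
# P(event before the deadline | it hasn't happened yet), under a
# constant daily hazard. Even with no news at all, the forecast
# should decay a little every day, because time is running out.

def p_by_deadline(daily_hazard: float, days_remaining: int) -> float:
    return 1 - (1 - daily_hazard) ** days_remaining

h = 0.0005  # hypothetical hazard, about a 17% chance over a full year
for days in (365, 270, 180, 90, 30, 1):
    print(f"{days:3d} days left: {p_by_deadline(h, days):.3f}")
```

Metaculus could store the hazard (or a full distribution over dates) and recompute the headline number daily. The decay is purely mechanical, so my failure to log in and do it by hand says nothing about my Bayesianess.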
Furthermore, any change in my beliefs might be a Bayesian update, but it might also be a change in my model of the world, and it’s hard to tell which. It would be nice if Metaculus had a Guesstimate-like feature you could use to keep track of your model, but alas there is no such feature.
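To gesture at what I want (this is entirely hypothetical; Metaculus has nothing like it), if a forecast were derived from named assumptions, a parameter tweak and a formula rewrite would be distinguishable at a glance:

```python
from dataclasses import dataclass

@dataclass
class RegimeChangeModel:
    daily_hazard: float   # belief about the per-day chance of the event
    days_remaining: int   # time left until the question closes

    def forecast(self) -> float:
        # the "model" part: constant hazard over the remaining days
        return 1 - (1 - self.daily_hazard) ** self.days_remaining

# Same formula, new parameter: an ordinary Bayesian update.
before = RegimeChangeModel(daily_hazard=0.0005, days_remaining=200)
after = RegimeChangeModel(daily_hazard=0.0008, days_remaining=200)
print(before.forecast(), after.forecast())
# Editing forecast() itself would be a model change, and a diff would
# show it, which is exactly what a bare probability can't do.
```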
Scott is running a forecasting contest. But by rewarding only the #1 forecaster in each category, plus one random entrant, he has given everyone a strong incentive to give dishonest forecasts: when only the top score pays, your expected score is irrelevant, and your best move is to report extreme, contrarian probabilities that maximize your chance of an outlier finish rather than your honest beliefs. Several people in the comments pointed this out. Hopefully he has Metaculus handle the prizes next time.
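Here’s a toy simulation of the incentive problem. It’s my own construction, not Scott’s actual setup: a field of honest (slightly noisy) forecasters against one entrant who extremizes the same beliefs toward 0 and 1. Extremizing strictly worsens the expected Brier score, but winner-take-all only pays for the best score:

```python
import random

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def extremize(p, power=2):
    # push p toward 0 or 1, keeping it on the same side of 0.5
    return p ** power / (p ** power + (1 - p) ** power)

random.seed(0)
trials, n_questions, field = 2000, 30, 30
wins = 0
for _ in range(trials):
    truth = [random.uniform(0.05, 0.95) for _ in range(n_questions)]
    outcomes = [1 if random.random() < q else 0 for q in truth]
    # honest entrants report their true beliefs plus a little noise
    honest = [
        brier([min(max(q + random.gauss(0, 0.03), 0.01), 0.99) for q in truth],
              outcomes)
        for _ in range(field)
    ]
    gambler = brier([extremize(q) for q in truth], outcomes)
    if gambler < min(honest):
        wins += 1

print(f"extremizer takes #1 in {wins / trials:.1%} of contests")
print(f"baseline for one honest entrant: {1 / (field + 1):.1%}")
```

If my reasoning is right, the extremizer lands #1 noticeably more often than the one-in-31 baseline despite a strictly worse expected score. That’s the dishonesty the prize structure is buying.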
In forecasting, as in life, 90% of success is just showing up. I’m more bearish on prediction markets lately. Partly this is because of Kalshi’s failure to set up election markets—or any interesting markets at all, really. But mostly it’s because of the prediction market enthusiasts who prefer to ignore the law rather than try to improve it. On the bright side, the world is becoming a bit more predictable now that the Ukraine war has slowed down and covid mania is over. Here’s hoping that trend continues this year.