The forecasting ecosystem is in a weird spot right now. The "traditional" approach began with large-scale experiments focusing on the wisdom of crowds, market mechanisms, and so on. People were inspired by the predictive power of financial markets and wanted to replicate their strengths in other domains—policy, diplomacy, war, disease, science.

We quickly began to see a divergence from that crowd-centric model, starting with the "superforecaster" phenomenon: researchers noticed that certain people consistently outperformed the rest, and that if you focused on their views you could outperform both the experts and the crowd. Competitive prediction aggregation and market platforms also have serious issues with information sharing between participants, which is a big part of why teams of top forecasters outperform markets as a whole. This movement has taken a long time to play out, but I feel it has been accelerating lately, and I wanted to write down a few comments on where I think forecasting is headed.

One issue is identifying superforecasters. If you don't have access to them (they are GJO's moat, in a way), then you need to find them on your own (or at least find a way to reach out to them and attract them). Other projects, like CSET/INFER, ran crowd forecasting platforms and then picked out the top predictors for their "pro team". Recent AI forecasting efforts have also tried to pick out a small number of top forecasters. And then you have groups like Swift and Samotsvety (as Scott Alexander says, "If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is 'these guys'."). Why pay tens of thousands for a prediction market (which takes time and effort to organize) when you can just give a couple of grand to Nuño and get better answers, faster?
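To make the "pick out the top predictors" step concrete, here is a minimal sketch of how a platform might do it, assuming binary questions with resolved outcomes and scoring by the Brier rule. The function and field names, the team size, and the minimum-activity cutoff are illustrative assumptions, not any particular platform's actual method.

```python
from collections import defaultdict

def brier(prob: float, outcome: int) -> float:
    """Brier score for a single binary question: (p - o)^2, lower is better."""
    return (prob - outcome) ** 2

def select_pro_team(forecasts, top_n=10, min_questions=25):
    """forecasts: iterable of (forecaster_id, probability, outcome) tuples
    on resolved questions. Returns the top_n forecasters by mean Brier score,
    skipping anyone with too few resolved questions to judge fairly."""
    scores = defaultdict(list)
    for forecaster, prob, outcome in forecasts:
        scores[forecaster].append(brier(prob, outcome))
    ranked = sorted(
        (sum(s) / len(s), f)
        for f, s in scores.items()
        if len(s) >= min_questions
    )
    return [f for _, f in ranked[:top_n]]

# e.g. select_pro_team([("alice", 0.9, 1), ("bob", 0.3, 1), ...], top_n=5)
```

Real selection processes presumably also weigh things like question difficulty, activity levels, and calibration over time, but the basic idea is the same: rank by public track record, then recruit the top of the list.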

Others have tried to do away with the market mechanism even without having access to top forecasting talent. The DARPA SCORE program (which I've written about before) had two separate prediction components: one market-based (Replication Markets) and another that used a structured group discussion format to arrive at estimates (RepliCATS). The results aren't out yet, but my understanding is that RepliCATS outperformed the markets.

Personally, I find the shift from open markets and various fancy scoring and incentive mechanisms to "put a handful of smart dudes in a room and ask them questions" a bit disappointing. Why did we even need the markets and forecasting platforms in the first place? To identify the smart dudes, of course—but is that all there is to it? As the top forecasters abandon markets and start competing against them, they are (in a way) pulling up the ladder behind them. We need the public tournaments to identify the talent in the first place, but if the money goes straight into Samotsvety's pockets instead of into open tournaments, new people can't join the ecosystem anymore. Where is the next Samotsvety going to come from? Part of the problem is that identifying forecasting talent produces a positive externality whose value is difficult to capture, so we end up with a bit of a market failure.

Perhaps the only way to make markets competitive is to make them lucrative enough that it's worthwhile to form hedge-fund-like teams that generate internal gains from information-sharing and deploy them on the market, with the added benefit that the teams are honed through competition. But that seems unlikely at the moment; the money just isn't there.

In one of the possible worlds ahead of us, the endpoint of this process will be the re-creation of the consulting firm—except for real this time. With the right marketing angle, I could easily see Samotsvety becoming a kind of 21st-century McKinsey for the hip SV crowd that wants to signal that it needs actual advice rather than political cover. Could the forecasters avoid the pitfalls of the consultancy world?

What are the limits to forecasting accuracy? Eli Lifland is skeptical about the possibility of improving his abilities, but I'm not sure I buy that line entirely. We're still very early on, and a lot of obvious low-hanging fruit has yet to be picked. If the forecasting-group-as-consultancy takes off, I would expect to see many serious attempts at improvement, starting with things like teaching domain experts forecasting and then putting them in close collaboration with top-tier generalist forecasters.

What worries me is that this is a movement away from objective scoring and back towards reputation-based systems of trust. Once you leave the world of open markets and platforms, you become disconnected from their inescapable, public, and powerful error-correcting mechanisms—weak arguments can once again be laundered in the dirty soapwater of prominence and influence. Perhaps the current crop of top forecasters has the integrity to avoid going down that path, but how can that be maintained in the long run, with a powerful headwind of incentives and entryism blowing against us?