<!DOCTYPE HTML>
<html><head><title>niplav</title>
<link href="./favicon.png" rel="shortcut icon" type="image/png"/>
<link href="main.css" rel="stylesheet" type="text/css"/>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/>
<style type="text/css">
code.has-jax {font: inherit; font-size: 100%; background: inherit; border: inherit;}
</style>
<script async="" src="./mathjax/latest.js?config=TeX-MML-AM_CHTML" type="text/javascript">
</script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'], ["\\(","\\)"] ],
displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
processEscapes: true,
skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
},
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script>
<script>
document.addEventListener('DOMContentLoaded', function () {
// Change the title to the h1 header
var title = document.querySelector('h1')
if(title) {
var title_elem = document.querySelector('title')
title_elem.textContent=title.textContent + " – niplav"
}
});
</script>
</head><body><h2 id="home"><a href="./index.html">home</a></h2>
<p><em>author: niplav, created: 2022-04-04, modified: 2024-04-06, language: english, status: notes, importance: 6, confidence: highly likely</em></p>
<blockquote>
<p><strong>Beginnings of a research agenda about <a href="https://en.wikipedia.org/wiki/Forecasting#Judgmental_methods">judgmental
forecasting</a>.</strong></p>
</blockquote><div class="toc"><div class="toc-title">Contents</div><ul><li><a href="#The_Five_Horsemen_of_Hard_Forecasting">The Five Horsemen of Hard Forecasting</a><ul></ul></li><li><a href="#How_Good_Are_We_At_Forecasting">How Good Are We At Forecasting?</a><ul></ul></li><li><a href="#How_Can_We_Become_Better_At_Forecasting">How Can We Become Better At Forecasting?</a><ul><li><a href="#Scoring_Rules">Scoring Rules</a><ul></ul></li><li><a href="#Difficult_Types_of_Questions">Difficult Types of Questions</a><ul></ul></li><li><a href="#Forecasting_Techniques">Forecasting Techniques</a><ul><li><a href="#Question_Decomposition">Question Decomposition</a><ul></ul></li></ul></li></ul></li><li><a href="#How_Can_We_Ask_Better_Forecasting_Questions">How Can We Ask Better Forecasting Questions?</a><ul></ul></li><li><a href="#Other_Questions">Other Questions</a><ul></ul></li><li><a href="#See_Also">See Also</a><ul></ul></li></ul></div>
<h1 id="Forecasters_What_Do_They_Know_Do_They_Know_Things_Lets_Find_Out"><a class="hanchor" href="#Forecasters_What_Do_They_Know_Do_They_Know_Things_Lets_Find_Out">Forecasters: What Do They Know? Do They Know Things?? Let's Find Out!</a></h1>
<p>Judgmental forecasting is a fairly recent and (in my humble opinion)
under-researched & under-appreciated human endeavour and field of
research, with some low-hanging fruit (which is getting picked almost
as fast as I can write it up).</p>
<h2 id="The_Five_Horsemen_of_Hard_Forecasting"><a class="hanchor" href="#The_Five_Horsemen_of_Hard_Forecasting">The Five Horsemen of Hard Forecasting</a></h2>
<p>In general, judgmental forecasting methods operate best in areas with
fast feedback loops, large existing datasets (or at least good reference
classes for base rates) and continuous historical trends.</p>
<p>We can therefore identify the five horsemen of hard forecasting:</p>
<ul>
<li><strong>Long time horizons</strong>: Because most forecasters and traders
<a href="https://en.wikipedia.org/wiki/Discounting">discount</a> the
future (either because rewards further in the future are less
certain, or because whatever investment is bound up in a bet
could be put to use in the meantime, or because they genuinely
weight the future less), and because long-term thinking activates <a href="https://www.overcomingbias.com/2010/06/near-far-summary.html">far
mode</a>
from <a href="https://en.wikipedia.org/wiki/Construal_level_theory">construal level
theory</a>,
the incentives to perform well on long-term questions are weaker
than on short-term questions. Additionally, forecasters receive
much more & better feedback on short-term questions. One would
therefore expect long-term questions to receive less accurate forecasts,
and the evidence points to this being the case (<a href="https://rethinkpriorities.org/publications/data-on-forecasting-accuracy-across-different-time-horizons">Dillon
2021</a>,
<a href="https://rethinkpriorities.org/publications/data-on-forecasting-accuracy-across-different-time-horizons">Niplav
2022</a>).
But we're often especially interested in long-term questions: how can
we incentivize or create good forecasts on those questions? (A toy
illustration of the incentive gap from discounting follows this list.)</li>
<li><strong>Reward-correlated predictions</strong>: The clearest examples of this problem
are questions on extinction events: If you forecast doom, you're never
going to get rewarded for it, because the resolution happens <em>only</em> in
worlds where the bad outcome didn't occur. Forecasters are <a href="https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh">embedded
agents</a> in the world
they are predicting on, and there is no Cartesian boundary. This can
happen with prediction markets as well: when a market predicts the
outcome of a decision and pays out in a currency that the decision
itself affects (for example by devaluing it relative to other currencies),
the market might favor the "worse" decision (according to the metric
used for scoring it) because that decision devalues the currency less.</li>
<li><strong>Low probability events</strong>: Some events are very important, but
have a low probability (extreme stock market crashes, extinction
events, rare diseases, encounters with aliens etc.). Yet low
probability events are maybe even harder to forecast than long
time horizon events: they often don't have good reference classes,
while long time horizon questions do (that's why we have history and
time series data!), and forecasters very rarely encounter them. We
might just round all probabilities below 1% to 0%, lest we get <a href="https://en.wikipedia.org/wiki/Pascal's_mugging">Pascal's
mugged</a>, but in doing
so we close our eyes to possible dangers (and prizes) out there; the
<a href="https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb">Talebian</a> approach
of erring on the side of caution by "rounding them <em>up</em>" condemns us to
eternal overcaution and conservatism. So, as a first step, we definitely
want our probabilities to be as accurate as possible.</li>
<li><strong>Out-of-distribution situations</strong>: Whenever things with no
clear existing reference class occur, such as novel technologies
(social media, the internet in general, nuclear weapons,
international shipping logistics, and in the future potentially
genetic engineering or self-driving cars), forecasters struggle to
anticipate the consequences (or foresee those shifts). This isn't
limited to forecasters and prediction markets: if regular people,
pundits and domain experts on average do worse than top forecasters
(though as a counterpoint to forecasters>experts see <a href="https://forum.effectivealtruism.org/posts/qZqvBLvR5hX9sEkjR/comparing-top-forecasters-and-domain-experts">Leech & Yagudin
2022</a>),
then we wouldn't expect them to do much better specifically in very
novel & unforeseen situations. (A reason why this could still happen:
experts might have detailed causal models that are outperformed by
simple heuristics in the modal case, but those causal & theoretical
models may break down more gracefully than simple surface heuristics
once we leave the normal course of events.)</li>
<li><strong>Hard-to-specify events</strong>: Maybe we are slicing up forecasting
the wrong way: as the old adage goes, the hard part is not coming
up with the answer, it is coming up with the right question to
ask. Similarly, in forecasting we often run into the problem of
specifying exactly <em>what</em> we want to know about: too broad and
you drive away forecasters and traders <a href="https://www.lesswrong.com/posts/a4jRN9nbD79PAhWTB/prediction-markets-when-do-they-work#I__Well_Defined">who don't want to waste
their time on predicting the whims of whoever resolves the market in the
end</a>;
too narrow and you miss what you actually care about or invite
<a href="https://www.lesswrong.com/tag/goodhart-s-law">Goodharting</a>. An
additional layer of complexity is added when hobbyists do your
forecasting, in which case narrow questions just <em>aren't very interesting
to predict on</em>. This could be seen in the <a href="https://www.metaculus.com/questions/3061/animal-welfare-series-clean-meat/">Metaculus clean meat
tournament</a>:
many questions were just different combinatorial variations on
each other, with maybe five being interesting to predict on,
but not all fourteen, leading to many questions receiving fewer
than 100 predictions during the tournament. But "interestingness"
and "specifiability" appear to be tugging in opposite directions:
hobbyists are probably most interested in making broad claims that flow
from their worldview, not in tracking down minutiae for very specific
questions. Finding ways to create more specific questions on events
(or avoiding the need to with clever tricks while still receiving accurate
forecasts) is important and difficult. <a href="https://www.lesswrong.com/posts/ufW5LvcwDuL6qjdBT/latent-variables-for-prediction-markets-motivation-technical">Latent variable prediction
markets</a>
offer one approach; how easy are they to implement with acceptable UX?</li>
</ul>
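<p>As a toy illustration of the incentive gap from discounting (the numbers are made up for this example, not taken from any platform): if a forecaster discounts future rewards at a rate of <code>$\delta = 0.9$</code> per year, the present value of a fixed reward <code>$R$</code> for a question resolving in <code>$t$</code> years is roughly <code>$\delta^t \cdot R$</code>. A question resolving in ten years then offers only <code>$0.9^{10} \cdot R \approx 0.35 \cdot R$</code>, about a third of the incentive of an otherwise identical question resolving now, before even accounting for the sparser feedback.</p>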
<p>We can use these categories as guideposts: How severe is each of these
problems? What approaches have been proposed/tried/implemented so far? If
we can improve on one of them without harming our ability to perform well
on the others, we have made progress; if we improve on several in tandem,
that's even better.</p>
<h2 id="How_Good_Are_We_At_Forecasting"><a class="hanchor" href="#How_Good_Are_We_At_Forecasting">How Good Are We At Forecasting?</a></h2>
<ul>
<li>How good are long-term forecasts?
<ul>
<li>How quickly does our forecasting ability decrease with increasing range of the question/forecast?
<ul>
<li>Does it decrease at all, or just oscillate wildly?</li>
<li>How quickly does performance degrade in different categories of questions (finance, meteorology, global economics, technological development) and by different forecasters (prediction markets, superforecasters & teams)?</li>
</ul></li>
<li>Are there people who are better long-term forecasters and people who are better short-term forecasters?
<ul>
<li>See <a href="http://nitter.poast.org/Simeon_CPS/status/1655277260524453892">here</a></li>
</ul></li>
</ul></li>
<li>How good are our forecasts on low-probability events?</li>
<li>How good are our forecasts on extinction events?</li>
<li>How good are our forecasts in situations where we have historical discontinuities?</li>
<li>How quickly/slowly do our forecasts converge to the final answer?
<ul>
<li>When don't they converge?</li>
<li>Can we classify convergence/divergence/oscillation behaviors?</li>
</ul></li>
<li>How do prediction markets, professional forecasting teams, internet enthusiasts and large language models compare?
<ul>
<li><a href="https://forum.effectivealtruism.org/posts/qZqvBLvR5hX9sEkjR/comparing-top-forecasters-and-domain-experts">Arb 2022</a></li>
<li><a href="https://github.com/MperorM/gpt3-metaculus">GPT-3 forecasting ability</a></li>
</ul></li>
<li>What is a good formalization of the idea of a forecaster being accurate at a level of n%?
<ul>
<li>See <a href="./precision.html">Precision of Sets of Forecasts</a></li>
<li>Are better short-term forecasters also better long-term forecasters?</li>
<li>Do forecasters become better at forecasting over time?
<ul>
<li>How quickly?</li>
<li>Over time/over more forecasts</li>
</ul></li>
<li>How much does forecaster quantity affect forecast quality on continuous questions? (i.e., extend <a href="https://rethinkpriorities.org/publications/how-does-forecast-quantity-impact-forecast-quality-on-metaculus">Dillon 2021</a> to continuous data)
<ul>
<li>How much does forecasting time affect forecast quality? That is, what is the relation between the accuracy of a prediction and the time spent refining it?
<ul>
<li>Generally, scaling laws for forecasting would be interesting/cool to see.</li>
</ul></li>
<li>How much do number of resolutions/forecasts matter for forecast quality/learning?</li>
</ul></li>
</ul></li>
<li>Do laypeople/pundits/domain experts perform better than forecasters/superforecasters/forecasting teams/prediction markets <em>specifically</em> under novel & unforeseen situations?</li>
<li>Are more extreme views or more conservative views more accurate?
<ul>
<li>Question originally asked in <a href="https://www.overcomingbias.com/2007/02/is_truth_in_the.html">Hanson 2007</a></li>
</ul></li>
<li>How well does forecasting expertise in one domain transfer to another?
<ul>
<li>That is, if a forecaster starts by forecasting in some domain <code>$D$</code>, and after a while switches to domain <code>$D'$</code>, how much better is the forecaster than if he'd started out in <code>$D'$</code> without any other experience?</li>
<li>This would be even more interesting if we also had a metric for the difference between <code>$D$</code> and <code>$D'$</code>.</li>
</ul></li>
</ul>
<h2 id="How_Can_We_Become_Better_At_Forecasting"><a class="hanchor" href="#How_Can_We_Become_Better_At_Forecasting">How Can We Become Better At Forecasting?</a></h2>
<h3 id="Scoring_Rules"><a class="hanchor" href="#Scoring_Rules">Scoring Rules</a></h3>
<ul>
<li>What possible forecasting scoring rules could we develop?
<ul>
<li>Taking into account:
<ul>
<li>Accuracy compared to others</li>
<li>Importance of question</li>
</ul></li>
<li>That incentivize collaboration and positive-sum interactions instead of information-hiding
<ul>
<li>The literature on information elicitation could be useful here</li>
</ul></li>
</ul></li>
<li>How can we compare the skill and reliability of forecasters to one another?
<ul>
<li>Metaculus at the moment does this by "who writes good comments". That seems inadequate.</li>
<li>Taking into account:
<ul>
<li>Number of questions each forecaster predicted on</li>
<li>Calibration</li>
<li>Resolution</li>
<li>Importance of questions</li>
</ul></li>
<li>Two boundary methods:
<ul>
<li>Compare using a scoring rule on any question the forecasters predicted on</li>
<li>Compare using a scoring rule on the intersection of the questions the forecasters predicted on (sketched after this list)</li>
</ul></li>
<li>Two functions of scoring rules: Rewarding or comparing forecasters</li>
<li>Related field: honest reporting and information elicitation
<ul>
<li>See also: Section 27.4.2 from Algorithmic Game Theory (Nisan et al. 2007)</li>
</ul></li>
</ul></li>
</ul>
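<p>A minimal sketch of the second boundary method above, using the Brier score and assuming forecasts are stored as plain dictionaries mapping question ids to (probability, outcome) pairs; the data format and the example numbers are assumptions for illustration, not any platform's API:</p>
<pre><code># Compare two forecasters with the Brier score, restricted to the
# intersection of questions both predicted on (lower scores are better).
# Data format (assumed): {question_id: (probability, outcome)}, with
# outcome 1 if the question resolved positively and 0 otherwise.

def brier(prob, outcome):
    """Brier score of a single binary forecast."""
    return (prob - outcome) ** 2

def mean_brier(forecasts, question_ids):
    """Average Brier score over the given question ids."""
    scores = [brier(p, o) for q, (p, o) in forecasts.items() if q in question_ids]
    return sum(scores) / len(scores)

def compare_on_intersection(forecaster_a, forecaster_b):
    """Score both forecasters only on the questions they both answered."""
    shared = set(forecaster_a).intersection(forecaster_b)
    return mean_brier(forecaster_a, shared), mean_brier(forecaster_b, shared)

alice = {"q1": (0.8, 1), "q2": (0.3, 0), "q3": (0.9, 0)}
bob = {"q1": (0.6, 1), "q3": (0.2, 0)}
print(compare_on_intersection(alice, bob))  # scores computed on q1 and q3 only
</code></pre>
<p>The first boundary method would instead score each forecaster on all of their own questions (passing <code>set(forecaster_a)</code> and <code>set(forecaster_b)</code> respectively); the harder open problems, weighting by question importance and rewarding collaboration rather than information-hiding, are not addressed by a plain Brier comparison at all.</p>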
<h3 id="Difficult_Types_of_Questions"><a class="hanchor" href="#Difficult_Types_of_Questions">Difficult Types of Questions</a></h3>
<ul>
<li>How can we deal with questions with unclear resolution criteria?
<ul>
<li>Collect Metaculus experiments on this</li>
</ul></li>
<li>How do we incentivize good predictions on long-term questions?
<ul>
<li>Ideas:
<ul>
<li>chained temporal forecasts</li>
</ul></li>
</ul></li>
<li>How do we incentivize good predictions on low-probability events?
<ul>
<li>Ideas:
<ul>
<li>chained conditional forecasts (a toy sketch follows this list)</li>
</ul></li>
</ul></li>
<li>Is there any conceivable way of incentivizing good predictions on extinction events?</li>
</ul>
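<p>One way to read the "chained conditional forecasts" idea for low-probability events, sketched below: decompose the rare event into a chain of conditional steps, each of which is likely enough on its own to forecast with a usable reference class, and multiply the conditional probabilities together. The stage names and numbers are purely illustrative assumptions.</p>
<pre><code># Illustrative sketch: estimate a low-probability event as a product of
# conditional probabilities, each easier to forecast on its own.
# The stages and probabilities below are made up for the example.

def chain_probability(conditionals):
    """Multiply P(step 1), P(step 2 | step 1), ... into P(whole chain)."""
    p = 1.0
    for _, prob in conditionals:
        p *= prob
    return p

stages = [
    ("precursor event occurs", 0.10),
    ("escalation, given the precursor", 0.05),
    ("catastrophic outcome, given escalation", 0.02),
]
print(chain_probability(stages))  # roughly 1e-04, i.e. 0.01%
</code></pre>
<p>This is of course just question decomposition applied to tail events; see the decomposition notes linked in the next subsection.</p>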
<h3 id="Forecasting_Techniques"><a class="hanchor" href="#Forecasting_Techniques">Forecasting Techniques</a></h3>
<h4 id="Question_Decomposition"><a class="hanchor" href="#Question_Decomposition">Question Decomposition</a></h4>
<p>Moved <a href="./decompose.html">here</a>.</p>
<h2 id="How_Can_We_Ask_Better_Forecasting_Questions"><a class="hanchor" href="#How_Can_We_Ask_Better_Forecasting_Questions">How Can We Ask Better Forecasting Questions?</a></h2>
<ul>
<li>What are methods of scoring/defining how good a question was?</li>
<li>How many questions resolve due to technicalities in the resolution criteria?
<ul>
<li>Are the ratios here different across different question categories?</li>
<li>How does this ratio develop as one puts more effort into specifying resolution criteria?</li>
<li>This might be studied qualitatively/semi-quantitatively.</li>
</ul></li>
</ul>
<h2 id="Other_Questions"><a class="hanchor" href="#Other_Questions">Other Questions</a></h2>
<ul>
<li>Where are the big datasets of past judgmental forecasts?</li>
<li>What is the rate of positive resolution by range?</li>
<li>How well does forecasting performance predict intra-individual cognitive performance?</li>
<li>How difficult is it to manipulate real existing prediction platforms?
<ul>
<li>Markets
<ul>
<li>PredictIt</li>
<li>BetFair</li>
</ul></li>
<li>Hobbyist sites
<ul>
<li>Metaculus</li>
<li>PredictionBook</li>
</ul></li>
</ul></li>
<li>How can we develop better forecast aggregation methods?
<ul>
<li>Use momentum of past forecasts</li>
<li>Use the <a href="https://en.wikipedia.org/wiki/Generalized_mean">generalized mean</a> with changing <code>$p$</code> as the time to question resolution shrinks (a toy sketch follows this list)
<ul>
<li>Should <code>$p$</code> be increasing/decreasing/following a more complicated pattern?</li>
<li>Can we do something cool with the <a href="https://en.wikipedia.org/wiki/Quasi-arithmetic_mean">quasi-arithmetic mean</a>?</li>
</ul></li>
</ul></li>
</ul>
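<p>A minimal sketch of the generalized-mean aggregation idea mentioned above; the linear schedule that moves <code>$p$</code> from 1 towards 0 as resolution approaches is a placeholder assumption (whether <code>$p$</code> should increase, decrease, or follow a more complicated pattern is exactly the open question):</p>
<pre><code># Aggregate a list of probability forecasts with a generalized (power)
# mean.  p = 1 gives the arithmetic mean, p -> 0 the geometric mean.
# The schedule in aggregate() is a placeholder, not a recommendation.

def generalized_mean(probs, p):
    """Power mean of a list of probabilities in (0, 1]."""
    if p == 0:  # limiting case: geometric mean
        product = 1.0
        for x in probs:
            product *= x
        return product ** (1.0 / len(probs))
    return (sum(x ** p for x in probs) / len(probs)) ** (1.0 / p)

def aggregate(probs, days_to_resolution, horizon=365):
    """Placeholder schedule: interpolate p from 1 down to 0 as resolution nears."""
    p = max(0.0, min(1.0, days_to_resolution / horizon))
    return generalized_mean(probs, p)

forecasts = [0.2, 0.35, 0.6]
print(aggregate(forecasts, days_to_resolution=300))  # close to the arithmetic mean
print(aggregate(forecasts, days_to_resolution=10))   # close to the geometric mean
</code></pre>
<p>The quasi-arithmetic mean mentioned above generalizes this further: it replaces <code>$x^p$</code> with an arbitrary invertible function, of which the power mean is the special case.</p>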
<h2 id="See_Also"><a class="hanchor" href="#See_Also">See Also</a></h2>
<ul>
<li><a href="https://forecasting.quarto.pub/book/index.html">Forecasting: Lecture Notes</a> by Jacob Steinhardt</li>
</ul>
</body></html>