<!DOCTYPE HTML>
<html><head><title>niplav</title>
<link href="./favicon.png" rel="shortcut icon" type="image/png"/>
<link href="main.css" rel="stylesheet" type="text/css"/>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/>
<style type="text/css">
code.has-jax {font: inherit; font-size: 100%; background: inherit; border: inherit;}
</style>
<script async="" src="./mathjax/latest.js?config=TeX-MML-AM_CHTML" type="text/javascript">
</script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'], ["\\(","\\)"] ],
displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
processEscapes: true,
skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
},
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script>
<script>
document.addEventListener('DOMContentLoaded', function () {
// Change the title to the h1 header
var title = document.querySelector('h1')
if(title) {
var title_elem = document.querySelector('title')
title_elem.textContent=title.textContent + " – niplav"
}
});
</script>
</head><body><h2 id="home"><a href="./index.html">home</a></h2>
<p><em>author: niplav, created: 2021-03-31, modified: 2022-03-15, language: english, status: notes, importance: 3, confidence: highly unlikely</em></p>
<blockquote>
<p><strong>This page contains my notes on ethics, separated from my regular
notes to retain some structure in the notes.</strong></p>
</blockquote><div class="toc"><div class="toc-title">Contents</div><ul><li><a href="#Converging_Preference_Utilitarianism">Converging Preference Utilitarianism</a><ul><li><a href="#Method">Method</a><ul></ul></li><li><a href="#Variations">Variations</a><ul></ul></li><li><a href="#Problems">Problems</a><ul><li><a href="#Assumptions">Assumptions</a><ul></ul></li><li><a href="#Sentient_Simulations">Sentient Simulations</a><ul></ul></li><li><a href="#Genuinely_Selfish_Agents">Genuinely Selfish Agents</a><ul></ul></li><li><a href="#Lacking_Brain_Power">Lacking Brain Power</a><ul></ul></li></ul></li><li><a href="#See_Also">See Also</a><ul></ul></li></ul></li><li><a href="#Humans_Implement_Ethics_Discovery">Humans Implement Ethics Discovery</a><ul></ul></li><li><a href="#See_Also_1">See Also</a><ul></ul></li><li><a href="#I_Care_About_Ethical_Decision_Procedures">I Care About Ethical Decision Procedures</a><ul></ul></li><li><a href="#Deference_Attractors_of_Ethical_Agents">Deference Attractors of Ethical Agents</a><ul><li><a href="#Deceptive_DeferenceAttractors">Deceptive Deference-Attractors?</a><ul></ul></li></ul></li><li><a href="#Arguments_Against_Preference_Utilitarianism">Arguments Against Preference Utilitarianism</a><ul></ul></li><li><a href="#Stating_the_Result_of_An_Impossibility_Theorem_for_Welfarist_Axiologies">Stating the Result of “An Impossibility Theorem for Welfarist Axiologies”</a><ul><li><a href="#Requirements">Requirements</a><ul></ul></li><li><a href="#Conclusions">Conclusions</a><ul></ul></li></ul></li><li><a href="#Possible_Surprising_Implications_of_Moral_Uncertanity">Possible Surprising Implications of Moral Uncertanity</a><ul><li><a href="#We_Should_Kill_All_Mosquitoes">We Should Kill All Mosquitoes</a><ul></ul></li></ul></li></ul></div>
<h1 id="Notes_on_Ethics"><a class="hanchor" href="#Notes_on_Ethics">Notes on Ethics</a></h1>
<blockquote>
<p>But what is the point of the questions anyway? I have failed with them;
probably my comrades are much cleverer than I am and apply entirely
different, excellent means to endure this life. Means which, admittedly,
as I add on my own account, may help them in a pinch, calm them, lull
them to sleep, change their nature, but which in general are just as
powerless as mine, for however much I look around, I see no success.</p>
</blockquote>
<p><em>— <a href="https://en.wikipedia.org/wiki/Franz_Kafka">Franz Kafka</a>, “Forschungen eines Hundes”, 1922</em></p>
<p>My general ethical outlook is one of high <a href="./doc/philosophy/ethics/moral_uncertainty_macaskill_et_al_2020.pdf" title="Moral Uncertainty (William MacAskill/Krister Bykvist/Toby Ord, 2020)">moral
uncertainty</a>,
with my favourite theory being consequentialism. I furthermore favour
hedonic, negative-leaning, and act-based consequentialisms.</p>
<p>However, most notes on this page don't depend on these assumptions.</p>
<p>Note that while I am interested in ethics, I haven't read as much about
the topic as I would like. This probably leads to me re-inventing a large
amount of jargon, and making well-known (and already refuted) arguments.</p>
<h2 id="Converging_Preference_Utilitarianism"><a class="hanchor" href="#Converging_Preference_Utilitarianism">Converging Preference Utilitarianism</a></h2>
<p>One problem with <a href="https://en.wikipedia.org/wiki/Preference_utilitarianism">preference
utilitarianism</a>
is the difficulty of aggregating and comparing preferences
interpersonally; another is the critique that some persons have very
altruistic and others very egoistic preferences.</p>
<h3 id="Method"><a class="hanchor" href="#Method">Method</a></h3>
<p>A possible method of trying to resolve this is to hypothetically
calculate the aggregate preferences of all persons in the following
way: for every existing person pₐ and every other person pₙ, pₐ
learns about pₙ's preferences and experiences pₙ's past sensory
inputs. pₐ then updates their preferences according to this
information. This process is repeated until the maximal difference
between preferences has shrunk below a certain threshold.</p>
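<p>A minimal numerical sketch of this procedure (purely illustrative: it
assumes that preferences can be represented as real-valued vectors over a
fixed set of outcomes, and that updating on another person's experiences
amounts to moving one's vector some fraction of the way toward theirs; none
of these modelling choices are part of the proposal above):</p>
<pre><code># Toy sketch of the convergence procedure described above, assuming
# preferences are real-valued vectors and "experiencing another person's
# inputs" is modelled as averaging toward their preference vectors.
# Assumes at least two persons.

def converge_preferences(prefs, empathy=0.1, threshold=1e-3, max_rounds=1000):
    """prefs: list of preference vectors (lists of floats), one per person."""
    prefs = [list(p) for p in prefs]
    for _ in range(max_rounds):
        new_prefs = []
        for a, p_a in enumerate(prefs):
            # p_a updates toward the mean of everyone else's preferences
            others = [p for b, p in enumerate(prefs) if b != a]
            mean_other = [sum(xs) / len(xs) for xs in zip(*others)]
            new_prefs.append([(1 - empathy) * x + empathy * y
                              for x, y in zip(p_a, mean_other)])
        prefs = new_prefs
        # maximal componentwise difference between any two preference vectors
        max_diff = max(abs(x - y)
                       for i, p in enumerate(prefs)
                       for q in prefs[i + 1:]
                       for x, y in zip(p, q))
        if max_diff &lt; threshold:
            return prefs
    return prefs
</code></pre>
<p>In this toy model the vectors always converge (to the mean of the
initial preferences); whether anything like that holds for actual preference
change under full mutual empathy is exactly one of the assumptions questioned
in the problems listed below.</p>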
<h3 id="Variations"><a class="hanchor" href="#Variations">Variations</a></h3>
<p>One possible variation in the procedure is whether to retain knowledge
about the identity of pₐ, the person aggregating the preferences. If
this knowledge were not retained, the result would be very akin to the <a href="https://en.wikipedia.org/wiki/Veil_of_ignorance">Harsanyian
Veil of Ignorance</a>.</p>
<p>Another possible variation could be not attempting to achieve convergence,
but simply iterating the method a finite number of times. Since
it's not clear that more iterations would contribute to further
convergence, perhaps a single iteration is desirable.</p>
<h3 id="Problems"><a class="hanchor" href="#Problems">Problems</a></h3>
<p>This method has a lot of ethical and practical problems.</p>
<h4 id="Assumptions"><a class="hanchor" href="#Assumptions">Assumptions</a></h4>
<p>The method assumes a bunch of practical and theoretical premises,
for example that preferences would necessarily converge upon
experiencing and knowing other persons' qualia and preferences.
It also assumes that it is in principle possible to make a person
experience another person's qualia.</p>
<h4 id="Sentient_Simulations"><a class="hanchor" href="#Sentient_Simulations">Sentient Simulations</a></h4>
<p>Since each negative experience would be experienced by every
person at least once, and negative experiences could be
considered to have negative value, calculating the converging
preferences would be unethical in practice (just as <a href="https://foundational-research.org/risks-of-astronomical-future-suffering/#Sentient_simulations">simulating the
experience</a>
over and over would be).</p>
<h4 id="Genuinely_Selfish_Agents"><a class="hanchor" href="#Genuinely_Selfish_Agents">Genuinely Selfish Agents</a></h4>
<p>If an agent is genuinely selfish (has no explicit term for the welfare of
another agent in its preferences), it might not adjust its own preferences
upon experiencing other lives. It might even be able to circumvent the
veil of ignorance to locate itself.</p>
<h4 id="Lacking_Brain_Power"><a class="hanchor" href="#Lacking_Brain_Power">Lacking Brain Power</a></h4>
<p>Some agents might lack the intelligence to process all the information
other agents perceive. For example, an ant would probably not be able
to understand the importance humans give to art.</p>
<h3 id="See_Also"><a class="hanchor" href="#See_Also">See Also</a></h3>
<ul>
<li><a href="./doc/cs/ai/alignment/cev/coherent_extrapolated_volition_yudkowsky_2004.pdf" title="Coherent Extrapolated Volition">Yudkowsky 2004</a></li>
</ul>
<h2 id="Humans_Implement_Ethics_Discovery"><a class="hanchor" href="#Humans_Implement_Ethics_Discovery">Humans Implement Ethics Discovery</a></h2>
<p>Humans sometimes change their minds about what they consider to be good,
both on an individual and on a collective scale. One obvious example is
slavery in western countries: although our wealth would make us more
prone to permitting slavery (high difference between wages & costs of
keeping slaves alive), we have nearly no slaves. This used to be different:
in the 18th and 19th centuries, slavery was a common practice.</p>
<p>This process seems to come partially from learning new facts about
the world (e.g., which ethical patients respond to noxious stimuli,
how different ethical patients/agents are biologically related to each
other, etc.), let's call this the <em>model-updating process</em>. But there also
seems to be an aspect of humans genuinely re-weighting their values when
they receive new information, which could be called the <em>value-updating
process</em>. There also seems to be a third value-related process
happening, which is more concerned with determining inconsistencies
within ethical theories by applying them in thought-experiments (e.g. by
discovering problems in population axiology, see for example <a href="./doc/philosophy/ethics/population/overpopulation_and_the_quality_of_life_parfit_1986.pdf" title="Overpopulation and the Quality of Life">Parfit
1986</a>).
This process might be called the <em>value-inference process</em>.</p>
<p>One could say that humans implement the <em>value-updating</em>
and the <em>value-inference</em> process—when they think about
ethics, there is an underlying algorithm that weighs trade-offs,
considers points for and against specific details in theories,
and searches for maxima. As far as is publicly known, there is no crisp
formalization of this process (initial attempts are <a href="https://plato.stanford.edu/entries/reflective-equilibrium/">reflective
equilibrium</a>
and <a href="./doc/cs/ai/alignment/cev/coherent_extrapolated_volition_yudkowsky_2004.pdf">coherent extrapolated
volition</a> "Coherent Extrapolated Volition").</p>
<p>If we accept the <a href="https://arbital.com/p/complexity_of_value/">complexity of human
values</a> hypothesis, this
absence of a crisp formalism is not surprising: the algorithm for
<em>value-updating</em> and <em>value-inference</em> is probably too complex to
write down.</p>
<p>However, since we know that humans are existing implementations of this
process, we're not completely out of luck: if we can preserve humans
"as they are" (and many of the notes on this page try to get at what
this fuzzy notion of "as they are" would mean), we have a way to further
update and infer values.</p>
<p>This view emphasizes several conclusions: preserving humans "as they
currently are" becomes very important, perhaps even to the extent of
misallowing self-modification, the loss of human cultural artifacts
(literature, languages, art) becomes more of a tragedy than before
(potential loss of information about what human values are), and making
irreversible decisions becomes worse than before.</p>
<!--Often, change in values seems forseeable. Why? How?-->
<h2 id="See_Also_1"><a class="hanchor" href="#See_Also_1">See Also</a></h2>
<ul>
<li><a href="https://arbital.com/p/meta_unsolved/" title="Meta-rules for (narrow) value learning are still unsolved">Yudkowsky 2017</a></li>
</ul>
<h2 id="I_Care_About_Ethical_Decision_Procedures"><a class="hanchor" href="#I_Care_About_Ethical_Decision_Procedures">I Care About Ethical Decision Procedures</a></h2>
<p>Or, why virtue ethics alone feels misguided.</p>
<p>In general, ethical theories want to describe what is good and what
is bad. Some ethical theories also provide a decision-procedure: what
to do in which situations. One can then differentiate between ethical
theories that give recommendations for action in every possible situation
(we might call those <em>complete theories</em>), and ethical theories that
give recommendations for action in a subset of all possible situations
(one might name these <em>incomplete theories</em>, although the name might be
considered unfair by proponents of such theories).</p>
<!--Add stuff about partial orderings of actions, with multiple maximal elements?-->
<p>It is important to clarify that incomplete theories are not necessarily
indifferent between different choices of action in situations they give
no result for; they just don't provide a recommendation for action.</p>
<p>Prima facie, complete theories seem more desirable than incomplete
theories—advice in the form of "you oughtn't be in this situation
in the first place" is not very helpful if you are confronted with such
a situation!</p>
<p>Virtue ethics strikes me as being such a theory—it defines what is
good, but provides no decision-procedure for acting in most situations.</p>
<p>At best, it could be interpreted as a method for developing such a
decision-procedure for each individual agent, recognizing that an attempt
at formalizing an ethical decision-procedure is a futile goal, and instead
focussing on the value-updating and value-inference process itself.</p>
<h2 id="Deference_Attractors_of_Ethical_Agents"><a class="hanchor" href="#Deference_Attractors_of_Ethical_Agents">Deference Attractors of Ethical Agents</a></h2>
<p>When I'm angry or stressed (or tired, very horny, high, etc), I would
prefer to have another version of myself make my decisions in that
moment—ideally a version that is well rested, is thinking clearly,
and is not under very heavy pressure. One reason for this is that my
rested & clear-headed self is in general better at making decisions –
it is likely better at playing chess, programming a computer, having
a mutually beneficial discussion etc. But another reason is that even
when I'm in a very turbulent state, I usually still find the <em>values</em>
of my relaxed and level-headed self (let's call that self the <strong>deferee
self</strong>) better than my current values. So in some way, my values in
that stressful moment are not <a href="https://arbital.com/p/reflective_stability/">reflectively stable</a>.</p>
<p>Similarly, even when I'm relaxed, I usually can still imagine a
version of myself with even more desired values—more altruistic,
less time-discounting, less parochial. And that version of
myself likely wants to be even more altruistic! This is a <a href="https://www.lesswrong.com/posts/SdkAesHBt4tsivEKe/gandhi-murder-pills-and-mental-illness" title="Gandhi, murder pills, and mental illness">Murder-Gandhi
problem</a>:
It likely leads to a perfectly altruistic, universalist version of myself
that just wants to be itself and keep its own values. Let's call that
self a <strong>deference attractor</strong>.</p>
<p>But I don't always have the same deferee self. Sometimes I actually want
to be more egoistic, more parochial, perhaps even more myopic (even
though I haven't encountered that specific case yet). The deferee self
likely also wants to be even more egoistic, parochial and (maybe?) myopic.
This version of myself is again a deference attractor.</p>
<p>These chains of deference are embedded in a <a href="https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)">directed
graph</a>
of selves, many of which are likely reflectively stable. Some
aren't, and instead lie on chains/paths which either form
<a href="https://en.wikipedia.org/wiki/Cycle_(graph_theory)">cycles</a>, or lead
to attractors.</p>
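<p>A minimal sketch of this graph picture (illustrative only: it assumes
that each self defers to exactly one self, so chains can be followed
deterministically; the names in the example are made up):</p>
<pre><code># Following a deference chain: each self defers to exactly one (possibly
# identical) self. The chain either reaches a fixed point (a self that
# defers to itself, i.e. a reflectively stable deference attractor) or
# enters a cycle.

def follow_deference(defers_to, start):
    """defers_to: dict mapping each self to the self it defers to."""
    seen = []
    current = start
    while current not in seen:
        seen.append(current)
        current = defers_to[current]
    if current == seen[-1]:
        return ("attractor", current)             # defers to itself: stable
    return ("cycle", seen[seen.index(current):])  # chain runs into a cycle

# Hypothetical example: the stressed self defers to the rested self, the
# rested self to the altruistic self, and the altruistic self to itself.
chain = {"stressed": "rested", "rested": "altruistic", "altruistic": "altruistic"}
print(follow_deference(chain, "stressed"))        # ('attractor', 'altruistic')
</code></pre>
<p>With more than one outgoing deference edge per self (which the text above
allows), the same question becomes one about paths in a general directed
graph, but the distinction between cycles and attractors carries over.</p>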
<h3 id="Deceptive_DeferenceAttractors"><a class="hanchor" href="#Deceptive_DeferenceAttractors">Deceptive Deference-Attractors?</a></h3>
<p>These graphs don't have to be
<a href="https://en.wikipedia.org/wiki/Transitivity_(mathematics)">transitive</a>,
so a deference attractor of myself now could look extremely unappealing
to me. Could one be mistaken about such a judgement, and if so, when
would one be?</p>
<p>That is, when one judges a deference attractor to be undesirable,
could it in fact be desirable? Or, if one were to judge it desirable,
could it in fact be undesirable?</p>
<h2 id="Arguments_Against_Preference_Utilitarianism"><a class="hanchor" href="#Arguments_Against_Preference_Utilitarianism">Arguments Against Preference Utilitarianism</a></h2>
<p>Moved <a href="./preference.html">here</a>.</p>
<h2 id="Stating_the_Result_of_An_Impossibility_Theorem_for_Welfarist_Axiologies"><a class="hanchor" href="#Stating_the_Result_of_An_Impossibility_Theorem_for_Welfarist_Axiologies">Stating the Result of “An Impossibility Theorem for Welfarist Axiologies”</a></h2>
<!--TODO: More impossibility theorems?-->
<p><a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf" title="An Impossibility Theorem for Welfarist Axiologies">Arrhenius
2000</a>
gives a proof that basically states that the type of population axiology
we want to construct is impossible. However, the natural-language
statement of his result is scattered throughout the paper.</p>
<blockquote>
<p>The primary claim of this paper is that any axiology that satisfies the
Dominance, the Addition, and the Minimal Non-Extreme Priority Principle
implies the Repugnant, the Anti-Egalitarian, or the Sadistic Conclusion.</p>
</blockquote>
<p><em>— <a href="https://www.iffs.se/en/research/researchers/gustaf-arrhenius/">Gustaf Arrhenius</a>, <a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf">“An Impossibility Theorem for Welfarist Axiologies”</a> p. 15, 2000</em></p>
<h3 id="Requirements"><a class="hanchor" href="#Requirements">Requirements</a></h3>
<blockquote>
<p>The Dominance Principle: If population A contains the same number of
people as population B, and every person in A has higher welfare than
any person in B, then A is better than B.</p>
</blockquote>
<p><em>— <a href="https://www.iffs.se/en/research/researchers/gustaf-arrhenius/">Gustaf Arrhenius</a>, <a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf">“An Impossibility Theorem for Welfarist Axiologies”</a> p. 11, 2000</em></p>
<blockquote>
<p>The Addition Principle: If it is bad to add a number of people, all
with welfare lower than the original people, then it is at least as bad
to add a greater number of people, all with even lower welfare than the
original people.</p>
</blockquote>
<p><em>— <a href="https://www.iffs.se/en/research/researchers/gustaf-arrhenius/">Gustaf Arrhenius</a>, <a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf">“An Impossibility Theorem for Welfarist Axiologies”</a> p. 11, 2000</em></p>
<blockquote>
<p>The Minimal Non-Extreme Priority Principle: There is a number n such
that an addition of n people with very high welfare and a single person with
slightly negative welfare is at least as good as an addition of the same
number of people but with very low positive welfare.</p>
</blockquote>
<p><em>— <a href="https://www.iffs.se/en/research/researchers/gustaf-arrhenius/">Gustaf Arrhenius</a>, <a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf">“An Impossibility Theorem for Welfarist Axiologies”</a> p. 11, 2000</em></p>
<h3 id="Conclusions"><a class="hanchor" href="#Conclusions">Conclusions</a></h3>
<blockquote>
<p>The Repugnant Conclusion: For any perfectly equal population
with very high positive value, there is a population with very
low positive welfare which is better.</p>
</blockquote>
<p><em>— <a href="https://www.iffs.se/en/research/researchers/gustaf-arrhenius/">Gustaf Arrhenius</a>, <a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf">“An Impossibility Theorem for Welfarist Axiologies”</a> p. 2, 2000</em></p>
<blockquote>
<p>The Anti-Egalitarian Conclusion: A population with perfect equality can
be worse than a population with the same number of people, inequality,
and lower average (and thus lower total) positive welfare.</p>
</blockquote>
<p><em>— <a href="https://www.iffs.se/en/research/researchers/gustaf-arrhenius/">Gustaf Arrhenius</a>, <a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf">“An Impossibility Theorem for Welfarist Axiologies”</a> p. 12, 2000</em></p>
<blockquote>
<p>The Sadistic Conclusion: When adding people without
affecting the original people's welfare, it can be better to
add people with negative welfare than positive welfare.</p>
</blockquote>
<p><em>— <a href="https://www.iffs.se/en/research/researchers/gustaf-arrhenius/">Gustaf Arrhenius</a>, <a href="./doc/philosophy/ethics/population/an_impossibility_theorem_for_welfarist_axiologies_arrhenius_2000.pdf">“An Impossibility Theorem for Welfarist Axiologies”</a> p. 5, 2000</em></p>
<p>All of these are stated more mathematically on page 15.</p>
<h2 id="Possible_Surprising_Implications_of_Moral_Uncertanity"><a class="hanchor" href="#Possible_Surprising_Implications_of_Moral_Uncertanity">Possible Surprising Implications of Moral Uncertanity</a></h2>
<p>Preserving languages & biospheres might be really important, if the
continuity of such processes is morally relevant.</p>
<p>We should try to be careful about self-modification, lest we fall into
a molochian attractor state we don't want to get out of. Leave a line
of retreat in ideology-space!</p>
<p>For a rough attempt to formalize this, see <a href="https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/6DuJxY8X45Sco4bS2" title="Seeking Power is Often Robustly Instrumental in MDPs">TurnTrout & elriggs
2019</a>.</p>
<h3 id="We_Should_Kill_All_Mosquitoes"><a class="hanchor" href="#We_Should_Kill_All_Mosquitoes">We Should Kill All Mosquitoes</a></h3>
<p>If we assign a non-minuscule amount of credence to <a href="https://en.wikipedia.org/wiki/Retributive_justice">retributive theories
of justice</a>
that include invertebrates as culpable agents, humanity might
have an (additional) duty to exterminate mosquitoes. Between
<a href="https://en.wikipedia.org/wiki/Mosquito">5% and 50%</a><!--TODO:
this is incorrect, change!--> of all humans that have ever
lived have been killed by mosquito-borne diseases—if humanity
wants to restore justice for all past humans that have died at the
<a href="https://en.wikipedia.org/wiki/Proboscis">proboscis</a> of the mosquito, the
most sensible course of action is to exterminate some or all species of
mosquito that feed on human blood and transmit diseases.</p>
<p>There are of course also additional reasons to exterminate some species
of mosquito: 700k humans die per year from mosquito-borne diseases, and
it might be better for mosquitoes themselves to not exist at all (with
<a href="https://reducing-suffering.org/will-gene-drives-reduce-wild-animal-suffering/" title="Will Gene Drives Reduce Wild-Animal Suffering?">gene drives being an effective method of driving them to
extinction</a>, see <a href="https://foundational-research.org/the-importance-of-wild-animal-suffering/" title="The Importance of Wild-Animal Suffering">Tomasik
2017</a>
and <a href="https://reducing-suffering.org/the-importance-of-insect-suffering/" title="The Importance of Insect Suffering">Tomasik
2016</a>
as introductions):</p>
<blockquote>
<p>the cost-effectiveness of the \$1 million campaign to eliminate
mosquitoes would be (7.5 * 10¹⁴ insect-years prevented) *
(0.0025) / \$1 million = 1.9 * 10⁶ insect-years prevented per
dollar [by increasing human population]. As one might expect,
this is much bigger than the impact on mosquito populations
directly as calculated in the previous section.</p>
</blockquote>
<p><em>— <a href="https://reducing-suffering.org">Brian Tomasik</a>, <a href="https://reducing-suffering.org/will-gene-drives-reduce-wild-animal-suffering/">Will Gene Drives Reduce Wild-Animal Suffering?</a>, 2018</em></p>
<p>A mild counterpoint to this view is that we have an obligation to help
species that thrive on mosquitoes, since they have helped humanity
throughout the ages, but we'd hurt them by taking away one of their
food sources.</p>
<!--
Why Death is Bad
-----------------
Under moral uncertainty with evolving preferences, you want to keep
options open, but death closes all options but one, potentially losing
a lot of future value.
In a sense, it's unfair towards all other ethical systems you embody
to kill yourself.
Monotonic Convergence or Not of Moral Discovery Process
--------------------------------------------------------
Conditions for Neither Repugnant nor Monstrous Utilitarianism
--------------------------------------------------------------
Diminishing or increasing returns on investments for well-being
of a single agent?
If we're really lucky, initially there are increasing returns, but at
some point they start diminishing.
What Use Ethics For?
---------------------
Everyday life, or problems that arise in the limit?
C.f. High Energy Ethics.
The Two Urgent Problems are: How Don't We Die and How Do We Become Happy?
--------------------------------------------------------------------------
A Very Subjective Ranking of Types of Ethical Theories
-------------------------------------------------------
Consequentialism, Contractualism, Deontology, Virtue Ethics
What Is Wrong With the Unwilling Organ-Donor Thought Experiment?
-----------------------------------------------------------------
Problems with game-theoretical ethical intuitions.
Better framing: Create universe, make decision, destroy universe after
payoff time.
For most people, there's a point where they kill the unwilling organ
donor, so we're basically haggling over the price. Maybe just a Sorites
paradox?
-->
</body></html>