Comment author: Denkenberger 26 November 2017 07:15:05PM 4 points [-]

Thanks for letting us know about this great opportunity! While I'm waiting to be approved for the Facebook group, is there any way to find out how much money is going to be chasing the $2 million match? Since this is not just EAs and there appear to be hundreds of charities listed, it could easily be $100 million. So do you think we will have something like 3 minutes, or 3 seconds, to make our donations before the $2 million match limit is reached?

Comment author: WilliamKiely 27 November 2017 12:37:31AM *  4 points [-]

Facebook saw over 100,000 people donate to thousands of fundraisers that raised $6.79 million on Giving Tuesday across the United States. (Source)

This year I expect it to be more, though I'm not well-informed on how much more. Perhaps $10-$20MM is a reasonable expectation. https://en.wikipedia.org/wiki/Giving_Tuesday

Also, the match last year was for $500K instead of $2MM. From the same source:

After the initial match of $500,000 was reached within hours, The Bill & Melinda Gates Foundation increased their pledge to $900,000 total to match more Giving Tuesday Fundraisers on Facebook.

Note that last year's matching campaign was also announced in advance.

So I think 3 minutes is overkill. While a priori I would have expected people to take advantage of this such that $2MM in donations are made in the first ~3 minutes, I think last year shows that this is unlikely to happen. I would be surprised if the $2MM match is reached in less than 30 minutes. Somewhat arbitrarily, I'll assign a 20% probability to that happening, and maybe a 5% chance to it being reached in less than 10 minutes. My median estimate would be around 9:30 AM EST (1.5 hours in), and maybe a 20% chance that it takes more than 3 hours. I don't really know, though, so my suggestion is to donate ASAP. If you're donating more than just a small amount, it's worth it even if it's inconvenient.
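
As a minimal sketch of how these guesses fit together (the cutoff times and probabilities are just the rough numbers above arranged as a cumulative distribution; this is illustrative only, not data):

```python
# Rough, hand-specified guesses (from the comment above) for when the $2MM
# match cap is reached, expressed as a cumulative distribution:
# P(cap hit within t minutes of the 8:00 AM EST start).
guesses = [
    (10, 0.05),   # ~5% chance the match is exhausted within 10 minutes
    (30, 0.20),   # ~20% chance within 30 minutes
    (90, 0.50),   # median estimate: ~1.5 hours (around 9:30 AM EST)
    (180, 0.80),  # ~20% chance it takes more than 3 hours
]

# Sanity check: a valid cumulative distribution must be non-decreasing.
assert all(p1 <= p2 for (_, p1), (_, p2) in zip(guesses, guesses[1:]))

for minutes, cum_prob in guesses:
    print(f"P(match exhausted within {minutes:>3} min) ≈ {cum_prob:.0%}")
```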

I intend to make all of my donations ASAP after 8:00 AM EST. (I am going to try to make 10 separate $1,000 donations before 8:10 AM EST).


#GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving

Self-described effective altruists in the latest EA Survey reported $9.8 million in donations in 2016. However, most of these donations were not matched counter-factually. That is, most of the donations did not generate matching funds representing new money towards effective nonprofits as a whole. Given the existence of counter-factual...
Comment author: Ben_West  (EA Profile) 21 July 2017 11:08:32PM 3 points [-]

Yeah, it would change the meaning.

My assumption was that, if things monotonically improve, then in the long run (perhaps the very, very long run) we will get to net positive. You are proposing that we might instead asymptote at some negative value, even though we are still always improving?

Comment author: WilliamKiely 22 July 2017 07:57:08PM 2 points [-]

I wasn't proposing that (I in fact think the present is already good), but rather was just trying to better understand what you meant.

Your comment clarified my understanding.

Comment author: WilliamKiely 21 July 2017 12:58:27AM *  2 points [-]

7 - Therefore, the future will contain less net suffering

8 - Therefore, the future will be good

Could this be rewritten as "8. Therefore, the future will be better than the present" or would that change its meaning?

If it would change the meaning, then what do you mean by "good"? (Note: if you're confused about why I'm confused about this, it's because it seems to me that 8 does not follow from 7 given the meaning of "good" I usually hear from EAs (something like "net positive utility").)

Comment author: MichaelDickens  (EA Profile) 27 August 2016 03:49:40AM 3 points [-]

Even if you discount insects that heavily (which I believe is wrong), there's still a strong case to be made for trying to prevent wild vertebrates from suffering.

Comment author: WilliamKiely 27 August 2016 04:35:37AM 0 points [-]

Hmm. I do believe I discount vertebrates much less than I discount insects; however, I also think there's a huge difference between, say, chickens and chimpanzees, or chimpanzees and humans. Even among humans (whose brains are quite similar to one another compared to comparisons across species), I think the top 10% of Americans probably live lives that I value inherently (by which I mean ignoring the effects they have on other things and counting only the quality of their conscious life experience) at least one order of magnitude (if not several) more than the bottom 10% of Americans. I realize this is also an unpopular view, but one consideration I can offer in support of it: if you reflect on how much you value your own conscious experience during some parts of your life compared to others, you may find, as I do, that some moments or short periods seem to be of much greater value than others of equal duration.

An exercise I tried recently was making a plot of "value realized per unit time" vs "time" for my own conscious life experience (so again: not including the effects of my actions, which account for the vast majority of what I value). I found that there were some years I valued multiple times more than other years, and some moments I valued many times more than entire years on net. The graph was also all positive value and trending upwards, with sleeping valued much less than being awake. (I don't think I have very vivid dreams relative to others, but even if I did, I would probably still tend to value waking moments much more than sleeping ones.) Also, remembering or reflecting on great moments fondly can itself be of high value in my evaluation. There's also the problem that I don't now know what certain past experiences were actually like to experience, since I'm relying on my memory of them, which for all I know could be faulty. In general, if there is a discrepancy between the two, I choose to value experiences based on how I remember them rather than on how I think they were when I lived them.
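
For anyone who wants to try a version of this exercise, here is a minimal sketch in Python (assuming matplotlib is available; the yearly numbers are made-up placeholders purely for illustration, not my actual estimates):

```python
import matplotlib.pyplot as plt

# Hypothetical placeholder data: for each year of life, a subjective
# "value realized per unit time" score in arbitrary units. The real
# exercise would use your own estimates; these numbers are purely
# illustrative (positive everywhere and trending upwards, with a couple
# of years that stand out).
years = list(range(30))
value_per_year = [1 + 0.1 * y + (3 if y in (18, 24) else 0) for y in years]

plt.step(years, value_per_year, where="post")
plt.xlabel("Age (years)")
plt.ylabel("Value realized per unit time (arbitrary units)")
plt.title("Subjective value of one's own conscious experience over time")
plt.show()
```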

Also note that I'm a moral anti-realist, so I don't think there are correct answers here. To a certain extent, how much I value some periods of my conscious life experience relative to others is a choice, since I also don't believe there are completely defined, definite values of mine waiting to be discovered.

A general thing I'd be really interested in seeing is people's estimates of how much they value (whether positively or negatively) the total life experiences of, say, mosquitoes, X, Y, Z, chickens, cows, and humans (and what that distribution looks like), as well as oneself over time, a typical human over time, etc. I'd also like to see answers to "What would a graph of (value realized per unit time) vs (time) look like for Earth's history?", which would answer the question "How much value has been realized since life began on Earth?" (Note: for the sake of the question, I'd ignore estimates of value realized elsewhere in the universe, which may actually be quite significant.) If you'd like to indulge me with your own views on any of this I would be very interested, but of course there's no need if you don't want to. I'll estimate and write up my own answers sometime.

Comment author: WilliamKiely 27 August 2016 03:35:48AM *  0 points [-]

How many painful mosquito deaths would you have to be offered the chance to prevent before you would choose preventing them over causing one new human life (of quality equal to that of a typical person today) to be lived (all instrumental effects / consequences aside)?[1][2][3] (For my answer, see [2].)

What would the distribution of EAs' answers look like? College graduates' answers? Everyone's answers?

What range of answers does the OP assume?

Or more broadly, for what range of moral theories can a case be made that WAS should be prioritized?

I ask these questions because, while I find the OP argument intriguing, my current values (or my current beliefs about my values, depending on how you want to think about it) are such that preventing mosquito suffering is very insignificant relative to many other things (e.g. there being more humans that live good lives, or humans living better lives) and is therefore far from being a high priority for me.

While I haven't dived deeply into arguments for negative utilitarianism or other arguments that could conceivably change my view, I think it's unlikely (~10%, reported in [2]) that doing so would lead me to change my view significantly.[4]

It seems to me that the most probable way that my view could be changed to believe that (e.g.) OPP ought to prioritize WAS would be to persuade me that I should adopt a certain view on how to deal with moral uncertainty that would, if adopted, imply that OPP ought to prioritize WAS even given my current beliefs about how much I value the suffering of mosquitoes relative to other things (e.g. the lives of humans).

Is there a case to be made for prioritizing WAS if one assigns even a small probability (e.g. 1%) to a negative utilitarian-like view being correct given that they also subscribe to certain plausible views on moral uncertainty?

My views on how to deal with moral uncertainty are very underdeveloped. I think I currently have a tendency to evaluate situations or decide on actions on the basis of the moral view I deem most probable; however, as the linked LessWrong wiki article points out, this has potential problems. (I'm also not aware of a less problematic approach, so I will probably continue to do this until I encounter something that appeals to me more. Bostrom's parliamentary model seems like a reasonable candidate, although I'm unsure how exactly its negotiation process works or would play out; I would have to think about it more.)

Lastly, let me just note that I don't challenge the non-normative factual claims of the OP. Rather, I'm simply stating that my hesitation to take the view that OPP should prioritize WAS comes from my belief that I value things significantly differently than I would have to in order for WAS to be something that OPP should prioritize.


[1] A similar question was asked in the Effective Altruism Facebook group. My version gets at how much one values the life of a typical person today relative to the life of a typical mosquito rather than how much one values extreme pleasure relative to extreme suffering.

[2] Since I'm asking for others' answers, I should estimate my own. Hmm. If I had to make the decision right now I would choose to create the new human life, even if the number of painful mosquito deaths I was offered to prevent was infinite. Note, though, that I am not completely confident in this view: I'd put perhaps ~60% on it, maybe ~30% on the answer being somewhere between 10^10 and infinity, and ~10% on it being below 10^10 mosquitoes, where practically all of that last 10% comes from the possibility that a more enlightened version of myself would undergo a paradigm shift or a significant change in my fundamental values / moral views. In other words, I'm pretty uncertain (~40/60) about whether mosquitoes are net negative or not, but I'm pretty certain (~75% = 30%/40%) that if I do value them negatively, the magnitude of that negative value is quite small (e.g. relative to the positive value I place on (the conscious experience of) human life).
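
To make the arithmetic behind the "~75% = 30%/40%" figure explicit, here is a minimal sketch (the probabilities are just my rough guesses from above):

```python
# My rough credences over the answer to the question in the parent comment:
p_infinite = 0.60      # no finite number of painful mosquito deaths outweighs one human life
p_large_finite = 0.30  # the answer is somewhere between 10^10 and infinity
p_small_finite = 0.10  # the answer is below 10^10

# Probability that mosquito suffering is net negative at all, i.e. that
# some finite number of prevented mosquito deaths would suffice:
p_net_negative = p_large_finite + p_small_finite  # = 0.40

# Conditional on that, how likely is it that the magnitude is still quite
# small, i.e. that the answer is at least 10^10?
p_small_magnitude_given_negative = p_large_finite / p_net_negative
print(f"{p_small_magnitude_given_negative:.0%}")  # -> 75%
```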

[3] Knowing that my view is controversial among EAs (see the link at [1]), perhaps I should meta-update significantly towards the consensus view that not only is the existence of suffering inherently bad, but also that it's bad to a much greater magnitude than I think it is in the ~30% scenario. For now, I'll refrain from doing this, or from figuring out how much I should update if I only think there's an X% chance that updating is proper. (I'm also not sure to what extent my intuitions / current reported estimates already take others' estimates into account.)

[4] The basis of my view that the goodness of a human life is much greater than the possible (~40% in my view) badness of a mosquito's suffering or painful death (and the basis of more general versions of this view) is my intuition. Thinking about the question from different angles, I have been unable to shift my view significantly towards placing substantially more value on mosquitoes or on preventing mosquito suffering.

Comment author: WilliamKiely 04 July 2016 11:23:11PM -1 points [-]

Noting that I didn't find this essay useful (although I'm not giving it a thumbs-down vote).

In fact I found it counter-productive, because it led me to spend (waste, IMO) more time thinking about this topic. Of course that's not your fault, but I just wanted to mention it. (I have an imperfect brain: even though my better judgment says I shouldn't spend more time thinking about this topic, I am often lured in when I come across it and am unable to resist thinking about it.) Moral realism has always seemed obviously wrong to me, and my lack of success historically at understanding why many smart people apparently do think there is a One True Moral Standard that magically compels people to do things independent of their desires/values/preferences has caused me much frustration. I find it frustrating not only because I have been unable to make progress in understanding why moral realists believe moral realism is true, but also because, if I am right that moral realism is simply wrong, then time spent thinking and writing about the topic (including the time you spent writing the OP essay) is largely wasted, AND if moral realism is true, it still seems to me that the time spent discussing the topic is largely wasted, since I'm still going to go on caring about what I care about and acting to effectively achieve what I value rather than adjusting my actions to adhere to the particular Moral Standard that is apparently somehow "correct."

Comment author: WilliamKiely 04 July 2016 10:04:17PM *  0 points [-]

Comment 2:

My one criticism after reading this concerns the way you choose to answer "Yes" to the question of whether people "have obligations" (which I put in quotes to communicate that the phrase could be interpreted in different ways, such that the correct answer could be either yes or no depending on the interpretation):

"So am I obligated to do anything?

"Yes. You have legal obligations to follow the laws, have epistemic obligations to believe the truth, have deontological obligations not to lie under any circumstance, have utilitarian obligations to donate as much of your income as you can manage, etc… You’re under millions of potential obligations – one for each possible standard that can evaluate actions. Some of these may be nonsensical, like an anti-utilitarian obligation to maximize suffering or an obligation to cook spaghetti for each meal. But all of these obligations are there, even if they’re contradictory. Chances are you just don’t care about most of them."

While I can see how this way of defining what it means to have an obligation can be useful when discussing moral philosophy and can bring clarity to those discussions, I think it's worth pointing out how it could be quite confusing when talking with people who aren't familiar with your specific definition / the specific meaning you use.

For example, if you ask most people, "Am I obligated to not commit murder?" they would say, "Yes, of course." And if you ask them, "Am I obligated to commit murder?" they would say, "No, of course not."

You would answer yes to both, saying that you are obligated to not commit murder by (or according to) some moral standards/theories and are obligated to commit murder by some others.

To most people (who are not familiar with how you are using the language), this would appear contradictory (again: to say that you are obligated both to do and not to do X).

My second note is that when laypeople say, "No, I am not obligated to commit murder," you wouldn't be inclined to say that they are wrong (because you don't interpret what they are trying to say so uncharitably), but rather would see that they clearly meant something other than the meaning you explained in the article above that you would assign to those words.

My interpretation of their statement that they are not obligated to commit murder would be (put one way) that they do not care about any of the moral standards that obligate them to commit murder. Put differently, they are saying that in order to fulfill or achieve their values, people shouldn't murder others (at least in general), because murdering people would actually be a counter-productive way of bringing about what they desire.

Comment author: WilliamKiely 04 July 2016 10:03:49PM *  0 points [-]

I hold the same view as yours described here (assuming of course that I understand you correctly, which I believe I do).

FWIW I would label this view "moral anti-realist" rather than "moral realist," although of course whether it actually qualifies as "anti-realism" or "realism" depends on what one means by those phrases, as you pointed out.

Here are two revealing statements of yours that would have led me to strongly update my view towards you being a moral anti-realist without having to read your whole article (emphasis added):

(1) "that firm conviction is the “expressive assertivism” we talked about earlier, not a magic force of morality."

(2) "I disagree that there is One True Moral Standard." "I disagree that these obligations have some sort of compelling force independent of desire."

Comment author: WilliamKiely 10 June 2016 11:27:17PM 0 points [-]

Related: http://effective-altruism.com/ea/ss/the_importantneglectedtractable_framework_needs/ "The Important/Neglected/Tractable framework needs to be applied with care"
