Comment author: saulius  (EA Profile) 13 June 2018 08:03:27PM *  1 point [-]

I don't know, we simply didn't talk about that at all. My guess is that 4 days is not too long. EA Globals sometimes last 3 days, if you include the social on Friday. I believe that a recent group organisers' retreat lasted an entire week, and an AI camp lasted 10 days. These latter two events are not quite the same, but you could ask Remmelt Ellen whether they felt too long; I believe he was present at both. Hmm, the fact that your event is during winter could matter a bit though, because going outside is usually a refreshing change of atmosphere during such things.

By the way, this was not a retreat, we did it in an office in London and people slept elsewhere.

Comment author: SiebeRozendal 17 June 2018 01:20:20PM 0 points [-]

Alright thanks! :) Remmelt is also organizing this retreat, so we have that info!

Comment author: SiebeRozendal 13 June 2018 01:28:44PM 0 points [-]

What did you think about the length of the retreat? Would people have liked to stay longer? We're planning to organize a retreat from Thursday to Sunday in The Netherlands in between Christmas and New Year's.

Comment author: SiebeRozendal 07 June 2018 10:10:53AM 0 points [-]

I really admire that you did a study about this, but I think this study shows much less than you claim it does. First of all, you studied support for effective giving (EG), which is different from effective altruism as a whole. I would suspect at least the following three factors to really differ between EG and EA:

  • Support for cause impartiality, both moral impartiality (weighing each being according to their innate characteristics, like sentience or intelligence, rather than personal closeness) and means impartiality (being indifferent between different means to an end, e.g. donating money or choosing a career with direct impact)
  • Dedication. I believe that making career changes or pledging to donate at least 10% of your income is quite a high bar, and far fewer people would be inclined to do that.
  • Involvement in the community. As you wrote, the community is quite idiosyncratic. Openness to (some of) its ideas does not imply people will like the movement.

Of course, none of this implies that the study is worthless, that getting people to donate their 1 or 2% more effectively is useless, or that we shouldn't try to make the movement more diverse and welcoming (if this can be done without compromising core values such as epistemic rigor). I think there is a debate to be had about how to differentiate effective giving from EA as a whole, so that we can decide whether or not to promote effective giving separately and, if so, how.

Comment author: SiebeRozendal 29 April 2018 11:16:56AM 1 point [-]

I suppose question 56 is meant to measure something like "inclination to deliberate", but 3 out of 5 items involve materialistic contexts (shopping, dining) that EAs might not think much about, even if many other contexts would trigger a lot of deliberation. For dining especially, there are often just a few vegetarian and even fewer vegan options on the menu, so I'm afraid that item in particular doesn't capture the intended concept.

Comment author: RandomEA 25 April 2018 08:19:31PM 12 points [-]

I was planning to give some feedback on the 2017 survey instrument after the last post in that series, which I had assumed would finish before the 2018 survey was released. Since my assumption was wrong (sorry!), I'll just post my feedback here to be considered for the 2019 survey:

  1. One major aspect of EA is the regularly produced online content on this forum and elsewhere. It might be useful to ask about the average number of hours a week people spend reading EA content as that could help people evaluate the value of producing online content.

  2. You could also ask people whether they've attended an EA Global conference. The responses could be used as a proxy to distinguish more involved and less involved EAs, which could be used in analyzing other issues like cause area preferences.

  3. For the question about career path, you could add advocacy as a fourth option. (80,000 Hours treats it as one of the four broad options.)

  4. For the same reasons that race was included in the 2017 survey, it could be useful to ask about parental education (as a proxy for socioeconomic background).

  5. You could ask people how many of their acquaintances they have seriously attempted to persuade to join EA and how many of those did join. This could provide useful data on the effectiveness of personal outreach.

  6. Another question that may be worth asking: "Have you ever seriously considered leaving EA?" For those that answer yes, you could ask them for their reasons.

  7. I think it could be useful to have data on the percent of EAs who are living organ donors and the percent of EAs who intend to become living organ donors. The major downside is that it may cause people to think that being a living organ donor is part of EA.

  8. Borrowing from Peter Singer, I propose asking: "Has effective altruism given you a greater sense of meaning and purpose in your life?"

  9. You could also ask about systemic change: "How much do you think the EA community currently focuses on systemic change (on a scale of 1 to 10)?" and "How much do you think the EA community should focus on systemic change (on a scale of 1 to 10)?" You could include a box for people to explain their answers.

  10. Lastly, you could ask questions about values. A) "Do you believe that preventing the suffering of a person living in your own country is more important than preventing an equal amount of suffering of a person living in a different country? Assume that there is no instrumental value to preventing the suffering of either and that in both cases the suffering is being prevented by means other than preventing existence or causing death." B) "Do you believe that preventing the suffering of a human is more important than preventing an equal amount of suffering of a non-human animal? Assume that there is no instrumental value to preventing the suffering of either and that in both cases the suffering is being prevented by means other than preventing existence or causing death." C) "Do you believe that preventing the suffering of a person living in the present is more important than preventing an equal amount of suffering of a person living several centuries from now? Assume that there is no instrumental value to preventing the suffering of either and that in both cases the suffering is being prevented by means other than preventing existence or causing death." D) "Do you believe that it is bad if a person who would live a happy life is not brought into existence?"

Comment author: SiebeRozendal 29 April 2018 11:10:47AM 0 points [-]

B) "Do you believe that preventing the suffering of a human is more important than preventing an equal amount of suffering of a non-human animal?

This is an important question! I have the suspicion that many people value animals at a rate that should make them focus their resources (at least their donations) towards animal charities, but that they are unaware of this.

However, the question is somewhat ambiguous. Some people believe humans can suffer more than animals ever can, such that preventing the suffering of a human may be 100 times more important than preventing that of a non-human animal. On the other hand, with the original question you capture the degree of speciesism. In that case, I would add "assume some non-human animals can suffer as much as humans can" so you're sure respondents interpret it in the way you want.

Borrowing from Peter Singer, I propose asking: "Has effective altruism given you a greater sense of meaning and purpose in your life?"

Nice one! Relatedly, capturing whether we actually are sacrificing our own utility: "On net, has effective altruism increased or decreased your overall well-being?"

Comment author: SiebeRozendal 25 April 2018 12:27:06PM 1 point [-]

Excellent post! I think value drift is one of the largest challenges for local groups: many people who seemed enthusiastic stop showing up after a couple of times, and it's hard to keep them motivated to keep pursuing the option with the highest expected value in the long term.

The thing is, how do you communicate the risk of value drift to others who are at risk? There is the problem of base rate neglect/bias blind spot: people think the risk does not apply to them. For example, multiple people have expressed that they don't understand why I took the Giving Pledge to commit my future self to this, while I believe I might otherwise not act on my (current) values.

Comment author: SiebeRozendal 17 April 2018 03:14:56PM *  1 point [-]

Thus the trends in factual basis become more salient. One example is the ongoing demographic transition, and the consequently older population give smaller values of life-years saved if protected from extinction in the future. This would probably make the expected cost-effectiveness somewhat (but not dramatically) worse.

I think this is largely compensated for by the rise in average life expectancy.

I'd also like to note Bostrom's point in Astronomical Waste that extinction could prevent current people from living billions of years, and that this alone gives person-affecting utilitarians enough reason to prioritize x-risk reduction.

From Bostrom (2003):

"[..] we ought to assign a non-negligible probability to some current people surviving long enough to reap the benefits of a cosmic diaspora. A so-called technological “singularity” might occur in our natural lifetime, or there could be a breakthrough in life-extension [..] Clearly, avoiding existential calamities is important, not just because it would truncate the natural lifespan of six billion or so people, but also [..] because it would extinguish the chance that current people have of reaping the enormous benefits of eventual colonization."

Comment author: MichaelPlant 13 April 2018 09:17:56PM *  0 points [-]

Thanks for doing this. I definitely worry about the cause-selection fallacy where we go "X is the top cause if you believe theory T; I don't believe T, therefore X can't be my top cause".

A couple of points.

As you've noted in the comments, you model this as $1bn total, rather than $1bn a year. Ignoring the fact that the person-affecting advocate (PAA) only cares about present people (at the time of the initial decision to spend), if the cost-effectiveness is even 10 times lower then it probably no longer counts as a good buy.

Other person-affecting views consider people who will necessarily exist (however cashed out) rather than whether they happen to exist now (planting a bomb with a timer of 1000 years still accrues person-affecting harm). In an 'extinction in 100 years' scenario, this view would still count the harm to everyone alive then who dies, although it would still discount the foregone benefit of the people who 'could have been' subsequently in the moral calculus.

This is true, although whatever money you put towards the extinction project is likely to change all the identities, thus necessary people are effectively the same as present people. Even telling people "hey, we're working on this X-risk project" is enough to change all future identities.

If you wanted to pump up the numbers, you could claim that advances in aging will mean present people will live a lot longer - 200 years rather than 70. This strikes me as reasonable, at least when presented as an alternative, more optimistic calculation.

You're implicitly using the life-comparative account of the badness of death - the badness of your death is equal to the amount of happiness you would have had if you'd lived. On this view, it's much more valuable to save the lives of very young people, i.e. whenever they count as a person, say 6 months after conception, or something. However, most PAAs, as far as I can tell, take the Time-Relative Interest Account (TRIA) of the badness of death, which holds it's better to save a 20-year-old than a 2-year-old because the 2-year-old doesn't yet have interests in continuing to live. On TRIA, abortion isn't a problem, whereas it's a big loss on the life-comparative account (assuming the foetus is terminated after personhood). This interests stuff is usually cashed out, at least by Jeff McMahan, in terms of Parfitian ideas about personal identity (apologies to those who aren't familiar with this shorthand).

On TRIA, the value of saving a life is the happiness it would have had times the psychological continuity with one's future self. Very young people, e.g. babies, have basically no psychological continuity, so saving their lives isn't important. But people keep changing over time: the 20-year-old is quite psychologically distinct from the 80-year-old. On TRIA, we need to factor that in too. This fact seems to be overlooked in the literature, but on TRIA you apply a discount to the future based on this change in psychological continuity. To push the point, suppose we say that everyone's psychology totally changes over the course of 10 years. Then TRIA advocates won't care what happens in 10 years' time. Hence PAAs who like TRIA, which, as I say, seems to be most of them, will discount the value of the future much more steeply than PAAs who endorse the life-comparative account. Upshot: if someone takes TRIA seriously - which no one should, btw - and knows what it implies, you'll really struggle to convince them x-risk is important on your estimate.

Finally, anyone who endorses the procreative asymmetry - creating happy people is neutral, creating unhappy people is bad - will want to try to increase x-risk and blow up the world. Why? Well, the future can only be bad: the happy lives don't count as good, and the unhappy lives count as bad. Halstead discusses this here, if I recall correctly. It's true that, on the asymmetry, avoiding x-risk would be good with regard to current people, but increasing x-risk will be good with regard to future people, as it will stop there being any of them. And as x-risk (reduction) enthusiasts are keen to point out, there is potentially a lot of future still to come.

Comment author: SiebeRozendal 17 April 2018 02:55:20PM 1 point [-]

You're implicitly using the life-comparative account of the badness of death - the badness of your death is equal to amount of happiness you would have had if you'd lived.

I have heard surprisingly many non-philosophers argue for the Epicurean view: that death is not bad for the individual because there's no one for it to be bad for. They would argue that death is only bad because others will have grief and other negative consequences. However, in a painless extinction event this would not be bad at all.

This is all to say that one's conception of the badness of death indeed matters a lot for the negative value of extinction.

Comment author: SiebeRozendal 16 April 2018 01:00:47PM 0 points [-]

There is a fair number of people who don’t work in their top pick cause area or even cause areas they are much less convinced of than their peers, but currently they don’t advertise this fact.

I think this is likely to be correct. However, I seriously wonder whether the distribution is uniform; i.e. are there as many people working on international development while it's not their top pick as on AI safety? I would say not.

The next question is whether we should update towards the causes where everyone who works in it is convinced it's top priority, or whether there are other explanations for this hypothesis. I'm not sure how to approach this problem.

Comment author: SiebeRozendal 27 March 2018 06:10:56PM 1 point [-]

Is there a standard contract length you will offer RAs if there is no trial period?
