Comment author: RandomEA 16 August 2018 02:29:59AM 1 point [-]

If you do an action that does not look cause impartial (say EA Funds mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

Do you mean EA Grants? The allocation of EA Funds across cause areas is outside of CEA's control since there's a separate fund for each cause area.

Comment author: casebash 16 August 2018 01:27:20AM *  -1 points [-]

EA Grants seems like it should be in between in terms of being prescriptive vs. descriptive. If I had to pull a number out of a hat, then perhaps half the grants could be in the areas CEA considers most important and the other half could be more open.

Comment author: weeatquince  (EA Profile) 15 August 2018 11:50:28PM *  6 points [-]

We would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

Here is my two cents. I hope it is constructive:


1.

The policy is excellent but the challenge lies in implementation.

Firstly, I want to say that this post is fantastic. I think you have got the policy correct: CEA should be cause-impartial but not cause-agnostic, and CEA's work should be cause-general.

However I do not think it looks, from the outside, like CEA is following this policy. Some examples:

  • EA London staff had concerns that they would need to be more focused on the far future in order to receive funding from CEA.

  • You explicitly say on your website: "We put most of our credence in a worldview that says what happens in the long-term future is most of what matters. We are therefore more optimistic about others who roughly share this worldview."[1]

  • The example you give of the new EA handbook

  • There is a close association with 80,000 Hours, who are explicitly focusing much of their effort on the far future.

These are all quite subtle things, but collectively they give the impression that CEA is not cause impartial (that it is x-risk focused). Of course this is a difficult thing to get right. It is hard to strike a balance between saying 'our staff members believe cause X is important' (a useful fact that should definitely be said) and putting across a strong front of cause impartiality.


2.

Suggestion: CEA should actively champion cause impartiality

If you genuinely want to be cause impartial, I think most of the solutions involve being extremely vigilant about how CEA comes across. E.g.:

  • Have a clear internal style guide that sets out to staff good and bad ways to talk about causes

  • Have 'cause impartiality' as a staff value

  • If you do an action that does not look cause impartial (say EA Funds mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

  • Public posts like this one setting out what CEA believes

  • If you want to do lots of "prescriptive" actions split them off into a sub project or a separate institution.

  • Apply the above retroactively (remove lines from your website that make it look like you are only future focused)

Beyond that, if you really want to champion cause impartiality you may also consider extra things like:

  • More focus on cause prioritisation research.

  • Hiring people who value cause impartiality / cause prioritisation research / community building, above people who have strong views on what causes are important.


3.

Being representative is about making people feel listened to.

Your section on representativeness feels like you are trying to pin down an exact number, so you can say you have this many articles on topic X and this many on topic Y, and so on. I am not sure this is quite the correct framing.

Things like the EA handbook should (as a lower bound) mention enough of a diversity of causes that the broader EA community does not feel misrepresented, but (as an upper bound) not so much that CEA staff [2] feel like it is misrepresenting them. Anything within this range seems fine to me. (E.g. with the EA handbook, both groups should feel comfortable handing the book to a friend.) Although I do feel a bit like I have just typed 'just do the thing that makes everyone happy', which is easier said than done.

I also think that "representativeness" is not quite the right issue anyway. The important thing is that people in the EA community feel listened to and feel like what CEA is doing represents them. The percentage of content on different topics is only part of that. The other parts of the solution are:

  • Coming across like you listen: see the aforementioned points on championing cause impartiality. Also expressing uncertainty, mentioning that there are opposing views, giving two sides to a debate, etc.

  • Listening -- i.e. consulting publicly (or with trusted parties) wherever possible.

If anything, getting these two things correct is more important than getting the exact percentage of your work to be representative.


Sam :-)


[1] https://www.centreforeffectivealtruism.org/a-three-factor-model-of-community-building

[2] Unless you have reason to think that there is a systematic bias in staff, eg if you actively hired people because of the cause they cared about.

Comment author: M_Allcock 15 August 2018 09:26:04PM 1 point [-]

Thanks for the inspiring story, and thanks to all the previous commenters.

My memory is one of my weakest attributes, but this post has encouraged me to contribute to this forum for the first time after being an occasional spectator, so here goes.

In early 2016, I was playing table-tennis with my older brother. Table-tennis is a unique game because it takes a lot of concentration, yet it is possible to have a fully engaged conversation with your opponent at the same time. Back and forth, we talked. I said something like this:

“I’m gonna start a PhD soon. I’ll earn a small stipend. Based on how much living has cost over the last few years, I’ll still have a lot left over. Then I’ll graduate and earn more money, and have more left over. Other people surely need this surplus money more than me. So I should give some of it away.”

At the same time, I was trying to make better financial decisions, and was dabbling in financial investment. So I asked my brother a question:

“With this leftover money, is it best to give it to charity now or to invest it and give it later?”

We threw around some ideas about compound interest, duty to help now, problems now being worse than problems later, and that we might learn more later to give in a more impactful way. We didn’t get very far with the discussion, so I Googled it. The words “Effective Altruism” kept cropping up, and I found my way to Peter Singer’s TED talk.

I was already convinced by Peter Singer’s anti-speciesist arguments against eating animal products, as I had been vegan for about a year, so it was intriguing to hear about other philosophical stances he argues for. His talk piqued my interest in EA, which, with the aid of our local EA group, has slowly drawn me further and further into its logical and compassionate underworld.

I had dabbled with various social and intellectual movements previously, but had always felt something was not quite right. Some were too insular, some too tribal, some lacking in good intellectual practices, with misaligned incentives. What I love about EA, compared with other groups I have dabbled with in the past, is its ability to self-criticise, to learn, to change its mind, to accept that we might be wrong and that other people might be right, to try to listen to all parties, and to get as close to the right answers as we can. I was also drawn in by the idea that this movement is not bound to an arbitrary cause; rather, it is about finding correct answers to one of the most fundamental questions we face: how can we do the most good?

Since then, I have met some of the best people I know, I have changed my mind a lot, and I have improved in many areas of my life. I am now the chair of my city's local group, I volunteer for an EA-aligned organisation, and I am planning on moving from researching mathematics to something closer to what the world needs once I finish my PhD. I can’t wait!

Comment author: richard_ngo 15 August 2018 09:23:16PM *  1 point [-]

I think it's a mischaracterisation to think of virtue ethics in terms of choosing the most virtuous actions (in fact, one common objection to virtue ethics is that it doesn't help very much in choosing actions). I think virtue ethics is probably more about being the most virtuous. There's a difference: e.g. you might not be virtuous if you choose normally-virtuous actions for the wrong reasons.

For similar reasons, I disagree with cole_haus that virtue ethicists choose actions to produce the most virtuous outcomes (although there is at least one school of virtue ethics which seems vaguely consequentialist, the eudaimonists; see https://plato.stanford.edu/entries/ethics-virtue). Note, however, that I haven't actually looked into virtue ethics in much detail.

Edit: contractarianism is a fourth approach which doesn't fit neatly into either division

Comment author: cole_haus 15 August 2018 09:13:59PM *  2 points [-]

I think there's a certain prima facie plausibility to the traditional tripartite division. If you just think about the world in general, each of actors, actions, and states seems salient. It wouldn't take much to convince me that, appropriately defined, actors, actions, and states are mutually exclusive and collectively exhaustive in some metaphysical sense.

Once you accept the actors, actions, states division, it makes sense to have ethical theories revolving around each. These correspond to virtue ethics, deontology, and consequentialism.

Comment author: Kerry_Vaughan 15 August 2018 09:06:50PM *  2 points [-]

We're asking for feedback on who we should consult with in general, not just for EA Global.

In particular, the usual process of seeking advice from people we know and trust is probably producing a distortion where we aren't hearing from a true cross-section of the community, so figuring out a different process might be useful.

Comment author: cole_haus 15 August 2018 09:06:45PM *  1 point [-]

I think you could fairly convincingly bucket virtue ethics in 'Ends' if you wanted to adopt this schema. A virtue ethicist could be someone who chooses the action that produces the best outcome in terms of personal virtue. They are (sort of) a utilitarian that optimizes for virtue rather than utility and restricts their attention to only themselves rather than the whole world.

Comment author: Kerry_Vaughan 15 August 2018 09:01:16PM 3 points [-]

The biggest open questions are:

1) In general, how can we build a community that is both cause impartial and also representative?

2) If we want to aim for representativeness, what reference class should we target?

Comment author: Kerry_Vaughan 15 August 2018 08:58:48PM 1 point [-]

At the moment our mainline plan is this post with a request for feedback.

I've been talking with Joey Savoie and Tee Barnett about the issue. I intend to consult others as well, but I don't have a concrete plan for who to contact.

Comment author: Joey 15 August 2018 08:48:21PM 8 points [-]

Just wanted to chip in on this. Although I do not think this addresses all the concerns I have with representativeness, I do think CEA has been making a more concerted and genuine effort at considering how to deal with these issues (not just this blog post, but also in some of the more recent conversations they have been having with a wider range of people in the EA movement). I think it's a tricky issue to get right (how to build a cause neutral EA movement when you think some causes are higher impact than others) and there is still a lot of thought to be done on the issue, but I am glad steps are happening in the right direction.

Comment author: Dunja 15 August 2018 07:17:16PM 0 points [-]

That would be great!

Comment author: Carl_Shulman 15 August 2018 06:03:42PM 2 points [-]

Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there

If that were the only change, our century would still look special with regard to the possibility of lasting changes short of extinction, e.g. as discussed in these posts by Nick Beckstead. There is also the astronomical waste argument: a one-year delay in interstellar colonization means losing out on all the galaxies that could have been reached (before they are carried out of reach by the expansion of the universe) by colonization begun in year n-1 but not in year n. The population of our century is vanishingly small compared to future centuries, so the ability of people today to affect the colonized volume is accordingly vastly greater on a per capita basis, and the loss of reachable galaxies to delayed colonization is irreplaceable as such.
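
As a toy sketch of the per-capita point (the rate and population figures below are assumptions made up purely for illustration, not estimates from this comment), suppose a fixed number of galaxies slips permanently out of reach each year due to cosmic expansion:

```python
# Toy illustration only: these numbers are assumptions for the sake of the
# example, not figures from the comment above.
GALAXIES_LOST_PER_YEAR = 3            # assumed galaxies slipping permanently out of reach per year
CURRENT_POPULATION = 8_000_000_000    # rough number of people alive today

def galaxies_forfeited(delay_years: int) -> int:
    """Galaxies permanently forfeited if colonization starts delay_years later."""
    return GALAXIES_LOST_PER_YEAR * delay_years

loss = galaxies_forfeited(1)
print(f"A 1-year delay forfeits ~{loss} galaxies forever.")
print(f"Per-capita share of that loss: {loss / CURRENT_POPULATION:.1e} galaxies per person.")
```

However small each person's share looks, the forfeited galaxies are gone for good, and that per-capita share is large precisely because today's population is tiny relative to the future's.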

So we would still be in a very special and irreplaceable position, but less so.

For our low-population generation to really not be in a special position, especially per capita, it would have to be the case that none of our actions have effects on much more populous futures as a whole. That would be very strange, but if it were true then there wouldn't be any large expected impacts of actions on the welfare of future people.

But how should we weight that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

I'm not sure I understand the scenario. This sounds like a case where an action to do X makes no difference because future people will do X (and are more numerous and richer). In terms of Singer's drowning child analogy, that would be like a case where many people are trying to save the child and extras don't make the child more likely to be saved, i.e. extra attempts at helping have no counterfactual impact. In that case there's no point in helping (although it may be worth trying if there is enough of a chance that extra help will turn out to be necessary after all).

So we could consider a case where there are many children in the pond, say 20, and other people are gathered around the pond and will save 10 without your help, but 12 with your help. There are also bystanders who won't help regardless. However, there is also a child on land who needs CPR, and you are the only one who knows how to provide it. If you provide the CPR instead of pulling children from the pond, then 10+1=11 children will be saved instead of 12. I think in that case you should save the two children from drowning instead of the one child with CPR, even though your ability to help with CPR is more unique, since it is less effective.

Likewise, it seems to me that if we have special reason to help current people at the expense of much greater losses to future generations, it would be because of flow-through effects, or some kind of partiality (like favoring family over strangers), or some other reason to think the result is good (at least by our lights), rather than just that future generations cannot act now (by the same token, billions of people could but don't intervene to save those dying of malaria or suffering in factory farms today).

Comment author: Peter_Hurford  (EA Profile) 15 August 2018 05:27:28PM 0 points [-]

We haven't posted a gender breakdown by group yet. I can see if there may be ways to follow this up as part of our forthcoming 2018 EA Survey work.

Comment author: Milan_Griffes 15 August 2018 04:17:46PM *  4 points [-]

it’s unclear what reference class we should be using when making our work more representative... The best solution is likely some hybrid approach, but it’s unclear precisely how such an approach might work.

Could you say more about what CEA is planning to do to get more clarity about who it should represent?

Comment author: pmelchor  (EA Profile) 15 August 2018 02:46:04PM 0 points [-]

Thanks, Carl. I fully agree: if we are convinced it is essential that we act now to counter existential risks, we must definitely do that.

My question is more theoretical (feel free to not continue the exchange if you find this less interesting). Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there. An argument in favour of weighting the long-term future heavily could still be valid (there could be many more people alive in the future and therefore a great potential for either flourishing or suffering). But how should we weight that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

Comment author: remmelt  (EA Profile) 15 August 2018 02:36:30PM 6 points [-]

What are some open questions that you’d like to get input on here (preferably of course from people who have enough background knowledge)?

This post reads to me like an explanation of why your current approach makes sense (which I find mostly convincing). I’d be interested in what assumptions you think should be tested the most here.

Comment author: Khorton 15 August 2018 02:00:04PM 2 points [-]

We do however recognize that when consulting others it’s easy to end up selecting for people with similar views and this can leave us with blind spots in particular areas. We are thinking about how to expand the range of people we get advice from. While we cannot promise to enact all suggestions, we would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

It seems like you currently only consult people for EA Global content. Do you want to get advice on how to have a wider range of consultants for EA Global content, or are you asking for something else?

In response to The Value of a Life
Comment author: Toni_Hoffmann 15 August 2018 01:41:18PM 1 point [-]

This article really helped me get these things clear in my head! Thank you :)

Comment author: Dunja 15 August 2018 08:30:45AM *  0 points [-]

Oh damn :-/ I was just gonna ask for the info (been traveling and could reply only now). That's really interesting; is this info published somewhere online? If not, it might be worthwhile to make a post on this here and discuss both the reasons for the predominantly male community and ideas for how to make it more gender-balanced.

I'd be very interested in possible relations between the lack of gender balance and the topic of representation discussed in another recent thread. For instance, it'd be interesting to see whether non-male EAs find the forum insufficiently focused on causes which they find more important, or largely focused on issues that they do not find as important.
