Comment author: Alex_Barry 12 November 2017 01:22:05AM *  13 points [-]

Thanks for writing this; I had not seen any public posts on this topic before, and the loss-of-productivity considerations etc. were novel arguments to me.

I have no object-level comments, but a few meta-level ones:

  1. As Denise mentioned, this post is very long, and I think it would probably benefit from being split into multiple shorter posts.

    In particular, the two strands of 'preventing sexual violence within EA' and 'preventing sexual violence in the rest of the world' seem sufficiently different, in both the arguments for their importance and the calls to action, that splitting them into two posts might add clarity. (Although they clearly share some backbone in the discussion of the effects and severity of sexual violence.)

  2. I found the post structure not especially clear, and on multiple occasions I was somewhat confused about what exactly was being discussed (an example being the "Observations about sexual violence in the EA network" section). I also found the formatting a bit confusing, which made reading somewhat more challenging.

    I find writing lengthy posts like this very challenging myself, so I am not trying to claim any objective problems, just that I often found it difficult to keep track. (Note: since I read the post, a table of contents has been added, which should help.)

  3. Whilst you were very careful to try and discuss the uncertainty when numbers were first introduced, I think you occasionally later used them in more 'soundbite' form without sufficient qualifiers (or at least fewer than I would feel comfortable with). (Examples are the 'Inside EA: A 1:6 ratio means 7 rapes per 6 women on average.' section and the "rough estimate of 103 - 607 male rapists in EA" quote, even though these depend strongly on assumptions about the relationship between demographics and criminality etc.)

    This may just be a matter of taste, as, as I said, you do already go to lengths to discuss the uncertainty, and I seem to favour much more discussion/labeling of uncertainty than average.

I think 2 & 3 might partly explain why you seem to have felt that other commenters had not read the post.

In response to S-risk FAQ
Comment author: Alex_Barry 19 September 2017 08:37:16PM 7 points [-]

Thanks for writing this up! Having resources like this explaining ideas seems pretty uncontroversially good.

Comment author: MichaelPlant 17 August 2017 01:53:57PM 4 points [-]

This is sort of a meta-comment, but there's loads of important stuff here, each of which could have its own thread. Could I suggest someone (else) organises a (small) conference to discuss some of these things?

I've got quite a few things to add on the ITN framework, but nothing I can say in a few words. Relatedly, I've also been working on a method for 'cause search' - a way of finding all the big causes in a given domain - which is the step before cause prioritisation, but that's not something I can write out succinctly either (yet, anyway).

Comment author: Alex_Barry 18 August 2017 06:24:32PM 5 points [-]

I have organized retreat/conferency things before and would probably be up for organizing something like this if there was interest. I can contact some people and see if they think it would be worthwhile; I am not sure what I expect attendance to be like, though (would 20 people be optimistic?).

Comment author: Larks 27 July 2017 11:19:51PM *  3 points [-]
  1. C^^ is better than C^, which is better than C;
  2. C^^ is better than B;
  3. B is better than C and C^.

But these three rankings are inconsistent, and one of them should go. To endorse all of them means to breach transitivity. Is EA committed to rejecting transitivity? This view is very controversial, and if EA required it, this would need serious inquiry and defence.

These rankings do not seem inconsistent to me? The ordering C^^ > B > C^ > C satisfies all three (see the quick check below).

edit: substituted with '^' due to formatting issues.
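
For what it's worth, a quick mechanical check of this claim (mine, not from the thread): if a single total order satisfies all three rankings, they cannot be jointly inconsistent.

```python
# Candidate total order, best last: C < C^ < B < C^^
rank = {"C": 0, "C^": 1, "B": 2, "C^^": 3}

assert rank["C^^"] > rank["C^"] > rank["C"]              # ranking 1
assert rank["C^^"] > rank["B"]                           # ranking 2
assert rank["B"] > rank["C"] and rank["B"] > rank["C^"]  # ranking 3
print("One total order satisfies all three rankings, so they are consistent.")
```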

Comment author: Alex_Barry 28 July 2017 10:42:09AM 1 point [-]

I cannot see the inconsistency there either; the whole section seems a bit strange, as his "no death" example starts containing death again about halfway through.

(Note your first line seems to be missing some *'s)

Comment author: MichaelPlant 18 July 2017 10:09:01PM 1 point [-]

Hello Alex,

Thanks for the comments. FWIW, when I was thinking inclusive I had in mind 1) the websites of EA orgs and 2) introductory pitches at (student) events, rather than the talks involved in running a student group. I have no views on student groups being inclusive in their full roster of talks, not least because I doubt the groups would cohere enough to push a particular moral theory.

I agree that lots of people don't have strong moral views, and I think EA should be a place where they figure out what they think, rather than a place where various orgs push them substantially in one direction or another. As I stress, I think even the perception of a 'right' answer is bad for truth-seeking. Ben Todd doesn't seem to have responded to my comments on this, so I'm not really sure what he thinks.

And, again FWIW, survivorship bias is a concern. Anecdotally, I know a bunch of people who decided EA weirdness, particularly with reference to the far future, was what made them decide not to come back.

Comment author: Alex_Barry 18 July 2017 11:37:22PM *  0 points [-]

(Distinct comment on survivorship bias as it seems like a pretty separate topic)

I currently think good knowledge about what drives people away from EA would be valuable, although obviously fairly hard to collect, and I can't remember ever seeing a particularly large collection of reasons given.

I am unsure how much we should try to respond to some kinds of complaints, though. For people driven away by weirdness, for instance, it is not clear to me that there is much we can do to make EA more inclusive to them without losing a lot of the value of EA (pursuing arguments even if they lead to strange conclusions, etc.)

In particular, do you know of anyone who left because they only cared about e.g. global poverty and did not want to engage with the far future stuff, who you think would have stayed if EA had been presented to them as including far future stuff from the start? It seems like it might just bring forward the point at which they are put off.

Comment author: Alex_Barry 18 July 2017 11:30:04PM *  0 points [-]

Ah ok, I think I generally agree with your points then (that intro events and websites should be morally inclusive and explain, to some degree, the diversity of EA). My current impression is that this is not much of a problem at the moment. From talking to people working at EA orgs and reading the advice given to students running intro events, I think people do advocate for honesty and moral inclusiveness, and when/if it is lacking this is more due to a lack of time or honest mistakes as opposed to conscious planning. (Although possibly we should dedicate much more time to it, to try and ensure it is never neglected?)

In particular, I associate the whole 'moral uncertainty' thing pretty strongly with EA, especially CEA and GWWC (though this might just be due to Toby and Will's work on it), which strikes fairly strongly against part 3 of your main post.

How much of a problem do you think this currently is? The title and tone (use of 'plea' etc.) of your post make me think you feel we are currently in pretty dire straits.

I also think that student-run talks (and not specific intro-to-EA events) are generally the way most people initially hear about EA (although I could be very wrong about this), and so the majority of the confusion about what EA is really about would not get addressed by people fully embracing the recommendations in your post. (Although I may just be heavily biased towards how the EA societies I have been involved with have worked.)

Comment author: Alex_Barry 18 July 2017 05:04:02PM 0 points [-]

Hey Michael, sorry I am slightly late with my comment.

To start, I broadly agree that we should not be misleading about EA in conversation; however, my impression is that this is not a large problem (although we might have very different samples).

I am unsure where I stand on moral inclusivity/exclusivity, although as I discuss later I think this is not actually a particularly major problem, as most people do not have a set moral theory.

I am wondering what your ideal inclusive effective altruism outreach looks like?

I am finding it hard to build up a cohesive picture from your post and comments, and I think some of your different points don't quite gel together in my head (or at least not in an inconvenient possible world).

You give an example of this: beginning a conversation with global poverty before transitioning to explaining the diversity of EA views by:

Point out people understand this in different ways because of their philosophical beliefs about what matters: some focus on helping humans alive today, others on animals, others on trying to make sure humanity doesn't accidentally wipe itself out, etc.

For those worried about how to ‘sell’ AI in particular, I recently heard Peter Singer give a talk where he said something like (can't remember exactly): "some people are very worried about the risks from artificial intelligence. As Nick Bostrom, a philosopher at the University of Oxford pointed out to me, it's probably not a very good idea, from an evolutionary point of view, to build something smarter than ourselves." At which point the audience chuckled. I thought it was a nice, very disarming way to make the point.

However, trying to make this match the style of an event a student group could actually run, it seems like the closest match (other than a straightforward intro to EA event) would be a talk on effective global poverty charity, followed by an addendum on EA being broader at the end. (I think this is due to a variety of practical concerns, such as there being far more good speakers and big names in global poverty, and it providing many concrete examples of how to apply EA concepts etc.)

I am however skeptical that an addendum at the end of a talk would create nearly as strong an impression as the subject matter of the talk itself, and people would still leave with a much stronger impression of EA as being about global poverty than e.g. x-risk.

You might say a more diverse approach would be to have talks etc. roughly in proportion to what EAs actually believe is important: if, to make things simple, a third of EAs thought global poverty was most important, a third x-risk and a third animal suffering, then a third of the talks should be on global poverty, a third on x-risk etc. Each of these could then end with the explanation of EA being broader.

However, if people's current perception that global poverty events are the best way to get new people into EA is in fact right (at least in the short term), whether by having better attendance or better conversion ratios, this approach could still lead to the majority of new EAs' first introductions to EA being through a global poverty talk.

Due to the previous problem of the addendum not really changing people's impressions enough, we could still end up with the situation you say we should want to avoid, where:

People should not feel surprised about what EAs value when they get more involved in the movement.

I am approaching this all more from the student group perspective, and so don't have strong views on the website stuff, although I will note that my impression was that 80k does a good job of being inclusive, and GWWC's issue is more a lack of updates than anything else.

One thing you don't particularly seem to be considering is that almost all people don't actually have strongly formed moral views that conform to one of the common families (utilitarianism, virtue ethics etc.), so I doubt (but could be wrong, as there would probably be a lot of survivorship bias in this) that a high percentage of newcomers to EA feel excluded by the implicit assumptions that might often be made, e.g. that future people matter.

Comment author: Jeff_Kaufman 08 February 2017 02:46:14AM 1 point [-]

"I reused the diet questions in my plan from MFA 2013 study on leafleting"

In my view, this study asked way too much. When you try to ask too much detail people drop out. Additionally, it asks about things like diet change, but to pick up on changes we should be comparing the experimental and control groups, not comparing one group with its (reported) earlier self.

What I'd like to see is just "do you eat meat" along with a few distractor questions:

  1. Are you religious?
  2. Is English your native language?
  3. Do you eat meat?
  4. Do you own a car?

Yes, we'd like to know way more detail than this, and in practice people are weird about how they use "meat", but the main issue here is getting enough responses to be able to see any difference at all between the two groups.

Comment author: Alex_Barry 08 February 2017 04:56:42PM *  0 points [-]

"I reused the diet questions in my plan from MFA 2013 study on leafleting"

Ah, sorry, again I was not quite clear: what I meant by this was that the question about diet is one I had copied from the MFA study, not that I would reuse all of their questions. The questions I list in 2.1 are the only ones I would ask (probably with a bonus distractor question and maybe some extra options, as suggested by jimrandomh).

Additionally, it asks about things like diet change, but to pick up on changes we should be comparing the experimental and control groups, not comparing one group with its (reported) earlier self.

Asking about 'change in diet' vs. just diet generally is basically required to get sufficient statistical power, as the base rate of people saying yes to "have you become vegetarian in the last two weeks?" is much, much lower than for "are you vegetarian?", but the effect size we are looking for in each case is the same. One can then compare the control and experimental groups on this metric.

To illustrate the size of this effect: in the post I calculate that, with a sample of 5000, by asking about change in the last 2 weeks you would have a 90% chance to find an effect of 1/124 leaflets creating one vegetarian, but if you just asked "are you vegetarian?" you would only be able to find a 1/24 effect at the same power. (Assuming a 20% base rate of vegetarianism, and using this calculator.)
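
For concreteness, here is a minimal sketch of that power calculation, using the normal approximation for a two-sided two-proportion z-test. The 20% base rate for the status question is from the comment above; the ~0.8% base rate assumed for the change question is an illustrative guess, since the comment does not state it.

```python
import numpy as np
from scipy.stats import norm

def min_detectable_diff(p0, n_per_group, alpha=0.05, power=0.90):
    """Smallest difference in proportions detectable at the given power,
    under the normal approximation for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    # Standard error of the difference when both groups sit near base rate p0
    se = np.sqrt(2 * p0 * (1 - p0) / n_per_group)
    return (z_alpha + z_beta) * se

n = 2500  # 5000 surveys split evenly between leafleted and control groups

d_status = min_detectable_diff(0.20, n)    # "Are you vegetarian?" (20% base rate)
d_change = min_detectable_diff(0.008, n)   # "...in the last two weeks?" (assumed 0.8%)

print(f"status question: one new vegetarian per {1 / d_status:.0f} leaflets")  # ~1/27
print(f"change question: one new vegetarian per {1 / d_change:.0f} leaflets")  # ~1/122
```

The detectable difference scales with sqrt(p0(1-p0)), which is why the low-base-rate change question can pick up a roughly 4-5x smaller effect from the same sample; the outputs land close to (though not exactly at) the 1/24 and 1/124 figures quoted above, presumably because the linked calculator uses a slightly different approximation or base rate.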

When you try to ask too much detail people drop out.

I agree about using as few questions as possible, and that the MFA study asked far too much (although I think it was administered by volunteers as opposed to online, which would hopefully counteract the drop out effect in their case).

Comment author: Jeff_Kaufman 08 February 2017 02:36:55AM 1 point [-]

"A question to determine if they were leafleted or not, without directly asking."

People who were leafleted but ignored it and don't remember enough to answer this one accurately are a problem here.

What would you think of: at a college that allows students to mass pigeonhole directly, put experiment leaflets in odd mailboxes and control ones in even boxes. Then later put surveys in the boxes, with different links for odd and even boxes.

Instead of having the links be example.com/a and example.com/b it would be better for them all to look like example.com/gwfr so people don't know what's going on. You could generate two piles of follow-up links and use one for the odd boxes and the other for even. QR codes might be good to add so people have the option not to type.
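
A minimal sketch of how such opaque links might be generated and tracked (the URL, slug length, and field names here are illustrative assumptions, not anything from the thread):

```python
import secrets

def survey_links(n_boxes, base_url="https://example.com/"):
    """Map each mailbox number to an opaque survey URL plus its condition.
    Odd-numbered boxes received the experimental leaflet; even boxes, control."""
    links = {}
    for box in range(1, n_boxes + 1):
        slug = secrets.token_urlsafe(3)  # short random code, like the 'gwfr' example
        links[box] = {"url": base_url + slug,
                      "group": "treatment" if box % 2 == 1 else "control"}
    return links

# The experimenter keeps this mapping private; respondents only ever see the
# opaque URL (or a QR code encoding it), so the link itself reveals nothing
# about which condition a box was in.
if __name__ == "__main__":
    for box, info in survey_links(4).items():
        print(box, info["group"], info["url"])
```

Responses can then be joined back to treatment vs. control through the slug at analysis time, without ever asking respondents whether they were leafleted.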

Comment author: Alex_Barry 08 February 2017 04:48:36PM 0 points [-]

People who were leafleted but ignored it and don't remember enough to answer this one accurately are a problem here.

Ah, my wording might not be quite clear in that section: ideally the question would rely on the surveyor's knowledge of who was leafleted and who was not, without the students having to remember whether they were leafleted. E.g. if the leafleting was split by college, asking which college they are from.

Then later put surveys in the boxes, with different links for odd and even boxes.

A reason I would push for trying to get a survey emailed out is that a previous study on leafleting that attempted to use QR codes or links on leaflets got an extremely low response rate (about 2%, if I remember correctly).

I am not sure if giving out the surveys separately later would boost the response rate; I think the added inconvenience of having to type in a web address or scan a QR code would significantly damage it.

Still, it would be worth bearing in mind as a strategy that could currently be implemented at any university that allowed mass-leafleting, without needing to be able to email all students.

Comment author: Bernadette_Young 06 February 2017 08:34:49PM 0 points [-]

Ethics approval would probably depend on not collecting identifying data like names, so it would be important to build that into your design. College name would work, but pseudo-randomising by leafleting some colleges would introduce significant confounding, because colleges frequently differ in their make-up and culture.

Comment author: Alex_Barry 06 February 2017 09:17:15PM *  1 point [-]

Yes, I mention the issues associated with college-based randomization in section 3.1.

Good point about not collecting identifying data; it should be possible to just ask for whatever information was used to decide who to leaflet, such as the first letter of their last name, which should avoid this issue.
