Comment author: Evan_Gaensbauer 08 May 2018 10:28:54AM 2 points

Yes, but I think the current process isn't inclusive of input from as many EA organizations as it could or should be. It might be as simple as the CEA having offices in Berkeley and Oxford, meaning it receives a disproportionate amount of input on EA from the organizations near those offices, as opposed to EA organizations whose staff are geographically distributed and/or don't have an office presence near the CEA. I think the CEA should still be at the centre of making these decisions, and after recent feedback from Max Dalton of the CEA on the EA Handbook 2.0, I expect they will make the process for feedback on outreach materials more inclusive.

Comment author: Alex_Barry 08 May 2018 01:11:40PM *  2 points

I'm not quite sure what argument you are trying to make with this comment.

I interpreted your original comment as arguing for something like: "Although most of the relevant employees at central coordinator organisations are not sure about the sign of outreach, most EAs think it is likely to be positive, thus it is likely to in fact be positive".

Where I agree with the first two points but not the conclusion, as I think we should consider the staff at the 'coordinator organizations' to be the relevant expert class and mostly defer to their judgement.

It's possible you were instead arguing that "The increased concern about downside risk has also made it much harder to ‘use up’ your dedication" is not in fact a concern faced by most EAs, since they still think outreach is clearly positive, so this is not a discouraging factor.

I somewhat agree with this point, but based on your response to cafelow I do not think it is very likely to be the point you were trying to make.

Comment author: Evan_Gaensbauer 06 May 2018 07:31:56PM 1 point

It's my impression that it's only a handful of coordinator organizations in EA who think the sign of outreach is unclear. My impression from most individual effective altruists I talk to, and I expect this would extend to their opinion as a bloc, is that the sign of outreach, even after taking into account the possibility of rogue/unilateral actors, is moderately positive.

Comment author: Alex_Barry 08 May 2018 09:10:38AM 1 point

But should we not expect coordinator organizations to be the ones best placed to have considered the issue?

My impression is that they have developed their view over a fairly long time period after a lot of thought and experience.

Comment author: RandomEA 04 May 2018 04:31:38AM 2 points

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

I agree that there are substantial differences between global poverty and farm animal welfare (with global poverty being more clearly Type 1). But it seems to me that those differences are more differences of degree, while the differences between global poverty/farm animal welfare and biosecurity/AI alignment are more differences of kind.

Comment author: Alex_Barry 04 May 2018 04:45:42PM *  0 points

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

Ah I see. For some reason I got the other sense from reading your comment, but looking back at it I think that was just a failure of reading comprehension on my part.

I agree that the differences between global poverty and animal welfare are more matters of degree, but I also think they are larger than people seem to expect.

Comment author: RandomEA 03 May 2018 06:30:24AM *  10 points

The shift from Doing Good Better to this handbook reinforces my sense that there are two types of EA:

Type 1:

  1. Causes: global health, farm animal welfare

  2. Moral patienthood is hard to seriously dispute

  3. Evidence is more direct (RCTs, corporate pledges)

  4. Charity evaluators exist (because evidence is more direct)

  5. Earning to give is a way to contribute

  6. Direct work can be done by people with general competence

  7. Economic reasoning is more important (partly due to donations being more important)

  8. More emotionally appealing (partly due to being more able to feel your impact)

  9. Some public knowledge about the problem

  10. More private funding and a larger preexisting community

Type 2:

  1. Causes: AI alignment, biosecurity

  2. Moral patienthood can be plausibly disputed (if you're relying on the benefits to the long term future; however, these causes are arguably important even without considering the long term future)

  3. Evidence is more speculative (making prediction more important)

  4. Charity evaluation is more difficult (because impact is harder to measure)

  5. Direct work is the way to contribute

  6. Direct work seems to benefit greatly from specific skills/graduate education

  7. Game theory reasoning is more important (of course, game theory is technically part of economics)

  8. Less emotionally appealing (partly due to being less able to feel your impact)

  9. Little public knowledge about the problem

  10. Less private funding and a smaller preexisting community

Comment author: Alex_Barry 03 May 2018 03:52:01PM *  2 points

I am somewhat confused by the framing of this comment: you start by saying "there are two types of EA", but the points all seem to be about the properties of different causes.

I don't think there are 'two kinds' of EAs in the sense that you could easily tell in advance which group people would fall into; rather, all of your characteristics just follow as practical considerations from how important people find the longtermist view. (But I do think "a longtermist viewpoint leads to a very different approach" is correct.)

I'm also not sure how similar the global poverty and farm animal welfare groups actually are. There seem to be significant differences in terms of the quality of evidence used and how established they are as areas. Points 3, 4, 7, 9 and 10 seem to have pretty noticeable differences between global poverty and farm animal welfare.

Comment author: Alex_Barry 02 May 2018 03:59:23PM *  4 points

As far as I can tell, none of the links formatted as hyperlinked text ("like this") rather than as a bare URL such as http://effective-altruism.com work in the pdf version.

Comment author: kbog 25 April 2018 08:02:55PM *  7 points

I didn't notice the community survey until I saw your comment. I had to retake the survey (answering "no my answers are not accurate") to get to it.

I think there will be selection bias when the survey is optional and difficult to access like this.

Comment author: Alex_Barry 25 April 2018 09:04:00PM 0 points

I also missed it the first time through.

Comment author: james_aung 25 April 2018 04:00:54PM *  3 points

Thanks for the comment JoshP!

I've spoken a lot with the Cambridge lot about this. I guess the cruxes of my disagreement with their approach are:

1) I think their committee model selects more for willingness to do menial tasks for the prestige of being on the committee than for actual enthusiasm for effective altruism. So something like what you described happens, where "a section become more high-fidelity later, and it ends up not making that much difference", as people who aren't actually interested drop out. But it comes at the cost of more engaged people spending time on management.

2) From my understanding, Cambridge viewed the 1 year roles as a way of being able to 'lock in' people to engage with EA for 1 year and create a norm of committee members attending events. But my model of someone who ends up being very engaged in EA is that excitement about the content drives most of the motivation, rather than external commitment devices. So I suppose roles do only a limited amount to commit people to engage, but come at the cost of people spending X hours on admin when they could have spent X hours on learning more about EA.

It's worth noting that I think Cambridge have recently been thinking hard about this, and also I expect their models for how their committee provides value to be much more nuanced than I present. Nevertheless, I think (1) and (2) capture useful points of disagreement I've had with them in the past.

Comment author: Alex_Barry 25 April 2018 07:19:53PM *  1 point

as people who aren't actually interested drop out.

This depends on what you mean by 'drop out'. Only around 10% (~5) of our committee dropped out during last year, although maybe 1/3rd chose not to rejoin the committee this year (and about another 1/3rd are graduating)

2) From my understanding, Cambridge viewed the 1 year roles as a way of being able to 'lock in' people to engage with EA for 1 year and create a norm of committee attending events.

This does not ring especially true to me, see my reply to Josh.

Comment author: JoshP 25 April 2018 01:48:14PM 5 points

Interesting stuff, thanks guys. I wanted to discuss one point:

  1. From conversations with James, I believe Cambridge has a pretty different model of how they run it: in particular, a much more hands-on approach, which calls for formal commitment from more people, e.g. giving everyone specific roles (the "excessive formalist" approach). Are there reasons you guys have access to which favour your model of outreach over theirs? Or, to use an alternate frame: what's the best argument in favour of the Cambridge model of giving everyone an explicit role, and why does that not succeed (if it doesn't)?

For example, is it possible that Cambridge get a significantly higher number of people involved, which then cancels out the effects of immediately high-fidelity models in due course (e.g. suppose lots of people are low fidelity while at Cam, but then a section become more high-fidelity later, and it ends up not making that much difference in the long run)? Or does the Cambridge model use roles as an effective commitment device? Or does one model ensure less movement drift, or less lost value from movement drift? (see here http://effective-altruism.com/ea/1ne/empirical_data_on_value_drift/?refresh=true) There's a comment from David Moss here suggesting there's an "open question" about the value of focussing on more engaged individuals, given the risks of attrition in large movements (assuming the value of the piece, which is subject to lots of methodological caveats).

The questions above might be contradictory; I'm not advocating any of the above, but instead clarifying whether there's anything missed by your suggestions.

Comment author: Alex_Barry 25 April 2018 07:17:57PM *  2 points

To jump in as the ex-co-president of EA: Cambridge from last year:

I think the differences mostly come in things which were omitted from this post, as opposed to the explicit points made, which I mostly agree with.

There is a fairly wide distinction between the EA community in Cambridge and the EA: Cam committee, and we don't try to force people from the former into the latter (although we hope for the reverse!).

I largely view a big formal committee (ours was over 40 people last year) as an addition to the attempts to build a community as outlined in this post. A formal committee in my mind significantly improves the ability to get stuff done vs the 'conspirators' approach.

The getting stuff done can then translate into things such as an increased campus presence, and generally a lot more chances to get people into the first stage of the 'funnel'. Last year we ran around 8 events a week, with several of them aimed at engaging and on-boarding new interested people (hosting 1 or 2 speakers a week, running outreach-focused socials, introductory discussion groups and careers workshops). This large organisational capacity also let us run ~4 community-focused events a week.

I think it is mostly these mechanisms that make the large committee helpful, as opposed to most of the committee members becoming 'core EAs' (I think the conversion ratio is perhaps 1/5 or 1/10). There is also some sense in which the above allows us to form a campus presence that helps people hear about us, and perhaps makes us more attractive to high-achieving people, although I am pretty uncertain about this.

I think EA: Cam is a significant outlier in terms of EA student groups, and if a group is starting out it probably makes more sense to stick to the kind of advice given in this article. However, I think in the long term a community plus a big formal committee is probably better than just a community with an informal committee.

Comment author: tylermjohn 20 April 2018 08:43:29PM 0 points

Thanks, Gregory. It's valuable to have numbers on this, but I have some concerns about this argument and the spirit in which it is made:

1) Most arguments for x-risk reduction make the controversial assumption that the future is very positive in expectation. This argument makes the (to my mind even more) controversial assumption that an arbitrary life-year added to a presently-existing person is very positive, on average. While it might be that many relatively wealthy Euro-American EAs have life-years that are very positive on average, it's highly questionable whether the average human has life-years that are positive at all, let alone very positive.

2) Many global catastrophic risks and extinction risks would affect not only humans but also many other sentient beings. Insofar as these x-risks are risks of the extinction of not only humans but also nonhuman animals, to determine the person-affecting value of deterring x-risks we must sum the value of preventing human death with the value of preventing nonhuman death. On the widely held assumption that farmed animals and wild animals have bad lives on average, and given the population of tens of billions of presently existing farmed animals and 10^13-10^22 presently existing wild animals, the value of the extinction of presently living nonhuman beings would likely swamp the (supposedly) negative value of the extinction of presently existing human beings. Many of these animals would live a short period of time, sure, but their total life-years still vastly outnumber the remaining life-years of presently existing humans. Moreover, most people who accept a largely person-affecting axiology also think that it is bad when we cause people with miserable lives to exist. So on most person-affecting axiologies, we would also need to sum the disvalue of the existence of future farmed and wild animals with the person-affecting value of human extinction. This may make the person-affecting value of preventing extinction extremely negative in expectation.

3) I'm concerned about this result being touted as a finding of a "highly effective" cause. $9,600/life-year compares poorly to many poverty interventions, let alone animal welfare interventions (where ACE estimates that this much money could save 100k+ animals from factory farming). Why does $9,600/life-year suddenly make for a highly effective intervention when we're talking about x-risk reduction, when it isn't highly effective in other domains?

Comment author: Alex_Barry 20 April 2018 09:28:26PM *  3 points

I'm surprised by your last point, since the article says:

Although it seems unlikely x-risk reduction is the best buy from the lights of the total view (we should be suspicious if it were), given $13000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.

This seems a far cry from the impression you seem to have gotten from the article. In fact, the phrase "highly effective" that you quote is used only once, in the introduction, as a hypothetical motivation for crunching the numbers. (Since, a priori, it could have turned out that the cost effectiveness was 100 times higher, which would have been very cost effective.)

On your first two points, my (admittedly not very justified) impression is that the 'default' opinions people typically have are that almost all human lives are positive, and that animal lives are extremely unimportant compared to humans. Whilst one can question the truth of these claims, writing an article aimed at the majority seems reasonable.

It might be that within EA the average opinion is actually closer to yours, and in any case I agree the assumptions should have been clearly stated somewhere, along with the fact that he is taking the symmetric as opposed to the asymmetric view, etc.

Comment author: turchin 18 April 2018 05:13:13PM 0 points

How could it explain that diabetics lived longer than healthy people?

Anyway, we need a direct test on healthy people to know if it works or not.

Comment author: Alex_Barry 20 April 2018 04:08:11PM 1 point

How could it explain that diabetics lived longer than healthy people?

If all of the sickest diabetics are switched to other drugs, then the only people taking metformin are the 'healthy diabetics', and it is possible that the average healthy diabetic lives longer than the average person (who may be healthy or unhealthy).

This would give the observed effect without metformin having any effect on longevity.
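A toy simulation makes the selection effect concrete. All numbers here are invented purely for illustration, not taken from any metformin study:

```python
import random

random.seed(0)

# Hypothetical lifespans: suppose "healthy" diabetics kept on metformin
# live to ~82 on average, the sickest diabetics (switched to other drugs)
# to ~70, and the general population is an even mix of healthy (~83) and
# unhealthy (~72) people. Metformin itself does nothing in this model.
metformin_group = [random.gauss(82, 5) for _ in range(10_000)]
switched_group = [random.gauss(70, 5) for _ in range(10_000)]
general_pop = [random.gauss(83 if random.random() < 0.5 else 72, 5)
               for _ in range(20_000)]

def mean(xs):
    return sum(xs) / len(xs)

# Metformin users "outlive" the general population...
print(mean(metformin_group) > mean(general_pop))  # True
# ...even though diabetics as a whole do not.
print(mean(metformin_group + switched_group) < mean(general_pop))  # True
```

Conditioning on "still taking metformin" selects for the healthiest diabetics, so the drug looks protective in the comparison even though, by construction, it has no effect at all.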
