Comment author: EricHerboso (EA Profile) 21 June 2018 04:06:23AM 0 points

Not all EAs are on board with AI risk, but it would be rude for this EA hotel to commit to funding general AI research on the side. Whether all EAs are on board with effective animal advocacy isn't the key point when deciding whether the hotel's provided meals should be vegan.

An EA who doesn't care about veganism will be mildly put off if the hotel doesn't serve meat. But an EA who believes that veganism is important would be very strongly put off if the hotel served meat. The latter's aversion is presumably at least five times as strong as the former's minor inconvenience. This means that even if only 20% of EAs are vegan, the expected value of keeping meals vegan would beat out the convenience of including meat for non-vegans.
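
To make the arithmetic explicit, here is a minimal sketch of that comparison, taking the 20% vegan share and the 5x aversion weight above as given (both are illustrative assumptions, not measured quantities):

```python
# Sketch of the expected-value comparison above; the weights are the
# commenter's illustrative assumptions, not measured data.
vegan_share = 0.20          # assumed fraction of EAs at the hotel who are vegan
aversion_weight = 5.0       # how strongly a vegan is put off by meat being served
inconvenience_weight = 1.0  # how strongly a non-vegan is put off by vegan-only meals

expected_cost_serving_meat = vegan_share * aversion_weight            # 0.2 * 5 = 1.0
expected_cost_vegan_only = (1 - vegan_share) * inconvenience_weight   # 0.8 * 1 = 0.8

# 1.0 > 0.8, so on these numbers all-vegan catering has the lower expected cost.
print(expected_cost_serving_meat, expected_cost_vegan_only)
```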

Comment author: Gregory_Lewis 21 June 2018 08:59:28AM * 1 point

I'm getting tired of the 'veganism is only a minor inconvenience' point being made:

  • V*ganism shows very high 'recidivism' rates in the general population: most people who try to stop eating meat/animal products end up returning to them before long.
  • The general public health literature on behaviour/lifestyle change seldom says these things are easy/straightforward to effect.
  • When this point is made by EAAs, there are almost always lots of EAs who reply, 'No, actually, I found going v*gan really hard', or, 'I tried it but I struggled so much I felt I had to switch back'.
  • (The selection effect that could explain why 'ongoing v*gans' found the change only a minor inconvenience is left as an exercise to the reader).

I don't know how many times we need to rehearse this such that people stop saying 'V*ganism is a minor inconvenience'. But I do know it has happened enough times that other people in previous discussions have also wondered how many times this needs to be rehearsed such that people stop saying this.

Of course, even if it is a major inconvenience (FWIW, I'm a vegetarian, and I'd find the relatively small 'step further' to being exclusively vegan a major inconvenience), this could still be outweighed by other factors across the scales (there's discussion to be had about 'relative aversion', some second-order stuff about appropriate cooperative norms, etc. etc.). Yet discussions of the cost-benefit proceed better if the costs are not wrongly dismissed.

Comment author: Gregory_Lewis 05 June 2018 10:42:02PM 9 points

Bravo!

I'm not so sure whether this is targeting the narrowest constraint for developing human capital in EA, but I'm glad this is being thrashed out in reality rather than by the medium of internet commentary.

A more proximal worry is this: the project seems to rely on finding a good hotel manager. On the face of it, this looks like a pretty unattractive role for an EA to take on: it seems the sort of thing that demands quite a lot of operations skill, already in short supply; further, 20k is just over half the pay of similar roles in the private sector (and below many universities' typical graduate starting salaries); I imagine trying to run a hotel (even an atypical one) is hard and uninspiring work with fewer of the upsides the guests will enjoy; and you're in a depressed seaside town.

Obviously, if there are already good applicants, good for them (and us!), and best of luck going forward.

Comment author: Gregory_Lewis 28 May 2018 07:41:42PM 6 points

I'd be hesitant to recommend direct efforts for the purpose of membership retention, and I don't think considerations along these lines should play a role in whether a group should 'do' direct work projects. My understanding is that many charities use unskilled volunteering opportunities principally as a means to secure subsequent donations, rather than for the object-level value of the work being done. If so, this strikes me as unpleasantly disingenuous.

I think similar sentiments would apply if groups offered 'direct work opportunities' to their membership in the knowledge they are ineffective but for their impact on recruitment and retention (or at least, if they are going to do so, they should be transparent about the motivation). If (say) it just is the case the prototypical EA undergraduate is better served reallocating their time from (e.g.) birthday fundraisers to 'inward looking' efforts to improve their human capital, we should be candid about this. I don't think we should regret cases where able and morally laudable people are 'put off' EA because they resiliently disagree with things we think are actually true - if anything, this seems better for both parties.

Whether the 'standard view' expressed in the introduction is true (i.e. "undergrads generally are cash- and expertise- poor compared to professionals, and so their main focus should be on self-development rather than direct work") is open to question. There are definitely exceptions for individuals: I can think of a few undergraduates in my 'field' who are making extremely helpful contributions.

Yet this depends on a particular background or skill set which would not be common among a local group. Perhaps the forthcoming post will persuade me otherwise, but it seems to me that the 'bar' for making useful direct contributions is almost always higher than the 'bar' for joining an EA student group, and thus opportunities for corporate direct work which are better than the standard view's 'indirect' (e.g. recruitment) and 'bide your time' (e.g. training up particular skills important to your comparative advantage) options will necessarily be rare.

Directly: if a group like EA Oxford could fundraise together to produce $100,000 for effective charities (double the donations reported across all groups in the LEAN survey), or they could work independently on their own development such that one of their members becomes a research analyst at a place like Open Phil in the future, I'd emphatically prefer they take the latter approach.

Comment author: RandomEA 05 May 2018 12:15:07PM 2 points

I think your list undercounts the number of animal-focused EAs. For example, it excludes Sentience Politics, which provided updates through the EA newsletter in September 2016, January 2017, and July 2017. It also excludes the Good Food Institute, an organization which describes itself as "founded to apply the principles of effective altruism (EA) to change our food system." While GFI does not provide updates through the EA newsletter, its job openings are mentioned in the December 2017, January 2018, and March 2018 newsletters. Additionally, it excludes organizations like the Humane League, which, while not explicitly EA, have been described as having a "largely utilitarian worldview." Though the Humane League does not provide updates through the EA newsletter, its job openings are mentioned in the April 2017, February 2018, and March 2018 newsletters.

Perhaps the argument for excluding GFI and the Humane League (while including direct work organizations in the long term future space) is that relatively few people in direct work animal organizations identify as EAs (while most people in direct work long term future organizations identify as EA). If this is the reason, I think it'd be good for someone to provide evidence for it. Also, if the idea behind this method of counting is to look at the revealed preference of EAs, then I think people earning to give have to be included, especially since earning to give appears to be more useful for farm animal welfare than for long term future causes.

(Most of the above also applies to global health organizations.)

Comment author: Gregory_Lewis 06 May 2018 12:43:37PM 1 point

I picked the 'updates' purely in the interests of time (easier to skim), because it gives some sense of which orgs are considered 'EA orgs' rather than 'orgs doing EA work' (a distinction which I accept is imprecise: would a GW top charity 'count'?), and because I (forlornly) hoped pointing to a method, however brief, would forestall suspicion about cherry-picking.

I meant the quick-and-dirty data gathering to be more an indicative sample than a census. I'd therefore expect a significant margin of error (but not so significant as to change the bottom line). Other relevant candidate groups are also left out: BERI, Charity Science, Founder's Pledge, ?ALLFED. I'd expect there are more.

Comment author: Peter_Hurford (EA Profile) 04 May 2018 01:40:32AM * 21 points

I find it so interesting that people on the EA Facebook page have generally been a lot more critical of the content than people here on the EA Forum -- here it's all just typos and formatting issues.

I'll admit that I was one of the people who saw this here on the EA Forum first and was disappointed, but chose not to say anything out of a desire to not rock the boat. But now that I see others are concerned, I will echo my concerns too and magnify them here -- I don't feel like this handbook represents EA as I understand it.

By page count, AI is 45.7% of the entire causes section. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward that page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on the far future of humanity. I'd find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.

I feel like I get it. I recognize that CEA and 80K have a right to have strong opinions about cause prioritization. I also recognize that they've worked hard to become such a strong central pillar of EA as they have. I also recognize that a lot of people that CEA and 80K are familiar with agree with them. But now I can't personally help but feel like CEA is using their position of relative strength to essentially dominate the conversation and claim it is the community consensus.

I agree the definition of "EA" here is itself the area of concern. It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare. But naturally I'd be biased toward using these results, and I'm definitely sympathetic to the idea that EA should be considered more narrowly, or that we should weight the opinions of people working on it full-time more heavily. So I'm unsure. Even my opinions here are circular, by my own admission.

But I think if we're going to be claiming in a community space to talk about the community, we should be more thoughtful about whose opinions we're including and excluding. It seems pretty inexpensive to re-weight the handbook to emphasize AI risk just as much without being as clearly jarring about it (e.g., dedicating three chapters instead of one or slanting so clearly toward AI risk throughout the "reasons not to prioritize this cause" sections).

Based on this, and the general sentiment, I'd echo Scott Weather's comment on the Facebook group that it’s pretty disingenuous to represent CEA’s views as the views of the entire community writ large, however you want to define that. I agree, and would have preferred it be called “CEA’s Guide to Effective Altruism” or something similar.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM * 7 points

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter counts as an EA org (I looked at the last half-dozen or so newsletters, so any group which didn't have an update is excluded, but this is likely a minor omission). Toting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get roughly two thirds of people working at orgs which focus on the far future (66%), 22% at global poverty orgs, and 12% at animal orgs. Although it is hard to work out what proportion of the 'far future' work is specifically on AI, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full time' attention.
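
As a check on this arithmetic, here is a quick tally of the headcounts listed above (a sketch only; the staff counts and focus labels are simply those given in this comment):

```python
# Reproduce the rough far future / global poverty / animals split from the listed headcounts.
headcounts = {
    "Far future": [7, 15, 11, 10, 17, 5, 21, 11, 4, 6, 17],  # 80K, CEA, CSER, CFI, FHI, FRI, Open Phil, CFAR, REG, FLI, MIRI
    "Global poverty": [20, 11, 11],                           # GiveWell, Rethink Charity, TYLCS
    "Animals": [17, 3, 3],                                    # ACE, SI, WASR
}

total_staff = sum(sum(counts) for counts in headcounts.values())  # 189 staff in total
for focus, counts in headcounts.items():
    share = sum(counts) / total_staff
    print(f"{focus}: {sum(counts)} staff ({share:.0%})")
# Far future: 124 staff (66%); Global poverty: 42 staff (22%); Animals: 23 staff (12%)
```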

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content, or, if they should, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: Gregory_Lewis 05 May 2018 12:12:16AM 2 points

I think there is a fair consensus that providing oneself financial security is desirable before making altruistic efforts (charitable or otherwise) (cf. 80k).

I think the question of whether it is good to give later is more controversial. There is some existing discussion on this topic, usually under the heading of 'giving now versus giving later' (I particularly like Christiano's treatment). As Nelson says, there are social rates of return/haste considerations that favour earlier investment. I think received (albeit non-resilient) EA wisdom here is that the best opportunities at least give returns that outstrip typical market returns, and thus holding money to give later is only a competitive strategy if one has opportunities to greatly 'beat the market'.

Comment author: Gregory_Lewis 02 May 2018 06:10:23PM 4 points

Thanks for the even-handed explication of an interesting idea.

I appreciate the example you gave was meant more as illustration than proposal. I nonetheless wonder whether further examination of the underlying problem might lead to ideas more closely tailored to the limitations you note.

You note this set of challenges:

  1. Open Phil targets larger grantees
  2. EA funds/grants have limited evaluation capacity
  3. Peripheral EAs tend to channel funding to more central groups
  4. Core groups may have trouble evaluating people, which is often an important factor in whether to fund projects.

The result is that a good person (but one not known to the right people) with a good small idea is nonetheless left out in the cold.

I'm less sure about #2 - or rather, whether this is the key limitation. Max Dalton wrote this on one of the FB threads linked:

In the first round of EA Grants, we were somewhat limited by staff time and funding, but we were also limited by the number of projects we were excited about funding. For instance, time constraints were not the main limiting factor on the percentage of people we interviewed. We are currently hiring for a part-time grants evaluator to help us to run EA Grants this year[...]

FWIW (and non-resiliently), I don't look around and see lots of promising but funding-starved projects. More relevantly, I don't review recent history and find lots of cases of stuff rejected by major funders, then supported by more peripheral funders, that went on to do really exciting things.

If I'm wrong about that, then the idea here (in essence, crowd-sourcing evaluation to respected people in the community) could help. Yet it doesn't seem to address #3 or #4.

If most of the money (even from the community) ends up going through the 'core' funnel, then a competitive approach would be advocacy to these groups to change their strategy, instead of providing a parallel route and hoping funders will come.

More importantly, if funders generally want to 'find good people', the crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.

Per Michael, I'm not sure what this idea has over (say) posting a 'pitch' on this forum, doing a kickstarter, etc.

Comment author: Gregory_Lewis 24 April 2018 01:31:05AM 9 points

Very interesting. As you say, this data is naturally rough, but it also roughly agrees with my own available anecdata (my impression is somewhat more optimistic, although attenuated by likely biases). A note of caution:

The framing in the post generally implies value drift is essentially value decay (e.g. it is called a 'risk', and value drift is compared to unwanted weight gain, poor diet, etc.). If so, then value drift/decay should be something to guard against, and maybe precommitment strategies/'lashing oneself to the mast' seem a good idea, like how we might block social media, keep sweets out of the house, and so on.

I'd be slightly surprised if the account given by someone who 'drifted' would often fit well with the sort of thing you'd expect someone to say if (e.g.) they failed to give up smoking or lose weight. Taking the strongest example, I'd guess someone who dropped from 50% to 10ish% after marrying and starting a family would say something like, "I still think these EA things are important, but now I have other things I consider more morally important still (i.e. my spouse and my kids). So I need to allocate more of my efforts to these, and thus I can provide proportionately less to EA matters".

It is much less clear whether this person would think they've made a mistake in allocating more of themselves away from EA, either at t2-now (they don't regret they now have a family which takes their attention away from EA things), or at t1-past (if their previous EA-self could forecast them being in this situation, they would not be disappointed in themselves). If so, these would not be options that their t1-self should be trying to shut off, as (all things considered) the option might be on balance good.

I am sure there are cases where 'life gets in the way' in a manner it is reasonable to regret. But I would be chary if the only story we can tell for why someone becomes 'less EA' is essentially a greater or lesser degree of moral failure; disappointed if suspicion attaches to EAs starting a family or enjoying (conventional) professional success; and cautious about pre-commitment strategies which involve closing off or greatly hobbling aspects of one's future which would be seen as desirable by common-sense morality.

Comment author: tylermjohn 20 April 2018 08:43:29PM 0 points

Thanks, Gregory. It's valuable to have numbers on this, but I have some concerns about this argument and the spirit in which it is made:

1) Most arguments for x-risk reduction make the controversial assumption that the future is very positive in expectation. This argument makes the (to my mind even more) controversial assumption that an arbitrary life-year added to a presently-existing person is very positive, on average. While it might be that many relatively wealthy Euro-American EAs have life-years that are very positive, on average, it's highly questionable whether the average human has life-years that are on average positive at all, let alone very positive.

2) Many global catastrophic risks and extinction risks would affect not only humans but also many other sentient beings. Insofar as these x-risks are risks of the extinction of not only humans but also nonhuman animals, to make a determination of the person-affecting value of deterring x-risks we must sum the value of preventing human death with the value of preventing nonhuman death. On the widely held assumption that farmed animals and wild animals have bad lives on average, and given the population of tens of billions of presently existing farmed animals and 10^13-10^22 presently existing wild animals, the value of the extinction of presently living nonhuman beings would likely swamp the (supposedly) negative value of the extinction of presently existing human beings. Many of these animals would live a short period of time, sure, but their total life-years still vastly outnumber the remaining life-years of presently existing humans. Moreover, most people who accept a largely person-affecting axiology also think that it is bad when we cause people with miserable lives to exist. So on most person-affecting axiologies, we would also need to sum the disvalue of the existence of future farmed and wild animals with the person-affecting value of human extinction. This may make the person-affecting value of preventing extinction extremely negative in expectation.

3) I'm concerned about this result being touted as a finding of a "highly effective" cause. $9,600/life-year is vanishingly small in comparison to many poverty interventions, let alone animal welfare interventions (where ACE estimates that this much money could save 100k+ animals from factory farming). Why does $9,600/life-year suddenly make for a highly effective cause when we're talking about x-risk reduction, when it isn't highly effective when we're talking about other domains?

Comment author: Gregory_Lewis 20 April 2018 09:26:54PM * 3 points

1) Happiness levels seem to trend strongly positive, given things like the World Values Survey (in the most recent wave - 2014 - only Egypt had <50% of people reporting being either 'happy' or 'very happy', although in fairness there were a lot of poorer countries with missing data). The association between wealth and happiness is there, but pretty weak (e.g. Zimbabwe gets 80+%, Bulgaria 55%). Given this (and when you throw in implied preferences, and commonsensical intuitions whereby we don't wonder about whether we should jump in the pond to save the child because we're genuinely uncertain it is good for them to extend their life), it seems the average human takes themselves to have a life worth living. (q.v.)

2) My understanding from essays by Shulman and Tomasik is that even intensive factory farming plausibly leads to a net reduction in animal populations, given a greater reduction in wild animals due to habitat reduction. So if human extinction leads to another ~100M years of wildlife, this looks pretty bad by asymmetric views.

Of course, these estimates are highly non-resilient even with respect to sign. Yet the objective of the essay wasn't to show the result was robust to all reasonable moral considerations, but that the value of x-risk reduction isn't wholly ablated on a popular view of population ethics - somewhat akin to how GiveWell analyses of cash transfers don't try to factor in poor meat-eater considerations.

3) I neither 'tout' - nor even state - this is a finding that 'xrisk reduction is highly effective for person-affecting views'. Indeed, I say the opposite:

Although it seems unlikely x-risk reduction is the best buy from the lights of the [ed: typo - as context suggests, meant 'person-affecting'] total view (we should be suspicious if it were), given $13000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.

Comment author: AGB 16 April 2018 01:58:51AM * 3 points

I suspect that the motivation hacking you describe is significantly harder for researchers than for, say, operations, HR, software developers, etc. To take your language, I do not think that the cause area beliefs are generally 'prudentially useful' for these roles, whereas in research a large part of your job may consist in justifying, developing, and improving the accuracy of those exact beliefs.

Indeed, my gut says that most people who would be good fits for these many critical and under-staffed supporting roles don't need to have a particularly strong or well-reasoned opinion on which cause area is 'best' in order to do their job extremely well. At which point I expect factors like 'does the organisation need the particular skills I have', and even straightforward issues like geographical location, to dominate cause prioritisation.

I speculate that the only reason this fact hasn't permeated into these discussions is that many of the most active participants, including yourself and Denise, are in fact researchers or potential researchers and so naturally view the world through that lens.

Comment author: Gregory_Lewis 20 April 2018 12:17:35PM 0 points

I'd hesitate to extrapolate my experience across to operational roles for the reasons you say. That said, my impression was that operations folks place a similar emphasis on these things as I do. Tanya Singh (one of my colleagues) gave a talk on 'x-risk/EA ops'. From the Q&A (with apologies to Roxanne and Tanya for my poor transcription):

One common retort we get about people who are interested in operations is maybe they don't need to be value-aligned. Surely we can just hire someone who has operations skills but doesn't also buy into the cause. How true do you think this claim is?

I am by no means an expert, but I have a very strong opinion. I think it is extremely important to be values aligned to the cause, because in my narrow slice of personal experience that has led to me being happy, being content, and that's made a big difference as to how I approach work. I'm not sure you can be a crucial piece of a big puzzle or a tightly knit group if you don't buy into the values that everyone is trying to push towards. So I think it's very very important.
