Comment author: Gregory_Lewis 02 May 2018 06:10:23PM 4 points [-]

Thanks for the even-handed explication of an interesting idea.

I appreciate the example you gave was more meant as illustration than proposal. I nonetheless wonder whether further examination of the underlying problem might lead to ideas drawn tighter to the proposed limitations.

You note this set of challenges:

  1. Open Phil targets larger grantees
  2. EA funds/grants have limited evaluation capacity
  3. Peripheral EAs tend to channel funding to more central groups
  4. Core groups may have trouble evaluating people, which is often an important factor in whether to fund projects.

The result is a good person (but not known to the right people) with a good small idea is nonetheless left out in the cold.

I'm less sure about #2 - or rather, whether this is the key limitation. Max Dalton wrote on one of the linked FB threads:

In the first round of EA Grants, we were somewhat limited by staff time and funding, but we were also limited by the number of projects we were excited about funding. For instance, time constraints were not the main limiting factor on the percentage of people we interviewed. We are currently hiring for a part-time grants evaluator to help us to run EA Grants this year[...]

FWIW (and non-resiliently), I don't look around and see lots of promising but funding-starved projects. More relevantly, I don't review recent history and find lots of cases where projects rejected by major funders and then supported by more peripheral funders have gone on to do really exciting things.

If I'm wrong about that, then the idea here (in essence, crowd-sourcing evaluation to respected people in the community) could help. Yet it doesn't seem to address #3 or #4.

If most of the money (even from the community) ends up going through the 'core' funnel, then a competing approach would be to advocate for these groups to change their strategy, rather than to provide a parallel route and hope funders will come.

More importantly, if funders generally want to 'find good people', the crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.

Per Michael, I'm not sure what this idea has over (say) posting a 'pitch' on this forum, running a Kickstarter, etc.

Comment author: AGB 03 May 2018 12:49:25PM *  2 points [-]

Some way of distributing money to risky ventures, including fundraising, in global poverty and animal welfare should probably exist.

I think it's pretty reasonable if CEA doesn't want to do this because (a) they take a longtermist view and (b) they have limited staff capacity so aren't willing to divert many resources from (a) to anything else. In fact, given CEA's stated views it would be a bit strange if they acted otherwise. I know less about Nick, but I'm guessing the story there is similar.

https://www.centreforeffectivealtruism.org/ceas-current-thinking/

I have a limited sense for what to do about this problem, and I don't know if the solution in the OP is actually a good idea, but recognising the disconnect between what people want and what we have is a start.

I may write more about this in the near future.

Comment author: Gregory_Lewis 12 April 2018 07:19:30AM 7 points [-]

Bravo!

FWIW I am one of the people doing something similar to what you advocate: I work in biorisk for comparative advantage reasons, although I think AI risk is a bigger deal.

That said, this sort of trading might be easier within broad cause areas than between them. My impression is that the received wisdom among far-future EAs is that AI and bio are both 'big deals': AI might be (even more) important, yet bio (even more) neglected. For this reason, even though I suspect most (myself included) would recommend a 'pluripotent far future EA' look into AI first, it wouldn't take much to tilt the scales the other way (e.g. disposition, comparative advantage, and other things you cite). It also means individuals may not suffer a motivation hit if they are merely doing a very good thing rather than the very best thing by their lights. I think a similar thing applies to the means used to further a particular cause (whether to strike out on one's own versus look for a role in an existing group, operations versus research, etc.).

When the issue is between cause areas, one needs to grapple with decisive considerations that open chasms which are hard to cross with talent arbitrage. In the far future case, the usual story around astronomical waste etc. implies (pace Tomasik) that work on the far future is hugely more valuable than work in another cause area like animal welfare. Thus even if one is comparatively advantaged in animal welfare, one may still think one's marginal effect is much greater in the far future cause area.

As you say, this could still be fertile ground for moral trade, and I also worry about more cynical reasons why this hasn't happened (cf. fairly limited donation trading so far). Nonetheless, I'd like to offer a few less cynical reasons that attract the balance of my credence.

As you say, although Allison and Bettina should think, "This is great, by doing this I get to have a better version of me do work on the cause I think is most important!", they might mutually recognise that their cognitive foibles mean they will struggle to stay committed to a cause they both consider objectively less important, and this factor might outweigh their comparative advantage.

It also may be that developing considerable sympathy for a cause area is not enough. Both within and outside EA, I generally salute well-intentioned efforts to make the world better: I wish folks working on animal welfare, global poverty, or (developed world) public health every success. Yet when I was doing the latter, despite finding it intrinsically valuable, I struggled considerably with motivation. I imagine the same would apply if I traded places with an 'animal-EA' for comparative advantage reasons.

It would have been (prudentially) better if I could have 'hacked' my beliefs to find this work more intrinsically valuable. Yet people are (rightly) chary of trying to hack prudentially useful beliefs (cf. Pascal's wager, where Pascal anticipated the 'I can't just change my belief in God' point and recommended atheists go to church and do other things which would encourage religious faith to take root), given that the hacking may spill over into other domains where they take epistemic accuracy to be very important. If cause area decisions mostly rely on these (which I hope they do), there may not be much opportunity to hack away this motivational bracken and provide fertile ground for moral trade. 'Attitude hacking' (e.g. I really like research, but I'd be better at ops, so I try to make myself more motivated by operations work) lacks this downside, and so looks much more promising.

Further, a better ex ante strategy across the EA community might be not to settle for moral trade, but instead to discuss the merits of the different cause areas. Allison and Bettina each take the balance of reason to be on their side, and so might hope that either a) they persuade their counterpart to join them, or b) they realise they are mistaken and so migrate to something more important. Perhaps this implies an idealistic view of how likely people are to change their minds about these matters. Yet the track record of quite a lot of people changing their minds about which cause areas are most important (I am one example) gives some cause for hope.

Comment author: AGB 16 April 2018 02:01:56AM 1 point [-]

I agree with your last paragraph, but indeed think that you are being unreasonably idealistic :)

Comment author: AGB 16 April 2018 01:58:51AM *  3 points [-]

I suspect that the motivation hacking you describe is significantly harder for researchers than for, say, operations, HR, software developers, etc. To take your language, I do not think that cause area beliefs are generally 'prudentially useful' for these roles, whereas in research a large part of your job may consist of justifying, developing, and improving the accuracy of those exact beliefs.

Indeed, my gut says that most people who would be good fits for these many critical and under-staffed supporting roles don't need to have a particularly strong or well-reasoned opinion on which cause area is 'best' in order to do their job extremely well. At which point I expect factors like 'does the organisation need the particular skills I have', and even straightforward issues like geographical location, to dominate cause prioritisation.

I speculate that the only reason this fact hasn't permeated into these discussions is that many of the most active participants, including yourself and Denise, are in fact researchers or potential researchers and so naturally view the world through that lens.

Comment author: Ben_Todd 27 March 2018 04:35:31AM 2 points [-]

Surely rent in SF is much higher than in Oxford on average? It's possible to get a great place in Oxford for under £700 per month, while a comparable place in SF would be $1,300+. Food also seems about 30% more expensive, and in Oxford you don't have to pay for a commute. My overall guess is that $80k p.a. in SF is equivalent to about £40k p.a. in Oxford.
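
For illustration, here's one way such component estimates might be combined into an overall figure. This is only a sketch in Python: the rent and food numbers come from the comment above, while the exchange rate, budget shares, and 'other spending' ratio are assumptions picked purely for the example.

    # Rough sketch: combining component cost estimates into an overall
    # SF-vs-Oxford parity ratio (dollars needed per pound of Oxford spending).
    # Rent figures and the ~30% food premium are from the comment above;
    # the exchange rate, budget shares, and "other" ratio are assumptions.

    usd_per_gbp = 1.40                  # assumed illustrative exchange rate

    rent_ratio = 1300 / 700             # $1,300+ in SF vs GBP 700 in Oxford
    food_ratio = 1.30 * usd_per_gbp     # ~30% dearer in real terms, in $/GBP
    other_ratio = 1.00 * usd_per_gbp    # assume other spending costs the same

    shares = {"rent": 0.40, "food": 0.20, "other": 0.40}  # assumed budget shares

    parity = (shares["rent"] * rent_ratio
              + shares["food"] * food_ratio
              + shares["other"] * other_ratio)

    print(f"Implied parity: ${parity:.2f} in SF per GBP 1 in Oxford")

With these particular assumptions the weighted ratio comes out around 1.7:1; heavier rent weightings or adding commuting costs would push it towards the 2:1 implied by the $80k/£40k guess.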

Comment author: AGB 27 March 2018 05:39:24PM 0 points [-]

To chime in as someone who has very recently spent a lot of time in both London and SF, a 1.8:1 ratio (as in $1.8y is about the same as £y) is very roughly what I would have said for living costs between that pair, though living circumstances will vary significantly.

Pound-to-dollar exchange rates have moved a ton in the last few years, whereas I don't think local salaries or costs of living have moved nearly as much, so I expect that 1.8:1 heuristic to be more stable/useful than trying to do the same comparison including a currency conversion (depending on what point in the last few years you picked/moved, that ratio would imply anywhere between a 1.05x increase and a 1.55x increase).
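
A minimal sketch of the arithmetic behind that 1.05x-1.55x range, assuming the 1.8:1 parity ratio from the comment and a few approximate GBP/USD levels from the preceding years (the specific rates are my assumptions, not figures from the comment):

    # If $1.8y in SF buys the same standard of living as GBP y in London,
    # the nominal "raise" implied by a move depends on the exchange rate
    # used to compare the two salaries.
    # Exchange rates below are approximate GBP/USD levels, 2014-2017.

    PARITY_RATIO = 1.8  # dollars per pound for an equivalent standard of living

    sample_rates = {
        "mid-2014 (near the peak)": 1.71,
        "early 2016": 1.45,
        "late 2016 / early 2017 (near the trough)": 1.16,
    }

    for label, usd_per_gbp in sample_rates.items():
        implied_increase = PARITY_RATIO / usd_per_gbp
        print(f"{label}: GBP/USD {usd_per_gbp:.2f} -> "
              f"implied nominal increase {implied_increase:.2f}x")

Converting at the 2014 peak makes the move look like barely a raise (about 1.05x), while converting near the post-referendum trough makes it look like roughly 1.55x, which is the instability being pointed at.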

Comment author: Robin_Green 12 November 2017 10:19:24PM 2 points [-]

I find your comment slightly confusing, as it suggests - even on the most charitable reading of your comment I can muster - that if a sex partner is not enthusiastic, the sex must be ipso facto rape. Where does this leave men who start having sex and then lose their enthusiasm for whatever reason, whether physical or psychological hangups, I wonder... or does your definition of rape only apply to the woman's enthusiasm?

Comment author: AGB 13 November 2017 06:55:06AM 3 points [-]

(Disclaimer: I am Denise’s partner, have discussed this with her before, and so it’s unsurprising if I naturally interpreted her comment differently.)

Enthusiasm != consent. I’m not sure where enthusiasm made it into your charitable reading.

Denise’s comment was deliberately non-gendered, and we would both guess (though without data) that once you move to the fuzzy ‘insufficient evidence of consent’ section of the spectrum there will be lots of women doing this, possibly even accounting for the majority of such cases in some environments.

Comment author: Habryka 26 October 2017 09:40:06PM *  34 points [-]

As a general note for the discussion: Given the current incentive landscape in the parts of society most EAs are part of, I expect opposition to this post to be strongly underrepresented in the comment section.

As a datapoint, I have many disagreements with this article, but based on negative experiences with similar discussions, I do not want to participate in a longer discussion around it. I don't think there is an easy fix for this, but it seems reasonable for people reading the comments to be aware that they might be getting a very selective set of opinions.

Comment author: AGB 29 October 2017 03:34:39PM 5 points [-]

So as a general principle, it's true that discussion of an issue filters out (underrepresents) people who find or have found the discussion itself unpleasant*. In this particular case I think that somewhat cuts both ways, since these discussions as they take place in wider society often aren't very pleasant in general, for either side. See this comic.

To put it more plainly, I could easily name a lot of people who will strongly agree with this post but won't comment for fear of criticism and/or backlash. Like you I don't think there is an easy fix for this.

*Ironically, this is part of what Kelly is driving at when she says that championing free speech can sometimes inhibit it.

Comment author: MichaelPlant 28 October 2017 12:54:40AM *  6 points [-]

So many different boxes to reply to! I'll do one reply for everything here.

My main reflection is that either 1. I really haven't personally had much discussion of inclusivity in my time in the EA movement (and this may just be an outlier/coincidence) or 2. I'm just much more receptive to this sort of chat than the average EA. I live among Oxford students and this probably gives me a different reference point (e.g. people do sometimes introduce themselves with their pronouns here). I forget how disconcertingly social justice-y I found the University when I first moved here.

Either way, the effect is that I really haven't felt like I've had too many discussions in EA about diversity. It's not like it's my favourite topic or anything.

Comment author: AGB 29 October 2017 03:14:11PM 0 points [-]

Either way, the effect is that I really haven't felt like I've had too many discussions in EA about diversity. It's not like it's my favourite topic or anything.

It's extremely hard to generalize here because different geographies have such different stories to tell, but my personal take is that the level of (public) discussion about diversity within EA has dipped somewhat over time.

When I wrote the Pandora's Box post 2.5 years ago, I remember being sincerely worried that low-quality discussion of the issue would swamp a lot of good things that EA was accomplishing, and I wanted to build some consensus before that got out of hand. I can't really imagine feeling that way now.

Comment author: BenHoffman 06 May 2017 08:28:31PM *  1 point [-]

There was a recent post by 80,000 hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.

Comment author: AGB 07 May 2017 11:42:00AM 2 points [-]

I found the post; I was struggling before because it's actually part of their career guide rather than a blog post.

Comment author: BenHoffman 01 May 2017 11:56:50PM *  5 points [-]

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

If you click "Donate Effectively," you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I've said above is a good idea but a very large leap from the anti-Playpump pitch. "Trust friendly, sensible-seeming agents and empower them to do what they think is sensible" is a very, very different method than "check everything because it's easy to spend money on nice-sounding things of no value."

The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I've been advised that this is an old page pending an update). The GWWC Facebook page seems like it's mostly global poverty stuff, and some promotion of other CEA brands.

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

Comment author: AGB 06 May 2017 01:55:28PM *  1 point [-]

Thanks for digging up those examples.

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

I think 'many methods of doing good fail' has wide applications outside of Global Poverty, but I acknowledge the wider point you're making.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

This is a problem I definitely worry about. There was a recent post by 80,000 hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

This is a true dynamic, but to be specific about one of the examples I had in mind: A little before your post was written I was helping someone craft a general 'intro to EA' that they would give at a local event, and we both agreed to make the heterogeneous nature of the movement central to the mini speech without even discussing it. The discussion we had was more about 'which causes and which methods of doing good should we list given limited time', rather than 'which cause/method would provide the most generically effective pitch'.

We didn't want to do the latter for the reason I already gave; coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the 'core' EAs in the room that was a very real risk.

Comment author: Fluttershy 23 April 2017 06:44:56PM 1 point [-]

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation.

I'd say this is correct. The EA Forum itself has such a selection effect, though it's weaker than the ones either of our friend groups have. One idea would be to do a survey, as Peter suggests, though this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less or feel less strongly about it equally with the opinions of others. A relevant factor here is that it sometimes takes people a fair bit of reading or reflection to develop a sense for why integrity is particularly valuable from a consequentialist's perspective, and then to link this up to why EA Funds continuing to operate shows people that projects which are reported on and marketed with relatively lower-integrity methods can succeed despite (or even because of?) this.

I'd also agree that, at the time of Will's post, it would have been incorrect to say:

The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

Comment author: AGB 30 April 2017 12:22:24PM 0 points [-]

(Sorry for the slower response, your last paragraph gave me pause and I wanted to think about it. I still don't feel like I have a satisfactory handle on it, but also feel I should reply at this point.)

this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less or feel less strongly about it equally with the opinions of others.

This makes total sense to me, and I do currently perceive something of an inverse correlation between how hard people have thought about the funds and how positively they feel about them. I agree this is a cause for concern. The way I would describe that situation from your perspective is not 'the funds have not been well-received', but rather 'the funds have been well-received but only because too many (most?) people are analysing the idea in a superficial way'. Maybe that is what you were aiming for originally and I just didn't read it that way.

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

True. That post was only a couple of months before this one though; not a lot of time for new data/arguments to emerge or opinions to change. The only major new data point I can think of since then is the funds raising ~$1m, which I think is mostly orthogonal to what we are discussing. I'm curious whether you personally perceive a change (drop) in popularity in your circles?

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

This story sounds plausibly true. It's a difficult one to falsify though (I could flip all the language and get something that also sounds plausibly true), so turning it over in my head for the past few days I'm still not sure how much weight to put on it.
