Comment author: weeatquince  (EA Profile) 15 August 2018 11:50:28PM *  6 points

We would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

Here is my two cents. I hope it is constructive:


1.

The policy is excellent but the challenge lies in implementation.

Firstly I want to say that this post is fantastic. I think you have got the policy correct: CEA should be cause-impartial but not cause-agnostic, and CEA's work should be cause-general.

However, I do not think it looks, from the outside, like CEA is following this policy. Some examples:

  • EA London staff had concerns that they would need to be more focused on the far future in order to receive funding from CEA.

  • You explicitly say on your website: "We put most of our credence in a worldview that says what happens in the long-term future is most of what matters. We are therefore more optimistic about others who roughly share this worldview."[1]

  • The example you give of the new EA handbook

  • There is a close association with 80,000 Hours, which is explicitly focusing much of its effort on the far future.

These are all quite subtle things, but collectively they give an impression that CEA is not cause-impartial (that it is x-risk focused). Of course this is a difficult thing to get correct. It is difficult to say 'our staff members believe cause X is important' (a useful fact that should definitely be said) whilst also putting across a strong front of cause impartiality.


2.

Suggestion: CEA should actively champion cause impartiality

If you genuinely want to be cause-impartial, I think most of the solutions lie in being super vigilant about how CEA comes across. Eg:

  • Have a clear internal style guide that sets out for staff good and bad ways to talk about causes

  • Have 'cause impartiality' as a staff value

  • If you do an action that does not look cause-impartial (say EA Funds mostly grants money to far future causes), then acknowledge this openly and explain why it happened.

  • Public posts like this one setting out what CEA believes

  • If you want to do lots of "prescriptive" actions split them off into a sub project or a separate institution.

  • Apply the above retroactively (remove lines from your website that make it look like you are only future-focused)

Beyond that, if you really want to champion cause impartiality you may also consider extra things like:

  • More focus on cause prioritisation research.

  • Hiring people who value cause impartiality / cause prioritisation research / community building, above people who have strong views on what causes are important.


3.

Being representative is about making people feel listened to.

Your section on representativeness feels like you are trying to pin down a way of finding an exact number, so that you can say you have this many articles on topic X and this many on topic Y, and so on. I am not sure this is quite the correct framing.

Things like the EA handbook should (as a lower bound) mention enough of a diversity of causes that the broader EA community does not feel misrepresented, but (as an upper bound) not so much that CEA staff [2] feel like it is misrepresenting them. Anything within this range seems fine to me. (Eg. with the EA handbook, both groups should feel comfortable handing this book to a friend.) Although I do feel a bit like I have just typed 'just do the thing that makes everyone happy', which is easier said than done.

I also think that "representativeness" is not quite the right issue anyway. The important thing is that people in the EA community feel listened to and feel like what CEA is doing represents them. The % of content on different topics is only part of that. The other parts of the solution are:

  • Coming across like you listen: see the aforementioned points on championing cause impartiality. Also expressing uncertainty, mentioning that there are opposing views, giving two sides to a debate, etc.

  • Listening -- ie. consulting publicly (or with trusted parties) wherever possible.

If anything, getting these two things correct is more important than getting the exact percentage of your work to be representative.


Sam :-)


[1] https://www.centreforeffectivealtruism.org/a-three-factor-model-of-community-building

[2] Unless you have reason to think that there is a systematic bias in staff, eg if you actively hired people because of the cause they cared about.

Comment author: MarekDuda 08 August 2018 02:34:55PM 3 points

Thanks Sam!

Yes, it is. We are currently focussing on operational robustness, but after that we see no reason not to expand the offering to cover most of the organisations EAs give to.

Comment author: weeatquince  (EA Profile) 08 August 2018 11:04:27PM *  0 points

YAY <3

Comment author: weeatquince  (EA Profile) 08 August 2018 11:17:18AM 4 points

Marek, well done on all of your hard work on this.

Separately from the managed funds, I really like the work that CEA is doing to help money be moved around the world to other EA charities. I would love to see more organisations on the list of places that donations can be made through the EA Funds platform, eg REG, Animal Charity Evaluators or Rethink Charity. Is this in the works?

https://app.effectivealtruism.org/donations/new/organizations

Comment author: Geoff_Anders 03 August 2018 04:15:03PM *  23 points

Hi everyone,

I'd like to start off by apologizing. I realize that it has been hard to understand what Leverage has been doing, and I think that that's my fault. Last month Kerry Vaughan convinced me that I needed a new approach to PR and public engagement, and so I've been thinking about what to write and say about this. My plan, apart from the post here, was to post something over the next month. So I'll give a brief response to the points here and then make a longer main post early next week.

(1) I'm sorry for the lack of transparency and public engagement. We did a lot more of this in 2011-2012, but did not really succeed in getting people to understand us. After that, I decided we should just focus on our research. I think people expect more public engagement, even very early in the research process, and that I did not understand this.

(2) We do not consider ourselves an EA organization. We do not solicit funds from individual EAs. Instead, we are EA-friendly, in that (a) we employ many EAs, (b) we let people run EA projects, and (c) we contribute to EA causes, especially EA movement building. As noted in the post, we ran the EA Summit 2013 and EA Summit 2014. These were the precursors to the EA Global conferences. For a sense of what these were like, see the EA Summit 2013 video. We also ran the EA Retreat 2014 and helped out operationally with EA Global 2015. We also founded THINK, the first EA movement group network.

(3) We are probably not the correct donation choice for most EAs. We care about reason, evidence, and impact, but we are much higher variance than most EAs would like. We believe there is evidence that we are more effective than regular charities due to our contributions to movement building. These can be thought of as "impact offsets". (See (6) for more on the EV calculation.)

(4) We are also probably not the correct employment choice for most EAs. We are looking for people with particular skills and characteristics (e.g., ambition, dedication to reason and evidence, self-improvement). These make CFAR our largest EA competitor for talent, though in actual fact we have not ended up competing substantially with them. In general if people are excited about CEA or 80k or Charity Science or GiveWell or OPP, then we typically also expect that they are better served by working there.

(5) Despite (3) and (4), we are of course very interested in meeting EAs who would be good potential donors or recruits. We definitely recruit at EA events, though again we think that most EAs would be better served by working elsewhere.

(6) To do a full EV calculation on Leverage, it is important to take into account the counterfactual cost of employees who would work on other EA projects. We think that taking this into account, counting our research as 0 value, and using the movement building impact estimates from LEAN, we come out well on EV compared to an average charity. This is because of our contribution to EA movement building and because EA movement building is so valuable. (Rather than give a specific Fermi estimate, I will let readers make their own calculations.) Of course, under these assumptions donating to movement building alone is higher EV than donating to Leverage. Donors should only consider us if they assign greater than 0 value to our research.

I hope that that clarifies to some degree Leverage's relation to the EA movement. I'll respond to the specific points above later today.

As for the EA Summit 2018, we agree that everyone should talk with people they know before attending. This is true of any multi-day event. Time is valuable, and it's a good idea to get evidence of the value prior to attending.

(Leverage will not be officially presenting any content at the EA Summit 2018, so people who would like to learn more should contact us here. My own talk will be about how to plan ambitious projects.)

EDIT: I said in my earlier comment that I would write again this evening. I’ll just add a few things to my original post.

— Many of the things listed in the original post are simply good practice. Workshops should track participants to ensure the quality of their experience and that they are receiving value. CFAR also does this. Organizations engaged in recruitment should seek to proactively identify qualified candidates. I’ve spoken to the leaders of multiple organizations who do this.

— Part of what we do is help people to understand themselves better via introspection and psychological frameworks. Many people find this both interesting and useful. All of the mind mapping we did was with the full knowledge and consent of the person, at their request, typically with them watching and error-checking as we went. (I say “did” because we stopped making full mind maps in 2015.) This is just a normal part of showing people what we do. It makes sense for prospective recruits and donors to seek an in-depth look at our tools prior to becoming more involved. We also have strict privacy rules and do not share personal information from charting sessions without explicit permission from the person. This is true for everyone we work with, including prospective recruits and donors.

Comment author: weeatquince  (EA Profile) 05 August 2018 09:57:53PM *  23 points

counting our research as 0 value, and using the movement building impact estimates from LEAN, we come out well on EV compared to an average charity ... I will let readers make their own calculations

Hi Geoff. I gave this a little thought and I am not sure it works. In fact it looks quite plausible that someone's EV (expected value) calculation on Leverage might actually come out as negative (ie. Leverage would be causing harm to the world).

This is because:

  • Most EA orgs calculate their counterfactual expected value by taking into account what the people in that organisation would otherwise be doing if they were not in that organisation, and then deducting this from their impact. (I believe at least 80K, Charity Science and EA London do this.)

  • Given Leverage's tendency to hire ambitious altruistic people and to look for people at EA events it is plausible that a significant proportion of Leverage staff might well have ended up at other EA organisations.

  • There is a talent gap at other EA organisations (see 80K on this)

  • Leverage does spend some time on movement building, but I estimate that this is a tiny proportion of its time, <5%, best guess 3% (based on having talked to people at Leverage, and on comparing your achievements to date with the apparent 100 person-years figure)

  • Therefore, if the proportion of staff who could be expected to have found jobs in other EA organisations is thought to be above 3% (which seems reasonable), then Leverage is actually displacing EAs from productive action, so the total EV of Leverage is negative

Of course this is all assuming the value of your research is 0. This is the assumption you set out in your post. Obviously in practice I don’t think the value of your research is 0 and as such I think it is possible that the total EV of Leverage is positive*. I think more transparency would help here. Given that almost no research is available I do think it would be reasonable for someone who is not at Leverage to give your research an EV of close to 0 and therefore conclude that Leverage is causing harm.
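
To make the arithmetic concrete, here is a minimal sketch of this calculation in Python. The 100 person-years and the 3% movement building share are the figures mentioned above; the displacement fraction and the per-person-year values are hypothetical placeholders chosen for illustration, not estimates:

```python
# Counterfactual EV sketch for Leverage, valuing its research at 0 (the framing above).
# The displacement fraction and per-year values are hypothetical, for illustration only.

person_years = 100         # total person-years invested (figure discussed above)
mb_fraction = 0.03         # share of effort on movement building (my best guess above)
displaced_fraction = 0.10  # hypothetical: share of staff who would otherwise be at EA orgs

value_mb_year = 1.0        # value of one person-year of movement building (normalised)
value_ea_org_year = 1.0    # value of one person-year at another EA org (assumed equal)

benefit = person_years * mb_fraction * value_mb_year
cost = person_years * displaced_fraction * value_ea_org_year
net_ev = benefit - cost

print(f"Net EV: {net_ev:+.1f} person-year equivalents")  # -7.0 with these numbers
# The sign flips exactly when displaced_fraction exceeds mb_fraction
# (given equal value per person-year), which is the point made above.
```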

I hope this helps and maybe explains why Leverage gets a bad rep. I am excited to see more transparency and a new approach to public engagement. Keep on fighting for a better world!

*sentence edited to better match views

Comment author: weeatquince  (EA Profile) 04 August 2018 07:34:22AM 12 points

Hi Joey, thank you for writing this.

I think calling this a problem of representation actually understates the problem.

EA has (at least to me) always been a community that inspires, encourages and supports people to use all the information and tools available to them (including their individual priors, intuitions and sense of morality) to reach a conclusion about what causes and actions are most important for them to take to make a better world (and of course to then take those actions).

Even if 90% of experienced EAs / EA community leaders currently converge on the same conclusion as to where value lies, I would worry that a strong focus on that issue would be detrimental. We'd be at risk of losing the emphasis on cause prioritisation - arguably the most useful insight that EA has provided to the world.

  • We'd risk losing an ability to support people through cause prioritisation (coaching, EA or otherwise, should not pre-empt the answers or have ulterior motives)
  • We'd risk creating a community that is less able to switch to focus on the most important thing
  • We'd risk stifling useful debate
  • We'd risk creating a community that does not benefit from collaboration by people working in different areas
  • Etc.

(Note: It is probably worth adding that if 90% of experienced EAs / EA community leaders converged on the same conclusion on causes, my intuition would be that this is likely to be evidence of founder effects / group-think as much as evidence for that cause. I expect this is because I see a huge diversity in people's values and thinking, and a difficulty in reaching strong conclusions in ethics and cause prioritisation.)

In response to Open Thread #39
Comment author: LivBoeree 23 April 2018 12:41:10PM 6 points

Hi all, Liv here (REG co-founder). I've just joined the forum for the first time and don't have enough karma to post in the main thread yet, but hopefully someone very well-versed in climate change intervention rankings will see this:

I'm looking for feedback on the following prioritisation list http://www.drawdown.org/solutions-summary-by-rank

This list is being referenced by a potentially very high impact and well-intentioned individual I'm in conversation with, but IMO it contains a number of surprises and omissions. Does anyone have a more EA-vetted ranking of interventions they could direct me to? Feel free to PM me, thanks.

In response to comment by LivBoeree on Open Thread #39
Comment author: weeatquince  (EA Profile) 08 July 2018 10:18:18PM 0 points

Hi, a little late, but did you get an answer to this? I am not an expert but can direct this to people in EA London who can maybe help.

My very initial (non-expert) thinking was:

  • this looks like a very useful list of how to mitigate climate consequences through further investment in existing technologies.

  • this looks like a list written by a scientist, not a policy maker. Where do diplomatic interventions, such as "subsidise China to encourage them not to mine as much coal", fall on this list? I would expect subsidies to prevent coal mining to be effective.

  • "atmospheric carbon capture" is not on the list. My understanding is that "atmospheric carbon capture" may be a necessity for allowing us to mitigate climate change in the long run (by controlling CO2 levels) whereas everything else on this list is useful in the medium-short run none of these technologies are necessary.

Comment author: weeatquince  (EA Profile) 21 June 2018 11:23:30PM 3 points

Greg this is awesome - go you!!! :-D :-D

To provide one extra relevant reference class: I have let EAs stay for free / for donations at my place in London to work on EA projects, and on the whole I was very happy I did so. I think this is worthwhile and there is a need for it (with some caution as to both risky / harmful projects and well-intentioned free-riders).

Good luck registering as a CIO - not easy. Get in touch with me if you are having trouble with the Charity Commission. Note: you might need Trustees who are not going to live for free at the hotel (there are lots of rules against Trustees receiving any direct benefits from their charity).

Also, if you think it could be useful for there to be a single room in London for Hotel guests to use for, say, business or conference attendance, then get in touch.

Comment author: Ervin 04 April 2018 10:58:49PM 18 points

Looking at the EA Community Fund as an especially tractable example (due to the limited field of charities it could fund):

  • Since its launch in early 2017 it appears to have collected $289,968, and not regranted any of it until an $83k grant to EA Sweden currently in progress. I am basing this on https://app.effectivealtruism.org/funds/ea-community - it may not be precisely right.

  • On the one hand, it's good that some money is being disbursed. On the other hand, the only info we have is https://app.effectivealtruism.org/funds/ea-community/payouts/1EjFHdfk3GmIeIaqquWgQI . All we're told about the idea and why it was funded is that it's an "EA community building organization in Sweden" and Will MacAskill recommended Nick Beckstead fund it "on the basis of (i) Markus's track record in EA community building at Cambridge and in Sweden and (ii) a conversation he had with Markus." Putting it piquantly (and over-strongly I'm sure, for effect), this sounds concerningly like an old boys' network: Markus > Will > Nick. (For those who don't know, Will and Nick were both involved in creating CEA.) It might not be, but the paucity of information doesn't let us reassure ourselves that it's not.

  • With $200k still unallocated, one would hope that the larger and more reputable EA movement building projects out there would have been funded, or that we could at least see they had been diligently considered. I may be leaving some out, but these would at least include the non-CEA movement building charities: the EA Foundation (for their EA outreach projects), Rethink Charity and EA London. As best I could get an answer from Rethink Charity at http://effective-altruism.com/ea/1ld/announcing_rethink_priorities/dir?context=3 , this is not true in their case at least.

  • Meanwhile these charities can't make their case directly to movement building donors whose money has gone to the fund since its creation.

This is concerning, and sounds like it may have done harm.

Comment author: weeatquince  (EA Profile) 05 April 2018 11:46:57PM 4 points

For information: EA London has neither been funded by the EA Community Fund nor diligently considered for funding by the EA Community Fund.

In December EA London was told that the EA Community Fund was not directly funding local groups, as CEA would be doing that. (This seems to be happening; see: http://effective-altruism.com/ea/1l3/announcing_effective_altruism_community_building/)

Comment author: Halstead 23 March 2018 07:35:32PM *  1 point

I discuss this in the paper under the heading of 'unknown risks'. I tend to deflate their significance because SAI has natural analogues - volcanoes, which haven't set off said catastrophic spirals. The massive 1991 Pinatubo eruption reduced global temperatures by roughly 0.5 degrees. There is also already an enormous amount of tropospheric cooling due to industrial emissions of sulphur and other particulates. The effects of this could be very substantial - (from memory) at most cancelling out up to half of the total warming effect of all CO2 ever emitted. Due to concerns about air pollution, we are now reducing emissions of these tropospheric aerosols. This could have a very substantial warming effect.

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely). Estimates of the sensitivity of the climate to CO2 are also beset by model uncertainty. The main worry is the unprecedented warming effect from CO2 having unexpected runaway effects on the ecosystem. It is clear that SAI would allow us to reduce global temperatures and so would on average reduce the risk of heat-induced tipping points or runaway processes. Moreover, SAI is controllable on tight timescales - we get a response to our action within weeks - allowing us to respond if something weird starts happening as a result of GHGs or of SAI. The downside risk associated with model uncertainty about climate sensitivity to GHGs is much greater than that associated with the effects of SAI, in my opinion. SAI is insurance against this model uncertainty.

Comment author: weeatquince  (EA Profile) 25 March 2018 10:05:04AM *  0 points

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely)

Good point. Agreed. I had not considered this.

I tend to deflate their significance because SAI has natural analogues... volcanoes ... industrial emissions.

This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the full effects of SAI.

(Note: the LHC also had natural analogues in atmospheric cosmic rays; I believe this was accounted for in FHI's work on the matter.)

-

I think the main thing that model uncertainty suggests is that mitigation or less extreme forms of geoengineering should be prioritised much more.

Comment author: weeatquince  (EA Profile) 23 March 2018 06:54:35PM 6 points

Hi, can you give an example or two of an "announcement of a personal nature"? I cannot think of any posts I have seen that would fall into that category at any point.

Cheers
