Comment author: weeatquince  (EA Profile) 21 June 2018 11:23:30PM 0 points [-]

Greg this is awesome - go you!!! :-D :-D

To provide one extra relevant reference class: I have let EAs stay for free / donations at my place in London to work on EA projects and on the whole was very happy I did so. I think this is worthwhile and there is a need for it (with some caution as to both risky / harmful projects and well intentioned free-riders).

Good luck registering as a CIO - not easy. Get in touch with me if you are having trouble with the Charity Commission. Note: you might need Trustees who are not going to live for free at the hotel (there are lots of rules against Trustees receiving any direct benefits from their charity).

Also if you think it could be useful for there to be a single room in London for Hotel guests to use for say business or conference attendance then get in touch.

Comment author: Ervin 04 April 2018 10:58:49PM 18 points [-]

Looking at the EA Community Fund as an especially tractable example (due to the limited field of charities it could fund):

  • Since its launch in early 2017 it appears to have collected $289,968, and not to have regranted any of it until an $83k grant to EA Sweden currently in progress. I am basing this on - it may not be precisely right.

  • On the one hand, it's good that some money is being disbursed. On the other hand, the only info we have is . All we're told about the idea and why it was funded is that it's an "EA community building organization in Sweden" and Will MacAskill recommended Nick Beckstead fund it "on the basis of (i) Markus's track record in EA community building at Cambridge and in Sweden and (ii) a conversation he had with Markus." Putting it piquantly (and over-strongly I'm sure, for effect), this sounds concerningly like an old boys' network: Markus > Will > Nick. (For those who don't know, Will and Nick were both involved in creating CEA.) It might not be, but the paucity of information doesn't let us reassure ourselves that it's not.

  • With $200k still unallocated, one would hope that the larger and more reputable EA movement-building projects out there would have been funded, or that we could at least see that they had been diligently considered. I may be leaving some out, but these would at least include the non-CEA movement-building charities: EA Foundation (for their EA outreach projects), Rethink Charity and EA London. As best I could get an answer from Rethink Charity at , this is not true in their case at least.

  • Meanwhile these charities can't make their case directly to movement-building donors whose money has gone to the fund since its creation.

This is concerning, and sounds like it may have done harm.

Comment author: weeatquince  (EA Profile) 05 April 2018 11:46:57PM 4 points [-]

For information. EA London has neither been funded by the EA Community Fund nor diligently considered for funding by the EA Community Fund.

In December EA London was told that the EA Community Fund was not directly funding local groups, as CEA would be doing that. (This seems to be happening, see: )

Comment author: Halstead 23 March 2018 07:35:32PM *  1 point [-]

I discuss this in the paper under the heading of 'unknown risks'. I tend to deflate their significance because SAI has natural analogues - volcanoes - which haven't set off such catastrophic spirals. The massive 1991 Pinatubo eruption reduced global temperatures by roughly 0.5 degrees. There is also already an enormous amount of tropospheric cooling due to industrial emissions of sulphur and other particulates. The effects of this could be very substantial - (from memory) at most cancelling out up to half of the total warming effect of all CO2 ever emitted. Due to concerns about air pollution, we are now reducing emissions of these tropospheric aerosols. This could have a very substantial warming effect.

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely). Estimates of the sensitivity of the climate to CO2 are also beset by model uncertainty. The main worry is the unprecedented warming effect from CO2 having unexpected runaway effects on the ecosystem. It is clear that SAI would allow us to reduce global temperatures and so would on average reduce the risk of heat-induced tipping points or runaway processes. Moreover, SAI is controllable on tight timescales - we get a response to our action within weeks - allowing us to respond if something weird starts happening as a result of GHGs or of SAI. The downside risk associated with model uncertainty about climate sensitivity to GHGs is much greater than that associated with the effects of SAI, in my opinion. SAI is insurance against this model uncertainty.

Comment author: weeatquince  (EA Profile) 25 March 2018 10:05:04AM *  0 points [-]

"Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely)"

Good point. Agreed. I had not considered this.

"I tend to deflate their significance because SAI has natural analogues... volcanoes ... industrial emissions."

This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the full effects of SAI.

(Note: the LHC also had natural analogues in atmospheric cosmic rays; I believe this was accounted for in FHI's work on the matter.)


I think the main thing that model uncertainty suggests is that mitigation or less extreme forms of geoengineering should be prioritised much more.

Comment author: weeatquince  (EA Profile) 23 March 2018 06:54:35PM 6 points [-]

Hi, can you give an example or two of an "announcement of a personal nature"? I don't think I have seen any posts that would fall into that category at any point.


Comment author: weeatquince  (EA Profile) 23 March 2018 06:50:45PM 1 point [-]

My very limited understanding of this topic is that climate models, especially of unusual phenomena, are highly uncertain, and therefore there is some chance that our models are incorrect. This means that SAI could go horribly wrong, not have the intended effects, or make the climate spin out of control in some catastrophic way.

The chance of this might be small but if you are worried about existential risks it should definitely be considered. (In fact I thought this was the main x-risk associated with SAI and similar grand geo-engineering exercises).

I admit I have not read your article (only this post) but I was surprised this was not mentioned and I wanted to flag the matter.

For a similar case see the work of FHI researchers Toby Ord and Anders Sandberg on the risks of the Large Hadron Collider (LHC) here: and I am reasonably sure that SAI models are a lot more uncertain than the LHC physics.

Comment author: weeatquince  (EA Profile) 23 March 2018 06:36:21PM 4 points [-]

In general I would be very wary of taking definitions written for an academic philosophical audience and relying on them in other situations. Often the use of technical language by philosophers does not carry over well to other contexts.

The definitions and explanations used here: and here: are, in my mind, better and more useful than the quote above for almost any situation I have been in to date.

Additional evidence for the above: for example, I have a very vague memory of talking to Will about this and concluding that he had a slightly odd and quite broad definition of "welfarist", where "welfare" in this context just meant 'good for others' without any implications of fulfilling happiness / utility / preference / etc. This comes out in the linked paper, in the line "if we want to claim that one course of action is, as far as we know, the most effective way of increasing the welfare of all, we simply cannot avoid making philosophical assumptions. How should we value improving quality of life compared to saving lives? How should we value alleviating non-human animal suffering compared to alleviating human suffering? How should we value mitigating risks ...." etc.

Comment author: weeatquince  (EA Profile) 11 March 2018 11:06:31PM *  2 points [-]

This sounds like a really good project. You clearly have a decent understanding of the local political issues, a clear idea of how this project can map to other countries and prove beneficial globally, and a good understanding of how this plays a role in the wider EA community (I think it is good that this project is not branded as 'EA').

Here are a number of hopefully constructive thoughts to help you fine-tune this work. These may be things you thought about that did not make it into the post. I hope they help.

1.
As far as I can tell the CCC seems not to care much about scenarios with a small chance of a very high impact, whereas on the whole the EA community does care about these scenarios. My evidence for this comes from the EA community's concern for the extreme risks of climate change and for x-risks, whereas the CCC work on climate change that I have seen seems to have ignored these extreme risks. I am unsure why there is this discrepancy. (Many EA researchers do not use a future discount rate for utility; does the CCC?)

This could be problematic in terms of the cause prioritisation research being useful to EAs, building a relationship between this project and EA advocacy work, EA funding, etc.

2.
Sometimes the most important priorities will not be the ones that the public will latch onto. It is unclear from the post:

2.1 how you intend to find a balance between delivering the messages that are most likely to create change versus saying the things you most believe to be true; and

2.2 how the advocacy part of this work might differ from the work the CCC has done in the past. My understanding is that to date the CCC has mostly tried to deliver true messages to an audience of international policymakers. Your post, however, points to public sentiment as a key driving factor for change. The advocacy methods and expertise used in the CCC's international work are not obviously the best methods for this work.

3.
For a prioritisation research piece like this I could imagine the researcher might dive straight into looking at the existing issues on the political agenda and prioritising between those based on some form of social rate of return. However I think there are a lot of very high-level questions that could be asked first, like:

  • Is it more important to prevent the government making really bad decisions in some areas, or to improve the quality of the good decisions?

  • Is it more important to improve policy, or to prevent a shift to harmful authoritarianism?

  • How important is it to set policy that future political trends will not undo?

  • How important is the acceptability of the suggested policy among policymakers and the public?

Are these covered in the research?

Also, to what extent will the research look at improving institutional decision-making? To be honest I would genuinely not be surprised if the conclusion of this project was that the most high-impact policies were those designed to improve the functioning / decision-making / checks and balances of the government. If you can cut corruption and change how government works for the better then the government will get more policies correct across the board in future. Is this your intuition too?


Finally, I would be interested in being kept up to date with this project as it progresses. Is there a good way to do this? Looking forward to hearing more.

Comment author: Lukas_Gloor 25 February 2018 12:55:52PM 3 points [-]

EA London estimated that with its first year of a paid staff member it had about 50% of the impact of a more established EA organisation such as GWWC or 80K per £ invested.

Are they mostly counting impact on Givewell-recommended charities? I'd imagine that for donors who are mostly interested in the long-term cause area, there'd be a perceived large difference between GWWC and 80k, which is why this sounds like a weird reference class to me. (Though maybe the difference is not huge because GWWC has become more cause neutral over the years?)

Comment author: weeatquince  (EA Profile) 11 March 2018 10:21:09PM *  1 point [-]

EA London estimated counterfactual "large behaviour changes" taken by community members. This includes taking the GWWC pledge and large career shifts (although a change to future career plans probably wouldn't cut it).

Comment author: Evan_Gaensbauer 06 March 2018 05:19:59PM 3 points [-]

It's my impression that most policy efforts coming out of EA in most countries are from experienced, professional organizations which work with or hire policy experts. The Centre for Effective Altruism (CEA) has worked with university institutes at Cambridge and Oxford to produce policy reports on global catastrophic risks for European governments. The Effective Altruism Foundation (EAF) has done policy advocacy in Germany and Switzerland, initiated by philosophy post-docs and the like. Before their involvement in EA they weren't particularly experienced in policy, but their efforts haven't backfired in any sense. I haven't tracked what portion of their campaigns succeeded at the ballot box, but being able to start things like referendums on animal rights/welfare without opposition or backlash from the public could be considered a success in itself.

There isn't centralization across the worldwide EA community for work in the policy sector, so technically a group in some country could start doing policy work in the name of EA without any kind of external assessment. So a culture of pursuing policy work much more cautiously is definitely still worth promoting within EA. I notice the examples I gave were about causes like animal advocacy and global catastrophic risks, compared to your example of international development. My examples are from sectors which aren't already as common in academia and policy, so the EA community has been able to break a lot of new ground in policy research and advocacy on these causes. Fields like international development and others with a history of more extensive institutional support are more complicated; they require more specialization and expertise to work on effectively.

Comment author: weeatquince  (EA Profile) 09 March 2018 03:24:45PM 4 points [-]

My point was not aimed at policy interventions specifically. I think more broadly there is too often an attitude of arrogance among EAs who think that because they can do cause prioritisation better than their peers they can also solve difficult problems better than experts in those fields. (I know I have been guilty of this at points.)


In policy, I agree with you that EA policy projects fall across a large spectrum from highly professional to poorly thought-out.

That said, I think that even at the better end of the spectrum there is a lack of professional lobbyists employed by EA organisations and more of a do-it-ourselves attitude. EA orgs often prefer to hire enthusiastic EAs rather than expensive experts (which may be a totally legitimate approach; I have no strong view on the matter).

Comment author: arikr 02 March 2018 06:10:34PM 1 point [-]

You note that CEA and 80K don't seem to be struggling for funds.

What makes you say that? (Not saying I don't agree, just am unsure)

Comment author: weeatquince  (EA Profile) 09 March 2018 02:57:56PM 2 points [-]

Unfortunately I do not have a single easily quotable source for this. Furthermore it is not always clear-cut: funding needs change with time, and additional funding might mean an ability to start extra projects (like EA Grants). However, unlike for Rethink Charity or Charity Science Health, there is not a clear project that I can point to that will not get funded if CEA or 80K do not get more funding this year.

If you are donating in the region of £10k+ and are concerned that the larger EA orgs have less need for funding, I would say get in touch with them. They are generally happy to talk to donors in person and give more detailed answers (and my comment on this matter has been shaped by talking to people who have done this).
