Comment author: arrowind 22 October 2014 09:24:07PM *  9 points [-]

I remember that GWWC management asked us this in the pledgers' Facebook group, and that a lot of us expressed unhappiness about it, saying that it'd be a big rebranding, change the organisation from the one they joined, be unproductively vague, and we'd "perceive it as a big loss", etc. So I'm a bit surprised and disappointed to see the apparent determination to push this through regardless of our wishes. (I apologise if I'm wrong to perceive this, and there's a chance that GWWC will stay focused on the global poor.)

Comment author: Toby_Ord 23 October 2014 10:41:15AM 9 points [-]

I wouldn't see this as 'determination to push this through'. It is very much still in the information gathering stage.

Comment author: Phil_Thomas 23 October 2014 01:44:43AM 1 point [-]

Yeah, reading some of the other comments on here leads me to think I might have a misconception of what EA includes, or how others define EA. This may be because I come from the global health/ development economics side of things. I wasn't really sure what LessWrong or MIRI were for a long time, or how they related to the EA community. It might be pretty common for people to have an incomplete picture of what EA is depending on the intellectual route they took to get here. So even if the movement is more broad and inclusive, the public perception of the movement may turn people away.

Also, when I say "missing the bigger picture" I am not solely referring to x-risk, but also to approaches to global health and poverty reduction like R&D for infectious disease, infrastructure development, improving the business environment and generally trying to work around the edges of an economy to address market failures. It seems to me that there was a gap within the EA community for these types of solutions before the OPP, unless you include J-PAL and IPA.

Some of what I'm saying may not be specifically relevant to the above GWWC wording change, but reflect broader changes in the EA movement (or my understanding of it) that I am happy to see.

Comment author: Toby_Ord 23 October 2014 10:39:13AM 4 points [-]

Note that you could certainly include contributions to R&D for infectious diseases as part of the existing GWWC pledge. GWWC doesn't have any recommendations in that area, but we certainly see it as a plausibly very effective way of helping. The same is presumably true of your other examples. Anything J-PAL or IPA promote as effective is probably well worth looking into. I personally donate to both J-PAL and IPA themselves.

Comment author: pappubahry 23 October 2014 05:24:18AM 5 points [-]

I'm not a GWWC member, because I don't want to lock myself in to a pledge. (I've been comfortably over 10% for a few years, and expect that to continue, but I could imagine, e.g., needing expensive medical care in the future and cutting out my donations to pay for that.) For that reason I wouldn't take the pledge in either its current or its proposed form.

Comment author: Toby_Ord 23 October 2014 10:34:10AM 6 points [-]

I don't think this need stop you from taking the pledge. We think of it like making a promise to do something. It is perfectly reasonable to promise to do something (say to pick up a friend's children from school) even if there is a chance you will have to pull out (e.g. if you got sick). We don't usually think of small foreseeable chances of having to pull out as a reason not to make promises, so I wouldn't worry about that here. I think this is mentioned on our FAQ page -- if not, it should be.

Another approach is to make sure you have enough health insurance (possibly supplementing your country's public insurance, though I don't think that is needed in the UK), and maybe to get income insurance too. It should be possible to have enough of both kinds and still donate 10%.

Comment author: Larks 17 October 2014 11:27:12PM 0 points [-]

I thought the reason was that we were looking for a term to replace "super-duper hardcore people", and settled on "effective altruists". This predated the facebook group by a considerable time. Indeed, initially I thought the facebook group was a bad idea, as 'EA' was not intended to be public facing at all.

Comment author: Toby_Ord 20 October 2014 07:37:14AM 4 points [-]

The term 'effective altruism' was created before the FB group, but I think Evan is referring to the fact that the FB group uses the 'ist' form rather than the 'ism' form and is the most prominent thing to do so. I think it would have been an improvement if it had used the 'ism' form (and it is not a coincidence that this forum does).

In response to Open Thread 3
Comment author: HonoreDB 16 October 2014 11:07:17PM 7 points [-]

Cross-posting from Less Wrong.

General question: What's the best way to get (U.S.) legal advice on a weird, novel issue (one that would require research and cleverness to address well)? Paid or unpaid, in person or remotely.

Specific request (if you're interested in helping personally, please let me know at histocrat at gmail dot com !): I'm helping to set up an organization to divert money away from major party U.S. campaign funds and to efficient charities. The idea is that if I donate $100 to the Democratic Party, and you donate $200 to the Republican party (or to their nominees for President, say), the net marginal effect on the election is very similar to if you'd donated $100 and I'd donated nothing; $100 from each of us is being canceled out. So we're going to make a site where people can donate to either of two opposing causes, we'll hold it in escrow for a little, and then at a preset time the money that would be canceling out goes to a GiveWell charity instead. So if we get $5000 in donations for the Democrats and $2000 for Republicans, the Democrats get $3000 and the neutral charity gets $4000. From an individual donor's point of view, each dollar you donate will either become a dollar for your side, or take away a dollar from the opposing side.
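The netting rule described above is simple enough to sketch as code. This is a hypothetical `settle` function illustrating the arithmetic, not anything from an actual implementation of the site:

```python
def settle(side_a_total, side_b_total):
    """Net out opposing donations held in escrow.

    The overlapping amount (matched dollar-for-dollar across the two
    sides) is diverted to the neutral charity; only the surplus on the
    larger side is passed through to that side's cause.
    """
    matched = min(side_a_total, side_b_total)
    charity = 2 * matched              # one dollar from each side per matched dollar
    return (side_a_total - matched,    # surplus to side A's cause
            side_b_total - matched,    # surplus to side B's cause
            charity)                   # canceled-out money to the neutral charity
```

With the figures from the example, `settle(5000, 2000)` returns `(3000, 0, 4000)`: $3000 to the Democrats, nothing to the Republicans, and $4000 to the neutral charity. Note that the totals always balance: every dollar donated ends up either with a campaign or with the charity.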

This obviously steps into a lot of election law, so that's probably the expertise I'll be looking for. We also need to figure out what type of organization(s) we need to be: it seems ideal to incorporate as a 501(c)(3) just so that people can make tax-deductible donations to us (whether donations made through us that end up going to charity can be tax-deductible is another issue). I think the spirit of the regulations should permit that, but I am not a lawyer and I've heard conflicting opinions on whether the letter of the law does.

And those issues aside, I feel like there could be more legal gotchas that I'm not anticipating to do with Handling Other People's Money.

In response to comment by HonoreDB on Open Thread 3
Comment author: Toby_Ord 18 October 2014 09:35:00PM 5 points [-]

If you go to and search for "repledge" you will find the legal opinion the FEC gave to the people behind Repledge. It was evenly split 3-3 over whether it would count as a conduit or intermediary for campaign donations (which seems to not be allowed). This seems to be what made the Repledge people decide to stop what seemed to be a very successful launch (look for them on Youtube for example). Looking at this opinion could be useful if trying to do something like this. If you are serious about it, you may want to contact the person behind Repledge (Eric Zolt) for more details.

You may also want to read my paper:

In response to Open Thread 3
Comment author: Evan_Gaensbauer 17 October 2014 09:42:34AM 0 points [-]

Among effective altruists who believe the cause most worthy of concern is that of existential risk reduction, ensuring A.I. technology doesn't one day destroy humanity is a priority. However, the oft-cited second greatest existential risk is that of a (genetically engineered) global pandemic.

An argument in favor of focusing upon developing safer A.I. technology, in particular from the Machine Intelligence Research Institute, is that a fully general A.I. which shared human values, and safeguarded us, would have the intelligence capacity to reduce existential risk better than the whole of humanity could anyway. Humanity would be building its own savior from itself, especially if a 'Friendly' A.I. could be built within the next century, when other existential risks might come to a head. For example, the threat of other potentially dangerous technologies would be neutralized once controlled by an A.G.I. (Artificial General Intelligence), and the coordination problem of mitigating climate change damage could be solved by the A.G.I. peacefully. The A.G.I. could predict unforeseen events threatening humanity better than we could ourselves, and mitigate the threat. For example, a rogue solar storm that human scientists would be unable to detect given their current understanding and state of technology might be predicted by an A.G.I., which would recommend to humanity how to minimize loss.

However, the earliest predictions for when an A.G.I. could be completed are around 2045. Given the state of biotechnology, and the rate of progress within the field, it seems plausible that a (genetically engineered) pathogen could go globally pandemic before humanity has an A.G.I. to act as the world's greatest epidemiology computer. Considering that reducing existential risk is such a vocal cause area (currently) within effective altruism, I'm wondering why neither they nor the rest of us are paying more attention to the risk of a global pandemic.

I mean, obviously the reason the existential risk reduction community is so concerned about A.G.I. is the work of the Machine Intelligence Research Institute, Eliezer Yudkowsky, Nick Bostrom, and the Future of Humanity Institute. That's all fine, and my friends and I can now cite predictions and arguments, and explain with decent examples what this risk is all about.

However, I don't know nearly as much about the risk of a global pandemic, or genetically engineered pathogens. I don't know why we don't have this information, or where to get it, or even how much we should raise awareness of this issue, because I don't have that information, either. If there is at least one more existential risk worth having a cursory knowledge of, this seems to be it.

I'm thinking about contacting Seth Baum of the Global Catastrophic Risks Institute about this on behalf of effective altruism to ask his opinion on this issue. Hopefully, he or his colleagues can help me find more information, give me an assessment of how experts rate this existential risk compared to others, and tell me which organizations are doing research and/or raising awareness about it. Maybe the GCRI will have a document we can share here, or they'd be willing to present one to effective altruism. If not, I'll write something on it for this forum myself. If anyone has feedback, comments, or an interest in getting involved in this investigation process, reply publicly here, or in a private message.

Comment author: Toby_Ord 18 October 2014 09:21:00PM 0 points [-]

We certainly talk about this a lot at FHI and do a fair amount of research and policy work on it. CSER is also interested in synthetic biology risk. I agree that it is talked about a lot less in wider EA circles though.

Comment author: Niel_Bowerman2 02 October 2014 01:49:16PM *  7 points [-]

I'm unsure whether these are the reasons why effective altruism started, or simply a compelling narrative, but I often think of EA as having come about as a result of advances in three different disciplines:

  1. The rise in evidence-based development aid, with the use of randomized controlled trials led by economists such as those at the Poverty Action Lab. These provide high-quality research about what works and what doesn’t in development aid.

  2. The development of the heuristics and biases literature by psychologists Daniel Kahneman and Amos Tversky. This literature shows the failures of human rationality, and thereby opens up the possibility of increasing one’s impact by deliberately countering these biases.

  3. The development of moral arguments, by Peter Singer and others, in favor of there being a duty to use a proportion of one’s resources to fight global poverty, and in favor of an ‘expanded moral circle‘ that gives moral weight to distant strangers, future people and non-human animals.

This gave rise to three communities: the rationalist (e.g. LessWrong), the philosophical (e.g. Giving What We Can), and the randomistas, as they are often referred to (e.g. J-PAL and GiveWell). These three communities merged to form effective altruism.

I wrote this up based on William MacAskill's arguments at but I would be interested to hear how much people think this explains.

Comment author: Toby_Ord 03 October 2014 01:29:49PM 4 points [-]

I agree with (1) and (3), but I don't think (2) played a large role. Regarding (1), I think that the conceptual development of QALYs (which DALYs largely copied) was as important as the randomisation, since it began to allow like for like comparisons across much wider areas.

Comment author: Diego_Caleiro 01 October 2014 05:22:13PM *  2 points [-]

I suppose there are some very different kinds of reputational costs, which backtracking will reach differently. So paying a reputational cost for the movement of appearing associated with a behavior that is considered morally incorrect in some cultures (for instance, being associated with substance abuse, or with unusual marital practices) might have significant social costs in the future, for the individual and the movement alike.

However, thinking of how people feel embarrassed that they may say a sentence wrong, blush at the wrong time, or slip on some statement, I tend to think people are poorly calibrated about these minor, non-moral types of embarrassment. This sort of embarrassment frequently grows out of status anxiety, and this kind does not seem particularly costly.

So I fully agree that as it grows larger, reputation should matter more, especially when it comes to reputation that mirrors our moral instincts.

Comment author: Toby_Ord 03 October 2014 01:23:39PM 0 points [-]

I agree with this. I should clarify that the types of thing I am generally concerned about are coming off as too abrasive, too negative, too amateurish, or too associated with legal but disliked ideas that aren't part of our core considerations.

In response to comment by Toby_Ord on Lawyering to Give
Comment author: Julia_Wise 30 September 2014 01:19:51AM 3 points [-]

I wrote to the Harvard EA people, and they didn't know anyone. I wrote to Bill Barlow, and he's drafting a response.

Comment author: Toby_Ord 30 September 2014 02:16:23PM 2 points [-]

Great work!

Comment author: Toby_Ord 30 September 2014 02:13:26PM *  3 points [-]

While I agree with a lot of what you wrote, I disagree about the 'all publicity is good if large enough' idea.

You are entirely correct that you can get some good help from people at one end of the curve, and at the start, this often feels like all that matters. For example, a company might think that if no-one currently knows about them, then all publicity is good as people can't reduce their purchasing from zero, but others might increase it. However if something is going to become reasonably big regardless of the coverage, then it can have bad effects. This is true even if one's own organisation is small, but the coverage can reflect badly on related organisations with similar goals (such as the rest of the EA movement).

Bad publicity and bad first impressions can last a long time, and people looking for sensation can quite easily trawl through past coverage looking for the one bad sensational thing. If something inadvertently damaged the reputation of effective altruism, that would be a bad effect. If the damage was very high, that would be a really terrible effect. Taking risks with this public good of the movement's reputation is something we should really discourage.

All of this means that as an organisation or movement starts to get bigger, it should become much more conservative about reputational issues like this, though exactly where to draw the line is unclear. For what it's worth, your example of the Rhys Southan article seemed to me to be on the right side of the line, and the transhumanist one seemed to me to be roughly neutral.
