
Over the last year, I’ve given a lot of thought to the question of how the effective altruism community can stay true to its best elements and avoid problems that often bring movements down. Failure is the default outcome for a social movement, and so we should be proactive in investing time and attention to help the community as a whole flourish.

In a previous post, I noted that there’s very little in the way of self-governing infrastructure for the EA community. There’s little in place to deal with people who represent EA in ways that seem harmful; this means that the only response is community action, which is slow, unpleasant for all involved, and risks unfairness through lack of good process. In that post, I suggested we create two things: (i) a set of guiding principles agreed upon by all of EA; (ii) a community panel that could make recommendations to the community regarding violations of those principles.

There was healthy discussion of this idea, both on the forum and in feedback that we sought from people in the community. Some particularly important worries, it seemed to me, were: (i) the risk of consolidating too much influence over EA in any one organisation or panel; (ii) the risk of it being impossible to get agreement, leading to an increase in politicisation and squabbling; (iii) the risk of losing flexibility by enforcing what is an “EA view” or not (in a way that other broad movements don’t do*). I think these were important concerns. In response, we scaled back the ambitions of the proposed ideas.

Instead of trying to create a document that we claim represents all of EA, enforced by a community panel as I had suggested, we’ve done two things:


(i)  Written down CEA’s understanding of EA (based in part on discussion with other community members), and invited other organisations to share and uphold that understanding if they found it matched their views. This will become a community-wide vision only to the extent that it resonates with the community.

(ii) Created a small advisory panel of community members that will provide input on important and potentially controversial community-relevant decisions that CEA might have to make (such as when we changed the Giving What We Can pledge to be cause-neutral). The initial panel members will be Alexander Gordon-Brown, Peter Hurford, Claire Zabel, and Julia Wise.


The panel, in particular, is quite different from my original proposal. In the original proposal, it was a way of EA self-regulating as a community. In this new form, it’s a way of ensuring that some of CEA’s decisions get appropriate input from the community. Julia Wise, who serves as community liaison at CEA, has put together the advisory panel and has written about this panel here. The rest of this post is about how CEA understands EA and what guiding principles it finds appropriate.

How CEA understands EA is given in its Guiding Principles document. I’ve also copied and pasted the contents of this document below. 

Even if few organisations or people were to endorse this understanding of EA, it would still have a useful role. It would: 

  • Help others to understand CEA’s mission better
  • Help volunteers who run CEA events to understand the values by which we’d like those events to be run
  • Create a shared language by which CEA can be held accountable by the community

However, we hope that the definition and values are broad enough that the large majority of the EA community will be on board with them. And indeed, a number of EA organisations (or leaders of EA organisations) have already endorsed this understanding (see the bottom of this post). If this understanding of EA were widely adopted, I think there could be a number of benefits. It could help newcomers, including academics and journalists, to get a sense of what EA is about. It could help avoid dilution of EA (such that donating $5/month to a charity with low overheads becomes ‘effective altruism’) or corruption of the idea of EA (such as EA = earning to give to donate to RCT-backed charities, and nothing else). It might help create community cohesion by stating, in broad terms, what brings us all together (even if many of us focus on very different areas). And it might give us a shared language for discussing problematic events happening in the community. In general, I think if we all upheld these values, we’d create a very powerful force for good.

There is still a risk in having a widely-agreed-upon set of values: effective altruism could ossify or become unduly narrow. However, I hope that the openness of the definition and values (and the lack of any enforcement mechanism beyond community norms) will minimise that risk.


Here is the text of the document:

The Centre for Effective Altruism’s understanding of effective altruism and its guiding principles


What is effective altruism?

Effective altruism is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.

What is the effective altruism community?

The effective altruism community is a global community of people who care deeply about the world, make benefiting others a significant part of their lives, and use evidence and reason to figure out how best to do so. 

Putting effective altruism into practice means acting in accordance with its core principles:

The guiding principles of effective altruism:


Commitment to Others: 

We take the well-being of others very seriously, and are willing to take significant personal action in order to benefit others. What this entails can vary from person to person, and it's ultimately up to individuals to figure out what significant personal action looks like for them. In each case, however, the most essential commitment of effective altruism is to actively try to make the world a better place.


Scientific Mindset:

We strive to base our actions on the best available evidence and reasoning about how the world works. We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously.


Openness: 

We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them. If good arguments or evidence show that our current plans are not the best way of helping, we will change our beliefs and actions.


Integrity: 

Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive. We also value the reputation of effective altruism, and recognize that our actions reflect on it.


Collaborative Spirit: 

We affirm a commitment to building a friendly, open, and welcoming environment in which many different approaches can flourish, and in which a wide range of perspectives can be evaluated on their merits. In order to encourage cooperation and collaboration between people with widely varying circumstances and ways of thinking, we resolve to treat people of different worldviews, values, backgrounds, and identities kindly and respectfully.

The following organizations wish to voice their support for these definitions and guiding principles:

  • .impact
  • 80,000 Hours
  • Animal Charity Evaluators
  • Charity Science
  • Effective Altruism Foundation
  • Foundational Research Institute
  • Future of Life Institute
  • Raising for Effective Giving
  • The Life You Can Save

Additionally, some individuals voice their support: 

  • Elie Hassenfeld of GiveWell and the Open Philanthropy Project
  • Holden Karnofsky of GiveWell and the Open Philanthropy Project
  • Toby Ord of the Future of Humanity Institute
  • Peter Singer
  • Nate Soares of the Machine Intelligence Research Institute

This isn’t an exhaustive list of the organisations and people involved with effective altruism. We invite any other organisations that wish to endorse the above guiding principles to write to us at hello@centreforeffectivealtruism.org.

Julia and I want to thank all the many people who helped develop this document, with particular thanks to Rob Bensinger, Jeff Alstott, and Hilary Mayhew who went above and beyond in providing comments and suggested wording.


Comments (43)

* In my previous post I wrote: “The existence of this would bring us into alignment with other societies, which usually have some document that describes the principles that the society stands for, and some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.” I now think that’s an incorrect statement. EA, currently, is all of the following: an idea/movement, a community, and a small group of organisations. On the ‘movement’ understanding of EA, analogues of EA don’t have a community panel similar to what I suggested, and only some have ‘guiding principles’. (Though communities and organisations, or groups of organisations, often do.)

Julia created a list of potential analogies here:

[https://docs.google.com/document/d/1aXQp_9pGauMK9rKES9W3Uk3soW6c1oSx68bhDmY73p4/edit?usp=sharing].

The closest analogy to what we want to do comes from the open source community: many, but not all, of its organisations have created their own codes of conduct, many of them very similar to each other.

William, I wonder if EA is also, whether we accept this or not, a part of a wider/older historical effort?

I don't mean just people like Esther Duflo at the MIT Poverty Lab, health economists, bottom billion / development economists, Joseph Rowntree Foundation and its social wellbeing research, Oxfam, IDS Sussex, public health people, epidemiologists and (obviously) utilitarian philosophers ....

... but also older roots such as Quakers like prison reformer Elizabeth Fry and anti-slave trade groups, or various Buddhists and Christians prioritising health care, the relief of poverty, etc.

Of course many of these will have been less mathematical than many modern EAs, and we could identify other differences. However all of these were to significant degrees interested in evidenced improvements and policy improvement, and some still are.

By acknowledging and exchanging with their existing knowledge bases and experiences, wouldn't we be better placed to expand and mainstream the best that EA can offer? And be less ignorant about effective and altruistic work and research that has already been done, and lessons already learned, perhaps especially when it comes to creating and maintaining a movement?!

This is pretty much exactly what I was hoping for! Thank you!

[anonymous]

Congrats on making this, it seems like a modest but still powerful way forward.

Have you thought about making it possible for community members to officially endorse these principles?

We did think about it, but didn't come up with anything that seemed particularly good. (Happy to hear ideas.)

I'm thinking of other parallel documents: for example, when the Universal Declaration of Human Rights came out, there was a list of official signatories (nations). Then there were organizations that promoted the declaration on their own (without participation of the UN). Then there were individuals who supported it by telling their friends they approved of it, writing letters to the editor in support of it, etc.

The main way I would love to see individuals endorsing the principles is to refer to them when disagreements arise and it's unclear how people should behave or make decisions. That's where the rubber meets the road, much more than whose name is on what list.

Will this be publicly available on the internet, e.g. on https://www.effectivealtruism.org/?

Yes! Working on it.

https://www.centreforeffectivealtruism.org/ceas-guiding-principles/

I'd like to respond to your description of what some people's worries about your previous proposal were, and highlight how some of those worries could be addressed, hopefully without reducing how helpfully ambitious your initial proposal was. Here goes:

the risk of losing flexibility by enforcing what is an “EA view” or not

It seems to me like the primary goal of the panel in the original proposal was to address instances of people lowering the standard of trustworthiness within EA and imposing unreasonable costs (including unreasonable time costs) on individual EAs. I suspect that enumerating what sorts of things "count" as EA endeavors isn't a strictly necessary prerequisite for forming such a panel.

I can see why some people held this concern, partly because "defining what does and doesn't count as an EA endeavor" clusters in thing-space with "keeping an eye out for people acting in untrustworthy and non-cooperative ways towards EAs", but these two things don't have to go hand in hand.

the risk of consolidating too much influence over EA in any one organisation or panel

Fair enough. As with the last point, the panel would likely consolidate less unwanted influence over EA if it focused solely on calling out sufficiently dishonest and harmful behavior by anyone who self-identified as an EA, and made no claims as to whether any individuals or organizations "counted" as EAs.

the risk of it being impossible to get agreement, leading to an increase in politicisation and squabbling

This seems like a good concern, in that it's a bit harder for me to address satisfactorily. Hopefully, though, there would be some clear-cut cases the panel could choose to consider, too; the case of Intentional Insights' poor behavior was eventually quite clear, for one. I would guess that the less clear cases would tend to be the ones where a clear resolution would be less impactful.

In response, we scaled back the ambitions of the proposed ideas.

I'd have likely done the same. But that's the wrong thing to do.

In this case, the counterfactual to having some sort of panel to call out behavior which causes unreasonable amounts of harm to EAs is relying on the initiative of individuals to call out such behavior. This is not a sustainable solution. Your summary of your previous post puts it well:

There’s little in place to deal with people who represent EA in ways that seem harmful; this means that the only response is community action, which is slow, unpleasant for all involved, and risks unfairness through lack of good process.

Community action is all that we had before the Intentional Insights fiasco, and community action is all that we're back to having now.

I didn't get to watch the formation of the panel you discuss, but it seems like a nontrivial amount of momentum, which was riled up by the harm Intentional Insights caused EA, went into its creation. To the extent that that momentum is no longer available because some of it was channeled into the creation of this panel, we've lost a chance at building a tool to protect ourselves against agents and organizations who would impose costs on and harm EAs and EA overall. Pending further developments, I have lowered my opinion of everyone directly involved accordingly.

FWIW, as someone who contributed to the InIn document, I approve of (and recommended during discussion) the less ambitious project this represents.

I like this, but I think collaborative spirit should be augmented by remembering the value of unity and solidarity, which is rather different from mere collaboration and cooperation. Curious why it didn't get included.

We recognize that there are major areas of disagreement between people who are committed to the core ideas of EA, and we don't want emphasis on "unity" to sound like "you have to agree with the majority on specific topics, or you should leave."

Ah, I see. Thanks for the response. I agree 100%.

I was framing it in more, well, tribalistic terms, almost to the opposite effect. Basically, if you're an EA trying to achieve your goals and have to deal with problems from outsiders, then regardless of whether we agree, I'm "on your team" so to speak.

I'm very much in favor of this.

Those guiding principles are good. However, I wish you had included one against doing massive harm to the world. CEA endorses the “Foundational Research Institute,” a pseudo-think tank that promotes dangerous ideas of mass-termination of human and non-human life, not excluding extinction. By promoting this organization, CEA is promoting human, animal, and environmental terrorism on the grandest scale. Self-styled “effective altruists” try to pass themselves off as benevolent, but the reality is that they themselves are one of the biggest threats to the world by promoting terrorism and anti-spirituality under the cloak of altruism.

[anonymous]

Fair point about not doing harm, but I feel like you're giving the Foundational Research Institute a treatment which is both unfair and unnecessary to get your point across.

If it was the case that FRI was accurately characterized here, then do we know of other EA orgs that would promote mass termination of life? If not, then it is a necessary example, plain and simple.

If it was the case that FRI was accurately characterized here, then do we know of other EA orgs that would promote mass termination of life?

Sure. MFA, ACE and other animal charities plan to drastically reduce or even eliminate entirely the population of farm animals. And poverty reduction charities drastically reduce the number of wild animals.

If not, then it is a necessary example, plain and simple.

But it is not necessary - as you can see elsewhere in this thread, I raised an issue without providing an example at all.

The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don't downvote this unless you personally know this to be false - it is unfortunately true)

If not, then it is a necessary example, plain and simple.

But it is not necessary - as you can see elsewhere in this thread, I raised an issue without providing an example at all.

"An issue"? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don't seem to have discussed anywhere. For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?

The problem is that some EAs would have the amount of life in the universe reduced to zero permanently. (And don't downvote this unless you personally know this to be false - it is unfortunately true)

It's a spurious standard. You seem to be drawing a line between mass termination of life and permanent mass termination of life just to make sure that FRI falls on the wrong side of a line. It seems like either could support 'terrorism'. Animal liberationists actually do have a track record of engaging in various acts of violence and disruption in the past. The fact that their interests aren't as comprehensive as some NUs' are doesn't change this.

"An issue"? Austen was referring to problems where an organization affiliates with particular organizations that cause terror risk, which you don't seem to have discussed anywhere.

I'm not sure why the fact that my comment didn't discuss terrorism implies that it fails to be a good example of raising a point without an example.

For this particular issue, FRI is an illustrative and irreplaceable example, although perhaps you could suggest an alternative way of raising this concern?

""Not causing harm" should be one of the EA values?" Though it probably falls perfectly well under commitment to others anyway.

It's the only group promoting negative utilitarianism that I know of. Does anyone know of others (affiliated with EA or not)?

[anonymous]

Many antinatalists who are unaffiliated with EA have similar beliefs. (E.g., David Benatar, although I'm not sure whether he's even a consequentialist at all.)

Benatar is a nonconsequentialist. At least, the antinatalist argument he gives is nonconsequentialist - grounded in rules of consent.

Not sure why that matters though. It just underscores a long tradition of nonconsequentialists who have ideas which are similar to negative utilitarianism. Austen's restriction of the question to NU just excludes obviously relevant examples such as VHEMT.

Exactly, despite the upvotes, Soeren's argument is ill-founded. It seems really important in situations like this that people vote on what they believe to be true based on reason and evidence, not based on uninformed guesses and motivated reasoning.

Soeren didn't give an argument. He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don't see what's "ill-founded" about that.

It seems really important in situations like this that people vote on what they believe to be true based on reason and evidence, not based on uninformed guesses and motivated reasoning.

Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn't that the best time to revert to tribalism and ignore what they have to say?

He wrote a single sentence pointing out that the parent comment was giving FRI an unfair and unnecessary treatment. I don't see what's "ill founded" about that.

What's ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror, then it's obviously necessary to name the orgs. Calling it "unnecessary" to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.

Why is it more important now than in normal discourse? If someone decides to be deliberately obtuse and disrespectful, isn't that the best time to revert to tribalism and ignore what they have to say?

Our ability to discern good arguments even when we don't like them is what sets us apart from the post-fact age we're increasingly surrounded by. It's important to focus on these things when people are being tribal, because that's when it's hard. If you only engage with facts when it's easy, then you're going to end up mistaken about many of the most important issues.

What's ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror,

But they do not increase the risk of terror. Have you studied terrorism? Do you know about where it comes from and how to combat it? As someone who actually has (US military, international relations) I can tell you that this whole thing is beyond silly. Radicalization is a process, not a mere matter of reading philosophical papers, and it involves structural factors among disenfranchised people and communities as well as the use of explicitly radicalizing media. And it is used primarily as a tool for a broad variety of political ends, which could easily include the ends which all kinds of EAs espouse. Very rarely is destruction itself the objective of terrorism. Also, terrorism generally happens as a result of actors feeling that they have a lack of access to legitimate channels of influencing policy. The way that people have leapt to discussing this topic without considering these basic facts shows that they don't have the relevant expertise to draw conclusions on this topic.

Calling it "unnecessary" to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.

But Austen did not say "Not supporting terrorism should be an EA value." He said that not causing harm should be an EA value.

Our ability to discern good arguments even when we don't like them is what sets us apart from the post-fact age we're increasingly surrounded by.

There are many distinctions between EA and whatever you mean by the (new?) "post-fact age", but responding seriously to what essentially amounts to trolling doesn't seem like a necessary one.

It's important to focus on these things when people are being tribal, because that's when it's hard.

That doesn't make any sense. Why should we focus more on things just because they're hard? Doesn't it make more sense to put effort somewhere where things are easier, so that we get more return on our efforts?

If you only engage with facts when it's easy, then you're going to end up mistaken about many of the most important issues.

But that seems wrong: one person's complaints about NU, for instance, aren't among the most important issues. At the same time, we have perfectly good discussions of very important facts about cause prioritization in this forum where people are much more mature and reasonable than, say, Austen here is. So it seems like there isn't a general relationship between how important a fact is and how disruptive commentators are when discussing it. At the very minimum, one might start from a faux clean slate where a new discussion is started separate from the original instigator - something which takes no time at all and enables a bit of a psychological restart. That seems to be strictly (if slightly) better than encouraging trolling.

Those radicalization factors you mentioned increase the likelihood of terrorism but are not necessary. Saying that people don't commit terror from reading philosophical papers and thus those papers are innocent and shouldn't be criticized is a pretty weak argument. Of course, such papers can influence people. The radicalization process starts with philosophy, so to say that first step doesn't matter because the subsequent steps aren't yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish. That said, NUEs do in fact meet the other criteria you mentioned. For instance, I doubt that they have confidence in legitimately influencing policy (i.e. convincing the government to burn down all the forests).

FRI and its parent EA Foundation state that they are not philosophy organizations and exist solely to incite action. I agree that terrorism has not in the past been motivated purely by destruction. That is something that atheist extremists who call themselves effective altruists are founding.

I am not a troll. I am concerned about public safety. My city almost burned to ashes last year due to a forest fire, and I don't want others to have to go through that. Anybody read about all the people in Portugal dying from a forest fire recently? That's the kind of thing that NUEs are promoting and I'm trying to prevent. If you're wondering why I don't elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles). And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.

kbog

Those radicalization factors you mentioned increase the likelihood of terrorism but are not necessary

Yeah, and you probably think that being a negative utilitarian increases the likelihood of terrorism, but it's not necessary either. In the real world we deal with probabilities and expectations, not speculations and fantasies.

Saying that people don't commit terror from reading philosophical papers and thus those papers are innocent and shouldn't be criticized is a pretty weak argument. Of course, such papers can influence people. The radicalization process starts with philosophy

This is silly handwaving. The radicalization process starts with being born. It doesn't matter where things 'start' in the abstract sense; what matters is what causes the actual phenomenon of terrorism to occur.

to say that first step doesn't matter because the subsequent steps aren't yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish

So your head is too far up your own ass to even accept the possibility that someone who has actually studied international relations and counterinsurgency strategy knows that you are full of shit. Cool.

I am not a troll. I am concerned about public safety.

You are a textbook concern troll.

My city almost burned to ashes last year due to a forest fire, and I don't want others to have to go through that

Welcome to EA, honey. Everyone here is altruistic, you can't get special treatment.

That's the kind of thing that NUEs are promoting

But they're not. You think they're promoting it, or at least you want people to think they're promoting it. But that's your own opinion, so presenting it like this constitutes defamation.

If you're wondering why I don't elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles).

But I have read those materials. And it's not self-evident. And other people have read those articles and they don't find them self-evident either. Actually, it's self-evident that they don't promote it, if you read some of their materials.

And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.

What bullshit. If you actually worried about this then you wouldn't be saying that it's a direct, self-evident conclusion of their beliefs. So either you don't know what you're doing, or you're arguing in bad faith. Probably both.

[anonymous]

I mostly agree with you. It honestly does worry me that the mainstream EA movement has no qualms about associating with FRI, whose values, I would wager, conflict with those of the majority of humankind. This is one of the reasons I have drifted away from identifying with EA lately.

Self-styled “effective altruists” try to pass themselves off as benevolent, but the reality is that they themselves are one of the biggest threats to the world by promoting terrorism and anti-spirituality under the cloak of altruism.

It's a stretch to say FRI directly promotes terrorism; they make it clear on their website that they oppose violence and encourage cooperation with other (non-NU) value systems. The end result of their advocacy, however, may be less idealistic than they anticipate. (It's not too hard to imagine a negative utilitarian Kaczynski, if their movement gains traction. I think there's even a page on the FRI website where they mention that as a possible risk of advocating for suffering-focused ethics.)

I don't know what you mean by "anti-spirituality".

I know they don't actually come out and recommend terrorism publicly... but they sure go as far as they can to entice terrorism without being prosecuted by the government as a terrorist organization. Of course, if they were explicit, they'd immediately be shut down and jailed by authorities.

I promise you this – all those who endorse this mass termination of life ideology are going to pay a price. Whether by police action or public scrutiny, they will be forced to publicly abandon their position at some point. I implore them to do it now, of their own volition. No one will believe them if they conveniently change their minds about no-rules negative utilitarianism after facing public scrutiny or the law. Now is the time. I warned CEA about this years ago, yet they still promote FRI.

I actually respect austere population-control to protect quality of life, even through seemingly drastic means such as forced sterilization (in extreme scenarios only, of course). However, atheists don't believe in any divine laws such as the sin of killing, and are thus not bound by any rules. The type of negative utilitarianism popular in EA is definitely a brutal no-rules, mass killing-is-okay type. It is important to remember, also, that not everyone has good mental health. Some people have severe schizophrenia and could start a forest fire or kill many people to “prevent suffering” without thinking through all of the negative aspects of doing this. I think that the Future of Humanity Institute should add negative utilitarian atheism to their list of existential risks.

Anti-spirituality: Doesn't have anything to do with NU or FRI, I probably should have left it out of my comment. It just means that many EAs use EA as a means to promote atheism/atheists. Considering about 95% of the world's population are believers, they may have an issue with this aspect of the movement.

[anonymous]

Of course, if they were explicit, they'd immediately be shut down and jailed by authorities.

I really don't like how you are accusing people without evidence of intentionally promoting violence. This is borderline libel. I agree that someone could take their ideology and use it to justify violence, but I see no reason to believe that they are intentionally trying to "entice" such actions.

I really don't like how you are accusing people without evidence of intentionally promoting violence. This is borderline libel. I agree that someone could take their ideology and use it to justify violence, but I see no reason to believe that they are intentionally trying to "entice" such actions.

Indeed, we must focus on the battles we can win. There are two traps. One is to make false accusations. Currently, few negative utilitarians are promoting terrorism, and we should not make accusations that would suggest otherwise. The other is to stir up controversy. Telling negative utilitarians that they are terrorists could inflame them into actually behaving in a more hostile manner. It is like when people say that naming "radical Islamic terrorism" is necessary to solve the problem. Perhaps, but it would be more useful to engage cooperatively with the religion of Islam to show that it is a religion of peace, and the same for utilitarianism.

The safe position, which we should expect EA leaders to vigilantly defend, is not to promote values whose adoption would lead to large-scale terrorism. This is the hill that we should choose to die on. Specifically, if negative utilitarians believe in cooperation, and they believe that value-spreading is important, then they should be cooperative in the values that they spread. And this does not allow for spreading values that would lead to actions that are overwhelmingly repulsive to the vast majority of ethicists and the general population on an astronomical scale. And "EA leaders" must include CEA.

[anonymous]

However, atheists don't believe in any divine laws such as the sin of killing, and are thus not bound by any rules.

I think your gripe is with consequentialism, not atheism per se. And don't forget that there are plenty of theists who do horrible things, often in the name of their religion.

I think that the Future of Humanity Institute should add negative utilitarian atheism to their list of existential risks.

The X-Risks Institute, which is run by /u/philosophytorres, specializes in agential risks, and mentions NU as one such risk. I don't know whether FHI has ever worked on agential risks.

It just means that many EAs use EA as a means to promote atheism/atheists.

It is evident that the majority of EAs are atheist/irreligious, but I am not aware of any EA organizations actively promoting atheism or opposing theism. Who uses EA as a "means to promote atheism"?

Coincidentally, the closest example I can recall is Phil Torres's work on religious eschatological fanaticism as a possible agential x-risk.

Roman Yampolskiy's shortlist of potential agents who could bring about an end to the world (https://arxiv.org/ftp/arxiv/papers/1605/1605.02817.pdf) also includes Military, Government, Corporations, Villains, Black Hats, Doomsday Cults, Depressed, Psychopaths, Criminals, AI Risk Deniers, and AI Safety Researchers.

They encourage cooperation with other value systems to further their apocalyptic goals, but mostly to prevent others from opposing them. That is different from tempering "strong NU" with other value systems to arrive at more moderate conclusions.

LOOOOL about your optimism that people won't follow FRI's advocacy as purely as they want! Let's hope so, eh?

Also, I am somewhat concerned that this comment has been downvoted so much. It's the only really substantive criticism of the article (admittedly it isn't great), and it is at -3, right at the bottom.

Near the top are several comments at +5 or something that are effectively just applause.

LOL. Typical of my comments. Gets almost no upvotes but I never receive any sensible counterarguments! People use the forum vote system to persuade (by social proof) without having a valid argument. I have yet to vote on a comment (up or down) because I think people should think for themselves.

You can understand some of what people are downvoting you for by looking at which of your comments are most downvoted - ones where you're very critical without much explanation and where you suggest that people in the community have bad motives:

  • http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah7
  • http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah6
  • http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8p9

Well-explained criticisms won't get downvoted this much.

Gets almost no upvotes

Actually you got 7 upvotes and 6 downvotes; I can tell from hovering over the '1 point'.

dangerous ideas of mass-termination of human and non-human life,

Specifically?