Comment author: RobBensinger 06 September 2017 08:12:51AM 1 point [-]

"Existential risk" has the advantage over "long-term future" and "far future" that it sounds like a technical term, so people are more likely to Google it if they haven't encountered it (though admittedly this won't fully address people who think they know what it means without actually knowing). In contrast, someone might just assume they know what "long-term future" and "far future" mean, and if they do Google those terms they'll have a harder time getting a relevant or consistent definition. Plus "long-term future" still has the problem that it suggests existential risk can't be a near-term issue, even though some people working on existential risk are focusing on nearer-term scenarios than, e.g., some people working on factory farming abolition.

I think "global catastrophic risk" or "technological risk" would work fine for this purpose, though, and avoids the main concerns raised for both categories. ("Technological risk" also strikes me as a more informative / relevant / joint-carving category than the others considered, since x-risk and far future can overlap more with environmentalism, animal welfare, etc.)

Comment author: Austen_Forrester 09 September 2017 04:36:08AM -1 points [-]

Of course, I totally forgot about the "global catastrophic risk" term! I really like it, and it doesn't only suggest extinction risks. Even its acronym sounds pretty cool. I also really like your "technological risk" suggestion, Rob. Referring to GCR as "long-term future" is a pretty obvious branding tactic by those who prioritize GCRs. It is vague, misleading, and dishonest.

Comment author: Robert_Wiblin 01 September 2017 06:47:50PM *  6 points [-]

For next year's survey it would be good if you could change 'far future' to 'long-term future' which is quickly becoming the preferred terminology.

'Far future' makes the perspective sound weirder than it actually is, and creates the impression you're saying you only care about events very far into the future, and not all the intervening times as well.

Comment author: Austen_Forrester 04 September 2017 02:25:08PM 0 points [-]

For "far future"/"long-term future," you're referring to existential risks, right? If so, I would think calling them existential risks or x-risks would be the clearest and most honest term to use. Any systemic change affects the long term, such as factory farm reforms, policy change, changes in societal attitudes, medical advances, environmental protection, and so on. I therefore don't feel it's that honest to refer to x-risks as "long-term future."

Comment author: DonyChristie 06 August 2017 03:16:42PM *  0 points [-]

I'm curious what exactly you mean by regular morals. I try (and fail) not to separate my life into distinct magisteria, but rather to see everything as making tradeoffs with everything else, and to point my life in the direction of the path that has the most global impact. I see EA as being regular morality, but extended with better tools that attempt to engage the full scope of the world. It seems like such a demarcation between incommensurable moral domains as you appear to be arguing for can allow a person to defend any status quo in their altruism, rather than critically examining whether their actions are doing the most good they can. In the case of blood, perhaps you're talking about the fuzzies budget instead of the utilons? Perhaps your position is something like 'Regular morals are the set of actions that, if upheld by a majority of people, will not lead to society collapsing, and due to anthropics I should not defect from my commitment to prevent society from collapsing. Blood donation is one of these actions', or this post?

Comment author: Austen_Forrester 06 August 2017 11:43:41PM 0 points [-]

By regular morals, I mean basic morals such as treating others how you would like to be treated, i.e. rules that you would be a bad person if you failed to abide by. While I don't consider EA supererogatory, neither do I think that not practicing EA makes someone a bad person; thus, I wouldn't put it in the category of basic morals. (Actually, that is the standard I hold others to; for myself, I would consider it a moral failure if I didn't practice EA!) I think it actually is important to differentiate between basic and, let's say, more "advanced" morals, because if people think that you consider them immoral, they will hate you. For instance, promoting EA as a basic moral, such that one is a "bad person" if she doesn't practice it, will just result in backlash from people discovering EA. No one wants to be judged.

The point I was trying to make is that EAs should be aware of moral licensing, which means giving oneself an excuse to be less ethical in one department because you see yourself as being extra-moral in another. If there is a tradeoff between exercising basic morals and doing some high-impact EA activity, I would go with the EA (assuming you are not actually creating harm, of course). For instance, I don't give blood because the last time I did I was lightheaded for months. Besides decreasing my quality of life, it would also hurt my ability to do EA. I wouldn't say giving blood is an act of basic morality, but it is still an altruistic action that few people can confidently say they are too important to consider doing. Do you not agree that if doing something good doesn't prevent you from doing something higher impact, then it would be morally preferable to do it? For instance, treating people with kindness: people shouldn't stop being kind to others just because it won't result in some high global impact.

Comment author: Austen_Forrester 06 August 2017 07:13:05AM 1 point [-]

I think it may be useful to differentiate between EA and regular morals. I would put donating blood in the latter category. For instance, treating your family well isn't high impact on the margin, but people should still do it because of basic morals, see what I mean? I don't think that practicing EA somehow excuses someone from practicing good general morals. I think EA should be in addition to general morals, not a replacement for them.

Comment author: KevinWatkinson  (EA Profile) 18 July 2017 08:23:48AM 0 points [-]

This can be an issue, but I think Matt Ball has chosen not to present a strong position because he believes that is off-putting; instead he undermines the strong position and presents a suboptimal one. However, he says this is in fact optimal because it reduces more harm.

If applied to EA, we would undermine a position we believe might put people off because it is too complicated or esoteric, and present a first step that will do more good.

Comment author: Austen_Forrester 18 July 2017 07:04:23PM *  0 points [-]

My point was that EAs probably should exclusively promote full-blown EA, because that has a good chance of leading to more uptake of both full-blown and weak EA. Ball's issue with people choosing to go part-way after hearing the veg message is that it often leads to more animals being killed, due to people replacing beef and pork with chicken. That's a major impetus for his direct "cut out chicken before pork and beef" message. It doesn't undermine veganism, because chicken-reducers are more likely to continue on towards that lifestyle, probably more so even than someone who went vegetarian right away. Vegetarians have a very high drop-out rate, but many believe that those who transitioned gradually last longer.

I think that promoting effectively giving 10% of one's time and/or income (for the gainfully employed) is a good balance between promoting a high-impact lifestyle and being rejected due to high demandingness. I don't think it would be productive to lower the bar on that (i.e., by saying cause neutrality is optional).

Comment author: Austen_Forrester 18 July 2017 02:25:59AM 0 points [-]

One thing to keep in mind is that people often (or even usually) choose the middle ground by themselves. Matt Ball often mentions how this happens in animal rights, with people deciding to reduce meat after learning about the merits of vegetarianism, and notes that Nobel laureate Herb Simon is known for this insight that people opt for sub-optimal decisions.

Thus, I think that in promoting pure EA, most people will practice weak EA (i.e., not cause-neutral) of their own accord, so perhaps the best way to proliferate weak EA is by promoting strong EA.

Comment author: Austen_Forrester 08 July 2017 11:04:16PM 0 points [-]

I totally understand your concern that the EA movement is misrepresenting itself by not promoting issues proportional to their representation among people in the group. However, I think that the primary consideration in promoting EA should be what will hook people. Very few people in the world care about AI as a social issue, but extreme poverty and injustice are very popular causes that can attract people. I don't actually think it should matter for outreach what the most popular causes are among community members. Outreach should be based on what is likely to attract the masses to practice EA (without watering it down by promoting low impact causes, of course). Also, I believe it's possible to be too inclusive of moral theories. Dangerous theories that incite terrorism like Islamic or negative utilitarian extremism should be condemned.

Also, I'm not sure to what extent people in the community even represent people who practice EA. Those are two very different things. You can practice EA, for example by donating a chunk of your income to Oxfam every year, without having anything to do with others who identify with EA, and you can be a regular at EA meetups who often discusses related topics (i.e., a member of the EA community) without donating or doing anything high impact. Perhaps the most popular issues acted on by those who practice EA are different from those discussed by those who like to talk about EA. Being part of the EA community doesn't give one any moral authority in itself.

Comment author: KevinWatkinson  (EA Profile) 03 July 2017 02:09:49PM *  3 points [-]

First of all, we would need to accept that there are different approaches, and consider what they are before evaluating effectiveness.

The issue with Effective Altruism is that it is fairly one-dimensional when it comes to animal advocacy. That is, it works with the system of animal exploitation rather than counter to it, so primarily welfarism and reducetarianism. In relation to these ideas we need to see the subsequent counterfactual analysis, and yet where is it? I've asked these sorts of questions, and it seems that people haven't applied some fundamental aspects of Effective Altruism to these issues. They are merely assumed.

For some time it has appeared as if EA has been working off a strictly utilitarian script, and has ignored or marginalised other ideas. Partly this has arisen because of the limited pool of expertise that EA has chosen to draw upon, and this has had a self-replicating effect.

Recently I read through some of Holden Karnofsky's thoughts on Hits-based Giving, and something particularly chimed towards the end of the essay.

"Respecting those we interact with and avoiding deception, coercion, and other behavior that violates common-sense ethics. In my view, arrogance is at its most damaging when it involves “ends justify the means” thinking. I believe a great deal of harm has been done by people who were so convinced of their contrarian ideas that they were willing to violate common-sense ethics for them (in the worst cases, even using violence).

As stated above, I’d rather live in a world of individuals pursuing ideas that they’re excited about, with the better ideas gaining traction as more work is done and value is demonstrated, than a world of individuals reaching consensus on which ideas to pursue. That’s some justification for a hits-based approach. But with that said, I’d also rather live in a world where individuals pursue their own ideas while adhering to a baseline of good behavior and everyday ethics than a world of individuals lying to each other, coercing each other, and actively interfering with each other to the point where coordination, communication and exchange break down.

On this front, I think our commitment to being honest in our communications is important. It reflects that we don’t think we have all the answers, and we aren’t interested in being manipulative in pursuit of our views; instead, we want others to freely decide, on the merits, whether and how they want to help us in our pursuit of our mission. We aspire to simultaneously pursue bold ideas and remember how easy it would be for us to be wrong."

I think in time we will view the present EAA approach as having commonalities with Karnofsky's concerns, and steps will be taken to broaden the EAA agenda to be more inclusive. I think it is unlikely, however, that these changes will be sought or encouraged by movement leaders, and even within groups such as ACE, I remain concerned about bias within leadership toward the 'mainstream' approach. Unfortunately, ACE has historically been underfunded, and has not received the support it has needed to properly account for movement issues, or to increase the range of the work it undertakes. I think this is partly a leadership issue, in that aims and goals have not been reasonably set and pursued, and also an EA movement issue, where a certain complacency has set in.

Comment author: Austen_Forrester 05 July 2017 12:06:02AM 2 points [-]

I don't see how TYLCS is selling out at all. They have the same maximizing impact message as other EA groups, just with a more engaging feel that also appeals to emotions (the only driver of action in almost all people).

Matt Ball is more learned and impact-focused than anyone in the animal rights field. One Step for Animals, and the Reducetarian Foundation were formed to save as many animals as possible -- complementing, not replacing, vegan advocacy. Far from selling out, One Step and Reducetarian are the exceptions from most in animal rights who have traded their compassion for animals for feelings of superiority.

Comment author: Austen_Forrester 04 July 2017 11:52:50PM -1 points [-]

I really respect the moderators of this forum for allowing me to advocate for public safety (ie. criticize NUE) and removing comments that could endanger public safety (ie. advocating suicide)!

Comment author: kbog  (EA Profile) 17 March 2017 05:21:55PM *  3 points [-]

What's ill-founded is that if you want to point out a problem where people affiliate with NU orgs that promote values which increase risk of terror,

But they do not increase the risk of terror. Have you studied terrorism? Do you know where it comes from and how to combat it? As someone who actually has (US military, international relations), I can tell you that this whole thing is beyond silly. Radicalization is a process, not a mere matter of reading philosophical papers, and it involves structural factors among disenfranchised people and communities, as well as the use of explicitly radicalizing media. And it is used primarily as a tool for a broad variety of political ends, which could easily include the ends which all kinds of EAs espouse. Very rarely is destruction itself the objective of terrorism. Also, terrorism generally happens as a result of actors feeling that they lack access to legitimate channels of influencing policy. The way that people have leapt to discussing this topic without considering these basic facts shows that they don't have the relevant expertise to draw conclusions on it.

Calling it "unnecessary" to treat that org is then a blatant non-sequitur, whether you call it an argument or an assertion is up to you.

But Austen did not say "Not supporting terrorism should be an EA value." He said that not causing harm should be an EA value.

Our ability to discern good arguments even when we don't like them is what sets us apart from the post-fact age we're increasingly surrounded by.

There are many distinctions between EA and whatever you mean by the (new?) "post-fact age", but responding seriously to what essentially amounts to trolling doesn't seem like a necessary one.

It's important to focus on these things when people are being tribal, because that's when it's hard.

That doesn't make any sense. Why should we focus more on things just because they're hard? Doesn't it make more sense to put effort somewhere where things are easier, so that we get more return on our efforts?

If you only engage with facts when it's easy, then you're going to end up mistaken about many of the most important issues.

But that seems wrong: one person's complaints about NU, for instance, aren't among the most important issues. At the same time, we have perfectly good discussions of very important facts about cause prioritization in this forum, where people are much more mature and reasonable than, say, Austen here is. So it seems like there isn't a general relationship between how important a fact is and how disruptive commentators are when discussing it. At the very minimum, one might start from a faux clean slate where a new discussion is started separate from the original instigator, something which takes no time at all and enables a bit of a psychological restart. That seems strictly better than encouraging trolling.

Comment author: Austen_Forrester 19 June 2017 10:39:56PM 0 points [-]

Those radicalization factors you mentioned increase the likelihood of terrorism but are not necessary. Saying that people don't commit terror from reading philosophical papers, and thus those papers are innocent and shouldn't be criticized, is a pretty weak argument. Of course such papers can influence people. The radicalization process starts with philosophy, so to say that first step doesn't matter because the subsequent steps aren't yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish. Although, NUEs do in fact meet the other criteria you mentioned. For instance, I doubt that they have confidence in legitimately influencing policy (i.e., convincing the government to burn down all the forests).

FRI and its parent, the EA Foundation, state that they are not philosophy organizations and exist solely to incite action. I agree that terrorism has not in the past been motivated purely by destruction. That is something that atheist extremists who call themselves effective altruists are founding.

I am not a troll. I am concerned about public safety. My city almost burned to ashes last year due to a forest fire, and I don't want others to have to go through that. Anybody read about all the people in Portugal dying from a forest fire recently? That's the kind of thing that NUEs are promoting and I'm trying to prevent. If you're wondering why I don't elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles). And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.
