Comment author: Austen_Forrester 29 October 2017 04:38:46PM 0 points

I agree that financial incentives/disincentives result in failures (i.e. social problems) of all kinds. One of the biggest reasons, as I'm sure you mention at some point in your book, is corruption: for example, the beef/dairy industry pays off environmental NGOs and governments to stay quiet about its environmental impact.

But don't you think that non-financial rewards/punishments also play a large role in impeding social progress, in particular social rewards/punishments? For example, people don't dress warmly enough in the winter because others will tease them for being uncool, people bully others because they are then respected more, and so on.

Comment author: Austen_Forrester 26 October 2017 05:28:58AM 0 points

It could be a useful framing. To some people, "optimize" may imply making something already good great, such as making the countries with the highest HDI even better, or helping emerging economies become high income, rather than helping the countries with the most suffering catch up to the happier ones. It could be viewed as helping a happy person become super happy rather than helping a sad person become happy. I know this narrow form of altruism isn't your intention; I'm just saying that "optimize" does have this connotation. I personally prefer "maximally benefit/improve the world." It's almost the same as your expression but without the make-good-even-better connotation.

I think EAs have always thought about the impact of collective action, but it's just really hard, or even impossible, to estimate how your personal efforts will further collective action and to compare that to more predictable forms of altruism.

Comment author: RobBensinger 06 September 2017 08:12:51AM 1 point

"Existential risk" has the advantage over "long-term future" and "far future" that it sounds like a technical term, so people are more likely to Google it if they haven't encountered it (though admittedly this won't fully address people who think they know what it means without actually knowing). In contrast, someone might just assume they know what "long-term future" and "far future" means, and if they do Google those terms they'll have a harder time getting a relevant or consistent definition. Plus "long-term future" still has the problem that it suggests existential risk can't be a near-term issue, even though some people working on existential risk are focusing on nearer-term scenarios than, e.g., some people working on factory farming abolition.

I think "global catastrophic risk" or "technological risk" would work fine for this purpose, though, and avoids the main concerns raised for both categories. ("Technological risk" also strikes me as a more informative / relevant / joint-carving category than the others considered, since x-risk and far future can overlap more with environmentalism, animal welfare, etc.)

Comment author: Austen_Forrester 09 September 2017 04:36:08AM -1 points

Of course, I totally forgot about the "global catastrophic risk" term! I really like it, and it doesn't only suggest extinction risks. Even its acronym sounds pretty cool. I also really like your "technological risk" suggestion, Rob. Referring to GCRs as "long-term future" is a pretty obvious branding tactic by those who prioritize GCRs. It is vague, misleading, and dishonest.

Comment author: Robert_Wiblin 01 September 2017 06:47:50PM *  7 points

For next year's survey, it would be good if you could change 'far future' to 'long-term future', which is quickly becoming the preferred terminology.

'Far future' makes the perspective sound weirder than it actually is, and creates the impression you're saying you only care about events very far into the future, and not all the intervening times as well.

Comment author: Austen_Forrester 04 September 2017 02:25:08PM 0 points

For "far future"/"long term future," you're referring to existential risks, right? If so, I would think calling them existential or x-risks would be the most clear and honest term to use. Any systemic change affects the long term such as factory farm reforms, policy change, changes in societal attitudes, medical advances, environmental protection, etc, etc. I therefore don't feel it's that honest to refer to x-risks as "long term future."

Comment author: DonyChristie 06 August 2017 03:16:42PM *  0 points

I'm curious what exactly you mean by regular morals. I try (and fail) not to separate my life into separate magisteria, but rather to see everything as making tradeoffs with every other thing, and to point my life in the direction of the path that has the most global impact. I see EA as being regular morality, but extended with better tools that attempt to engage the full scope of the world. It seems like such a demarcation between incommensurable moral domains as you appear to be arguing for can allow a person to defend any status quo in their altruism rather than critically examining whether their actions are doing the most good they can. In the case of blood, perhaps you're talking about the fuzzies budget instead of the utilons? Perhaps your position is something like 'Regular morals are the set of actions that, if upheld by a majority of people, will not lead to society collapsing, and due to anthropics I should not defect from my commitment to prevent society from collapsing. Blood donation is one of these actions', or this post?

Comment author: Austen_Forrester 06 August 2017 11:43:41PM 0 points

By regular morals, I mean basic morals such as treating others how you would like to be treated, i.e. rules that you would be a bad person if you failed to abide by. While I don't consider EA supererogatory, neither do I think that not practicing EA makes someone a bad person; thus, I wouldn't put it in the category of basic morals. (Actually, that is the standard I hold others to; for myself, I would consider it a moral failure if I didn't practice EA!) I think it actually is important to differentiate between basic and, let's say, more "advanced" morals, because if people think that you consider them immoral, they will hate you. For instance, promoting EA as a basic moral that makes one a "bad person" if she doesn't practice it will just result in backlash from people discovering EA. No one wants to be judged.

The point I was trying to make is that EAs should be aware of moral licensing, which means giving oneself an excuse to be less ethical in one department because you see yourself as being extra-moral in another. If there is a tradeoff between exercising basic morals and doing some high-impact EA activity, I would go with the EA (assuming you are not actually creating harm, of course). For instance, I don't give blood because the last time I did I was lightheaded for months. Besides decreasing my quality of life, it would also hurt my ability to do EA. I wouldn't say giving blood is an act of basic morality, but it is still an altruistic action that few people can confidently say they are too important to consider doing. Do you not agree that if doing something good doesn't prevent you from doing something higher-impact, then it would be morally preferable to do it? For instance, treating people with kindness: people shouldn't stop being kind to others just because it won't result in some high global impact.

Comment author: Austen_Forrester 06 August 2017 07:13:05AM 1 point

I think it may be useful to differentiate between EA and regular morals. I would put donating blood in the latter category. For instance, treating your family well isn't high impact on the margin, but people should still do it because of basic morals, see what I mean? I don't think that practicing EA somehow excuses someone from practicing good general morals. I think EA should be in addition to general morals, not replace them.

Comment author: KevinWatkinson 18 July 2017 08:23:48AM 0 points

This can be an issue, but I think Matt Ball has chosen not to present a strong position because he believes that is off-putting; instead, he undermines the strong position and presents a sub-optimal one. However, he says this is in fact optimal as it reduces more harm.

If applied to EA, we would undermine a position we believe might put people off because it is too complicated or esoteric, and instead present a first step that will do more good.

Comment author: Austen_Forrester 18 July 2017 07:04:23PM *  0 points

My point was that EAs probably should exclusively promote full-blown EA, because that has a good chance of leading to more uptake of both full-blown and weak EA. Ball's issue with people choosing to go part-way after hearing the veg message is that it often leads to more animals being killed, due to people replacing beef and pork with chicken. That's a major impetus for his direct "cut out chicken before pork and beef" message. It doesn't undermine veganism, because chicken-reducers are more likely to continue on towards that lifestyle, probably even more so than someone who went vegetarian right away. Vegetarians have a very high drop-out rate, but many believe that those who transitioned gradually last longer.

I think that promoting effectively giving 10% of one's time and/or income (for the gainfully employed) is a good balance between promoting a high-impact lifestyle and being rejected due to high demandingness. I don't think it would be productive to lower the bar on that (e.g. by saying cause neutrality is optional).

Comment author: Austen_Forrester 18 July 2017 02:25:59AM 0 points

One thing to keep in mind is that people often (or even usually) choose the middle ground by themselves. Matt Ball often mentions how this happens in animal rights, with people deciding to reduce meat after learning about the merits of vegetarianism, and notes that Nobel laureate Herb Simon is known for this observation that people opt for sub-optimal decisions.

Thus, I think that in promoting pure EA, most people will practice weak EA (i.e. not cause-neutral) of their own accord, so perhaps the best way to proliferate weak EA is by promoting strong EA.

Comment author: Austen_Forrester 08 July 2017 11:04:16PM 0 points

I totally understand your concern that the EA movement is misrepresenting itself by not promoting issues in proportion to their representation among people in the group. However, I think that the primary consideration in promoting EA should be what will hook people. Very few people in the world care about AI as a social issue, but extreme poverty and injustice are very popular causes that can attract people. I don't actually think it should matter for outreach what the most popular causes are among community members. Outreach should be based on what is likely to attract the masses to practice EA (without watering it down by promoting low-impact causes, of course). Also, I believe it's possible to be too inclusive of moral theories. Dangerous theories that incite terrorism, like Islamic or negative-utilitarian extremism, should be condemned.

Also, I'm not sure to what extent people in the community even represent people who practice EA. Those are two very different things. You can practice EA, for example by donating a chunk of your income to Oxfam every year, without having anything to do with others who identify with EA, and you can be a regular at EA meetups and discuss related topics often (i.e. be a member of the EA community) without donating or doing anything high impact. Perhaps the most popular issues acted on by those who practice EA are different from those discussed by those who like to talk about EA. Being part of the EA community doesn't give one any moral authority in itself.

Comment author: KevinWatkinson 03 July 2017 02:09:49PM *  3 points

First of all, we would need to accept that there are different approaches, and consider what they are before evaluating effectiveness.

The issue with Effective Altruism is that it is fairly one-dimensional when it comes to animal advocacy. That is, it works with the system of animal exploitation rather than counter to it, so primarily welfarism and reducetarianism. In relation to these ideas we need to view the subsequent counterfactual analysis, and yet where is it? I've asked these sorts of questions, and it seems that people haven't applied some fundamental aspects of Effective Altruism to these issues. They are merely assumed.

For some time it has appeared as if EA has been working off a strictly utilitarian script, and has ignored or marginalised other ideas. Partly this has arisen because of the limited pool of expertise that EA has chosen to draw upon, and this has had a self-replicating effect.

Recently I read through some of Holden Karnofsky's thoughts on Hits-based Giving, and something towards the end of the essay particularly chimed with me.

"Respecting those we interact with and avoiding deception, coercion, and other behavior that violates common-sense ethics. In my view, arrogance is at its most damaging when it involves “ends justify the means” thinking. I believe a great deal of harm has been done by people who were so convinced of their contrarian ideas that they were willing to violate common-sense ethics for them (in the worst cases, even using violence).

As stated above, I’d rather live in a world of individuals pursuing ideas that they’re excited about, with the better ideas gaining traction as more work is done and value is demonstrated, than a world of individuals reaching consensus on which ideas to pursue. That’s some justification for a hits-based approach. But with that said, I’d also rather live in a world where individuals pursue their own ideas while adhering to a baseline of good behavior and everyday ethics than a world of individuals lying to each other, coercing each other, and actively interfering with each other to the point where coordination, communication and exchange break down.

On this front, I think our commitment to being honest in our communications is important. It reflects that we don’t think we have all the answers, and we aren’t interested in being manipulative in pursuit of our views; instead, we want others to freely decide, on the merits, whether and how they want to help us in our pursuit of our mission. We aspire to simultaneously pursue bold ideas and remember how easy it would be for us to be wrong."

I think in time we will view the present EAA approach as having commonalities with Karnofsky's concerns, and steps will be taken to broaden the EAA agenda to be more inclusive. I think it is unlikely, however, that these changes will be sought or encouraged by movement leaders, and even within groups such as ACE I remain concerned about bias within leadership toward the 'mainstream' approach. Unfortunately, ACE has historically been underfunded, and has not received the support it has needed to properly account for movement issues, or to increase the range of the work it undertakes. I think this is partly a leadership issue, in that aims and goals have not been reasonably set and pursued, and also an EA movement issue, where a certain complacency has set in.

http://www.openphilanthropy.org/blog/hits-based-giving

Comment author: Austen_Forrester 05 July 2017 12:06:02AM 2 points

I don't see how TYLCS is selling out at all. They have the same impact-maximizing message as other EA groups, just with a more engaging feel that also appeals to emotions (the only driver of action in almost all people).

Matt Ball is more learned and impact-focused than anyone in the animal rights field. One Step for Animals and the Reducetarian Foundation were formed to save as many animals as possible -- complementing, not replacing, vegan advocacy. Far from selling out, One Step and Reducetarian are exceptions to the many in animal rights who have traded their compassion for animals for feelings of superiority.
