Comment author: geoffreymiller  (EA Profile) 16 August 2017 09:36:00PM 3 points [-]

I agree that growing EA in China will be important, given China's increasing wealth, clout, confidence, and global influence. If EA fails to reach a critical mass in China, its global impact will be handicapped in 2 to 4 decades. But, as Austen Forrester mentioned in another comment, the charity sector may not be the best beachhead for a Chinese EA movement.

Some other options: First, I imagine China's government would be motivated to think hard about X-risks, particularly in AI and bioweapons -- and they'd have the decisiveness, centralized control, and resources to really make a difference. If they can build 20,000 miles of high-speed rail in just one decade, they could probably make substantial progress on any challenge that catches the Politburo's attention. Also, they tend to take a much longer-term perspective than Western 'democracies', planning fairly far into the mid to late 21st century. And of course if they don't take AI X-risk seriously, all other AI safety work elsewhere may prove futile.

Second, China is very concerned about 'soft power' -- global influence through its perceived magnanimity. This is likely to happen through government do-gooding rather than from private charitable donations. But gov't do-gooding could be nudged into more utilitarian directions with some influence from EA insights -- e.g. China eliminating tropical diseases in areas of Africa where it's already a neocolonialist resource-extraction power, or reducing global poverty or improving governance in countries that could become thriving markets for its exports.

Third, lab meat & animal welfare: China's government knows that a big source of subjective well-being for people, and a contributor to 'social stability', is meat consumption. The Chinese consume more than half of all pork globally and maintain a 'strategic pork reserve', yet the government plans to reduce meat consumption by 50% for climate-change reasons. This probably creates a concern for the gov't: people love their pork, but if they're told to simply stop eating it in the service of reducing global warming, they will be unhappy. The solution could be lab-grown meat. If China invested heavily in that technology, it could get all the climate-change benefits of reduced livestock farming without people becoming resentful and unhappy about having to eat less meat. So that seems like a no-brainer for getting the Chinese gov't interested in lab meat.

Fourth, with rising affluence, young Chinese middle-class people are likely to have the kind of moral/existential/meaning-of-life crises that hit the US baby boomers in the 1960s. They may be looking for something genuinely meaningful to do with their lives beyond workaholism & consumerism. I think 80,000 Hours could prove very effective in filling this gap, if it developed materials suited to the Chinese cultural, economic, and educational context.

Comment author: Austen_Forrester 10 January 2018 07:11:36PM 0 points [-]

I didn't mean to imply that it was hopeless to increase charitable giving in China; rather the opposite: it's so bad it can only go up! Besides that, I agree with all your points.

The Chinese government already provides foreign aid in Africa to further its interests in the region. I was thinking of how we could possibly get them to expand it. The government seems almost impossible to influence, but perhaps EAs could influence African governments to solicit more foreign aid from China? It could have a negative consequence, however, in that receiving more aid from China may make African countries more susceptible to accepting bad trade deals, etc.

I don't know how to engage with China, but I do strongly feel that it holds huge potential for both altruism and GCRs, which shouldn't be ignored. I like CEA's approach of seeking out China generalist experts. There are a number of existing Western-China think tanks that could be useful to the movement, but I think that a "China czar" for EA is a necessity.

Comment author: [deleted] 02 December 2017 06:53:29PM *  -2 points [-]

I left most EA Facebook groups and concluded that EA will be an ineffective movement as a whole, because I found basically NONE of the above being done in your organization. Ever. "Being intellectually fair can help people to resolve disagreements, so we have norms against overconfidence and fallacious reasoning." No, you have a norm of extreme overconfidence and fallacious reasoning, in the form of DEMANDS for "arguments from authority", which were the consistent response I encountered. More than half a dozen EA people "explained" to me that they would pay no attention to my claims or work until I went back to university and got a PhD, and others who had only a Bachelor's in computer programming wanted to "review" my work in unrelated areas before even accepting an unpaid article for their blog from me. Others responded as if EA were some sort of popularity contest and not an effort to help others altruistically.

As constituted, EA is an exercise in the wildly overblown egos of privileged young white males (mostly) who will accomplish very, very little. The norm is that they believe they know literally everything and have no interest in hearing ideas that are new to them, at all.

I joined because I have knowledge to share. The "moderators" of the FB group consistently felt my knowledge was of no value and refused to permit my posts to be seen. I have shared my knowledge at three international academic conferences, but it was not deemed worthy of a single FB post on EA. The message was abundantly clear: EA does not want new ideas or knowledge, and does not want to see any of its current ideas and assumptions questioned at all. My advice to anyone who wants to "Share knowledge. If you know a lot about an area, help others to learn by writing up what you’ve found" is to find a group where people might have even a slight interest; your efforts to do so at "Effective" altruism will be entirely ineffective.

It is a damned shame; the concept of EA is a good one.

Comment author: Austen_Forrester 02 January 2018 05:04:09PM *  -2 points [-]

I agree with you. "Effective altruists" are not interested in helping others, only in furthering their elite white atheist demographic or showing that they are intellectually and morally superior as individuals. They will steal my ideas and recommendations because they know they are robust, while shunning me because I'm outside of their demographic and using the downvoting system to hide comments made by someone outside their demographic.

People use the concept of EA, especially x-risks, as a front for world destruction, their true goal. Literally. Who would suspect that the very same people who are supposedly trying to save the world are themselves the ones looking to destroy it using weapons of mass destruction? They are the most dangerous group in the world.

Comment author: Austen_Forrester 29 October 2017 04:38:46PM 0 points [-]

I agree that financial incentives/disincentives result in failures (i.e. social problems) of all kinds. One of the biggest reasons, as I'm sure you mention at some point in your book, is corruption, e.g. the beef/dairy industry paying off environmental NGOs and governments to stay quiet about its environmental impact.

But don't you think that non-financial rewards/punishments also play a large role in impeding social progress, in particular social rewards/punishments? For example, people don't dress warmly enough in the winter because others will tease them for being uncool, and people bully others because they are then respected more.

Comment author: Austen_Forrester 26 October 2017 05:28:58AM 0 points [-]

It could be a useful framing. "Optimize" to some people may imply making something already good great, such as making the countries with the highest HDI even better, or helping emerging economies become high-income, rather than helping the countries with more suffering catch up to the happier ones. It could be viewed as helping a happy person become super happy rather than helping a sad person become happy. I know this narrow form of altruism isn't your intention; I'm just saying that "optimize" does have this connotation. I personally prefer "maximally benefit/improve the world." It's almost the same as your expression but without the make-good-even-better connotation.

I think EAs have always thought about the impact of collective action, but it's just really hard, or even impossible, to estimate how your personal efforts will further collective action and to compare that to more predictable forms of altruism.

Comment author: RobBensinger 06 September 2017 08:12:51AM 1 point [-]

"Existential risk" has the advantage over "long-term future" and "far future" that it sounds like a technical term, so people are more likely to Google it if they haven't encountered it (though admittedly this won't fully address people who think they know what it means without actually knowing). In contrast, someone might just assume they know what "long-term future" and "far future" means, and if they do Google those terms they'll have a harder time getting a relevant or consistent definition. Plus "long-term future" still has the problem that it suggests existential risk can't be a near-term issue, even though some people working on existential risk are focusing on nearer-term scenarios than, e.g., some people working on factory farming abolition.

I think "global catastrophic risk" or "technological risk" would work fine for this purpose, though, and avoids the main concerns raised for both categories. ("Technological risk" also strikes me as a more informative / relevant / joint-carving category than the others considered, since x-risk and far future can overlap more with environmentalism, animal welfare, etc.)

Comment author: Austen_Forrester 09 September 2017 04:36:08AM -1 points [-]

Of course, I totally forgot about the "global catastrophic risk" term! I really like it, and it doesn't only suggest extinction risks. Even its acronym sounds pretty cool. I also really like your "technological risk" suggestion, Rob. Referring to GCRs as "long-term future" is a pretty obvious branding tactic by those who prioritize GCRs. It is vague, misleading, and dishonest.

Comment author: Robert_Wiblin 01 September 2017 06:47:50PM *  7 points [-]

For next year's survey it would be good if you could change 'far future' to 'long-term future', which is quickly becoming the preferred terminology.

'Far future' makes the perspective sound weirder than it actually is, and creates the impression that you only care about events very far into the future, and not about all the intervening times as well.

Comment author: Austen_Forrester 04 September 2017 02:25:08PM 0 points [-]

For "far future"/"long term future," you're referring to existential risks, right? If so, I would think calling them existential or x-risks would be the most clear and honest term to use. Any systemic change affects the long term such as factory farm reforms, policy change, changes in societal attitudes, medical advances, environmental protection, etc, etc. I therefore don't feel it's that honest to refer to x-risks as "long term future."

Comment author: DonyChristie 06 August 2017 03:16:42PM *  0 points [-]

I'm curious what exactly you mean by regular morals. I try (and fail) not to separate my life into separate magisteria, but rather to see everything as making tradeoffs with every other thing and to point my life in the direction of the path that has the most global impact. I see EA as being regular morality, but extended with better tools that attempt to engage the full scope of the world. It seems like such a demarcation between incommensurable moral domains as you appear to be arguing for can allow a person to defend any status quo in their altruism, rather than critically examining whether their actions are doing the most good they can. In the case of blood, perhaps you're talking about the fuzzies budget instead of the utilons? Perhaps your position is something like 'Regular morals are the set of actions that, if upheld by a majority of people, will not lead to society collapsing, and due to anthropics I should not defect from my commitment to prevent society from collapsing. Blood donation is one of these actions', or this post?

Comment author: Austen_Forrester 06 August 2017 11:43:41PM 0 points [-]

By regular morals, I mean basic morals such as treating others how you would like to be treated, i.e. rules such that you would be a bad person if you failed to abide by them. While I don't consider EA supererogatory, neither do I think that not practicing EA makes someone a bad person; thus, I wouldn't put it in the category of basic morals. (Actually, that is the standard I hold others to; for myself, I would consider it a moral failure if I didn't practice EA!) I think it actually is important to differentiate between basic and, let's say, more “advanced” morals, because if people think that you consider them immoral, they will hate you. For instance, promoting EA as a basic moral, one that makes you a “bad person” if you don't practice it, will just result in backlash from people discovering EA. No one wants to be judged.

The point I was trying to make is that EAs should be aware of moral licensing, which means giving oneself an excuse to be less ethical in one department because you see yourself as being extra-moral in another. If there is a tradeoff between exercising basic morals and doing some high-impact EA activity, I would go with the EA (assuming you are not actually creating harm, of course). For instance, I don't give blood because the last time I did I was lightheaded for months. Besides decreasing my quality of life, it would also hurt my ability to do EA. I wouldn't say giving blood is an act of basic morality, but it is still an altruistic action that few people can confidently say they are too important to consider doing. Do you not agree that if doing something good doesn't prevent you from doing something higher impact, then it would be morally preferable to do it? For instance, treating people with kindness... people shouldn't stop being kind to others just because it won't result in some high global impact.

Comment author: Austen_Forrester 06 August 2017 07:13:05AM 1 point [-]

I think it may be useful to differentiate between EA and regular morals. I would put donating blood in the latter category. For instance, treating your family well isn't high impact on the margin, but people should still do it because of basic morals; see what I mean? I don't think that practicing EA somehow excuses someone from practicing good general morals. I think EA should be in addition to general morals, not replace them.

Comment author: KevinWatkinson  (EA Profile) 18 July 2017 08:23:48AM 0 points [-]

This can be an issue, but I think Matt Ball has chosen not to present a strong position because he believes it is off-putting; instead, he undermines the strong position and presents a sub-optimal one. However, he says this is in fact optimal because it reduces more harm.

If applied to EA, we would undermine a position we believe might put people off because it is too complicated or esoteric, and present a first step that will do more good.

Comment author: Austen_Forrester 18 July 2017 07:04:23PM *  0 points [-]

My point was that EAs probably should exclusively promote full-blown EA, because that has a good chance of leading to more uptake of both full-blown and weak EA. Ball's issue with people choosing to go part-way after hearing the veg message is that it often leads to more animals being killed, due to people replacing beef and pork with chicken. That's a major impetus for his direct “cut out chicken before pork and beef” message. It doesn't undermine veganism, because chicken-reducers are more likely to continue on towards that lifestyle, probably even more so than someone who went vegetarian right away. Vegetarians have a very high drop-out rate, but many believe that those who transition gradually last longer.

I think that promoting effectively giving 10% of one's time and/or income (for the gainfully employed) is a good balance between promoting a high-impact lifestyle and being rejected due to high demandingness. I don't think it would be productive to lower the bar on that (i.e. by saying cause neutrality is optional).

Comment author: Austen_Forrester 18 July 2017 02:25:59AM 0 points [-]

One thing to keep in mind is that people often (or even usually) choose the middle ground by themselves. Matt Ball often mentions how this happens in animal rights, with people deciding to reduce meat after learning about the merits of vegetarianism, and he notes that Nobel laureate Herb Simon is known for this observation that people opt for sub-optimal decisions.

Thus, I think that if we promote pure EA, most people will practice weak EA (i.e. not cause-neutral) of their own accord, so perhaps the best way to proliferate weak EA is by promoting strong EA.
