Comment author: Kaj_Sotala 20 July 2017 11:10:53PM *  11 points [-]

This looks sensible to me. I'd just quickly note that I'm not sure if it's quite accurate to describe this as "FRI's metaphysics", exactly - I work for FRI, but haven't been sold on the metaphysics that you're criticizing. In particular, I find myself skeptical of the premise "suffering is impossible to define objectively", which you largely focus on. (Though part of this may be simply because I haven't yet properly read/considered Brian's argument for it, so it's possible that I would change my mind about that.)

But in any case, I've currently got three papers in various stages of review, submission, or preparation (which other FRI people have helped me with), and none of those papers presupposes this specific brand of metaphysics. There's a bunch of other work being done, too, which I know of and which I don't think presupposes it. So it doesn't feel quite accurate to me to suggest that the metaphysics would be holding back our progress, though of course some of the research being carried out may be explicitly committed to this particular metaphysics.

(opinions in this comment purely mine, not an official FRI statement etc.)

Comment author: casebash 17 July 2017 03:18:35AM *  2 points [-]

A few thoughts:

  • If you believe that existential risk is literally the most important issue in the world and that we will be facing possible extinction events imminently, then it follows that we can't wait to develop a mass movement and instead need to find a way to make the small, exceptional-group strategy work (though we may also spread low-level EA, just not as our focus)
  • I suspect that most EAs would agree that spreading low-level EA is worthwhile. The first question is whether this should be the focus/a major focus (as noted above). The second question is whether this should occur within EA or be a spin-off/a set of spin-offs. For example, I would really like to see an Effective Environmentalism movement.
  • Some people take issue with the name Effective Altruism because it implies that everything else is Ineffective Altruism. Your suggestion might mitigate this to a certain extent, but we really need better names!
Comment author: Kaj_Sotala 17 July 2017 10:02:17AM 3 points [-]

I agree that if one thinks that x-risk is an immediate concern, then one should focus specifically on that now. This is explicitly a long-term strategy, so assumes that there will be a long term.

Comment author: John_Maxwell_IV 17 July 2017 04:23:26AM *  6 points [-]

Does anyone know which version of your analogy early science actually looked like? I don't know very much about the history of science, but it seems worth noting that science is strongly associated with academia, which is famous for being exclusionary & elitist. ("The scientific community" is almost synonymous with "the academic science community".)

Did science ever call itself a "movement" the way EA calls itself a movement? My impression is that the skeptic movement (the thing that spreads scientific ideas and attitudes through society at large) came well after science proved its worth. If broad scientific attitudes were a prerequisite for science, that would predict that the popular atheism movement should have come several centuries sooner than it did.

If your goal is to promote scientific progress, it seems like you're better off focusing on a few top people who make important discoveries. There's plausibly something similar going on with EA.

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups? If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

The current situation WRT growth of the EA movement seems like it could be the worst of both worlds. The EA movement does marketing, but we also have discussions internally about how exclusive to be. So people hear about EA because of the marketing, but they also hear that some people in the EA movement think that maybe the EA movement should be too exclusive to let them in. We'd plausibly be better off if we adopted a compromise position of doing less marketing and also having fewer discussions about how exclusive to be.

Growth is a hard-to-reverse decision. Companies like Google are very selective about who they hire because firing people is bad for morale. The analogy here is that instead of "firing" people from EA, we're better off if we don't do outreach to those people in the first place.

[Highly speculative]: One nice thing about companies and universities is that they have a clear, well-understood inclusion/exclusion mechanism. In the absence of such a mechanism, you can get concentric circles of inclusion/exclusion and associated internal politics. People don't resent Harvard for rejecting them, at least not for more than a month or two. But getting a subtle cold shoulder from people in the EA community will produce a lasting negative impression. Covert exclusiveness feels worse than overt exclusiveness, and having an official party line that "the EA movement must be welcoming to everyone" will just cause people to be exclusive in a more covert way.

Comment author: Kaj_Sotala 17 July 2017 10:00:47AM 2 points [-]

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups?

I must have expressed myself badly somehow - I specifically meant that "low-level EA" would be composed of multiple groups. What gave you the opposite impression?

For example, the current situation is that organizations like the Centre for Effective Altruism and Open Philanthropy Project are high-level organizations: they are devoted to finding the best ways of doing good in general. At the same time, organizations like Centre for the Study of Existential Risk, Animal Charity Evaluators, and Center for Applied Rationality are low-level organizations, as they are each devoted to some specific cause area (x-risk, animal welfare, and rationality, respectively). We already have several high- and low-level EA groups, and spreading the ideas would ideally cause even more of both to be formed.

If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

This seems completely compatible with what I said? For my own part, I'm definitely interested in trying to achieve a more influential position in order to better spread these ideas.

Comment author: Taylor 16 July 2017 05:38:41PM *  3 points [-]

Really appreciate you taking the time to write this up! My initial reaction is that the central point about mindset-shifting seems really right.

My proposal is to explicitly talk about two kinds of EA (these may need catchier names)

It seems (to me) “low-level” and “high-level” could read as value-laden in a way that might make people practicing “low-level” EA (especially in cause areas not already embraced by lots of other EAs) feel like they’re not viewed as “real” EAs and so work at cross-purposes with the tent-broadening goal of the proposal. Quick brainstorm of terms that make some kind of descriptive distinction instead:

  1. cause-blind EA vs. cause-specific or cause-limited EA
  2. broad EA vs. narrow EA
  3. inter-cause vs. intra-cause

(Thoughts/views only my own, not my employer’s.)

Comment author: Kaj_Sotala 17 July 2017 09:49:46AM 0 points [-]

"General vs. specific" could also be one

Comment author: Carl_Shulman 17 July 2017 12:51:38AM 10 points [-]

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich country fine arts' and other such restricted scope versions of EA.

My biggest objection to this is that to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much to be credible if stated explicitly), and optimizing within that restricted objective function may pick out things that are overall bad. For example, the recent media discussion comparing interventions purely in terms of their carbon emissions, without taking anything else into account, suggests that the existence of a member of a society with a GDP per capita of $56,000 is bad if it comes with carbon emissions whose social cost is $2,000 per person.
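To make the arithmetic explicit, here is a purely illustrative sketch (my own plugging-in of the round figures above, not a model from the cited discussion) of how the restricted objective flips the sign of the evaluation:

```python
# Illustrative only: compare a carbon-only objective with a broader accounting,
# using the round figures mentioned above (per person, per year, in USD).
gdp_per_capita = 56_000      # rough proxy for the value associated with one person's existence
carbon_social_cost = 2_000   # social cost of that person's carbon emissions

carbon_only_score = -carbon_social_cost               # restricted objective: only emissions count
broad_score = gdp_per_capita - carbon_social_cost     # broader objective: value minus emissions cost

print(f"carbon-only objective: {carbon_only_score:+,}")  # -2,000 -> "existence is bad"
print(f"broader objective:     {broad_score:+,}")        # +54,000 -> "existence is good"
```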

Comment author: Kaj_Sotala 17 July 2017 09:48:53AM *  1 point [-]

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich country fine arts' and other such restricted scope versions of EA.

Thanks for the link! I did a quick search to find if someone had already said something similar, but missed that.

My biggest objection to this is that to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much to be credible if stated explicitly), and optimizing within that restricted objective function may pick out things that are overall bad.

I'm not sure whether the first one is really an issue - just saying that "these are general tools that you can use to improve whatever it is that you care about, and if you're not sure what you care about, you can also apply the same concepts to find that" seems reasonable enough to me, and not particularly gerrymandered.

I do agree that optimizing too specifically within some narrow domain can be a problem that produces results that are globally undesirable, though.

Comment author: Ajeya 17 July 2017 04:11:39AM 8 points [-]

Views my own, not my employer's.

Thanks for writing this up! I agree that it could be a big win if general EA ideas besides cause prioritization (or the idea of scope-limited cause prioritization) spread to the point of being as widely accepted as environmentalism. Some alternatives to this proposal though:

  1. It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc completely unconnected to altruism than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc than about their preferred altruistic causes. And it really would be a big win if they succeeded -- less ambiguously so than with narrow EA I think (see Carl's comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.
  2. Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I'm unsure how difficult this would be.

Both of these alternatives seem to have what is (to me) an advantage: they don't involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people can have high trust collaboration relatively quickly.

FWIW, I think I would be much more excited to evangelize broad low-level EA memes if there were some strong alternative channel to distinguish cause-neutral, super intense/obsessive EAs. Science has a very explicit distinction between science fans and scientists, and a very explicit funnel from one to the other (several years of formal education). EA doesn't have that yet, and may never. My instinct is that we should work on building a really really great "product", then build high and publicly-recognized walls around "practitioners" and "consumers" (a practical division of labor rather than a moral high ground thing), and then market the product hard to consumers.

Comment author: Kaj_Sotala 17 July 2017 09:43:15AM 3 points [-]

Thanks for the comment!

  1. It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc completely unconnected to altruism than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc than about their preferred altruistic causes. And it really would be a big win if they succeeded -- less ambiguously so than with narrow EA I think (see Carl's comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.

I agree with the "lack of obvious low-hanging fruit". It doesn't actually seem obvious to me how useful these concepts are to people in general, as opposed to more specific concrete advice (such as specific exercises for improving their social skills etc.). In particular, Less Wrong has been devoted to roughly this kind of thing, and even among LW regulars who may have spent hundreds of hours participating on the site, it's always been controversial whether the concepts they've learned from the site have translated into any major life gains. My current inclination would be that "general thinking skills" just aren't very useful for dealing with your practical life, and that concrete domain-specific ideas are much more useful.

You said that people in general care much more about concrete things in their own lives than their preferred altruistic causes, and I agree with this. But on the other hand, the kinds of people who are already committed to working on some altruistic cause are probably a different case: if you're already devoted to some specific goal, then you might have more of an interest in applying those things. If you first targeted people working in existing organizations and won them over to using these ideas, then they might start teaching the ideas to all of their future hires, and over time the concepts could start to spread to the general population more.

  2. Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I'm unsure how difficult this would be.

Maybe. One problem here is that some of these correlate only very loosely with EA: plenty of people who aren't EAs have completed a math education. And I think that another problem is that in order to really internalize an idea, you need to actively use it. My thinking here is similar to Venkatesh Rao's, who wrote:

Strong views represent a kind of high sunk cost. When you have invested a lot of effort forming habits, and beliefs justifying those habits, shifting a view involves more than just accepting a new set of beliefs. You have to:

  1. Learn new habits based on the new view
  2. Learn new patterns of thinking within the new view

The order is very important. I have never met anybody who has changed their reasoning first and their habits second. You change your habits first. This is a behavioral conditioning problem largely unrelated to the logical structure and content of the behavior. Once you’ve done that, you learn the new conscious analysis and synthesis patterns.

This is why I would never attempt to debate a literal creationist. If forced to attempt to convert one, I’d try to get them to learn innocuous habits whose effectiveness depends on evolutionary principles (the simplest thing I can think of is A/B testing; once you learn that they work, and then understand how and why they work, you’re on a slippery slope towards understanding things like genetic algorithms, and from there to an appreciation of the power of evolutionary processes).

I wouldn't know how to spread something like cosmopolitanism, to a large extent because I don't know how to teach the kind of thinking habits that would cause you to internalize cosmopolitanism. And even after that, there would still be the step of getting from all of those prerequisites to actually applying EA principles in practice. In contrast, teaching EA concepts by getting people to apply them to a charitable field they already care about gets them into applying EA-ish thinking habits directly.

Both of these alternatives seem to have what is (to me) an advantage: they don't involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people can have high trust collaboration relatively quickly.

That's an interesting view, which I hadn't considered. I might view it more as a disadvantage, in that in the model that I was thinking of, people who got into low-level EA would almost automatically also be exposed to high-level EA, causing the idea of high-level EA to spread further. If you were only teaching related concepts, that jump from them to high-level EA wouldn't happen automatically, but would require some additional steps. (That said, if you could teach enough of those prerequisites, maybe the jump would be relatively automatic. But this seems challenging for the reasons I've mentioned above.)

Comment author: Ben_Todd 17 July 2017 04:47:34AM 18 points [-]

Hey Kaj,

I agree with a lot of these points. I just want to throw some counter-points out there for consideration. I'm not necessarily endorsing them, and don't intend them as a direct response, but thought they might be interesting. It's all very rough and quickly written.

1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they're already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.

It becomes particularly difficult because the leaders, who do the broad outreach, want to focus on high-level EA. It's more transparent and open to pitch high-level EA directly.

There are probably ways you could implement a division without incurring these problems, but it would need some careful thought.

2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It's what makes us different from simply "competent" do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in scale of influence to offset this.

3) Often the best way to promote general ideas is to live them. With your example of promoting science, people often seem to think the Royal Society was important in building the scientific culture in the UK. It was an elite group of scientists who just went about the business of doing science. Early members included Newton and Boyle. The society brought likeminded people together, and helped them to be more successful, ultimately spreading the scientific mindset.

Another example is Y Combinator, which has helped to spread norms about how to run startups, encourage younger people to do them, reduce the power of VCs, and have other significant effects on the ecosystem. The partners often say they became famous and influential due to reddit -> dropbox -> airbnb, so much of their general impact was due to having a couple of concrete successes.

Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on doing is just having a huge impact - finding the "airbnb of EA" or the "Newton of EA".

Comment author: Kaj_Sotala 17 July 2017 09:02:55AM *  4 points [-]

Thanks!

1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they're already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.

Yeah, agreed. Though part of what I was trying to say is that, as you mentioned, we have the high/low distinction already - "implementing" that distinction would just be giving an explicit name to something that already exists. And something that has a name is easier to refer to and talk about, so having some set of terms for the two types could make it easier to be more transparent about the existence of the distinction when doing outreach. (This would be the case regardless of whether we want to expand EA to lower-impact causes or not.)

2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It's what makes us different from simply "competent" do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in scale of influence to offset this.

I guess the question here is: how much would efforts to bring in low-level EAs hurt the efforts to bring in high-level EAs? My intuition is that the net effect would be to bring in more high-level EAs overall (a smaller percentage of incoming people would become high-level EAs, but that would be offset by there being more incoming people overall), but I don't have any firm support for that intuition, and one would have to test it somehow.

3) Often the best way to promote general ideas is to live them. ... Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on doing is just having a huge impact - finding the "airbnb of EA" or the "Newton of EA".

I agree that the best way to promote general ideas can be to live them. But I think we need to be more specific about what a "huge impact" would mean in this context. E.g. High Impact Science suggests that Norman Borlaug is one of the people who have had the biggest positive impact on the world - but most people have probably never heard of him. So for spreading social norms, it's not enough to live the ideas and make a big impact, one has to do it in a sufficiently visible way.

Comment author: Jess_Riedel 16 July 2017 04:29:16PM 4 points [-]

EAs seem pretty open to the idea of being big-tent with respect to key normative differences (animals, future people, etc). But total indifference to cause area seems too lax. What if I just want to improve my local neighborhood or family? Or my country club? At some point, it becomes silly.

It might be worth considering parallels with the Catholic Church and the Jesuits. The broader church is "high level", but the requirements for membership are far from trivial.

Comment author: Kaj_Sotala 16 July 2017 04:43:23PM 2 points [-]

"Total indifference to cause area" isn't quite how I'd describe my proposal - after all, we would still be talking about high-level EA, a lot of people would still be focused on high-level EA and doing that, etc. The general recommendation would still be to go into high-impact causes if you had no strong preference.

An argument for broad and inclusive "mindset-focused EA"

Summary: I argue for a very broad, inclusive EA, based on the premise that the culture of a region is more important than any specific group within that region, and that broad and inclusive EA will help shift the overall culture of the world in a better direction. As a concrete...
Comment author: AlexMennen 10 July 2017 06:11:08AM *  2 points [-]

There's a strong possibility, even in a soft takeoff, that an unaligned AI would not act in an alarming way until after it achieves a decisive strategic advantage. In that case, the fact that it takes the AI a long time to achieve a decisive strategic advantage wouldn't do us much good, since we would not pick up an indication that anything was amiss during that period.

Reasons an AI might act in a desirable manner before but not after achieving a decisive strategic advantage:

  • Prior to achieving a decisive strategic advantage, the AI relies on cooperation with humans to achieve its goals, which provides an incentive not to act in ways that would result in it getting shut down. An AI may be capable of following these incentives well before achieving a decisive strategic advantage.
  • It may be easier to give an AI a goal system that aligns with human goals in familiar circumstances than it is to give it a goal system that aligns with human goals in all circumstances. An AI with such a goal system would act in ways that align with human goals while it has little optimization power, but in ways that are not aligned with human goals once it has sufficiently large optimization power; and it may attain that much optimization power only after achieving a decisive strategic advantage (or before achieving one, but after acquiring the ability to behave deceptively, as in the previous reason).

Comment author: Kaj_Sotala 10 July 2017 06:45:32PM 4 points [-]

There's a strong possibility, even in a soft takeoff, that an unaligned AI would not act in an alarming way until after it achieves a decisive strategic advantage.

That's assuming that the AI is confident that it will achieve a DSA eventually, and that no competitors will do so first. (In a soft takeoff it seems likely that there will be many AIs, thus many potential competitors.) The worse the AI thinks its chances are of eventually achieving a DSA first, the more rational it becomes for it to risk non-cooperative action at the point when it thinks it has the best chance of success - even if that chance is low. That might help reveal unaligned AIs during a soft takeoff.

Interestingly, this suggests that the more AIs there are, the easier it might be to detect unaligned AIs (since every additional competitor decreases any given AI's odds of getting a DSA first). It also suggests some unintuitive containment strategies, such as explicitly explaining to the AI when it would be rational for it to turn uncooperative if it were unaligned, to increase the odds of unaligned AIs risking hostile action early on and being discovered...
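As a toy illustration of that intuition (my own sketch, with made-up numbers; it assumes roughly symmetric competitors and ignores any extra cost of being caught):

```python
# Toy model: an unaligned AI choosing between waiting for a decisive strategic
# advantage (DSA) and risking early, detectable non-cooperative action.
# With n roughly symmetric competitors, P(winning the DSA race) ~ 1/n.

def expected_value_wait(n_competitors: int, dsa_payoff: float = 1.0) -> float:
    """Expected payoff of waiting: win the race with probability ~1/n, else nothing."""
    return dsa_payoff / n_competitors

def expected_value_early(p_success: float, early_payoff: float = 1.0) -> float:
    """Expected payoff of acting early with some small chance of success."""
    return p_success * early_payoff

if __name__ == "__main__":
    p_early = 0.05  # even a 5% chance of early success...
    for n in (2, 5, 10, 50):
        wait, early = expected_value_wait(n), expected_value_early(p_early)
        print(f"n={n:3d}  E[wait]={wait:.3f}  E[early]={early:.3f}  "
              f"-> {'act early' if early > wait else 'wait'}")
    # As n grows, E[wait] drops below even a small p_early, so risky early action
    # (which is what would reveal the AI as unaligned) becomes the rational gamble.
```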
