Comment author: casebash 17 July 2017 03:18:35AM *  2 points [-]

A few thoughts:

  • If you believe that existential risk is literally the most important issue in the world and that we may face possible extinction events imminently, then it follows that we can't wait to develop a mass movement and instead need to find a way to make the small, exceptional-group strategy work (we might still spread low-level EA, but not as our focus)
  • I suspect that most EAs would agree that spreading low-level EA is worthwhile. The first question is whether this should be the focus, or at least a major focus (as noted above). The second question is whether this should occur within EA or as one or more spin-offs. For example, I would really like to see an Effective Environmentalism movement.
  • Some people take issue with the name Effective Altruism because it implies that everything else is Ineffective Altruism. Your suggestion might mitigate this to a certain extent, but we really need better names!
Comment author: MichaelPlant 10 July 2017 06:32:44PM *  1 point [-]

Thanks for the update. That's helpful.

However, it does seem a bit hard to reconcile GWWC's and 80k's positions on this topic. GWWC (i.e. you) seems to be saying "most EAs care about poverty, so that's what we'll emphasise", whereas 80k (i.e. Ben Todd above) seems to be saying "most EAs do (/should?) care about X-risk, so that's what we'll emphasise".

These conclusions seem to be in substantial tension, which may itself confuse both new and old EAs.

Comment author: casebash 11 July 2017 12:25:15AM 0 points [-]

80k is now separate from CEA, or is in the process of being separated from it. They are allowed to come to different conclusions.

Comment author: casebash 09 July 2017 02:07:59AM *  10 points [-]

Effective Altruism is quite difficult to explain if you want to capture all of its complexity. I think it is a completely valid choice for an introductory talk to focus on one aspect of Effective Altruism, as otherwise many people will have trouble following.

I would suggest letting people know that you are only covering one aspect of Effective Altruism, e.g. "Effective Altruism is about doing the most good that you can with the resources available to you. This talk will cover how Effective Altruism has been applied to charity, but it is worth noting that Effective Altruism has also been applied to other issues like animal welfare or ensuring the long-term survival of humanity".

This reduces the confusion when they hear about these issues later and reduces the chance that they will feel misled. At the same time, it avoids throwing too many new ideas at a person at once, which may reduce their comprehension, and it shows how Effective Altruism applies to an issue they may already care about.

Comment author: casebash 03 July 2017 04:56:26PM 2 points [-]

Interesting strategy, focusing on building networks rather than on direct activities. I'll be very curious to see whether this works!

Comment author: casebash 25 June 2017 02:16:05AM 2 points [-]

The funding for meta-EA still seems to be a potential bottleneck in the short term. This is because there are many people who already care about concrete issues like poverty and animal rights and want to give their money to something that will have an impact. Even existential risk has funders like Musk and Tallinn. Meta work, on the other hand, seems to be funded only by Good Ventures, and only to a limited extent. If you believe this work is important, then earning to give may be an effective strategy.

Comment author: MichaelPlant 14 June 2017 10:16:36AM 2 points [-]

This is a purposefully vague warning for reasons that should not need to be said. Unfortunately, this forces this post to discuss these issues at a higher level of generality than might be ideal, and so there is definitely merit to the claim that this post only deals in generalisations. For this reason, this post should be understood more as an outline of an argument than as an actual crystallised argument.

I found this post unhelpful and this part of it particularly so. Your overall point - "don't concede too much on important topics" - seems reasonable, but as I don't know what topics you're referring to, or what would count as 'too much' on those, I can't learn anything.

More generally, I find EAs who post things of the flavour "we shouldn't do X, but I can't tell you what I mean by X for secret reasons" annoying, alienating and culty and wish people wouldn't do it.

Comment author: casebash 15 June 2017 02:04:09PM 0 points [-]

Anyway, I tried providing an example, though I still avoided discussing any concrete issues directly. Hopefully this makes the principle clearer, even if I haven't directly explained how EA should apply it.

Comment author: Evan_Gaensbauer 15 June 2017 07:15:37AM 1 point [-]

There are ongoing controversies in EA, even if they're not obvious. That is, there are lots of ongoing debates in EA that flare up occasionally but remain unresolved in terms of what concessions different effective altruists think our community ought to be willing to make. I might cover some of those in the near future, and I'll cite this blog post. This is valuable in that I'd be covering object-level controversies, and having the outline of an argument established here on the forum in a neutral fashion beforehand will be helpful. Thanks for writing this.

Comment author: casebash 15 June 2017 01:59:40PM 0 points [-]

Happy to hear that this post has been of use to at least one person. Anyway, I added an additional example (Andrew and Bob) to further clarify the situation, as my post was too general for some people.

Comment author: Michael_Wulfsohn 14 June 2017 07:24:44AM 3 points [-]

Sounds like a really interesting and worthwhile topic to discuss. But it's quite hard to be sure I'm on the same page as you without a few examples. Even hypothetical ones would do. "For reasons that should not need to be said" - unfortunately I don't understand the reasons; am I missing something?

Anyway, speaking in generalities, I believe it's extremely tempting to assume an adversarial dynamic exists. Nine times out of ten, it's probably a misunderstanding. For example, if a condition is given that isn't palatable, it's worth finding out the underlying reasons for the condition and trying to satisfy them in other ways. Since humans have a tendency towards "us vs them" tribal thinking, there's considerable value in making the effort to find common ground, establish mutual understanding, and reframe the interaction as collegial rather than adversarial.

This isn't meant as an argument against what you've said.

Comment author: casebash 15 June 2017 01:57:41PM *  0 points [-]

I've expanded the first paragraph and added a hypothetical example. Let me know if this clarifies the situation.

EDIT: Oh, I also added in a direct response to your comment.

Comment author: innov8tor3 14 June 2017 08:04:37AM *  1 point [-]

FYI, the subject of unification versus diversity is one the EoST community debates with great frequency and vigour: biomimicry may suggest that diversity is nature's way of helping us survive...

However, for unity of purpose, some useful umbrellas are: Global Abundance; Education; Health; Eco Sustainability...

Comment author: casebash 15 June 2017 02:47:33AM 1 point [-]

EoST?

Comment author: casebash 14 June 2017 10:40:26AM *  6 points [-]

"Annoying, alienating and culty and wish people wouldn't do it" - I would like to suggest that this is a bit of an overreaction given that this is just one post and almost no other posts on this forum are like this. It hardly seems like this forum is at risk of being overrun.
