Comment author: itaibn 14 December 2017 04:54:50PM 6 points

I've spent some time thinking and investigating what the current state of affairs is, and here are my conclusions:

I've been reading through PineappleFund's comments. Many are responses to solicitations for specific charities, endorsing them as possibilities. One of these was for the SENS Foundation. Matthew_Barnett suggested that this is evidence that they particularly care about long-term-future causes, but given the diversity of other causes they endorsed, I think it is pretty weak evidence.

They haven't yet commented on any of the subthreads specifically discussing EA. However, these subthreads are ranked high by Reddit's comment-sorting algorithm and have many comments endorsing EA. This is already a good position and is difficult to improve on: they either like what they see or they don't. It might be better if the top-level comments explicitly described and linked to a specific charity, since that is what they responded well to in other comments, but I am cautious about making such surface-level generalizations, which might have more to do with the distribution of existing comments than with PineappleFund's tendencies.

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Comment author: DonyChristie 14 December 2017 09:09:54PM 6 points

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Oh dear! No, I didn't explicitly realize this beyond passing thoughts. In retrospect, I'm confused why this wasn't cached in my mind as being against reddiquette. I should eat my own dogfood regarding brigading. I edited the post so it's no longer soliciting. Let me know here or privately if there are any further fixes I should make (e.g., whether I should just remove the links to the known EA comments).

Comment author: DonyChristie 27 November 2017 09:48:25AM 0 points

Did you look into coherence therapy or other modalities that use memory reconsolidation? They are theoretically more potent than CBT.

Comment author: DonyChristie 11 November 2017 11:31:32PM 0 points

Having now installed the userstyles: in order to unblind (and re-blind) myself, do I need to click the Stylish icon and press 'Deactivate' on the style? This might be a trivial inconvenience.

Comment author: DonyChristie 31 October 2017 05:49:08PM 12 points

To what extent have you (whoever's in charge of CHS) talked with the relevant AI Safety organizations and people?

To what extent have you researched the technical and strategic issues, respectively?

What is CHS's comparative advantage in political mobilization and advocacy?

What do you think the risks are to political mobilization and advocacy, and how do you plan on mitigating them?

If CHS turned out to be net harmful rather than net good, what process would discover that, and what would the result be?

Comment author: DonyChristie 30 October 2017 07:20:37PM 3 points

Truly one of the most satiating interventions on the menu of causes!

Could you go more into the full list of what the food alternatives look like, and how tractable each of them is?

Comment author: DonyChristie 28 October 2017 01:42:18AM *  4 points

That is awesome and exciting!

What made you decide to go down this path? What decision-making procedure did you use? How would you advise other people to determine whether they are a good fit for charity entrepreneurship?

How do you plan on overcoming the lack of expertise? How does the reference class of nonprofit startups founded by non-experts compare to the reference class of nonprofit startups founded by experts?

fortify hEAlth

Is this the actual name? I personally think it's cute, but it might be confusing to those not familiar with the acronym.

I think what you're doing could be very high-impact compared to the counterfactual; indeed, it may be outright heroic. ^_^

Comment author: deluks917 26 October 2017 08:47:15PM *  10 points

You made an extremely long list of suggestions. Implementing such a huge list would mean radically overhauling the EA community. Is that a good idea?

I think it's important to keep in mind that the EA community has been tremendously successful. GiveWell and OpenPhil now funnel large amounts of money towards effective global poverty reduction efforts. EA has also made substantial progress at increasing awareness of AI risk and promoting animal welfare. There are now many student groups in universities around the world. EA has achieved these things in a rather short timeframe.

It's rather rare for a group to have success comparable to the current EA community's. Hence I think it's very dangerous to overhaul our community and its norms. We are doing very well. We could be doing better, but we are doing well. Making changes to the culture of a high-performing organization is likely to reduce performance. So I think you should be very careful about which changes you suggest.

In addition to being long, your list of changes includes many rather speculative suggestions. Here are some examples:

-- You explicitly say we should be more welcoming towards things like "dog rescue". Does this not risk diluting EA into just another ineffective community?

-- You say that using the term "AI" without explanation is too much jargon. Is that really a reasonable standard? AI is not an obscure term. If you want us to avoid the term "AI", your standards of accessibility seem rather extreme.

-- You claim we should focus on making altruistic people effective instead of effective people altruistic. However, Toby Ord claims he initially had the same intuition, but his experience is that the latter is actually easier. How many of your intuitions are you checking empirically? (This has been mentioned by other commenters.)

In general, I think you should focus on a much smaller list of core suggestions. It is easier to argue rigorously for a more conservative set of changes. And as I said earlier, EA is doing quite well, so we should be skeptical of dramatic culture shifts. Obviously we should be open to new norms, but those norms should be vetted carefully.

Comment author: DonyChristie 26 October 2017 09:56:06PM 4 points

I second most of these concerns.

Does this not risk diluting EA into just another ineffective community?

The core of EA is cause-neutral good-maximization. The more we cater to people who cannot switch their chosen object-level intervention, the less ability the movement will have to coordinate and switch tracks. They will become offended by suggestions that their chosen intervention is not the best one. As it is, I wish more people challenged how I prioritize things, but they probably don't, for fear of offending others as a general policy.

You say that using the term "AI" without explanation is too much jargon. Is that really a reasonable standard? AI is not an obscure term. If you want us to avoid the term "AI", your standards of accessibility seem rather extreme.

I am in favor of not dumbing down language: doing so adds a constraint on how I can communicate, since I have to keep checking whether a person understands each concept I am referring to. I do agree that jargon generation is sometimes fueled by the desire for weird neologisms more so than by the desire to increase clarity.

You claim we should focus on making altruistic people effective instead of effective people altruistic.

I once observed: "Effectiveness without altruism is lame; altruism without effectiveness is blind." 'Effectiveness' seems to load most of the Stuff that is needed; to Actually Do Good Things requires more of the Actually than the Good. It seems that caring about others takes less skill than being able to accomplish consequential things. I am open to persuasion otherwise; in my experience, most people are apathetic and nonchalant about the fate of the world, which is an enormous hindrance to becoming interested in effective altruism.

Comment author: MichaelPlant 26 October 2017 07:21:19PM 4 points

Thank you very much for bringing this up. Discussion about inclusivity is really conspicuous by its absence within EA. It's honestly really weird that we barely talk about it.

Three thoughts.

  1. I'd like to emphasise how important I think it is that members of the community try to speak in as jargon-free a way as possible. My impression is that this has been getting worse over time: there seems to be something of a jargon arms race as people (always males, typically those into 'rationality'-type stuff) actively drop in streams of unnecessary, technical, elitist, in-group-y words to make themselves look smart. I find this personally annoying, and I assume it's unwelcoming to outsiders.

  2. You gave loads of suggestions (thanks!). There were so many, though, that I can't possibly remember them all. Do you think you could pick out the two or three you consider most important and highlight them somewhere?

  3. On a personal note

young, white, cis-male, upper middle class, from men-dominated fields, technology-focused, status-driven, with a propensity for chest-beating, overconfidence, narrow-picture thinking/micro-optimization, and discomfort with emotions

Ouch. I find this a painful and mostly accurate description of myself. Except emotions. Those are fine.

Comment author: DonyChristie 26 October 2017 09:17:21PM *  22 points

Discussion about inclusivity is really conspicuous by its absence within EA. It's honestly really weird that we barely talk about it.

Are you sure? Here are some previous discussions (most of which were linked in the article above):

http://effective-altruism.com/ea/1ft/effective_altruism_for_animals_consideration_for/

http://effective-altruism.com/ea/ek/ea_diversity_unpacking_pandoras_box/

http://effective-altruism.com/ea/sm/ea_is_elitist_should_it_stay_that_way/

http://effective-altruism.com/ea/zu/making_ea_groups_more_welcoming/

http://effective-altruism.com/ea/mp/pitfalls_in_diversity_outreach/

http://effective-altruism.com/ea/1e1/ea_survey_2017_series_community_demographics/

https://www.facebook.com/groups/effective.altruists/permalink/1479443418778677/

I recall more discussions elsewhere in comment threads. Admittedly, this is spread over several years. What would 'not barely talking about it' look like, if not that?

Comment author: DonyChristie 06 August 2017 04:10:35PM 2 points

It's always great to see a new cause analysis! It would be cool to see some math on the utility/importance of this (as you may know, Guesstimate is a great tool for that :D), and I echo the desire to see tractable steps and an analysis of what effects the intervention(s) could have. I'm also curious where you think most of the disutility from electronic ballots comes from: the candidate who is elected, or the perception that the election was hacked? If the former, I'd guess that many people here, including me, apply an intuitive penalty to cause areas/interventions that involve partisan politics, due to its zero-sum, man-vs.-man, mindkilling nature. It seems likely that politics will naturally try to butt its head into our business and absorb more attention than it deserves.
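(For readers unfamiliar with Guesstimate: it builds spreadsheet-style Monte Carlo models over uncertain inputs. Here is a minimal sketch of the same idea in plain Python; the inputs p_hack, damage, and tractability are entirely made-up placeholder distributions for illustration, not estimates of the actual cause.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# All numbers below are hypothetical placeholders, not real estimates.
# Chance (per decade) that hacked electronic ballots alter a major election:
p_hack = rng.beta(2, 50, N)              # mean ~0.04, right-skewed
# Social cost if that happens, in dollars (log-normal: wide uncertainty):
damage = rng.lognormal(np.log(1e10), 1.0, N)
# Fraction of the risk a well-executed intervention could remove:
tractability = rng.uniform(0.01, 0.10, N)

# Expected harm averted per decade, in dollars:
value = p_hack * damage * tractability

for q in (5, 50, 95):
    print(f"{q}th percentile: ${np.percentile(value, q):,.0f}")
```

Running something like this (or its Guesstimate equivalent) makes the spread of plausible outcomes visible, which is usually more informative for a new cause analysis than a single point estimate.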

Here is a site dedicated to this. Try contacting them if you desire to research the subject further!

Comment author: Austen_Forrester 06 August 2017 07:13:05AM 1 point

I think it may be useful to differentiate between EA and regular morals. I would put donating blood in the latter category. For instance, treating your family well isn't high-impact on the margin, but people should still do it because of basic morals; see what I mean? I don't think that practicing EA somehow excuses someone from practicing good general morals. I think EA should be in addition to general morals, not a replacement for them.

Comment author: DonyChristie 06 August 2017 03:16:42PM *  0 points

I'm curious what exactly you mean by regular morals. I try (and fail) not to separate my life into separate magisteria, but rather to see everything as making tradeoffs with everything else, and to point my life in the direction of the path that has the most global impact. I see EA as being regular morality, but extended with better tools that attempt to engage the full scope of the world. It seems like the demarcation between incommensurable moral domains you appear to be arguing for could allow a person to defend any status quo in their altruism, rather than critically examining whether their actions are doing the most good they can. In the case of blood, perhaps you're talking about the fuzzies budget instead of the utilons? Or perhaps your position is something like 'Regular morals are the set of actions that, if upheld by a majority of people, will not lead to society collapsing, and due to anthropics I should not defect from my commitment to prevent society from collapsing. Blood donation is one of these actions', or this post?
