Comment author: DonyChristie 11 November 2017 11:31:32PM 0 points [-]

Having now installed the userstyles, in order to unblind (and re-blind) myself I need to press the Stylish icon and press 'Deactivate' on the script? This might be a trivial inconvenience.

Comment author: DonyChristie 31 October 2017 05:49:08PM 11 points [-]

To what extent have you (whoever's in charge of CHS) talked with the relevant AI Safety organizations and people?

To what extent have you researched the technical and strategic issues, respectively?

What is CHS's comparative advantage in political mobilization and advocacy?

What do you think the risks are to political mobilization and advocacy, and how do you plan on mitigating them?

If CHS turned out to be net harmful rather than net good - what process would discover that, and what would the result be?

Comment author: DonyChristie 30 October 2017 07:20:37PM 3 points [-]

Truly one of the most satiating interventions on the menu of causes!

Could you go more into the full list of what the food alternatives look like, and how tractable each of them are?

Comment author: DonyChristie 28 October 2017 01:42:18AM *  4 points [-]

That is awesome and exciting!

What made you decide to go down this path? What decision-making procedure was used? How would you advise other people to determine whether they are a fit for charity entrepreneurship?

How do you plan on overcoming the lack of expertise? How does the reference class of nonprofit startups founded by non-experts compare to the reference class of nonprofit startups founded by experts?

fortify hEAlth

Is this the actual name? I personally think it's cute, but it might be confusing to those not familiar with the acronym.

I think what you're doing could be very high-impact compared to the counterfactual; indeed, it may be outright heroic. ^_^

Comment author: deluks917 26 October 2017 08:47:15PM *  9 points [-]

You made an extremely long list of suggestions. Implementing such a huge list would mean radically overhauling the EA community. Is that a good idea?

I think it's important to keep in mind that the EA community has been tremendously successful. GiveWell and OpenPhil now funnel vast amounts of money towards effective global poverty reduction efforts. EA has also made substantial progress at increasing awareness of AI risk and promoting animal welfare. There are now many student groups in universities around the world. EA has achieved these things in a rather rapid timeframe.

It's rather rare for a group to have comparable success to the current EA community. Hence I think it's very dangerous to overhaul our community and its norms. We are doing very well. We could be doing better, but we are doing well. Making changes to the culture of a high-performing organization is likely to reduce performance. Hence I think you should be very careful about which changes you suggest.

In addition to being long, your list of changes includes many rather speculative suggestions. Here are some examples:

-- You explicitly say we should be more welcoming towards things like "dog rescue". Does this not risk diluting EA into just another ineffective community?

-- You say that using the term "AI" without explanation is too much jargon. Is that really a reasonable standard? AI is not an obscure term. If you want us to avoid the term "AI", your standards of accessibility seem rather extreme.

-- You claim we should focus on making altruistic people effective instead of effective people altruistic. However, Toby Ord claims he initially had the same intuition, but his experience is that the latter is actually easier. How many of your intuitions are you checking empirically? (This has been mentioned by other commenters.)

In general, I think you should focus on a much smaller list of core suggestions. It is easier to argue rigorously for a more conservative set of changes. And as I said earlier, EA is doing quite well, so we should be skeptical of dramatic culture shifts. Obviously we should be open to new norms, but those norms should be vetted carefully.

Comment author: DonyChristie 26 October 2017 09:56:06PM 3 points [-]

I second most of these concerns.

Does this not risk diluting EA into just another ineffective community?

The core of EA is cause-neutral good-maximization. The more we cater to people who cannot switch their chosen object-level intervention, the less ability the movement will have to coordinate and switch tracks. They will become offended by suggestions that their chosen intervention is not the best one. As it is I wish more people challenged how I prioritize things, but they probably don't for fear of offending others as a general policy.

You say that using the term "AI" without explanation is too much jargon. Is that really a reasonable standard? AI is not an obscure term. If you want us to avoid the term "AI", your standards of accessibility seem rather extreme.

I am in favor of non-dumbed-down language, since dumbing down adds a constraint on how I can communicate: I have to keep running a check on whether a person understands a concept I am referring to. I do agree that jargon generation is sometimes fueled by the desire for weird neologisms more so than the desire to increase clarity.

You claim we should focus on making altruistic people effective instead of effective people altruistic.

I once observed: "Effectiveness without altruism is lame; altruism without effectiveness is blind." 'Effectiveness' seems to load most of the Stuff that is needed; to Actually Do Good Things requires more of the Actually than the Good. It seems that people caring about others takes less skill than being able to accomplish consequential things. I am open to persuasion otherwise; I've experienced most people as more apathetic and nonchalant about the fate of the world, an enormous hindrance to being interested in effective altruism.

Comment author: MichaelPlant 26 October 2017 07:21:19PM 4 points [-]

Thank you very much for bringing this up. Discussion about inclusivity is really conspicuous by its absence within EA. It's honestly really weird we barely talk about it.

Three thoughts.

  1. I'd like to emphasise how important I think it is that members of the community try to speak in as jargon-free a way as possible. My impression is this has been getting worse over time: there seems to be something of a jargon arms race as people (always males, typically those into 'rationality'-type stuff) actively try to drop in streams of unnecessary, technical, elitist, in-group-y words to make themselves look smart. I find this personally annoying and I assume it's unwelcoming to outsiders.

  2. You gave loads of suggestions (thanks!). There were so many suggestions though, I can't possibly remember them all. Do you think you could pick out what you think the most important 2 or 3 are and highlight them somewhere?

  3. On a personal note

young, white, cis-male, upper middle class, from men-dominated fields, technology-focused, status-driven, with a propensity for chest-beating, overconfidence, narrow-picture thinking/micro-optimization, and discomfort with emotions

Ouch. I find this a painful and mostly accurate description of myself. Except emotions. Those are fine.

Comment author: DonyChristie 26 October 2017 09:17:21PM *  22 points [-]

Discussion about inclusivity is really conspicuous by its absence within EA. It's honestly really weird we barely talk about it.

Are you sure? Here are some previous discussions (most of which were linked in the article above):

http://effective-altruism.com/ea/1ft/effective_altruism_for_animals_consideration_for/
http://effective-altruism.com/ea/ek/ea_diversity_unpacking_pandoras_box/
http://effective-altruism.com/ea/sm/ea_is_elitist_should_it_stay_that_way/
http://effective-altruism.com/ea/zu/making_ea_groups_more_welcoming/
http://effective-altruism.com/ea/mp/pitfalls_in_diversity_outreach/
http://effective-altruism.com/ea/1e1/ea_survey_2017_series_community_demographics/
https://www.facebook.com/groups/effective.altruists/permalink/1479443418778677/

I recall more discussions elsewhere in comments. Admittedly this is over several years. What would not barely talking about it look like, if not that?

Comment author: DonyChristie 06 August 2017 04:10:35PM 2 points [-]

It's always great to see a new cause analysis! It would be cool to see some math of the utility/importance of this (you may know Guesstimate is a great tool for this :D), and I echo the desire to see tractable steps and an analysis of what effects the intervention(s) could have. I'm also curious where you think most of the disutility from electronic ballots comes from: the candidate that is elected or the perception that the election was hacked? If the former, I'd guess that many people here including me flag an intuitive penalty on cause areas/interventions that involve partisan politics due to its zero-sum, man vs. man, mindkilling nature. It seems likely that politics will naturally try to butt its head into our business and absorb attention, more than it deserves.

Here is a site dedicated to this. Try contacting them if you desire to research the subject further!

Comment author: Austen_Forrester 06 August 2017 07:13:05AM 1 point [-]

I think it may be useful to differentiate between EA and regular morals. I would put donating blood in the latter category. For instance, treating your family well isn't high impact on the margin, but people should still do it because of basic morals, see what I mean? I don't think that practicing EA somehow excuses someone from practicing good general morals. I think EA should be in addition to general morals, not replace it.

Comment author: DonyChristie 06 August 2017 03:16:42PM *  0 points [-]

I'm curious what exactly you mean by regular morals. I try (and fail) not to separate my life into separate magisteria, but rather to see every thing as making tradeoffs with every other thing, and to point my life in the direction of the path that has the most global impact. I see EA as being regular morality, but extended with better tools that attempt to engage the full scope of the world. It seems like such a demarcation between incommensurable moral domains, as you appear to be arguing for, can allow a person to defend any status quo in their altruism rather than critically examining whether their actions are doing the most good they can. In the case of blood, perhaps you're talking about the fuzzies budget instead of the utilons? Or perhaps your position is something like: 'Regular morals are the set of actions that, if upheld by a majority of people, will not lead to society collapsing, and due to anthropics I should not defect from my commitment to prevent society from collapsing. Blood donation is one of these actions.'

Comment author: KevinWatkinson 02 July 2017 10:13:54PM 0 points [-]

I have some doubts generally about the principle of mainstreaming. It seems to me that it utilises dominant ideologies 'strategically', thus reifying them. In terms of the animal movement this is very much the case in regard to One Step for Animals, Pro-Veg and The Vegan Strategist. All these groups and organisations have adopted a mainstream 'pragmatic' approach which concurrently undermines social justice.

This is of course one approach, but I do not believe there is sufficient evidence to pursue it, or that it stands to reason. It would be far better for these mainstream groups to avoid social justice issues completely, so that would include rights and veganism (the cessation of exploitation), rather than essentially undermining them to privilege their approach.

For example, I think it is deeply unfortunate that Matt Ball recently said we need to utilise the idea that people hate vegans in order to appeal to non-vegans and 'help' animals. I would question the ethics of this, and also whether it is in fact true that 'people' hate vegans, or that forming and perpetuating this idea would be a good thing anyway. This is one example, but in my view mainstreaming sets forth a cascade against people who are trying to do good pro-intersectional social justice work. I believe it is also true that groups involved in 'mainstreaming' have not sufficiently evaluated their approach, so it seems unworthwhile to support it, even whilst many EAs seem to do just that.

Comment author: DonyChristie 02 July 2017 11:22:45PM 1 point [-]

What empirical tests can we make to measure which approach is more effective? What predictions can be made in advance of those tests?

In response to 2017 LEAN Statement
Comment author: DonyChristie 28 June 2017 10:31:36PM 2 points [-]

Local group seeding and activation.

Thanks for this link. I may have raised this in a private channel, but I want to take the time to point out that based on anecdotal experience, I think LEAN & CEA shouldn't be seeding groups without taking the time to make sure the groups are mentored, managed by motivated individual(s), and grown to sustainability. I found my local group last year and it was essentially dilapidated. I felt a responsibility to run it but was mostly unsuccessful in establishing contact to obtain help managing it until some time in 2017. I'm predicting these kinds of problems will diminish at least somewhat now that LEAN & Rethink have full-time staff, the Mentoring program, more group calls, etc. :)
