Comment author: BenHoffman 31 January 2017 10:35:15PM *  2 points [-]

Depending on the circumstances, a focus on preserving EA as a movement and avoiding disruptions to existing top philanthropic opportunities may miss the most important opportunities. My guess is that we'll do better asking questions like:

  • What types of disruptions might hamper our ability to coordinate with one another and outsiders to improve the world or mitigate emerging problems? (Different sub-problems may demand very different solutions.)

  • How can we solve these problems in a way that works for EA and other individuals and groups trying to do good? (We should try to generate solutions that transfer well, not just solve the problem for ourselves.)

  • Who else is already working on similar problems, e.g. making global cooperation more robust to war or other likely disruptive events? What can we do to help them or benefit from their help?

  • What disruptions are EAs especially well placed to mitigate?

  • Which interventions are likely to be most important in the event of various disruptions?

Comment author: Kathy 02 February 2017 01:43:56AM *  1 point [-]

Ooh. This looks interesting! Accomplishing goals like these would take over ten times as much time as I have available, so this definitely requires funding. I'm now envisioning starting up a new EA org whose purpose is to prevent disruptions to EA productivity by identifying risks and planning in advance!

I would love to do this!

Thanks for the inspiration, Ben! :D

At the current time, I suspect the largest disaster risk is war in the US or UK, which is why I'm focusing on war. I haven't seriously looked into the emerging risks related to antibiotic resistance, but it might be a comparable source of concern (with a lower probability of harming EA, of course, but with a much higher level of severity).

The most probable risk I currently see is that certain cultural elements in EA appear to have resulted in various problems. A really brief summary: there is a set of misunderstandings that is hurting inclusiveness, possibly resulting in a significantly smaller movement than we'd have otherwise and potentially damaging the emotional health and productivity of an unknown number of individual EAs. The severity of that is not as bad as disease or war could get, but the probability of this set of misunderstandings harming productivity is much higher (that it is happening at all is basically guaranteed, so it's just a matter of degree). I chose to work on the risk of war because of the combination of probability and severity I currently suspect for it, relative to the other issues I could have focused on.

I have done a lot of thinking about some of the questions you pose here! I wish I could dedicate my life to doing justice to questions like "What is the worst threat to productivity in the effective altruism movement?", and I have been working on interventions for some of them. I have a pretty good basis for an intervention that would help with the cultural misunderstandings I mentioned, and it would also do the world a lot of good: the second-biggest problem in the world, as identified by the World Economic Forum for 2017, would be helped through this contribution. Additionally, continuing my work on misunderstandings could reduce the risk of war. I really, really want to keep pursuing that, but I'm taking a few weeks to get on top of this potentially more urgent problem.

I have been stuck making estimates based on the amount of information I have time to gather, so, sadly, my views aren't nearly as comprehensive as I wish they were.

I tend to keep an eye on risks to everything that's important to me, like the effective altruism movement, because I prefer to prevent problems in my life wherever possible. Advance notice about big problems helps me do that.

As part of this, I have worked hard to compensate for the 5-10 biases that interfere with reasoning about risks, such as optimism bias, normalcy bias, and the affect heuristic. These three can prevent you from realising bad things will happen, cause you to fail to plan for disasters, and make you disregard information just because it is unpleasant. The one bias I saw on the list that actually supports risk identification, pessimism bias, is badly outnumbered by the biases that interfere. That is not to say pessimism bias is actually helpful: given that one can get distracted by the wrong risks, I'm wary of it. I think quality reasoning about risks looks like ordering risks by priority, choosing your battles, and making progress on a manageable number of problems rather than being paralysed thinking about every single thing that could go wrong. I think it also looks like problem-solving, because that's a great way to avoid paralysis. I've been thinking about solutions as well.

After compensating for the biases I listed and others which interfere with reasoning about risks, I found my new perspective a bit stressful, so I worked very hard to become stronger. Now, I find it easy to face most risks, and I have a really, really high level of emotional stamina when it comes to spending time thinking about stressful things in general. In 2016, I managed to spend over 500 hours reading studies about sexual violence and doing related work while being randomly attacked by seven sex offenders throughout the year. I’ve never experienced anything that intense before. I can’t claim that I was unaffected, but I can claim that I made really serious progress despite a level of stress the vast majority of people would find too overwhelming. I managed to put together a solid skeleton of a solution which I will continue to build on. In the meantime, the solution can expand as needed.

I have discovered it's difficult to share thoughts about risks and upsetting problems because other people have these biases, too. I've upgraded my communication skills a lot to compensate for that as much as possible. That is very, very hard. To become really excellent at it, I would need to do more communication experiments, but I think what I have at this time is sufficient to get through after a few tries with a bit of effort. Considering the level of difficulty, that's a success!

Now that I think about it, I appear to have a few valuable comparative advantages when it comes to identifying and planning for risks. Perhaps I should seek funding to start a new org. :)

Comment author: BenHoffman 31 January 2017 06:08:10PM 0 points [-]


Comment author: Kathy 01 February 2017 09:31:02AM 0 points [-]

Okay, what information do you think they need? You mentioned "directions" and "approaches", but those are very vague. I need the specific questions you think readers need answered before they will notify me of similar projects or express interest in what I'm doing.

Comment author: BenHoffman 31 January 2017 08:37:51AM *  1 point [-]

You've described the project at a fairly high level of abstraction. Since you've already put in 20-40 hours, your research has likely taken some specific directions. Sharing a brief summary of those would help people with compatible approaches, or people who think they may be doing overlapping work, notice that they should reach out to you. It would also save the time of readers who aren't in that group.

Peter suggested in the comments that you mention more details about the project, and Daniel did too. As a reader, I would have benefited if you'd replied by giving them those details. I expect there are more readers like me, who might reach out if a project seemed to be going in an interesting direction (even if not their preferred direction), but not without such a specific reason to think it's worth their time.

If there are specific reasons for discretion, of course, you can say so.

Comment author: Kathy 31 January 2017 11:06:22AM *  0 points [-]

I think you're saying, "There isn't enough information for most readers to decide whether they want to PM you." Is that right?

Comment author: Daniel_Eth 30 January 2017 04:28:09AM 1 point [-]

Yeah, I'm potentially interested but would be curious what direction you're thinking of going here.

Comment author: Kathy 30 January 2017 02:02:04PM 0 points [-]

I'm open to going in whatever direction gives the EA community the most insight into the truth, with whatever presentation encourages the most constructive use of that information. In case you're interested in specifics, I am currently working on a planning document about how specifically to accomplish all that. I can give you access if you wish (Just give me your Google Docs address via PM.).

I'm open to considering directions / direction changes. What are your thoughts so far? :)

Comment author: Peter_Hurford  (EA Profile) 29 January 2017 06:27:03PM 6 points [-]

It would be helpful when evaluating this project to see some of the work you've already done.

Comment author: Kathy 30 January 2017 10:25:35AM *  0 points [-]

I am not sure whether you are requesting to see the project or making a complaint of some sort. It's easy enough for anyone to PM me and request to see the project. Just in case, I updated my post to explicitly invite people to PM me to see it.

In case this wasn't clear, the project isn't finished yet. Before dumping a lot more hours into it, I want to see whether I'm duplicating anyone's work.

The fact that it is not yet finished is why I did not publish anything about it so far. It's not ready to be published.

The main point of this post is simply to find out whether there are others doing a similar project, and find other people who are interested in helping make sure the project gets completed.


Collaborators Wanted: Could war disrupt EA orgs in the US or UK in the next 10 years?

The effective altruism movement needs to be disaster resistant. That requires information we can use to put probabilities on potential problems that have severe consequences. Even a small chance of a large disruption to organisations that are saving so many lives is worth some of my hours. Therefore, I've been... Read More
Comment author: Telofy  (EA Profile) 20 January 2017 11:14:58AM 1 point [-]

Agreed wrt. honesty. (I’m from Germany.)

That weirdness is costly, though, is something that I've often heard and adopted myself, e.g., by asking friends how I can dress less weird and things like that. There's also the typical progression — one I only heard challenged last year — that you should first talk with people about poverty alleviation, and only once they understand basics like cost-effectiveness, triage, expected value, impartiality, etc., can you gradually lower your guard and start mentioning other animals and AIs.

Maybe Kathy doesn't even contradict that, since the instances of weirdness that are beneficial may be a tiny fraction of all the weirdnesses that surround us, and finding out which tiny fraction it is (as well as employing it) may require that we first dial back every weirdness except for one candidate at a time. I should just read that book.

Comment author: Kathy 20 January 2017 02:09:01PM *  2 points [-]

I agree that most people will not understand the most strange ideas until they understand the basic ideas. Ensuring they understand the foundation is a good practice.

I definitely agree that the instances of weirdness that are beneficial are only a tiny fraction of the weirdness that is present.

Regarding weirdness:

There are effective and ineffective ways to be weird.

There are several apparently contradictory guidelines in art: "use design principles", "break the conventions", and "make sure everything looks intentional".

The effective ways to be weird manage all three guidelines.

Examples: Picasso, Björk, Lady Gaga

One of the major and most observable differences between these three artists and many other weird people is that the artists' behavior can be interpreted as communication about something specific, meaningful, and valuable. Art is a language. Everything strange we do speaks about us. If you haven't studied art, it might be rather hard to interpret the above three artists. The language of art is sometimes completely opaque to non-artists, and those who interpret art often find a variety of different meanings rather than a consistent one. (I guess that's one reason why they don't call it science.)

Quick interpretations: In Picasso, I interpret an exploration of order and chaos. In Björk, I interpret an exploration of the strangeness of nature, the familiarity and necessity of nature, and the contradiction between the two. In Lady Gaga, I interpret an edgy exploration of identity.

These artists have the skill to say something of meaning as they follow principles and break conventions in a way that looks intentional. That is why art is a different experience from, say, looking at an odd-shaped mud splatter on the sidewalk, and why it can be a lot more special.

Ineffective weirdness is too similar to the odd-shaped mud splatter. There need to be signs of intentional communication. To interpret meaning, we need to see that combination of unbroken principles and broken conventions arranged in an intentional-looking pattern.

Comment author: DavidNash 20 January 2017 10:24:04AM 6 points [-]

This may be a community-based thing, but I haven't seen anyone advocating for lying in the UK, and I haven't heard of it much online either, apart from one person's experience in California.

I agree with all the examples you give and think everyone should learn more about honest persuasion, but I'm not sure the myths to be busted lie with the EA community rather than with some people's perception of the community.

Comment author: Kathy 20 January 2017 02:03:57PM *  4 points [-]

Edit: I agree that there aren't a large number of people advocating for dishonesty. My concern is that if even a small number of EAs get enough attention for doing something dishonest, it could cause reputation problems for all of us. Since we could be "painted with the same brush" due to the common human tendency to stereotype, I think it's worthwhile to make sure it's easy to find information about how to do honest promotion, and why.

I updated my post to mention some specific examples of the problems I've been seeing. Thank you, David.


3 Examples You Can Use To Promote Causes Honestly and Effectively

I've been seeing discussions about dishonest promotion recently (mostly sparked by what happened with Intentional Insights). I'm contributing the examples I have that show why we need to challenge the idea of using dishonest promotion. I also included where you can find quality information on how to do promotion in... Read More
Comment author: Kathy 02 November 2016 04:34:08AM *  3 points [-]

It would protect the movement to have a norm that organizations must supply good evidence of effectiveness to the community, and only if the community accepts this evidence should they claim to be an effective altruism organization.

I think some similar norm should also extend to individual people who want to publish articles about what effective altruism is. Obviously, this cannot be required of critics, but we can easily demand it from our allies. I'm not sure what we should expect individual people to do before they go out and write articles about effective altruism on Huffington Post or whatever, but expecting something seems necessary.

To prevent startups from being utterly ostracized by this before they've gathered enough data / run enough experiments to show effectiveness, maybe they could be encouraged to use a different term that includes "EA" but clearly modifies it, like "aspiring effective altruism organization".
