This concerns me because "EA" is such a vaguely defined group.
Here are some clearly defined groups:
All of these have a clear definition of membership and a clear purpose. I think it is entirely sensible for groups like this to have some kinds of rules, and processes for addressing and potentially ejecting people who don't conform to those rules. Because the group has a clear membership process, I think most people will accept that being a member of the group means acceding to the rules of the group.
"EA", on the other hand, is a post hoc label for a group of people who happened to be interested in the ideas of effective altruism. One does not "apply" to be an "EA". Nor can we meaningfully revoke membership, except by collectively refusing to engage with someone.
I think that attempts to police the borders of a vague group like "EA" can degenerate badly.
Firstly, since anyone who is interested in effective altruism has a plausible claim to be a member of "EA" under the vague definition, there will continue to be many people using the label with no regard for any "official" definition.
Secondly (and I hope this won't happen), such a free-floating label is very vulnerable to political (ab)use. We open ourselves up to arguments about whether or not someone is a "true" EA, or schisms between various "official" definitions. At risk of bringing up old disagreements, the arguments about vegetarian catering at last year's EA Global were already veering in this direction.
This seems to me to have been a common fate for vague group nouns over the years, with feminism being the most obvious example. We don't want to have wars between the second- and third-wave EAs!
My preferred solution is to avoid "EA" as a noun. Apart from the dangers I mentioned above, its origin as a label for an existing group of people gives it all sorts of connotations that are only really valid historically: rationalist thinking style, frank discussion norms, appreciation of contrarianism ... not to mention being white, male, and highly educated. But practically, having such a label is just too useful.
The only other suggestion I can think of is to make a clearly defined group for which we have community norms. For lack of a better name, we could call it "CEA-style EA". Then the CEA website could include a page that describes the core values of "CEA-style EAs" and some expectations of behaviour. At that point we again have a clearly defined group with a clear membership policy, and policing the border becomes a much easier job.
In practice, you probably wouldn't want an explicit application process, with it rather being something that you can claim for yourself - unless the group arbiter (CEA) has actively decreed that you cannot. Indeed, even if someone has never claimed to be a "CEA-style EA", declaring that they do not meet the standard can send a powerful signal.
This reminds me of Bryan Caplan: http://econlog.econlib.org/archives/2016/01/the_invisible_t.html
To continue the riff, we might say that the appropriate emotion for someone arguing that funding opera in fact maximises human welfare is surprise. "Now, I know you wouldn't expect this, but amazingly it turns out that..."
Your interlocutor might believe that funding opera maximises human welfare for good reasons. But if they don't seem to think that it's a remarkable fact, that suggests the opposite.
This is probably not answerable until you've made some significant progress in your current focus, but it would be nice to get a sense of how well the pool of people available to work on technology for good projects lines up with the skills required for those problems (for example, are there a lot of machine learning experts who are willing to work on these problems, but not many projects where that is the right solution? Is there a shortage of, say, front-end web developers who are willing to work on these kinds of projects?).
Working out what skills are needed for the problems is absolutely something we want to find out. I don't know whether we can really effectively survey the pool of available talent, but we will hopefully be able to help individuals make decisions by telling them that, e.g., machine learning skills are particularly likely to be applicable to high-impact solutions.
To be clear, our current objective is to find problems that can be addressed with technology, along with at least some ideas about how to go about doing it. That might be something like "build an insurance product on top of mobile money", or "build electricity distribution algorithms to allocate energy from renewable power sources" (disclaimer: these may not be good ideas!).
We then hope to help get talented people actually doing some of these things!
Thanks for writing this. I often feel quite similar - I can find being in contact with so many amazing people either inspiring or oddly demotivating!
That's the old status-conscious monkey brain talking (everyone's grasping vaingloriousness), and we shouldn't feed it, but it's good to acknowledge that it's there from time to time.
Overall, I think the EA movement is pretty good at being positive. I've found that such criticism as there is tends to be self-criticism - if anything, I find people to be unusually generous with praise, which is lovely. I think you hit the nail on the head with your four sources of ego-damage. And yeah, I think the right thing to do is to try not to be bothered. For bonus points, remember to praise people when they do good things!
There are two aspects to having more impact: giving more effectively and giving more. The GWWC pledge says something about both, but I think it is only the latter that really needs the behavioral support that you get from something like a pledge. Once people have got the idea of giving effectively, I think it is unlikely that they will stop giving effectively when they give. It's a hard idea to unsee! But they might find it harder to keep giving as much.
So I think the most important bit is the 10%, when it comes to actually affecting people's behavior.
That said, there is a signaling and outreach benefit to mentioning global poverty: it makes us look more credible, and a lot of people come to effective altruism via global health and poverty. I think we could have a cause-neutral pledge but still make it clear that poverty is the focus of the organization:
"While GWWC believes that alleviating global poverty is the best way to help the world today, we live in hope that poverty may one day be eliminated, maybe even within our lifetimes. Hence the pledge does not oblige you to give to global poverty charities, but rather to those that you believe will best help the world. Even if you disagree with us about what the best approach is at the moment, do consider taking the pledge anyway, as we would love to have anyone who is serious about effective giving in our community."
I think you're quite right about this, and I would identify one of the key points of disagreement as cosmopolitanism. I think that post resonated with a lot of people precisely because it highlighted something that everyone kind of knew was an unusual feature of EA arguments, but nobody had quite put their finger on.
(And you yourself say something similar in your comment on that post.)
Cosmopolitanism is quite unconventional, but it's also difficult to tackle head on, because it often involves a conflict between people's stated moral views and their behaviour. Lots of people have views that, on the face of it, would make them cosmopolitan, but they rarely act in such a way. That's partly because the implications can seem very demanding. So paying more attention to cosmopolitan arguments confronts people with a double threat - that of having an increased moral obligation, and that of being shown to be a hypocrite for not having accepted it before.
I could see this aversion just quietly diverting people away from really thinking about cosmopolitan ideas.
I worry about the "that will never happen" effect. Mandating that researchers take out the insurance prevents it being dismissed on that front, but how do we make the insurance agencies take it seriously?
It seems all too plausible that the insurer will just say "this will never happen, and if it does it will be unusual enough that we can probably hold the whole thing up in court - just give them a random number". For a big enough risk, if it happens then the insurer might expect to cease to exist in the upheaval, which also doesn't give them much incentive to give a good estimate.
In general, I'm not sure whether insurers are quite robust enough institutions to be likely to have rational decision procedures over risks that are this big and unlikely.
Many forums have a concept of a "stickied thread", which is one that stays at the top of the list of topics permanently. I don't know how good a fit that is for a forum designed like this, where there are very few visible topics, and also few subdivisions, but it has worked well elsewhere. It would be a good fit for things like the introductions thread.
I think LessWrong suffered from a lack of stickies - almost all the useful topics people posted saw extremely rapid declines in visibility as they dropped off the front page, which made them effectively useless. The ones people liked - open threads, media threads, introduction threads - had to be reposted regularly for this reason. There's no reason why a stickied introduction thread can't last for a very long time (i.e. until it becomes unreasonably large).
Thanks for this post, Jess! I agree that these are important issues.
Personally, for uncertainty and decision anxiety, there are a few ideas that actually do help me stop that cycle when I think of them:
1. Hard decisions are (usually) the least important - "Do I want to eat or do I want to slam my head against that wall?" is an easy decision because there is a big difference between the options. "What should I order off this menu?" is a harder decision because the options have almost equal expected payoffs (eating delicious food - anything on the menu should be good), and therefore it doesn't really matter what you decide.
2. Do I need to optimize this decision? (Answer: probably not.) - Recognizing that trying to optimize most of your decisions means you would never actually get anything done, and so deciding to be explicitly okay with just satisficing most of your decisions. When I catch myself putting too much effort into a decision that I haven't explicitly decided to optimize, I'll say out loud something like "This doesn't actually really matter," and that tends to help me make a "good enough" decision and move on.
3. I will be happier once I've made my decision and locked it in - studies have shown that if you have the ability to change your decision, you will be less satisfied with it than if you were locked into it. Making decisions is also stressful, so leaving them open will make you less happy. So picking a dress at a store that doesn't do returns is better (by which I mean you will both like the dress more and be generally happier) than buying a dress with the idea that you can come back and exchange it for another one later, which in turn is better than buying both dresses with the thought that you will return the one you like less later.
Hard decisions are (usually) the least important
A decision can be hard because the possible outcomes are finely balanced in expected payoff, or because you are quite lacking in knowledge about the possible outcomes and/or their likelihood. If it's the latter then it can be hard and matter a lot! For effective altruists there can be a bit of both. "Should I buy this pen or this other one? A better pen might help me write more effectively!" is probably the former, but "What career should I choose?" is probably the latter.
Plus, the latter kind of decision holds the promise of high value of information. If only you devote a bit more time to thinking about it or researching, you might improve your estimates a lot (or not). So that's another incentive to worry about and delay such a decision.
© 2017 Effective Altruism Forum