I have started a Discord server for near-term effective altruists. (If you haven’t used Discord before, it’s a pretty standard chat server. Most of its functions are fairly self-explanatory.)
Most of my effective altruist friends focus on the far future. While far-future effective altruists are great, being around them all the time can get pretty alienating. I don’t often argue the merits of bednets versus cash transfers, which means I get intellectually sloppy knowing I won’t be challenged. I’m slow to learn about new developments relevant to near-term effective altruism, such as discoveries in development economics. Many of the conversations I participate in work from assumptions I don’t share, such as the assumption that we have a double-digit chance of going extinct within the next twenty years.
I suspect that many other near-term effective altruists may be in the same boat, and if so I encourage them to come participate. Even if not, I hope this server can be a fun and interesting place to learn more about effective altruism and connect to other effective altruists.
“Near-term” is hard to define. I intend it to be inclusive of all effective altruists whose work and priority cause areas do not focus on the far future, whether they work on global poverty, animal welfare, mental health, politics, meta-charity, or another cause area. I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks do not participate. This runs on the honor system; I’m not going to be the Near Term EA police. There are lots of people who are edge cases and I ask them to use their best judgment.
The server is intended to be welcoming to new effective altruists, people who aren’t certain whether they want to be effective altruists or not, and people who are not currently in a place where it makes sense for them to donate, volunteer, or change careers. If you’re wondering whether you’re “not EA enough” to participate, you probably are welcome!
Well, the main point of my comment is that people should not reinforce wrong practices by institutionalizing them.
What is it when money goes to GiveWell or Animal Charity Evaluators? Funding scientific research. Don't poverty interventions need research? Animal advocacy campaigns? Plant-based meat? Is it only the futurists who are doing everything wrong, when numerous complaints have been lodged against the research quality of GiveWell and ACE?
Well, I haven't claimed that the evaluation of futurist scientific research is rigorous, transparent, or valid. I think you should make a compelling argument for that in a serious post. Telling us that you failed to persuade groups such as Open Phil and the EAF doesn't exactly show us that you are right.
Note: it's particularly instructive here, as we evaluate the utility of the sort of segregation proposed by the OP, that the idea that EA ought to be split along these lines comes bundled with the assertion that the Other Side is doing things "wrong". The nominally innocuous proposal for categorization is operationalized to effect a general discrediting of those with an opposing point of view, which is exactly why it is a bad thing.
Just imagine the press reporting on us doing exactly the same thing as everyone else in science. If you are worried about bad press, the #1 thing you should avoid is kicking up the social divisions that would give them something actually juicy to report on.
Where is this criticism? Where are the arguments on cause prioritization? Where is the review of the relevant academic literature? Where is the quantitative modeling? I see people complain that their "criticisms" aren't being met, but when I look for these criticisms, the search for the original source bottoms out either in sparse lines of assertions in web comments, or quite old arguments that have already been accepted and answered, and in either case opponents are clearly ready and willing to engage with such criticism. The claim that people are "closed towards criticism" invariably turns out to be nothing but the fact that the complainant failed to change anyone's mind, but seldom does the complainant question whether they are right at all.
Wow, you really seem annoyed... I didn't expect such an angry post, but I suppose this thread got to you. I provided detailed arguments concerning Open Phil's practices in a post from a few months ago here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.
I have a few paper deadlines these days, so as much as I wish to respond with all the references, arguments, etc., I don't have the time. I plan on writing a post concerning EAF's funding policy as well, where I'll sum it up in a similar way.