I have started a Discord server for near-term effective altruists. (If you haven’t used Discord before, it’s a pretty standard chat server. Most of its functions are fairly self-explanatory.)
Most of my effective altruist friends focus on the far future. While far-future effective altruists are great, being around them all the time can get pretty alienating. I don’t often argue the merits of bednets versus cash transfers, which means I get intellectually sloppy knowing I won’t be challenged. I’m slow to learn about new developments relevant to near-term effective altruism, such as discoveries in development economics. Many of the conversations I participate in work from assumptions I don’t share, such as the assumption that we have a double-digit chance of going extinct within the next twenty years.
I suspect that many other near-term effective altruists may be in the same boat, and if so I encourage them to come participate. Even if not, I hope this server can be a fun and interesting place to learn more about effective altruism and connect to other effective altruists.
“Near-term” is hard to define. I intend it to include all effective altruists whose work and priority cause areas do not focus on the far future, whether they work on global poverty, animal welfare, mental health, politics, meta-charity, or another cause area. I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks not participate. This runs on the honor system; I’m not going to be the Near Term EA police. There are lots of people who are edge cases, and I ask them to use their best judgment.
The server is intended to be welcoming to new effective altruists, people who aren’t certain whether they want to be effective altruists or not, and people who are not currently in a place where it makes sense for them to donate, volunteer, or change careers. If you’re wondering whether you’re “not EA enough” to participate, you probably are welcome!
Hi Kbog, I see your point concerning near/far-future ideas in principle. In practice, however, things aren't following your lines of reasoning (unfortunately, of course). For instance, the community practices associated with the far-future focus (in particular AI risks) include an approach to assessing and funding scientific research that I find lacking in scientific rigor, transparency, and overall validity, to the point that it makes no sense to speak of "effective" charity. Moreover, there is broad consensus behind these evaluative practices: they are assumed valid by OpenPhil and the EAF, and even when I tried to exchange arguments with both institutions, nothing ever changed (I never even managed to draw them into a public dialogue on this topic). I see this as a potential danger for the EA community as a whole (just imagine the press getting hold of this problem and reporting that EAs finance scientific research assumed to be effective, where it is unclear by which criteria it would count as such; similarly for newcomers). In view of this, I think separating these practices would be a great idea. That they are connected to "far-future EA" is secondary to me, and it is unfortunate that far-future ideas have turned into a bubble of their own, closed to criticism that questions the core of their EA methodology.
That said, I agree with some of your worries (see my other comment here).
Well, the main point of my comment is that people should not reinforce wrong practices by institutionalizing them.
What is it when money goes to Givewell or Animal Chari...