I have started a Discord server for near-term effective altruists. (If you haven’t used Discord before, it’s a pretty standard chat server. Most of its functions are fairly self-explanatory.)
Most of my effective altruist friends focus on the far future. While far-future effective altruists are great, being around them all the time can get pretty alienating. I don’t often argue the merits of bednets versus cash transfers, which means I get intellectually sloppy knowing I won’t be challenged. I’m slow to learn about new developments relevant to near-term effective altruism, such as discoveries in development economics. Many of the conversations I participate in work from assumptions I don’t share, such as the assumption that we have a double-digit chance of going extinct within the next twenty years.
I suspect that many other near-term effective altruists may be in the same boat, and if so I encourage them to come participate. Even if not, I hope this server can be a fun and interesting place to learn more about effective altruism and connect to other effective altruists.
“Near-term” is hard to define. I intend it to be inclusive of all effective altruists whose work and priority cause areas do not focus on the far future, whether they work on global poverty, animal welfare, mental health, politics, meta-charity, or another cause area. I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks do not participate. This runs on the honor system; I’m not going to be the Near Term EA police. There are lots of people who are edge cases and I ask them to use their best judgment.
The server is intended to be welcoming to new effective altruists, people who aren’t certain whether they want to be effective altruists or not, and people who are not currently in a place where it makes sense for them to donate, volunteer, or change careers. If you’re wondering whether you’re “not EA enough” to participate, you probably are welcome!
Wow, you really seem annoyed... I didn't expect such a pissed-off post, but I suppose this thread or something really got to you. I laid out my arguments concerning OpenPhil's funding practices in detail in a post from a few months ago, here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.
I have a few paper deadlines at the moment, so as much as I'd like to respond with all the references, arguments, etc., I don't have the time. I plan to write a post on EAF's funding policy as well, where I'll sum it up in a similar way to what I did for OpenPhil.
That said, I'm not saying we shouldn't criticize the research done by near-term organizations; quite the contrary. And I completely agree: it would be great to have a forum devoted solely to research practices and the funding thereof. But when it comes to far-future causes, research is the only thing that can be funded, which makes the issue particularly troublesome.
Err, no. Funding by academic institutions follows a whole set of criteria (take the ERC scheme, for instance), which can of course be debated on their own terms, but they aim at efficient and effective research. The funding of AI-risk-related projects follows... well, nobody has ever been able to specify any criteria to me at all, beyond "an anonymous reviewer whom we trust likes the project" or "they seem to have many great publications", which, once examined, turn out not to exist. That's as far from academic procedure as it gets.
I assumed your post was more of a nominal attempt to disagree with me than it really was, so I found it irritating that some of its statements failed to constitute specific rebuttals of my points. I've edited my comment to be cleaner. I apologize for that.
Okay, and if we look at that post, we see some pretty complete and civil responses to your arguments. Seems li...