A lot of these would be good for a small founding team, rather than individuals. What do you mean by 'good for an EA group?'
Like a local university group or local city meetup.
Many of those seem like individual projects. Does anyone have any suggestions for projects that would be particularly good for EA groups?
I'm especially excited about Effective Altruism community building. There are too many EA orgs to keep track of, all running their own fundraisers.
If morality isn't real, then perhaps we should just care about ourselves.
But suppose we do decide to care about other people's interests - maybe not completely, but at least to some degree. To the extent that we devote resources to helping other people, it makes sense to do so as effectively as possible, and this is what utilitarianism prescribes.
I don't think that's true for two reasons:
(1) A 10% chance of donating $100K should be roughly as motivating to a risk-neutral EA as a 100% chance of donating $10K (setting aside arguments that the utility of money donated may be nonlinear).
(2) Research around whether to donate $100K or $10K (or how to donate $100K conditional on winning the lottery) would be useful.
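The expected-value claim in (1) can be sketched as a quick sanity check (an illustrative sketch only - the 10%/$100K figures are the ones from the comment, and it assumes risk neutrality, i.e. linear utility of money donated):

```python
# Sanity check for point (1): a 10% chance of directing a $100K pot
# equals a certain $10K donation in expectation.
# Assumes risk neutrality (linear utility of money donated).
import math

def expected_donation(win_probability: float, pot_size: float) -> float:
    """Expected amount a risk-neutral lottery entrant directs to charity."""
    return win_probability * pot_size

lottery_ev = expected_donation(0.10, 100_000)  # 10% chance of a $100K pot
certain = 10_000.0                             # guaranteed $10K donation

# Equal in expectation (compared with isclose to allow float rounding).
assert math.isclose(lottery_ev, certain)
```

The `math.isclose` comparison is used rather than `==` because `0.10 * 100_000` need not be exactly `10000.0` in floating point.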
"A 10% chance of donating $100K should be roughly as motivating to a risk-neutral EA as a 100% chance of donating $10K (not taking into account arguments that the risk-neutral utility of money may be nonlinear)." - that's not how human psychology works.
I think others have suggested this, but have you thought about putting your 10K GBP into a donor lottery, or otherwise saving up to make a larger donation? I'd like to see research address that question (e.g., is a 100K donation >10x better than a 10K donation?).
That would defeat the purpose of the project. I think that the purpose is to spur research and the money is there for extra encouragement.
I was definitely disappointed to see that post by Sarah. It seemed to defect from good community norms - such as interpreting people generously - in favour of quoting people out of context. She seems to apply very rigorous standards to other people, yet rather loose standards to herself.
So I'm guessing the idea is that donors will do something a bit more complex than just throwing the money over to AMF?
This seems to be an interesting approach to this question. However, for a top level post in this forum, I would like to see more of an attempt to link this directly to effective altruism, which, as many have noted, is not simply consequentialism. There is no mention of 'effective altruism', 'charity', 'career', 'poverty', 'animal' or 'existential risk' (of course effective altruism is broader than these things, but I think this is indicative).
(Writing in a personal capacity)
Effective altruism is strongly linked with consequentialism - so much so that I don't think a more explicit link is required.
I agree that systematic change should be given more thought in EA, but there's a very specific problem that I think we need to tackle before we can do this seriously: a lot of the tools and mindsets in EA are inadequate for dealing with systematic change.
To explain what I mean, I want to quickly make reference to a chart that Caroline Fiennes uses in her book. Essentially, you can think of work on social issues as a sort of 'pyramid'. At the top of the pyramid you have very direct work (deworming, bed nets, cash transfers, etc.). This work is comparatively certain to work, and you can fairly easily attribute changes in outcomes to these programs. However, the returns are small - you only help those who you directly work with. As you go down the pyramid, you start to consider programs that focus on communities... then those that focus on changing larger policy and practice... then changing attitudes and norms (or some types of systematic change)... and eventually you get to things like existential risks. As you go down the pyramid, you get greater returns to scale (you can impact a lot more people), but it becomes a lot more uncertain that you will have an impact, and it also becomes much harder to attribute change in any outcome to a program.
My worry is that the tools the EA movement relies on were created with the top of the pyramid in mind - the main forms of causal research, cost-effectiveness analysis, and so on were not built for the bottom or even the middle of the pyramid. Yes, members of EA have gotten very good at trying to apply these tools to the bottom and middle, but it can get a bit screwy very quickly (as someone with an econ background, I shudder whenever someone uses econ tools to try to forecast the cost-effectiveness of X-risk reduction activities - it's like trying to peel a potato while blindfolded using a pencil: it's not what the pencil was made for, and even though it's technically possible, I'll be damned if the blindfolded person has a clue whether it's working or not).
We should definitely keep our commitment to these tools, but if we want to be rigorous about exploring systematic change, we should probably start by expanding our toolbox so that we can address these issues as rigorously as possible (and, importantly, figure out exactly when our current tools are insufficient! We already have these for a lot of our tools - basically assumptions that, when broken, break the tool - but I haven't seen people rigorously consulting them). I'm sure a lot of us have in mind very clear ideas of how we can and should rigorously prioritize and evaluate various systematic risks - but I'm pretty sure we have as many opinions as we have people. We need to get on the same page first, which is why I'd suggest we work on figuring out some basic standards and tools for moving forward, and go from there. Expanding our toolkit is key, though - perhaps someone should look into other disciplines that could help out? I'd do it, but I'm lazy and tired and probably would make a hash of it anyway.
"I'd do it, but I'm lazy and tired and probably would make a hash of it anyway." - you seem rather knowledgeable, so I doubt that. I've heard it said that the perfect is the enemy of the good and a top level approach that was maybe twice the size of the above comment and which just provided an extremely basic overview would be a great place to start and would encourage further investigation by other people.
© 2017 Effective Altruism Forum