"Adding future possible people with positive welfare does not make the world better."
I find that claim ridiculous. How could giving the gift of a joyful life have zero value?
I'm very much in favor of this
Agree with this line of reasoning, but I think there is a caveat. A very solvable problem will likely be solved without you, assuming it doesn't stay totally neglected. So if persons A and B are very confident that hundreds of other people will start working on the problem soon, then it would make more sense for person B to work on the problem than for person A to.
I think we should align with the left on climate change, for example.
re: climate change, it would be really nice if we could persuade the political right (and left) that climate change is apolitical and that tackling it is just generally sensible, the way building roads is apolitical and just generally sensible.
Technology is on our side here: electric cars are going mainstream, and wind and solar are getting better. I believe we have now entered a regime where climate change will fix itself as humanity naturally switches over to clean energy, and the best thing politics can do is get out of the way.
In an ideal world it would be apolitical, but that's not the world we live in. Actually, the same is true of building roads - investment in infrastructure is a liberal cause. Consider how Obama proposed a massive investment in infrastructure, which Republicans rejected. When Trump proposed investing in infrastructure, Democrats implied this was one of the only areas where they would go along with him, but then other Republicans were against it and pressured him to change course.
Donating to FHI is still extremely safe on the weirdness spectrum - they're part of Oxford. Actually risky stuff would be paying promising researchers directly in non-tax-deductible ways. But that is weird enough to trip people's alarms: you get no accolades for doing it, and in fact quite the opposite - you will lose status when the 'obviously crazy' thing fails. We see the same thing in VC funding, where this supposed bastion of frontier-challenging risk-takers mostly engages in bandwagoning.
Is there any tax-deductible way to give promising researchers money directly (or through some 3rd party that didn't take a cut)? Seems like someone could set up a 501c3 that allowed for that pretty easily.
Key article from this forum:
Developing positive impressions of EA is much more important than near-term growth. If we align with partisan political causes, we risk greatly limiting the eventual scope and impact of EA. Because our movement goal is inherently very different (long term size and positive impression, vs. immediate policy changes), I don't think the organizing knowledge is transferable/useful.
Also, much of organizing on the left is founded on a justice-as-the-core-value framework, rather than an impact-as-the-core-value framework. There are many posts/arguments in the social justice community explicitly arguing against impact (e.g. arguing against metrics for charity) because these can undermine more speculative causes and deprioritize grassroots/marginalized activists. If we align EA with these movements, we risk undermining the core quantitative and utilitarian values of EA.
Because of these risks to EA, I'm partial to a firewall between EA and social-justice-themed organizing, meaning EA orgs do not endorse partisan political causes.
This isn't to say EAs should never participate in politics. As you pointed out, there is a lot in international aid that is nonpartisan or only weakly partisan, and the good from working on it likely outweighs the risks above.
If we engage in more controversial leftist political causes, EA effort would be better spent on cause research than on direct political activism. We can also prioritize implementing already-passed laws more effectively, rather than proposing new partisan legislation. This was the aim of the EA policy analytics project.
I echo the above comments that elevating organizing to an "obligation" is inappropriate given the speculative nature of impact and possible externalities.
I'm gonna half-agree with this. I agree that we shouldn't in general as a community align with (or against) social justice causes, at least not in America.
I think there are many issues where taking a partisan view is still a good idea, though. I think we should align with the left on climate change, for example.
I broadly agree with what's written here, but I take issue with the idea of any "moral obligation." First, it seems to suppose some threshold of morality that needs to be passed, but after which there is less imperative to do good - that doesn't align with my personal views of morality. Second, I think it's a pretty ineffective way of convincing people to do good ("hey, we have an opportunity to do a lot of good and be heroes!" seems more convincing than "you have an obligation to do good or else you're a jerk!").
I agree we should consider how other movements (like Black Lives Matter, feminism, or social justice) have grown, but I think these particular movements also point out some pitfalls we might want to avoid. In particular, it seems like value drift over time, not to mention lack of specific goals due to poor coordination, are issues some of those movements have experienced.
My 2 cents, as a scientist, currently in a PhD program:
Scientists will largely resist this. They don't want all their data to be out in the open, mostly from fear that they made a mistake that will be picked up on. "Imposter syndrome" is very common in science (especially for new scientists, who run most of the actual experiments - more established scientists spend more time writing grants for more funding). It's also just a pain in the ass to gather all your data and format it, etc.
That said, I think this would be a very good thing (for scientific progress, not for scientists themselves). In particular, I think it would be very useful for building off other work. There have been tons of times where I've wanted to know exactly how some group gathered some data, and their paper didn't quite specify.
Since this seems like something very good that vested interests will likely oppose, I agree it is a great cause to push for - it likely won't happen on its own but if we can build the proper incentive structures then we could, in theory, alter how the game is played.
For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they've done themselves, nor because of risk aversion, but rather because they've largely-or-entirely outsourced their donation decision to Givewell. Givewell has also written about this in some depth, back in 2011 and probably more recently as well.
"This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably)."
I think it's true that many outsource their thinking to GW, but I think there could still be risk aversion in the thought process. Many of these people have also been exposed to arguments for higher-risk, higher-reward charities such as x-risk organizations or funding in-vitro meat research, and I think a common thought process is "I'd prefer to go with the safer and more established causes that GW recommends." Even if they haven't explicitly done the EV calculation themselves, qualitatively similar thought processes may still occur.
Carl already explored this question too, noting in another 2012 article that it is relatively easy to go for PM of the UK.
Far more people should read Carl's old blog posts.
Thanks for the link - hopefully 80000hours is able to convince some EAs to go into politics.
© 2017 Effective Altruism Forum