Comment author: the_jaded_one 19 February 2017 10:39:42AM *  9 points [-]

Political organizing is a highly accessible way for many EAs to have a potentially high impact. Many of us are doing it already. We propose that as a community we recognize it more formally as a way to do good within an EA framework.

I agree that EAs should look much more broadly at ways to do good, but I feel like going into politics to do good is a trap, or at least is full of traps.

Why do humans have politics? Why don't we just fire all the politicians and have a professional civil service that just does what's good?

  • Because people have different goals or values, and if a powerful group ends up in control of the apparatus of the state and pushes its agenda very hard and pisses a lot of people off, it is better to have that group ousted in an election than in a civil war.

But the takeaway is that politics is the arena where we discuss the ideas on which different people in our societies disagree about what counts as good, and as a result it is a somewhat toxic arena with relatively poor intellectual standards. It strongly resists good decision-making and good-quality debate, and strongly encourages rhetoric. EA needs to take sides in this like I need more holes in my head.

I think it would be fruitful for EA to get involved in politics, but not by taking sides. I get the impression that the best things EAs can do are to find Pareto improvements that help both sides, and to turn political issues into nonpolitical ones by de-ideologizing them and finding solutions that make everyone happy and make the world a better place.

Take a leaf out of Elon Musk's book. The right wing in the USA is engaging in some pretty crazy irrationality and science denial about global warming. Many people might see this as an opportunity to score points against the right, but global warming will not be solved by political hot air; it will be solved by making fossil fuels economically marginal or nonviable in most applications. In particular, we need to reduce car-related emissions to near zero. So Musk goes and builds fast, sexy, macho cars in factories in the USA which provide tens of thousands of manufacturing jobs for blue-collar US workers, and emphasizes them as innovative, forward-looking and pro-US. Our new right-wing president is lapping it up. This is what effective altruism in politics looks like: the rhetoric ("look at these sexy, innovative US-made cars!") is in service of the goal (eliminating gasoline cars and therefore eventually CO2 emissions), not the other way around.

And if you want to see the opposite, go look at this. People are cancelling their Tesla orders because Musk is "acting as a conduit to the rise of white nationalism and fascism in the United States". Musk has an actual solution to a serious problem, and people on the political left want to destroy it because it doesn't conform perfectly to their political ideology. Did these people stop to think about whether this nascent boycott makes sense from a consequentialist perspective? As in, "let's delay the solution to a pressing global problem in order to mildly inconvenience our political enemy"?

Collaborating with existing social justice movements

I would personally like to see EA become more like Elon Musk and less like Buzzfeed. The Trump administration and movement are a bit like a screaming toddler; it's much easier to deal with one by distracting it with its favorite toys ("Macho! Innovative! Made in the US!") than by trying to start an argument with it. How can we find ways to persuade the Trump administration - or any other popular right-wing regime - that doing good is in its interest and conforms to its ideology? How can we sound right-wing enough that the political right (who currently hold all the legislative power in the US) practically think they thought of our ideas themselves?

Comment author: Daniel_Eth 17 February 2017 06:40:18AM *  2 points [-]

I'm gonna half-agree with this. I agree that we shouldn't in general as a community align with (or against) social justice causes, at least not in America.

I think there are many issues where taking a partisan view is still a good idea, though. I think we should align with the left on climate change, for example.

Comment author: the_jaded_one 18 February 2017 06:32:05PM *  1 point [-]

I think we should align with the left on climate change, for example.

Re: climate change, it would be really nice if we could persuade the political right (and left) that climate change is apolitical and that tackling it is just generally sensible, in the same way that building roads is apolitical and just generally sensible.

Technology is on our side here: electric cars are going mainstream, and wind and solar are getting better. I believe that we have now entered a regime where climate change will fix itself as humanity naturally switches over to clean energy, and the best thing that politics can do is get out of the way.

Comment author: BenHoffman 31 January 2017 10:35:15PM *  2 points [-]

Depending on the circumstances, a focus on preserving EA as a movement and avoiding disruptions to existing top philanthropic opportunities may miss the most important opportunities. My guess is that we'll do better asking questions like:

  • What types of disruptions might hamper our ability to coordinate with one another and outsiders to improve the world or mitigate emerging problems? (Different sub-problems may demand very different solutions.)

  • How can we solve these problems in a way that works for EA and other individuals and groups trying to do good? (We should try to generate solutions that transfer well, not just solve the problem for ourselves.)

  • Who else is already working on similar problems, i.e. making global cooperation more robust to war or other likely disruptive events? What can we do to help them or benefit from their help?

  • What disruptions are EAs especially well placed to mitigate?

  • Which interventions are likely to be most important in the event of various disruptions?

Comment author: the_jaded_one 01 February 2017 09:43:37PM *  1 point [-]

What disruptions are EAs especially well placed to mitigate?

I like this one. If you plan to do good in an uncertain future, it makes sense to take advantage of altruism's risk neutrality and put a lot of effort into scenarios that are reasonably likely but also favour your own impact.

In the event of a major disruption or catastrophe, such as a war or a negative political event in the EA heartland, global health work would suddenly become pretty useless - no-one would have the will or means to help people who are distant in space. But we would suddenly have much more leverage to help people who are distant in time, by trying to positively affect any recovery of civilisation. That could mean making the recovery happen sooner, or giving it some form of aid that is cheap for us. Robust preservation of information is a good idea: if there were a major disaster that destroyed the internet and most servers, followed by a long period of civilisational downtime, it might make sense to try to save and distribute key information - for example Wikipedia, certain key books, sites, courses, etc.

There might also be attempts to distort history in a very thorough way. Perhaps steps can be taken against this.

Comment author: RyanCarey 30 January 2017 04:54:41AM *  1 point [-]

I agree that people should be allowed to give criticism without talking to the critiqued organizations first. It does usually improve informativeness and persuasiveness, but if we required every critique to be of extremely high journalistic quality then we would never get any criticism done, so we have a lower standard.

By this point, though, the thread has created enough discussion that at least some of OpenPhil are probably reading it. Still, you're effectively talking about them as though they're not in the room, even though they are. The fix is to email them a link, and to try to give arguments that you think they would appreciate as input for how they could improve their activities.

Comment author: the_jaded_one 01 February 2017 05:41:25PM 0 points [-]

some of OpenPhil are probably reading it

...

The fix is to email them a link, and to try to give arguments that you think they would appreciate as input for how they could improve their activities.

Those arguments are in the post.

I am writing under a pseudonym so I don't have an easy way of emailing them without it going to their spam folder. I have sent an email pointing them to the post, though.

Comment author: Kerry_Vaughan 29 January 2017 08:27:13PM 1 point [-]

I most certainly wouldn't suggest that; I would suggest that they cease recommending both of these organizations, with the caveat that Cosecha is the worse of the two and first in line for being dropped.

As far as I can tell, nothing in your post or subsequent comments warrants that conclusion. If the issue is making sensitive recommendations seem like the opinion of EA, then better caveating can solve that issue. If the issue is that the charities are in fact ineffective, then you haven't provided any direct evidence of this, only the indirect point that political charities are often ineffective.

I'd find it hard to believe that there is something problematic about transmitting a recommendation in a post along with your epistemic status regarding it. It seems like 80K could do a better job of transmitting the epistemic status of the recommendation, but that's not an argument against recommending the charities in the first place.

Comment author: the_jaded_one 01 February 2017 05:33:24PM 0 points [-]

If the issue is that the charities are in fact ineffective, then you haven't provided any direct evidence of this, only the indirect point that political charities are often ineffective.

Where is the direct evidence that Cosecha is highly effective?

Comment author: Linch 01 February 2017 08:38:13AM *  2 points [-]

I'm really confused by both your conclusion and how you arrived at the conclusion.

I. Your analysis suggests that if Clinton had doubled her spending, her chances of winning would have increased by less than 2% (!)

This seems unlikely.

II. "Hillary outspent Trump by a factor of 2 and lost by a large margin." I think this is exaggerating things. Clinton had a 2.1% higher popular vote. 538 suggests (http://fivethirtyeight.com/features/under-a-new-system-clinton-could-have-won-the-popular-vote-by-5-points-and-still-lost/) that Clinton would probably have won if she had a 3% popular vote advantage.

First of all, I dispute that losing by less than 1-in-100 of the electoral body is a "large margin." Secondly, I don't think it's very plausible that shifting on the order of 1 million votes with $1 billion in additional funding has less than a 2% chance of changing the outcome. ($1,000 per vote is well within the statistics I've seen on GOTV efforts, and actually on the seriously high end.)

III. "I mean presumably even with 10x more money or $6bn, Hillary would still have stood a reasonable chance of losing, implying that the cost of a marginal 1% change in the outcome is something like $500,000,000 - $1,000,000,000 under a reasonable pre-election probability distribution."

I don't think this is the right way to model marginal probability, to put it lightly. :)

Comment author: the_jaded_one 01 February 2017 05:13:26PM 0 points [-]

I don't think this is the right way to model marginal probability, to put it lightly. :)

Well, really you're trying to estimate d/dx P(Hillary Win | spend x), and one way to do that is to model P as a linear function of x. More realistically it is something like a sigmoid.

For some numbers, see this

So if we assume:

P(Hillary Win | total spend $300M) = 25%
P(Hillary Win | total spend $3bn) = 75%

Then the average cost per unit of probability over that range - the reciprocal of the average value of d/dx P(Hillary Win | spend x) - is $2700M/0.5 = $5.4bn. Most likely the value of the derivative at the actual spend isn't too far off this average.

This isn't too far from $1000/vote x 3 million votes = $3Bn.
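
To make the sigmoid version concrete, here is a minimal sketch assuming a logistic curve calibrated to the two endpoints above. The $1.2bn "actual spend" figure is an illustrative assumption of mine, not a number from this thread:

```python
# A minimal sketch of the sigmoid model, assuming a logistic curve
# calibrated to the two endpoints quoted above. The $1.2bn "actual
# spend" figure is an illustrative assumption.
import math

def logit(p):
    return math.log(p / (1 - p))

# Calibrate P(win | spend x) = 1 / (1 + exp(-(x - m) / s)), x in $M.
x1, p1 = 300.0, 0.25    # assumed P(win | $300M)  = 25%
x2, p2 = 3000.0, 0.75   # assumed P(win | $3000M) = 75%
s = (x2 - x1) / (logit(p2) - logit(p1))  # scale, ~$1229M
m = x1 - s * logit(p1)                   # midpoint, $1650M

def p_win(x):
    return 1.0 / (1.0 + math.exp(-(x - m) / s))

def cost_per_unit_probability(x):
    """Reciprocal of d/dx P(win | spend x); for a logistic, P' = P(1-P)/s."""
    p = p_win(x)
    return s / (p * (1 - p))

print(cost_per_unit_probability(1200.0))  # ~5080 ($M), i.e. ~$5.1bn at the margin
print((x2 - x1) / (p2 - p1))              # 5400 ($M): the $5.4bn average above
```

Under these assumptions the marginal cost at a realistic spend level comes out close to the range-average, consistent with the claim above.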

Comment author: the_jaded_one 01 February 2017 12:01:51PM 0 points [-]

Well if we go with $1000 per vote and we need to shift 3 million votes, that's $3bn. Now let's map $3bn to, say, a 25% increased probability of winning, under a reasonable pre-election distribution.

Then you can think of a unit of win probability as costing $12bn, for a benefit of $4tn, which is a factor of roughly 330.
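
As a back-of-the-envelope check, here is the same arithmetic written out; every figure is an assumption quoted from the surrounding discussion ($1,000 per vote, 3 million votes, a 25% probability gain, and a $4tn stake):

```python
# Sketch of the cost-effectiveness arithmetic above; all inputs are
# the assumptions quoted in the surrounding comments.
cost_per_vote = 1_000                 # $ per vote (GOTV estimate, high end)
votes_needed = 3_000_000              # votes to shift
delta_p = 0.25                        # assumed win-probability gain from that spend

spend = cost_per_vote * votes_needed  # $3bn
cost_per_unit_p = spend / delta_p     # $12bn per unit of win probability
benefit = 4e12                        # assumed $4tn disutility averted

print(cost_per_unit_p)                # 12000000000.0 -> $12bn
print(benefit / cost_per_unit_p)      # ~333, i.e. a benefit/cost factor of ~330
```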

Comment author: Daniel_Eth 30 January 2017 05:15:34AM 3 points [-]

I have no personal knowledge of these specific charities, nor strong opinions on the effectiveness of criminal justice reform. I do think, however, that there are good reasons to consider political issues in EA circles. Governments are huge, and the effects of their actions can be gigantic. Even a minor improvement in the functioning of the US government can have an impact far greater than what almost any other organization can accomplish.

In 2016, all of my donations were to Hillary's election campaign (my logic can mostly be found here: http://thinkingofutils.com/2016/11/value-one-vote/). The GWWC pledge (which I've taken) states that giving should be "to the organisations that you think can do the most good with it." Had I given anywhere else instead, I would have been breaking the pledge, since I thought her campaign was the most effective use of my money on the margin.

Comment author: the_jaded_one 01 February 2017 07:59:08AM *  -2 points [-]

Hillary outspent Trump by a factor of 2 and lost by a large margin, so it's something of a questionable decision.

EDIT: I think a more realistic model might go something like this; you can tweak the figures to shift the result by a factor of 2-3, but not much more:

P(Hillary Win | total spend $300M) = 25%
P(Hillary Win | total spend $3bn) = 75%

Then the average cost per unit of probability over that range - the reciprocal of the average value of d/dx P(Hillary Win | spend x) - is $2700M/0.5 = $5.4bn. Most likely the value of the derivative at the actual spend isn't too far off this average.

This isn't too far from $1000/vote x 3 million votes = $3Bn.

So we could look at something like $5bn per unit of probability at the margin, i.e. each $1 increases the probability of Hillary winning by about 1/5,000,000,000.

You could probably do a very similar analysis for any political election at roughly this level of existing funding.

We can take a first approximation to the expected disutility of a very bad Trump presidency at $4tn, or one full year's GDP. This implies a very confident belief in an extremely negative outcome from a Trump presidency.

Is it competitive with global poverty? Well, it seems to be on a fairly similar level: for $5,000 you can save a life, which is typically valued at something like $5M-$25M - a similar "rate of return" to paying $5bn for $4tn via the Clinton campaign.
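
For concreteness, here is that comparison written out, taking the quoted figures at face value (all of them are the assumptions stated above, not established numbers):

```python
# Rough "rate of return" comparison using the figures quoted above.
life_cost = 5_000                             # assumed $ to save a life
life_value_low, life_value_high = 5e6, 25e6   # assumed valuations of a life

campaign_cost = 5e9                           # assumed $ per unit of win probability
campaign_value = 4e12                         # assumed disutility averted

print(life_value_low / life_cost, life_value_high / life_cost)  # 1000x to 5000x
print(campaign_value / campaign_cost)                            # 800x
```

An 800x return sits just below the 1000x-5000x range for global health, which is what "fairly similar level" amounts to here.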

Is this competitive with MIRI or the other AI risk orgs? Probably not, but your beliefs about AI risk factor into this quite a lot.

Comment author: jsteinhardt 29 January 2017 07:18:44PM 6 points [-]

OpenPhil made an extensive write-up on their decision to hire Chloe here: http://blog.givewell.org/2015/09/03/the-process-of-hiring-our-first-cause-specific-program-officer/. Presumably after reading that you have enough information to decide whether to trust her recommendations (taking into account also whatever degree of trust you have in OpenPhil). If you decide you don't trust it then that's fine, but I don't think that can function as an argument that the recommendation shouldn't have been made in the first place (many people such as myself do trust it and got substantial value out of the recommendation and of reading what Chloe has to say in general).

I feel your overall engagement here hasn't been very productive. You're mostly repeating the same point, and to the extent you make other points it feels like you're reaching for whatever counterarguments you can think of, without considering whether someone who disagreed with you would have an immediate response. The fact that you and Larks are responsible for 20 of the 32 comments on the thread is a further negative sign to me (you could probably condense the same or more information into fewer, better-thought-out comments than you are currently making).

Comment author: the_jaded_one 29 January 2017 08:17:39PM 0 points [-]

I don't think that can function as an argument that the recommendation shouldn't have been made in the first place

I agree, and I didn't mention that document or my degree of trust in it.

I feel your overall engagement here hasn't been very productive.

I suppose it depends what you want to produce. If debates were predictably productive I presume people would just update without even having to have a debate.

it feels like you're reaching for whatever counterarguments you can think of, without considering whether someone who disagreed with you would have an immediate response

What counterarguments is one supposed to make, other than the ones one thinks of? I suppose the alternative is to not make a counterargument at all, or to start a debate with all possible lines of play fully worked out in advance - a high standard, to be sure. Sometimes one simply doesn't anticipate the actual responses correctly. Is there some tax on the number of comments or responses? This point is valid to an extent, if someone is making really dumb arguments, but then again sometimes one has to ask the emperor why he isn't wearing any clothes.

Comment author: RyanCarey 29 January 2017 07:32:10PM 1 point [-]

I'm all for criticising organizations without having your post vetted by them. But at some point, it is useful to reach out to them to let them know about your criticism, if you want it to be useful, and it seems like you've now well passed that point.

Comment author: the_jaded_one 29 January 2017 07:57:56PM 0 points [-]

Can you elaborate?
