Michael_S

227 karma · Joined May 2015

Posts: 1

Comments: 63

"My view is that - for the most part - people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it."

This seems unlikely to me. I think utilitarianism broadly encourages pro-social/cooperative behaviors, especially because utilitarianism encourages caring about collective success rather than individual success. Having a positive community and trust helps achieve these outcomes. If you have universalist moralities, it's harder for defection to make sense.

Broadly, I think worries that utilitarianism/consequentialism will lead to negative outcomes are often self-defeating, because the utilitarians/consequentialists see the negative outcomes themselves. If you went around killing people for their organs, the consequences would obviously be negative; it's the same for going around lying or being an asshole to people all the time.

In general, I'm a big fan of approaches that are optimized around Value of Information. Given EA/longtermism's rapidly growing resources (people and $), I expect that acquiring information to make use of resources in the future is a particularly high EV use of resources today.
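(A minimal sketch of the value-of-information framing, with made-up payoffs and probabilities rather than anything from the post: it computes the expected value of perfect information for a toy choice between two hypothetical interventions.)

```python
# Toy expected-value-of-perfect-information (EVPI) calculation.
# All numbers are hypothetical; "state A/B" stands for some uncertainty
# (e.g. which of two cause areas turns out to matter more).

p_state_a = 0.5  # prior probability that state A holds

# Payoff (arbitrary units) of funding each intervention in each state.
value = {
    "intervention_1": {"A": 10, "B": 2},
    "intervention_2": {"A": 3, "B": 8},
}

def expected_value(intervention: str, p_a: float) -> float:
    v = value[intervention]
    return p_a * v["A"] + (1 - p_a) * v["B"]

# Without more information, we commit to whichever option looks best now.
ev_without_info = max(expected_value(i, p_state_a) for i in value)

# With perfect information, we learn the state first, then pick the best option.
ev_with_info = p_state_a * max(v["A"] for v in value.values()) + (
    1 - p_state_a
) * max(v["B"] for v in value.values())

evpi = ev_with_info - ev_without_info
print(f"EV without info: {ev_without_info}")  # 6.0
print(f"EV with info:    {ev_with_info}")     # 9.0
print(f"EVPI:            {evpi}")             # 3.0
```

In this toy case, resolving the uncertainty before committing is worth 3 units on its own, half the expected value of the best blind choice, which is the sense in which buying information can be a very high-EV use of resources today when the pool of resources to deploy later is growing.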

Congrats!

I think part of this is about EAs recalibrating what is "crazy" within the community. In general, I think the right assumption is that if you want $ to do basically anything, there's a good chance (honestly >50%) you can get it.

If you don't want someone to do something, it makes sense not to offer a large amount of $. For the second case, I'm a bit confused by this statement:

"the uncertainty of what the people would do was the key cause in giving a relatively small amount of money"

What do you mean here? That you were uncertain about which path was best?
 

Very interesting, valuable, and thorough overview!

I notice you mentioned providing grants of 30k and 16k that were turned down or are likely to be turned down. Do you think this might have been due to the amounts of funding? Might levels of funding an order of magnitude higher have caused a change in preferences?

Given the amount of funding in longtermist EA, if a project is valuable, I wonder whether grants closer to that larger scale might be warranted. Obviously the project itself only had 300k in funding, so grants that large might not have been practical here. However, from the perspective of longtermist EA funding as a whole, routinely giving away funding at that scale for projects would be practical.
 

I work in Democratic data analytics in the US and I agree that there's potentially a lot of value to EAs getting involved in the partisan side rather than just the civil service side to advance EA causes. If anyone is interested in becoming more involved in US politics, I'd love to talk to them. You can shoot me a message.

Hey; I work in US politics (in Data Analytics for the Democratic Party). Would love to chat if you think it would be useful for you.

Yes. People aren't spending much money yet because voters will mostly forget about it by the election.

Independent of the desirability of spending resources on Andrew Yang's campaign, it's worth mentioning that this overstates the gains to Steyer. Steyer is running ads with little competition (which makes ad effects stronger), but the reason there is little competition is that decay effects are large; voters will forget about the ads and see new messaging over time. Additionally, Morning Consult shows higher support for Steyer than all other pollsters do; his polling average in the early states is considerably less favorable.

I'd be curious which initiatives CSER staff think would have the largest impact in expectation. The UNAIRO proposal in particular looks useful to me for making AI research less of an arms race and spreading values between countries, while being potentially tractable in the near term.
