Comment author: oagr 06 May 2018 06:13:26AM 1 point [-]

I'd agree that you would have to assume that your project is 4x more efficient on the marginal dollar. However, I think this actually is the case for many of the things Open Phil funds. This could be much less the case in the animals space, but it definitely seems to be the case in the x-risk space, where there are relatively few groups. My impression is that the current thinking is that groups below some "threshold" are expected to be neutral or actually net-negative, and very few groups are safely above that threshold (in the x-risk space).

Open Phil has access to a lot of money; I'm quite sure they could safely spend a lot more than they currently do.

Comment author: oagr 06 May 2018 06:17:55AM 1 point [-]

Another point: living in the bay is pretty expensive and is becoming more so. I don't see any solutions to this on the horizon. Having a bunch of people all live & work here seems pretty efficient, at least until internet communication becomes a decent amount better.

Rent, taxes, and health expenses (gym memberships, healthy food, etc.) can add up pretty quickly.

Comment author: Joey 06 May 2018 05:48:12AM 1 point [-]

So I am not sure the tradeoff between total output per person and financial stinginess is so clear. To stick with the Open Phil example, what matters is not just the maximum they are willing to fund, it's the counterfactual value of their last donated dollar. For example, if one AR charity takes, say, 2x what it could run on (prioritizing output per person over frugality), you would have to count the counterfactual: 50% of that donation could instead have gone to the last charity that Lewis ends up funding with Open Phil (or maybe the last one in that given year). In either of those situations the counterfactuals are definitely not 0. For example, say I personally would be 25% more effective if I were paid $100k instead of $50k (2x the salary). I would then have to assume my project is 4x better than the counterfactual project Lewis would otherwise donate to. This could be true for one AR charity vs another, but I would say it's far from obvious. I will also note I would be quite surprised if the personal gains from increased salaries are that high in most cases, but I would be super keen on more data on this.
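A quick back-of-the-envelope sketch of the break-even multiplier described above. The figures ($50k base, 2x salary, 25% effectiveness gain) are illustrative numbers from the comment, not real data:

```python
# Break-even check: how much better per dollar must a project be for a
# doubled salary to beat donating the extra money to the funder's
# marginal charity?
base_budget = 50_000       # frugal budget the project could run on ($)
padded_budget = 100_000    # budget with the doubled salary ($)
effectiveness_gain = 0.25  # extra output assumed to be bought by the raise

extra_spend = padded_budget - base_budget  # the $50k that could go elsewhere

# The extra $50k buys `effectiveness_gain` of this project's base output,
# versus (extra_spend / base_budget) project-units at the counterfactual
# charity. Dividing gives the required per-dollar advantage.
break_even_multiplier = (extra_spend / base_budget) / effectiveness_gain

print(break_even_multiplier)  # 4.0 -- the project must be ~4x the counterfactual
```

So under these assumptions the project needs to be roughly 4x as cost-effective as the funder's marginal grantee for the higher salary to pay off, matching the figure in the comment.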

Comment author: oagr 06 May 2018 06:15:51AM 1 point [-]

I guess the crux here is whether marginal group effectiveness follows an exponential curve or similar. My impression is that it does, but I accept that this is an empirical question where I am not very sure.

Outside of the space of things "Open Phil and related groups find interesting" though, all bets are off in regard to this. It seems like there are a bunch of small things that do deserve more funding.


Comment author: oagr 06 May 2018 01:52:48AM *  4 points [-]

I think I'm mostly in agreement here. This thinking can lead to cult-ish groups, but my guess is that deliberate decision-making could lead to very productive and safe outcomes.

I also think that it's nice to be able to have groups of people to aspire to. This is obviously a fictional example, but I think the fact that the Jedi of Star Wars lived by such strict routines made them more admirable, as opposed to being seen as "holier-than-thou" individuals whom others would shame for setting too good an example.

One point I'd press back against is the line: "Salaries at EA orgs have gone up significantly over time as well as more frequent retreats and other staff benefits." My impression is that there is a lot of money out there (for groups that Open Phil is comfortable funding), so the cost of paying these employees is relatively low. It seems to me like having more money should help your productivity and allow you to be more intense. I would be a lot more focused on total output per person than on financial stinginess.

My model of a very intense person is similar to one of the intense entrepreneurs here; hopefully, they have a lot of resources available to them, but they do a lot with those resources.

Comment author: oagr 06 May 2018 02:04:41AM *  2 points [-]

Related: Dragon Army kind of tried something in this vein. I think the main posts about it have been taken down. https://www.lesswrong.com/posts/KShZSJyBwY7K3hcWN/on-dragon-army

I believe it didn't work as well as they were hoping, but I don't think them failing is much evidence that intense communities can't work.

Comment author: oagr 06 May 2018 01:58:12AM 0 points [-]

One small point: The title of this is "Should there be an EA crowdfunding platform?" You list the costs and benefits in the post, which is fantastic, but I'm not a big fan of the wording of the title. "Should" is a really messy word; what I expect this means is something more specific, like "Does the expected value of this project outweigh the opportunity costs?" A similar, shorter title could be something like "What is the net benefit of an EA crowdfunding platform?"

This comment is in part for others seeing this who may make similar posts in the future.

Comment author: oagr 27 November 2017 02:20:17AM 1 point [-]

Thanks for taking the time to write this up, and especially for being so honest about it. Almost all new experiments are going to be failures, but the vast majority kinda get disguised as possible successes or totally swept under the rug. I would imagine that one of the most important things at this point is that neither of you gets demotivated.

Comment author: oagr 25 November 2017 11:04:09PM 2 points [-]

I'm kinda surprised I haven't seen more information about deliberate practice as a manager. The specific issue mentioned here seems to be predictions made about people. Maybe in 10 years AI systems will be better than top managers at doing this sort of thing?

Emotion Inclusive Altruism vs. Emotion Exclusive Altruism

Summary: People discussing common acts of altruism and those arguing against its existence are using different definitions of altruism. Cross-posted to my medium blog. You smell fire, run outside, see a burning building, hear a child screaming, run inside the building, grab the child, run out, and the building... Read More
Comment author: oagr 27 September 2016 05:15:22AM 3 points [-]

Great work, I'm really looking forward to following your progress!
