Comment author: Ben_Todd 08 September 2018 03:49:09PM 1 point [-]

See our new article about this topic: https://80000hours.org/articles/comparative-advantage/

Comment author: vollmer  (EA Profile) 08 August 2018 03:27:57PM *  4 points [-]

The sources you quote seem to suggest more like 5% real annual returns (or 7% nominal), whereas you wrote "2-7% nominal returns". If you're investing $2m, that would be $40k-$140k per year. I'd expect this to cost maybe one week of staff time per year, so it might easily be worth the cost. (Mission hedging and more diversification would push this up further; fees and risk aversion would push it down. Overall I don't expect these factors to be very strong, though.)
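
A minimal sketch of that arithmetic, with the $2m balance taken from the figure above and the staff-time cost a purely hypothetical placeholder:

```python
# Back-of-the-envelope check of the figures above. The balance is the $2m
# assumed in the comment; the staff-week cost is a made-up placeholder.
balance = 2_000_000                   # dollars held in cash
low_rate, high_rate = 0.02, 0.07      # the "2-7% nominal returns" range

low_gain = balance * low_rate         # $40,000 per year
high_gain = balance * high_rate       # $140,000 per year
staff_week_cost = 5_000               # hypothetical cost of one week of staff time

print(f"Annual gain: ${low_gain:,.0f} to ${high_gain:,.0f}")
print(f"Net of one staff-week: ${low_gain - staff_week_cost:,.0f} to ${high_gain - staff_week_cost:,.0f}")
```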

To me it seems that the difficulty of explaining this to the users is the stronger reason against implementing it. (Unless the users themselves can choose, but that would cost more staff time again.)

Comment author: Ben_Todd 09 August 2018 03:20:14AM 2 points [-]

Sorry, I was thinking of 1-5% real returns rather than nominal, though I agree that for these purposes nominal might be more relevant.

There's a lot of room for different figures depending on how mean-reverting you think valuations are. I think we should expect them to be somewhat mean-reverting, so my best guess is more like 3% real.

I was also partly thinking that valuations have risen a lot further since I wrote that post, so I think that article is a bit optimistic today: http://www.multpl.com/shiller-pe/
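
For context on where a ~3% real figure can come from, one common (and contested) rule of thumb is to use the cyclically adjusted earnings yield, 1/CAPE, as a rough estimate of long-run real equity returns. This isn't necessarily the method behind the figures above, and the CAPE value here is just illustrative:

```python
# Illustrative only: the earnings-yield heuristic treats 1/CAPE as a rough
# long-run real return estimate. The CAPE value below is a placeholder;
# see multpl.com/shiller-pe for the current figure.
cape = 32.0
expected_real_return = 1 / cape
print(f"Implied long-run real return: {expected_real_return:.1%}")  # ~3.1%
```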

With costs, I'm more thinking about the set-up cost. I wouldn't be surprised if this took several person-months, which could be used to add features that would add much larger proportional gains.

I also guess the ongoing costs would be quite a bit more than one week per year, for the reasons Rob lists.

And then there are also the costs of explaining this feature to users, which seem pretty significant: even if you write up a clear explanation of why you're doing this and speak to a bunch of people about it in person, you'll still end up with lots of misunderstandings.

Comment author: Eli_Nathan 07 August 2018 07:09:53PM 4 points [-]

Thanks Marek,

I remember some suggestions a while back to store the EA Funds cash (not crypto) in an investment vehicle rather than in a low-interest bank account. One benefit of this would be that donors could feel comfortable donating whenever they wish, rather than waiting until the last possible moment before funds are to be allocated (especially if the fund manager does not have a particular schedule). Just wondering whether there's been any thinking on this front?

Comment author: Ben_Todd 08 August 2018 06:04:37AM 4 points [-]

I agree that would be ideal, but it doesn't seem like a high priority feature. The risk-free 1yr interest rate is about 2% at the minute (in treasuries), so even if the money is delayed for a whole year, we're only talking about a gain of 2%, and probably more like 1% after transaction costs.
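
As a rough sketch of the size of the gain being weighed here (the donation amount, holding period, and cost figure are all illustrative assumptions, not actual EA Funds numbers):

```python
# Illustrative sketch of the upside from parking donations in 1-year
# treasuries rather than cash. All figures below are assumptions.
donation = 100_000       # hypothetical amount sitting in the fund
risk_free_rate = 0.02    # ~2% 1-year treasury rate quoted above
cost_rate = 0.01         # assumed ~1% per year lost to transaction/admin costs
months_held = 12         # worst case discussed above: delayed a whole year

gross_gain = donation * risk_free_rate * months_held / 12              # $2,000
net_gain = donation * (risk_free_rate - cost_rate) * months_held / 12  # $1,000
print(f"Gross gain: ${gross_gain:,.0f}; net gain: ${net_gain:,.0f}")
```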

You could invest in the stock market instead, but the expected return is still probably only 1-5% per year (as I argue here: https://80000hours.org/2015/10/common-investing-mistakes-in-the-effective-altruism-community/). Plus, then you have a major risk of losing lots of the money, which will probably be pretty hard to explain to many of the users, the press etc.

I expect the staff time spent adding and managing this feature could, if used in other ways, grow the impact of the funds by much more than a couple of percent (e.g. through the features Marek lists above).

Comment author: RandomEA 06 August 2018 09:48:35AM 2 points [-]

I'm aware of this and also planning on addressing it. One of the reasons that people associate the long term future with x-risk reduction is that the major EA organizations that have embraced the long term future thesis (80,000 Hours, Open Phil etc.) all consider biosecurity to be important. If your primary focus is on s-risks, you would not put much effort into biorisk reduction. (See here and here.)

Comment author: Ben_Todd 07 August 2018 04:21:43AM 2 points [-]

I agree the long-term value thesis and the aim of reducing extinction risk often go together, but I think it would be better if we separated them conceptually.

At 80k we're also concerned that there might be better ways to help the future, which is one reason why we highly prioritise global priorities research.

Comment author: RandomEA 05 August 2018 11:35:58AM 5 points [-]

I plan on posting the standalone post later today. This is one of the issues that I will do a better job addressing (as well as stating when an argument applies only to a subset of long term future/existential risk causes).

Comment author: Ben_Todd 05 August 2018 07:54:21PM 11 points [-]

As a further illustration of the difference with your first point, the idea that the future might be net negative is only a reason against reducing extinction risk, but it might be even more reason to focus on improving the long term in general. This is what the s-risk people often think.

Comment author: John_Maxwell_IV 04 August 2018 02:30:39PM 23 points [-]

Maybe this is off topic, but can any near future EAs recommend something I can read to understand why they think the near future should be prioritized?

As someone focused on the far future, I'm glad to have near future EAs: I don't expect the general public to appreciate the value of the far future any time soon, and I like how the near future work makes us look good as a movement. In line with the idea of moral trade, I wish there was something that the far future EAs could do for the near future EAs in return, so that we would all gain through cooperation.

Comment author: Ben_Todd 05 August 2018 04:39:36AM 3 points [-]

See a list of reasons why not to work on reducing extinction risk here: https://80000hours.org/articles/extinction-risk/#who-shouldnt-prioritise-safeguarding-the-future

See a list of counterarguments to the long-term value thesis here: https://80000hours.org/articles/future-generations/

There are also further considerations around coordination that we're writing about in an upcoming article.

Comment author: RandomEA 04 August 2018 08:10:38PM 4 points [-]

I'll consider expanding it and converting it into its own post. Out of curiosity, to what extent does the Everyday Utilitarian article still reflect your views on the subject?

Comment author: Ben_Todd 05 August 2018 04:36:52AM 16 points [-]

It's a helpful list, and I think these considerations deserve to be better known.

If you were going to expand further, it might be useful to add in more about the counterarguments to these points. As you note in a few cases, the original proponents of some of these points now work on long-term focused issues.

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there are better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

Comment author: Halstead 05 June 2018 10:43:07AM 0 points [-]

You argued that counterfactual impact may be smaller than it appears. But it may also be larger than it first appears, due to leveraging other orgs away from ineffective activities. For example, an NGO successfully advocates for a policy change P1; the benefits of P1 are its counterfactual impact. But as a result of the proven success of this type of project, 100 other NGOs start working on similar projects where before they worked on ineffective ones. This latter effect should also be counted as the first org's counterfactual impact. This could be understood as leveraging additional money into an effective space.

Comment author: Ben_Todd 05 June 2018 06:03:55PM 1 point [-]

Makes sense. I don't think Joey would object if orgs were counting this, though.

Comment author: Halstead 04 June 2018 02:06:04PM *  1 point [-]

Good points. This can also go the other way, though: an org could leverage money from otherwise very ineffective orgs. Especially with policy changes, it can sometimes be the case that a good org comes up with a campaign that steers the entire advocacy ecosystem onto a more effective path. A good example of this is the campaigns for ordinary air pollution regulations on coal plants, which were started in the 1990s by the Clean Air Task Force among others and now have hundreds of millions in funding from Bloomberg. If these campaigns hadn't been started, environmental NGOs in the US and Europe would plausibly be working on something much worse.

I don't think the notion of 'credit' is a useful one. At FP, when we were looking at orgs working on policy change, we initially asked them how much credit they should take for a particular policy change. They ended up saying things like "40%". I don't really understand what this means. It turned out to be best to ask them when the campaign and policy change would have happened had they not acted (obviously a very difficult question). It's best to couch things in terms of counterfactual impact throughout and not to convert into 'credit'.

Similarly with voting, if an election is decided by one vote and there are one million voters for the winning party, I think it is inevitably misleading to ask how much of the credit each voter should get. One naturally answers that they get one millionth of the credit, but this is wrong as a proposition about their counterfactual impact, which is what we really care about.

Indeed, focusing on credit can lead you to attribute impact in cases of redundant causation where an org actually has zero counterfactual impact. Imagine 100 orgs are working for a big policy change, and only 50 of them were necessary for the outcome (though this could be any combination of them, and they were all equally important). In this case, funding any one of the orgs had zero counterfactual impact, because the change would have happened without it. But on the 'credit approach', you'd end up attributing one hundredth of the impact to each of the orgs.
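
The redundant-causation point can be made concrete with a toy model (purely illustrative, just restating the paragraph above in code):

```python
# Toy model: 100 orgs campaign for a policy change; any 50 of them acting
# would have been sufficient (redundant causation).
N_ORGS, NEEDED = 100, 50

def change_happens(n_orgs_acting: int) -> bool:
    return n_orgs_acting >= NEEDED

# Counterfactual impact of funding one org: outcome with it minus outcome without it.
counterfactual_impact = int(change_happens(N_ORGS)) - int(change_happens(N_ORGS - 1))
credit_share = 1 / N_ORGS    # what the 'credit approach' would attribute to each org

print("Counterfactual impact of one org:", counterfactual_impact)  # 0
print("Credit-approach share per org:", credit_share)              # 0.01
```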

Comment author: Ben_Todd 05 June 2018 05:00:49AM -1 points [-]

I agree; I was talking a bit too loosely. When I said "assign credit of 30% of X" I meant "assign counterfactual impact of 30% of X". My point was just that even if you do add up all the counterfactual impacts (ignoring that this is a conceptual mistake, as you point out), they rarely sum to more than 100%, so it's still not a big issue.

I'm not sure I follow the first paragraph about leveraging other groups.

Comment author: Ben_Todd 03 June 2018 10:41:58AM 3 points [-]

On the practical point, one mitigating factor is that I think cases like these are fairly uncommon:

The previous example used donations because it's easy and clear cut to make the case that this is the wrong move without getting into more difficult issues, but it generalizes to talent as well. For example, recently, Fortify Health was founded. Clearly the founders deserve 100% of the impact: without them, the project certainly would not have happened. But wait a second: both of them think that without Charity Science's support, the project would definitely not have happened. So, technically, Charity Science could also take 100% credit. (Since from our perspective, if we did not help Fortify Health it would not have happened, it is a project 100% counterfactually caused by Charity Science.) But wait a second, what about the donors who funded the project early on (because of Charity Science's recommendation)? Surely they deserve some credit for impact as well! What about the fact that without the EA movement, it would have been much less likely for Charity Science and Fortify Health to connect? With multiple organizations and individuals, you can very easily attribute a lot more impact than actually happens.

In our impact evaluations, and in my experience talking to others in the community, we would never give 100% of the impact to each group. For instance, if Charity Science didn't exist, the founders of Fortify might well have ended up pursuing a similar idea anyway: it's not as if Charity Science is the only group promoting evidence-based global health charities, and if it didn't exist, another group like it probably would have sprung up eventually. What's more, even if the founders didn't do Fortify, they would probably have done something else high-impact instead. So the impact of Charity Science should probably be much less than 100% of Fortify's. And the same is true for the other groups involved.

At 80,000 Hours, we rarely claim more than 30% of the impact of an event or plan change, and we most often model our impact as a speed-up (e.g. we assume the career changer would have eventually made the same shift, but we made it come 0.5-4 years earlier). We also sometimes factor in costs incurred by other groups. All this makes it hard for credit to add up to more than 100% in practice.
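
As a rough sketch of why these conventions keep attributed impact well under 100% (every figure below is invented for illustration; these are not 80,000 Hours' actual parameters):

```python
# Illustrative attribution sketch. All numbers are made up.
annual_value_of_change = 10_000   # hypothetical value per year of a career change
remaining_career_years = 30       # hypothetical remaining career length

naive_full_credit = annual_value_of_change * remaining_career_years  # "100% of the impact"

# Convention 1: rarely claim more than 30% of an event or plan change.
capped_claim = 0.30 * naive_full_credit

# Convention 2: model the impact as a speed-up of 0.5-4 years rather than the whole change.
speedup_low = annual_value_of_change * 0.5
speedup_high = annual_value_of_change * 4.0

print(f"Naive 100% credit:     {naive_full_credit:,.0f}")
print(f"30% cap:               {capped_claim:,.0f}")
print(f"Speed-up (0.5-4 yrs):  {speedup_low:,.0f} to {speedup_high:,.0f}")
```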
