Comment author: Austen_Forrester 19 June 2017 10:06:38PM 0 points

Have you considered combining the "GiveWell for impact investing" idea with the Effective Altruism Funds idea and creating an EA impact-investing business within your charity? You could hire staff to find the best impact-investing opportunities and create a few funds for different risk tolerances. Theoretically, it could pay for itself (or make serious money for CEA if successful enough) with a modest management fee. I'm not sure if charities are allowed to make grants to businesses, but I know they can operate their own businesses as long as they're related to their mission.

Comment author: ThomasSittler 25 June 2017 08:35:23PM 0 points

Hi Austen :) You may find this page useful, especially the "further reading".

Comment author: ThomasSittler 31 May 2017 11:59:15PM 4 points

It seems to me that the main reason people are sometimes insufficiently friendly or reliable is not the lack of papers detailing the benefits of considerateness.

Comment author: BenHoffman 21 May 2017 11:29:59PM 4 points

Our prior strongly punishes MIRI. While the mean of its evidence distribution is 2,053,690,000 HEWALYs/$10,000, the posterior mean is only 180.8 HEWALYs/$10,000. If we set the prior scale parameter to anything larger than about 1.09, the posterior estimate for MIRI is greater than 1,038 HEWALYs/$10,000, thus beating 80,000 Hours.

This suggests that it might be good in the long run to have a process that learns what prior is appropriate, e.g. by going back and seeing what prior would have best predicted previous years' impact.
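
A minimal sketch of both ideas in R (the language of the project's Shiny app). The distributions, grid, and all numbers below are illustrative assumptions, not the project's actual model:

    # Sensitivity of the posterior mean to the prior scale, via a simple grid
    # update on the log10(HEWALYs/$10,000) scale. Assumes a sceptical normal
    # prior centred at 0 and a normal "evidence" distribution -- illustrative only.
    posterior_mean <- function(prior_scale, ev_mean = 9.3, ev_sd = 2) {
      x <- seq(-5, 15, length.out = 1e4)        # grid over log10 impact
      post <- dnorm(x, 0, prior_scale) * dnorm(x, ev_mean, ev_sd)
      sum(x * post / sum(post))                 # posterior mean (log10 scale)
    }
    posterior_mean(0.5)    # strong shrinkage toward the sceptical prior
    posterior_mean(1.09)   # weaker shrinkage, much higher posterior mean

    # "Learning" the prior from past data: pick the scale that maximises the
    # log-likelihood of previous years' impact. Past figures are hypothetical.
    past_impact <- c(1.2, 0.8, 2.5, 1.7, 3.1)   # hypothetical log10 impacts
    scales <- seq(0.1, 3, by = 0.01)
    loglik <- sapply(scales, function(s) sum(dnorm(past_impact, 0, s, log = TRUE)))
    scales[which.max(loglik)]                   # best-predicting prior scale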

Comment author: ThomasSittler 23 May 2017 11:18:47AM 2 points

With the possible exception of StrongMinds, it's not the case that the previous years' impact is much easier to estimate than 2017's impact.

Comment author: Peter_Hurford 21 May 2017 11:09:32PM 1 point

If you disagree with our prior parameters, we encourage you to try your own values and see what you come up with, in the style of GiveWell, who provide their parameters as estimated by each staff member.

Do you have these numbers published, broken down by staff member?

It would also be cool to see breakdowns of the HEWALYs/$ for each charity before and after the Bayesian update with the prior.

Comment author: ThomasSittler 23 May 2017 11:17:10AM 0 points

No, we only have our group estimate published.

To see HEWALYs/$ before updating, you can look at the model outputs. You can also compute them in our R Shiny app here by simply adding a line such as mean(miri).
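
For instance, assuming miri is the app's numeric vector of Monte Carlo samples of MIRI's HEWALYs/$10,000 before the Bayesian update (the variable name is the one mentioned above; the extra line is a sketch):

    mean(miri)                          # pre-update expected cost-effectiveness
    quantile(miri, c(0.1, 0.5, 0.9))    # spread of the pre-update estimate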

Comment author: Peter_Hurford 21 May 2017 11:22:45PM 1 point

we convert all estimates to “Human-equivalent well-being-adjusted life-years” (HEWALYs)

How do you do the "human equivalent" part?

Comment author: ThomasSittler 23 May 2017 11:14:50AM 0 points

The process for this is described in the post for each model. I'll be happy to clarify if you still have questions after reading that.

Comment author: MichaelPlant 15 May 2017 12:37:17AM 3 points

Great to see this here. Thanks Konstantin and Lovisa. A couple of thoughts.

It would be good if you'd put your headline results in the post, along with what, if anything, you think follows from your conclusions (e.g. whether you now consider StrongMinds more or less effective than something else, such as AMF).

Can you provide a link to the 0.658 DALY rating for depression? I can never remember how much of that is "years of life lost" and how much is "years lived with disability". These are the two parts that make up DALYs, and I think people should present them separately; there are different views you can take on the badness of deaths. This is helpful to those of an Epicurean persuasion, such as myself, who are more concerned with making people happy than with just keeping people alive (to riff off Narveson 1967).
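
For reference, the standard WHO/GBD decomposition alluded to here is the following (if 0.658 is a disability weight, it enters only the YLD term):

    DALY = YLL + YLD
    YLL  = N × L         (deaths × standard life expectancy at age of death)
    YLD  = I × DW × D    (incident cases × disability weight × duration)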

Comment author: ThomasSittler 23 May 2017 11:07:16AM 0 points

Thanks for the comment.

Regarding Epicureanism: more generally, breaking everything down into years of life lost to death, years of suffering, and years of potential future life enabled would be a good idea for a future improvement of the models. It would let us see how the models work for people with different values. By the way, anyone can make a copy of our models and adapt them! :)

Regarding your (separate) view that depression is much worse than consensus DALY weights suggest: I think this is best thought of in relative terms. You want to make sure the HEWALY weights across all four models are consistent with your values and empirical beliefs, rather than adjusting only the DALY weight on depression in the StrongMinds model.

Comment author: Peter_Hurford 14 May 2017 08:55:32PM 1 point

Wouldn't ~$660/DALY likely be less cost-effective than GiveDirectly?

Comment author: ThomasSittler 23 May 2017 11:00:07AM 0 points

Wouldn't ~$660/DALY likely be less cost-effective than GiveDirectly?

That's indeed what our current model says. I have some more comments at the bottom of this post.

Comment author: Peter_Hurford 21 May 2017 11:05:29PM 3 points

There is a fixed pool of charities that ACE will evaluate, which consists of 10-15 charities.

While I think this is a good simplifying assumption, it's incorrect, and relaxing it could dramatically change your model. I think this assumption is what drives the conclusion that "ACE will likely identify pretty-good charities very early on, and additional rounds do not lead to much change".

However, I'd view ACE as potentially still building research capacity to eventually evaluate more speculative, harder-to-understand options (such as the recent recommendation of The Good Food Institute) that previously could not be evaluated and may end up being cost-effective.

I also think this capacity will produce lumpy breakthroughs in evaluating cost-effectiveness and refining accuracy. Many of these breakthroughs have not happened yet, and I could see them dramatically changing ACE's top charity list.

I don't have strong views on whether ACE is the best place for donations, all charities and causes considered. But I do strongly think that assuming ACE has already hit diminishing returns to research investment is a mistake, and I do weakly think that building more research capacity, and direct research, are the most important investments in the animal-interested EA space.

(Disclaimer: I'm on the board of Animal Charity Evaluators, but only speak for myself here. I do not speak for ACE and I may have (and often do have) differing opinions than the ACE consensus.)

Comment author: ThomasSittler 23 May 2017 10:58:34AM 1 point

So there are a few distinct claims here:

(i) ACE is building research capacity

(ii) ACE having more of this capacity in the future will enable them to evaluate a larger number of charities (including ones that are harder to evaluate).

(iii) ACE having more of this capacity in the future will enable them to evaluate charities with higher expected impact.

I'm not sure whether you're claiming (ii) or (iii) or both. Could you also say a bit about what evidence you see for these three claims? Thanks for the useful comment!

Comment author: PeterMcCluskey 22 May 2017 02:41:16PM 5 points

Can you explain your expected far future population size? It looks like your upper bound is something like 10 orders of magnitude lower than Bostrom's most conservative estimates.

That disagreement makes all the other uncertainty look extremely trivial in comparison.
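
For concreteness: ten orders of magnitude below the 10^23 figure quoted in the reply below would put the model's upper bound at roughly

    10^23 / 10^10 = 10^13 people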

Comment author: ThomasSittler 23 May 2017 10:55:02AM 1 point

Do you mean Bostrom's estimate that "the Virgo Supercluster could contain 10^23 biological humans"? This did come up in our conversations. One objection raised was that humanity could go extinct, or that colonisation of the Supercluster could for some other reason have a very low probability. There was significant disagreement among us, and if I recall correctly we chose the median of our estimates.

Do you think Bostrom is correct here? What probability distribution would you have chosen for the expected far future population size? :)

Comment author: Peter_Hurford 21 May 2017 11:17:47PM 1 point

I think you should add more uncertainty to your model around the value of an 80K career change (in both directions). While one impact-adjusted plan change is approximately equal in value to a GWWC pledge, that doesn't mean the two are equal in both mean and standard deviation, as your model suggests, since plan changes encompass a wide variety of different possibilities.

It might be good to work with 80K to get more detail about the kinds of career changes being made, and to model the types of career change separately. Certainly, some people do take the GWWC pledge, and that change is straightforwardly comparable with the value of the GWWC pledge (minus concerns about 80K's counterfactual share), but other people make much higher-risk, higher-reward career changes, especially in the 10x category.

Speaking just for myself: looking at a few examples in the 80K 10x category, I've found them to be highly variable (including some changes that I'd personally judge as less valuable than the GWWC pledge). While this is certainly not a systematic analysis on my part, it suggests your model should include more uncertainty than it currently does.

Lastly, I think your model currently assumes 80K is 100% responsible for all of their career changes. Maybe this is completely fine because 80K already weights their reported career-change numbers for counterfactual impact? Or maybe there's some other good reason not to take this into account? I admit there's a good chance I'm missing something here, but it would be nice to see it addressed more specifically.
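
As a rough sketch of what modelling the change types separately, plus a counterfactual discount, might look like in R — all shares and distribution parameters below are hypothetical, not 80K's actual data:

    # Hypothetical mixture model of 80K plan changes, in GWWC-pledge-equivalents.
    # Type shares and lognormal parameters are illustrative assumptions only.
    set.seed(1)
    n <- 1e5
    type  <- sample(c("pledge-like", "10x"), n, replace = TRUE, prob = c(0.8, 0.2))
    value <- ifelse(type == "pledge-like",
                    rlnorm(n, meanlog = 0,      sdlog = 0.3),   # close to one pledge
                    rlnorm(n, meanlog = log(4), sdlog = 1.5))   # high-variance changes
    counterfactual_share <- 0.7           # illustrative discount for 80K's causal share
    mean(value * counterfactual_share)    # mean in pledge-equivalents
    sd(value * counterfactual_share)      # much larger spread than a point estimate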

Comment author: ThomasSittler 23 May 2017 10:41:51AM 1 point

One clarification: our current model incorporates uncertainty at the stage where GWWC-donation-equivalents are converted to HEWALYs. We do not additionally have uncertainty on the value, in GWWC-donation-equivalents, of a plan change scored "10"; we do have uncertainty on the 0.1s and the 1s.
