Comment author: SiebeRozendal 19 February 2018 02:38:28PM *  3 points [-]

I like this post Milan, I think it's the best of your series. I think that you rightly picked a very important topic to write about (cluelessness) that should receive more attention than it currently does. I do have some comments:

Although I admire new ways to think about prioritisation, I have two worries. The first is conceptual distinction: wisdom and predictive power seem not conceptually distinct. Both are about our ability to identify and predict the probability of good and bad outcomes. Intent also seems a little tangled up in wisdom, although I can see that we want to separate those. Furthermore, intent influences coordination capability: the more the intentions of a population differ, the more difficult coordination becomes.

This creates the second worry: this model adds only one dimension (Intent) to Bostrom's 3-dimensional model of Technology [Capacity] - Insight [Wisdom] - Coordination. Do you think this increases the usefulness of the model enough? The advantage of Bostrom's model is that it allows for differential progress (wisdom > coordination > capacity), while you don't specify the interplay of attributes. Are they supposed to be multiplied, or are some combinations better than others, or do we want differential progress?

I was a bit confused that you write about things to prioritise, but don't refer back to the 5 attributes of the steering capacity. Some relate more strongly to specific attributes, and some attributes are not discussed much (coordination) or at all (capability).

Further our understanding of what matters

This seems to be Intent in your framework. I totally agree that this is valuable. I would call this moral (or more precisely: axiological) uncertainty, and people work on this outside of EA as well. By the way, besides resolving uncertainty, another pathway is to improve our methods for dealing with moral uncertainty (as MacAskill argues).

Improve governance

I am not sure which concept this relates to, though I suppose it is Coordination. I find the discussion a bit shallow here, as it discusses only institutions, and not the coordination of individuals in e.g. the EA community, or the coordination between nation states.

Improve prediction-making & foresight

This seems to be the attribute predictive power. I agree with you that this is very important. To a large extent, this is also what science in general is aiming to do: improving our understanding so that we can better predict and alter the future. However, straight-up forecasting seems more neglected. I think this could also just be called "reducing empirical uncertainty"? If we call it that, we can also consider other approaches, such as researching effects in complex systems.

Reduce existential risk

I'm not sure this was intended to relate to a specific attribute. Guess not.

Increase the number of well-intentioned, highly capable people

This seems to relate mostly to "Intent" as well. I wanted to remark that this can be done either by increasing the capability and knowledge of well-intentioned people, or by improving the intentions of capable (and knowledgeable) people. My observation is that so far, in terms of growth and outreach, the focus has been on the latter, and only some effort has been expended to develop the skills of effective altruists. (Although this is noted as a comparative advantage for EA groups.)

Lastly, I wanted to remark that hits-based giving does not, in my opinion, imply a portfolio approach. It just implies being more or less risk-neutral in altruistic efforts. What drives the diversification in OPP's grants seems to be worldview diversification, option value, and the possibility that high-value opportunities are spread across cause areas rather than concentrated in one cause area. What would support the conclusion that we need to diversify is the possibility that a project fails unless it hits a certain value on each of the attributes (a bit like how power laws arise when success requires A×B×C instead of A+B+C).

All in all, an important project, but I'm not sure how much novel insight it has brought (yet). This is quite similar to my own experience: I wrote a philosophy essay about cluelessness and arrived at not-so-novel conclusions. Let me know if you'd like to read the essay :)

Comment author: Milan_Griffes 19 February 2018 04:46:33PM 0 points [-]

Wisdom and predictive power seem not conceptually distinct.

I'm using "predictive power" as something like "ability to see what's coming down the pipe" and "wisdom" as something like "ability to assess whether what's coming down the pipe is good or bad, according to one's value system."

On your broader point, I agree that these attributes are all tangled up in each other. I don't think there's a useful way to draw clean distinctions here.

I was a bit confused that you write about things to prioritise, but don't refer back to the 5 attributes of the steering capacity.

This is a good point, I'll think about this more & get back to you.

quite similar to my own experience in that I wrote a philosophy essay about cluelessness

I'd like to read this. Could you link to it here, or (if private) send it to the email address on this page?

Comment author: Milan_Griffes 13 February 2018 03:18:38PM 0 points [-]

I donated 92% to the donor lottery, 8% to GiveDirectly.

Also made a "fuzzies" donation to the meditation group I attend.

Comment author: Milan_Griffes 09 February 2018 01:27:23AM 0 points [-]

When is the next round of EA grants opening?

Are you considering accepting applications on a rolling basis?

Comment author: landon 06 February 2018 04:44:29PM 0 points [-]

Hey, I think this is really cool. Do you have any other recommended reading that's similar to this? I'm really interested in how improving mental health and increasing joy compares to reducing suffering through other means, which is what most of EA focuses on.

Comment author: Milan_Griffes 06 February 2018 10:54:53PM 0 points [-]

Jon Kabat-Zinn's Full Catastrophe Living is the best book on MBSR. It sorta drags in the middle though – I recommend reading the first third & using the rest as a reference.

Comment author: Peter_Hurford  (EA Profile) 31 January 2018 03:41:02PM *  14 points [-]

I also find it more instructive to think of it in terms of percentages -- the Global Development fund still holds 49% of all the money it has ever received, the far future fund holds 95%, and the community fund holds 71%.

There can definitely be good reasons for this (such as more merits for giving later vs. giving now, or saving up to give a larger grant in one big batch). I don't know whether it's an intentional application of one of those two things or just that Nick and Ellie are exceptionally busy and have more important priorities than the EA Funds, but it would be nice to have more transparency about why funds are distributed the way they are. (Lewis does a good job at this.)

Comment author: Milan_Griffes 01 February 2018 03:22:29AM 0 points [-]

Minor typo: it's "Elie" not "Ellie"

Comment author: Khorton 22 January 2018 09:53:04AM 2 points [-]

It might be difficult to estimate the likelihood of the N Korean population overthrowing their government, and how much of a difference each flash drive makes, but I'd be interested to see an estimate as well if someone wants to try.

Comment author: Milan_Griffes 22 January 2018 11:18:50PM 3 points [-]

It's also complicated to assess the sign of a North Korean popular uprising. Depending on the geopolitical background, it seems like it could result in an overthrow of the regime, or in a crackdown coupled with a more aggressive foreign policy stance (or some other outcome).

Comment author: Milan_Griffes 22 January 2018 11:06:42PM *  2 points [-]

Thanks for writing this up – the ceiling-cost estimate seems like a valuable tool for comparing interventions across different cause areas.

Second, I also assumed it would have an effect of 0.1 HALYs per person per year. This might be high, so let's assume it has a 1/10th of that impact.

0.01 HALY/person/year on average still seems quite high. We're estimating the average impact across a billion people, and any sort of systemic reform is going to have an enormous number of impacts (of varying magnitudes, in both directions). Attributing 0.01 HALY on average sorta assumes there aren't any really big negative impacts (more precisely, that any negative impacts are insubstantial compared to the positive impacts).
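To make concrete why even the discounted figure is a large claim in aggregate, here is a quick back-of-the-envelope sketch (restating only the numbers quoted above; the population figure is the rough "billion people" from the discussion, not a precise estimate):

```python
# Back-of-the-envelope: aggregate impact implied by the per-person estimate.
population = 1_000_000_000        # roughly a billion people affected
haly_per_person_year = 0.01       # the discounted estimate (0.1 * 1/10)

total_halys_per_year = population * haly_per_person_year
print(total_halys_per_year)       # 10 million HALYs per year, on average
```

An average effect of ten million HALYs per year is the kind of figure that only holds if negative impacts really are negligible relative to the positive ones.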

It also seems difficult to separate out the impacts that are appropriate to attribute to the systemic reform from all the other effects that are going on in the background.

All this is to say that I think arriving at believable average-impact estimates for systemic interventions is tricky. It's probably one of the harder parts of making good ceiling-cost estimates.

Comment author: Milan_Griffes 22 January 2018 10:34:41PM 2 points [-]

Maybe include the winners in the blurb so we don't have to click through to the article?

Comment author: MichaelPlant 17 January 2018 04:14:32PM 1 point [-]

Hmm. Yes, I agree cognitive shifts could be pretty powerful from psychedelics and that IQ points probably won't change. I think I misread you.

The larger part of my scepticism is my intuitive hunch that loads of people won't suddenly start taking psychedelics if they're legal/decrimed. This isn't a strongly informed judgement, and I could probably change my mind if presented with compelling reasons.

On the worldview stuff, if the idea is something like "people take drugs and this changes how they think for the better" (which I actually think is pretty plausible), a particular challenge is that those you would, I expect, most like to take such drugs, i.e. the very close-minded, are probably the least likely to take them anyway.

Comment author: Milan_Griffes 22 January 2018 10:33:04PM *  0 points [-]

loads of people won't suddenly start taking psychedelics if they're legal/decrimed

I agree that liberalized drug policy is not sufficient to increase the number of people having psychedelic experiences, but it's a prerequisite of many promising interventions in this area (e.g. setting up US-based psychedelic retreat centers).

Comment author: Milan_Griffes 15 January 2018 03:40:21AM *  2 points [-]

Thanks for the thoughts.

I started looking through the CEA and thought it seemed optimistic in various ways

It would be helpful if you could point out places where our best-guess value seems optimistic. The model does include a pretty steep evidence discount – best-guess assumes just a 20% chance that each effect replicates.

I looked it up and AMF is around $1,965 per life saved, equivalent to 36 DALYs, so 1,965/36 = ~$54/DALY

As mentioned in our post, GiveWell has moved away from the DALY framework, so it's not clear that a simple conversion like this is the way to convert its model outputs into DALYs. (We've asked GiveWell for clarification on this.)
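For concreteness, the simple conversion being questioned is just the arithmetic quoted above (figures taken from the parent comment, not endorsed as the right way to convert):

```python
cost_per_life_saved = 1965   # quoted AMF cost per life saved, USD
dalys_per_life = 36          # quoted DALY-equivalence of one life saved

cost_per_daly = cost_per_life_saved / dalys_per_life
print(round(cost_per_daly, 1))   # ~54.6 USD per DALY
```

The arithmetic itself is trivial; the contested step is whether GiveWell's model outputs can be mapped onto DALYs this directly at all.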

the rigors of a GiveWell CEA as well, which would definitely have less optimistic numbers, especially given the low evidence base.

Why do you think a GiveWell CEA would definitely yield less optimistic numbers?

... it might not win compared to the existing top charities.

I don't think this is a great way to think about comparing charities. Quantitative models are complicated & very sensitive to their input parameters, so a "winning" charity may only be winning because of the way a model is structured.

This isn't to say that quantitative comparisons aren't useful. Instead, I think quantitative comparisons are useful for winnowing out interventions whose cost-effectiveness falls orders of magnitude below that of top charities. But I don't think the fidelity of any quantitative model we have today is sufficient to discern the best among interventions on the same order of magnitude.

It becomes even trickier to think about when comparing interventions across very different domains. For example, x-risk interventions either dominate global health interventions (if you take cost-effectiveness estimates literally & are a total utilitarian), or aren't competitive at all (if you only believe cost-effectiveness estimates above some threshold of rigor, so aren't compelled by back-of-the-envelope estimates that massively favor x-risk).

In practice, it seems like the EA community gets by without making direct effectiveness comparisons between x-risk & global health interventions, and instead houses both as priority cause areas.

Something like this is my hope for drug policy reform – that a sufficiently compelling case is articulated such that EA decides to house it as a priority cause area (already done to some extent; see Open Phil's grants to the Drug Policy Alliance). It doesn't seem necessary to "win" the cost-effectiveness comparisons, only to demonstrate that its cost-effectiveness is competitive under plausible assumptions.

Comment author: Milan_Griffes 22 January 2018 10:30:40PM 0 points [-]

Update on how to convert GiveWell's model outputs to DALYs: we asked someone familiar with GiveWell's 2018 cost-effectiveness model about this.

They weren't comfortable being quoted; the gist of their reply is that they can't think of a straightforward way to convert from GiveWell model outputs to DALYs that they'd be comfortable using formally.
