Comment author: lukeprog 28 March 2018 03:53:06PM 0 points

The core values and norms are very similar, so specific differences are relatively hard to articulate/pinpoint (which isn't to say they aren't there).

Comment author: Milan_Griffes 28 March 2018 06:32:02PM 0 points

Are you saying that Open Phil & GiveWell cultures are different in hard-to-articulate but important ways? Or that the cultures are basically the same in any way that matters?

Comment author: Milan_Griffes 26 March 2018 08:51:33PM 1 point

What are the most important ways in which Open Phil's culture is different from GiveWell's culture?

Comment author: SiebeRozendal 19 February 2018 02:38:28PM *  3 points

I like this post Milan, I think it's the best of your series. I think that you rightly picked a very important topic to write about (cluelessness) that should receive more attention than it currently does. I do have some comments:

Although I admire new ways to think about prioritisation, I have two worries. The first is conceptual distinction: wisdom and predictive power seem not conceptually distinct. Both are about our ability to identify and predict the probability of good and bad outcomes. Intent also seems a little tangled up in wisdom, although I can see why we'd want to separate those. Furthermore, intent influences coordination capability: the more the intentions within a population differ, the more difficult coordination becomes.

This creates the second worry: this model adds only one dimension (Intent) to Bostrom's 3-dimensional model of Technology [Capacity] - Insight [Wisdom] - Coordination. Do you think this increases the usefulness of the model enough? The advantage of Bostrom's model is that it allows for differential progress (wisdom > coordination > capacity), while you don't specify the interplay of the attributes. Are they supposed to be multiplied, or are some combinations better than others, or do we want differential progress?

I was a bit confused that you write about things to prioritise, but don't refer back to the 5 attributes of the steering capacity. Some relate more strongly to specific attributes, and some attributes are not discussed much (coordination) or at all (capability).

Further our understanding of what matters

This seems to be Intent in your framework. I totally agree that this is valuable. I would call this moral (or, more precisely, axiological) uncertainty, and people work on this outside of EA as well. By the way, besides resolving uncertainty, another pathway is to improve our methods for dealing with moral uncertainty (as MacAskill argues for).

Improve governance

I am not sure which attribute this relates to, though I suppose it is Coordination. I find the discussion a bit shallow here, as it discusses only institutions, and not the coordination of individuals in e.g. the EA community, or the coordination between nation states.

Improve prediction-making & foresight

This seems to be the attribute predictive power. I agree with you that this is very important. To a large extent, this is also what science in general aims to do: improve our understanding so that we can better predict and alter the future. However, straight-up forecasting seems more neglected. I think this could also just be called "reducing empirical uncertainty"? If we call it that, we can also consider other approaches, such as researching effects in complex systems.

Reduce existential risk

I'm not sure this was intended to relate to a specific attribute. Guess not.

Increase the number of well-intentioned, highly capable people

This seems to relate mostly to "Intent" as well. I wanted to remark that this can be done either by increasing the capability and knowledge of well-intentioned people, or by improving the intentions of capable (and knowledgeable) people. My observation is that so far, the focus in terms of growth and outreach has been on the latter, and only some effort has been expended on developing the skills of effective altruists. (Although this is noted as a comparative advantage for EA Groups)

Lastly, I wanted to remark that hits-based giving does not imply a portfolio approach, in my opinion. It just implies being more or less risk-neutral in altruistic efforts. What drives the diversification in OPP's grants seems to be worldview diversification, option value, and the possibility that high-value opportunities are spread over cause areas rather than concentrated in one cause area. What would support the conclusion that we need to diversify is the possibility that a project fails unless it hits a certain value on each of the attributes (a bit like how power laws arise when success requires A×B×C instead of A+B+C).
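To illustrate the parenthetical, here's a quick sketch (the uniform attribute scores and the top-1% tail metric are my own illustrative choices, not anything from the post): when success requires all attributes at once (multiplicative), a few outcomes dominate; when attributes simply add up, the best outcomes stay close to typical ones.

```python
import random
import statistics

# Toy model (illustrative only): draw three "attribute" scores per project
# and compare additive vs multiplicative combination. Multiplicative success
# criteria produce a much heavier-tailed outcome distribution.
random.seed(0)
N = 100_000
additive = []
multiplicative = []
for _ in range(N):
    a, b, c = random.random(), random.random(), random.random()
    additive.append(a + b + c)
    multiplicative.append(a * b * c)

def tail_ratio(xs):
    """Mean of the top 1% of outcomes divided by the overall mean."""
    xs = sorted(xs)
    top = xs[int(0.99 * len(xs)):]
    return statistics.mean(top) / statistics.mean(xs)

print(tail_ratio(additive))        # modest: best outcomes aren't far from typical
print(tail_ratio(multiplicative))  # several times larger: a few outcomes dominate
```

Under this reading, diversifying across projects makes sense precisely because the handful of projects that clear the bar on every attribute carry most of the value.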

All in all, an important project, but I'm not sure how much novel insight it has brought (yet). This is quite similar to my own experience: I wrote a philosophy essay about cluelessness and arrived at not-so-novel conclusions. Let me know if you'd like to read the essay :)

Comment author: Milan_Griffes 19 February 2018 04:46:33PM 0 points

Wisdom and predictive power seem not conceptually distinct.

I'm using "predictive power" as something like "ability to see what's coming down the pipe" and "wisdom" as something like "ability to assess whether what's coming down the pipe is good or bad, according to one's value system."

On your broader point, I agree that these attributes are all tangled up in each other. I don't think there's a useful way to draw clean distinctions here.

I was a bit confused that you write about things to prioritise, but don't refer back to the 5 attributes of the steering capacity.

This is a good point, I'll think about this more & get back to you.

quite similar to my own experience in that I wrote a philosophy essay about cluelessness

I'd like to read this. Could you link to it here, or (if private) send it to the email address on this page?


Doing good while clueless

This is the fourth (and final) post in a series exploring consequentialist cluelessness and its implications for effective altruism: The first post describes cluelessness & its relevance to EA; arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact. The second post...
Comment author: Milan_Griffes 13 February 2018 03:18:38PM 0 points

I donated 92% to the donor lottery, 8% to GiveDirectly.

Also made a "fuzzies" donation to the meditation group I attend.

Comment author: Milan_Griffes 09 February 2018 01:27:23AM 0 points

When is the next round of EA grants opening?

Are you considering accepting applications on a rolling basis?

Comment author: landon 06 February 2018 04:44:29PM 0 points

Hey I think this is really cool. Do you have any other recommended reading that's similar to this? I'm really interested in improving mental health and increasing joy as it compares to reducing suffering through other means, like most of EA focuses on.

Comment author: Milan_Griffes 06 February 2018 10:54:53PM 0 points

Jon Kabat-Zinn's Full Catastrophe Living is the best book on MBSR. It sorta drags in the middle though – I recommend reading the first third & using the rest as a reference.

Comment author: Peter_Hurford  (EA Profile) 31 January 2018 03:41:02PM *  14 points

I also find it more instructive to think of it in terms of percentages -- the Global Development fund still holds 49% of all money it has received all time, the far future fund holds 95%, and the community fund holds 71%.

There can definitely be good reasons for this (such as there being more merit in giving later vs. giving now, or saving up to give a larger grant in one big batch). I don't know whether it's an intentional application of one of those two things, or just that Nick and Ellie are exceptionally busy and have more important priorities than the EA Funds, but it would be nice to have more transparency as to why funds are distributed the way they are. (Lewis does a good job at this.)

Comment author: Milan_Griffes 01 February 2018 03:22:29AM 0 points

Minor typo: it's "Elie" not "Ellie"

Comment author: Khorton 22 January 2018 09:53:04AM 2 points

It might be difficult to estimate the likelihood of the N Korean population overthrowing their government, and how much of a difference each flash drive makes, but I'd be interested to see an estimate as well if someone wants to try.

Comment author: Milan_Griffes 22 January 2018 11:18:50PM 3 points

Also complicated to assess the sign of a North Korean popular uprising. Depending on the geopolitical background, it seems like that could result in an overthrow of the regime, or a crackdown coupled with a more aggressive foreign policy stance (or some other outcome).

Comment author: Milan_Griffes 22 January 2018 11:06:42PM *  2 points

Thanks for writing this up – the ceiling-cost estimate seems like a valuable tool for comparing interventions across different cause areas.

Second, I also assumed it would have an effect of 0.1 HALYs per person per year. This might be high, so let's assume it has a 1/10th of that impact.

0.01 HALY/person/year on average still seems quite high. We're estimating the average impact across a billion people, and any sort of systemic reform is going to have an enormous number of impacts (of varying magnitudes, in both directions). Attributing 0.01 HALY on average sorta assumes there aren't any really big negative impacts (more precisely, that any negative impacts are insubstantial compared to the positive impacts).
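To make this concrete, here's a hypothetical back-of-envelope calculation. The 0.01 HALY/person/year figure is the one under discussion; the 5% downside probability and -0.3 HALY/person/year magnitude are numbers I've invented purely to show how a small chance of a large negative impact can swamp the average:

```python
# Back-of-envelope sketch; the downside scenario is hypothetical.
population = 1_000_000_000   # people affected by the systemic reform
avg_effect = 0.01            # HALY/person/year, the estimate under discussion

# Taken at face value, the aggregate claim is very large:
aggregate = avg_effect * population   # on the order of 10 million HALYs/year

# Now suppose (invented numbers) a 5% chance the reform backfires
# at -0.3 HALY/person/year. The expected per-person average flips sign:
p_bad, bad_effect = 0.05, -0.3
expected = (1 - p_bad) * avg_effect + p_bad * bad_effect
print(aggregate)   # large positive aggregate under the face-value estimate
print(expected)    # slightly negative once the downside scenario is included
```

So a positive average of 0.01 HALY/person/year is implicitly a claim about the absence (or small size) of such downside scenarios, not just about the upside.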

It also seems difficult to separate out the impacts that are appropriate to attribute to the systemic reform from all the other effects that are going on in the background.

All this is to say that I think arriving at believable average-impact estimates for systemic interventions is tricky. It's probably one of the harder parts of making good ceiling-cost estimates.
