Comment author: Owen_Cotton-Barratt 13 June 2017 11:18:26AM 7 points

I was a bit confused by some of these. Posting questions/comments here in case others have the same thoughts:

Earning-to-give buy-out

You're currently earning to give, because you think that your donations are doing more good than your direct work would. It might be that we think that it would be more valuable if you did direct work. If so, we could donate a proportion of the amount that you were donating to wherever you were donating it, and you would move into direct work.

This made more sense to me after I realised that we should probably assume the person doesn't think CEA is a top donation target. Otherwise they would have an empirical disagreement about whether they should be doing direct work, and it's not clear how the offer helps resolve that (though it's obviously worth discussing).

Anti-Debates / Shark Tank-style career choice discussions / Research working groups

These are all things that might be good, but it's not obvious how funding would be a bottleneck. Might be worth saying something about that?

For those with a quantitative PhD, it could involve applying for the Google Brain Residency program or AI safety fellowship at ASI.

Similarly I'm confused what the funding is meant to do in these cases.

I'd be keen to see more people take ideas that we think we already know, but haven't ever been put down in writing, and write them up in a thorough and even-handed way; for example, why existential risk from anthropogenic causes is greater than the existential risk from natural causes

I think you were using this as an example of the type of work, rather than a specific request, but some readers might not know that there's a paper forthcoming on precisely this topic (if you mean something different from that paper, I'm interested to know what!).

Comment author: William_MacAskill 13 June 2017 05:34:00PM 5 points

Thanks Owen!

Re EtG buy-out - yes, you're right. For people who think that CEA is a top donation target, hopefully we could just come to agreement, as a trade wouldn't be possible, or would be prohibitively costly (if there were only slight differences in our views on which places were best to fund).

Re local group activities: These are just examples of some of the things I'd be excited about local groups doing, and I know that at least some local groups are funding constrained (e.g. someone is running them part-time, unpaid, and will otherwise need to get a job).

Re AI safety fellowship at ASI - as I understand it, that is currently funding constrained (they had great applicants who wanted to take the fellowship, but ASI couldn't fund it). For other applications (e.g. Google Brain), the funding could cover, say, some time during or after a physics or math PhD spent learning machine learning, in order to be a more competitive applicant.

Re anthropogenic existential risks - ah, I had thought that it was only in presentation form. In which case: that paper is exactly the sort of thing I'd love to see more of.

Comment author: riceissa 11 June 2017 02:38:30AM 6 points

How does this compare to EA Ventures?

Comment author: William_MacAskill 13 June 2017 11:19:07AM 4 points

It is a successor to EA Ventures, though EA Grants already has funding, and is more focused on individuals than start-up projects.

Comment author: joshjacobson 13 June 2017 07:16:07AM 2 points

Can you address the unanswered question in the announcement thread regarding EA Ventures?

Additionally, is the money already raised for this? That was the major shortcoming with the previous iteration.

Comment author: William_MacAskill 13 June 2017 11:18:12AM 6 points

Yes, the money is raised; we have a pot of £500,000 in the first instance.

It is a successor to EA Ventures, though EA Grants already has funding, and is more focused on individuals than start-up projects.


Projects I'd like to see

We've just launched the Effective Altruism Grants program to help people put promising ideas into practice. I'm hoping that the program will enable some people to transition onto higher-impact paths that they otherwise wouldn't have been able to pursue. Here I'm going to list some applications I'd personally like to...
Comment author: the_jaded_one 06 April 2017 05:37:09PM 2 points

these bottom lines remain in every estimate of the global income distribution I’ve seen so far... Many people in the world live in serious absolute poverty, surviving on as little as one hundredth the income of the upper-middle class in the US.

But is this bottom line really approximately true?

A salary of $70,000 could be considered upper-middle-class. 1/100th of $70,000 is $700.

According to the chart, that is slightly greater than the income of the median Indian, adjusted for PPP.

Since these figures are PPP-adjusted, that should mean that $700 a year in Western Europe or the US will afford you the same quality of life as the median Indian person has, without any additional resources such as extra meals from sympathetic passers-by or free accommodation in a shelter (because otherwise, to be 100 times richer, you would also need 100 units of those additional resources - i.e. $70,000 plus 100 meals a day plus owning low-quality accommodation for 100 people).

However, $700/year (≈ $1.91/day ≈ €1.80/day ≈ £1.53/day), without gifts or handouts, is not a sufficient amount of money to stay alive in the West. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter. Clearly, the median person in India is better off than a dead person.

A realistic minimum amount of money to not die in the West is probably $2,000-$5,000/year, again without gifts or handouts, implying that to be 100 times richer than the median Indian, you would have to be earning at least $200,000-$500,000 net of tax (or at least net of that portion of tax which isn't spent on things that benefit you - which at that level is almost all of it, unless you are somehow getting huge amounts of government money spent on you in particular).
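In code, the arithmetic behind these figures looks roughly like this (a sketch using the numbers above, including the $2,000-$5,000 subsistence-floor range, which is a rough guess rather than an external figure):

```python
# Rough arithmetic behind the "factor of 100" claim.
upper_middle_income = 70_000                      # assumed US upper-middle-class salary, $/year
median_indian_income = upper_middle_income / 100  # ~$700/year, PPP-adjusted
print(median_indian_income / 365)                 # ~$1.9/day

# Rough Western subsistence floor (no gifts or handouts), $/year
subsistence_low, subsistence_high = 2_000, 5_000

# Income needed to be genuinely 100 times richer than someone at that floor
print(100 * subsistence_low, 100 * subsistence_high)  # 200000 500000
```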

The reality is that a PPP conversion factor is trying to represent a nonlinear mapping with a single straight line, and it fails badly at the extremes. But the extremes are exactly where one is getting this (misleading) factor of 100 from.

Comment author: William_MacAskill 11 April 2017 08:18:03PM 6 points

"However, $700/year (= $1.91/day, =€1.80/day, =£1.53 /day) (without gifts or handouts) is not a sufficient amount of money to be alive in the west. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter."

One could live on that amount of money ($1.91/day) in the West. You'd live in a second-hand tent, and you'd scavenge food from bins (which would count towards your 'expenditure', since we're talking about consumption expenditure, but wouldn't add up to much). Your life expectancy would be considerably lower than that of others in the West, but probably not lower than the 55-year life expectancy of Burkina Faso (as an example comparison; bear in mind that number includes infant mortality). Your life would suck very badly, but you wouldn't die, and it wouldn't be that dissimilar to the lives of the millions of people who live in makeshift slums or shanty towns and scavenge from dumps to make a living. (Such people aren't representative of all extremely poor people, but they are a notable fraction.)

Comment author: RobBensinger 31 March 2017 08:43:25PM 2 points

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

I don't have much to add to what Rob W and Carl said, but I'll note that Bostrom defined "existential risk" like this back in 2008:

A subset of global catastrophic risks is existential risks. An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to reduce its quality of life (compared to what would otherwise have been possible) permanently and drastically.

Presumably we should replace "intelligent" here with "sentient" or similar. The reason I'm quoting this is that on the above definition, it sounds like any potential future event or process that would cost us a large portion of the future's value counts as an xrisk (and therefore as a GCR). 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' sounds like a global catastrophic risk to me, on that definition. (From a perspective that does care about long-term issues, at least.)

I'll note that I think there's at least some disagreement at FHI / Open Phil / etc. about how best to define terms like "GCR", and I don't know if there's currently a consensus or what that consensus is. Also worth noting that the "risk" part is more clearly relevant than the "global catastrophe" part -- malaria and factory farming are arguably global catastrophes in Bostrom's sense, but they aren't "risks" in the relevant sense, because they're already occurring.

Comment author: William_MacAskill 04 April 2017 07:00:51PM 0 points

"counts as an xrisk (and therefore as a GCR)"

My understanding: GCR = (something like) risk of major catastrophe that kills 100mn+ people

(I think the GCR book defines it as risk of 10mn+ deaths, but that seemed too low to me).

So, as I was using the term, something being an x-risk does not entail it being a GCR. I'd count 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' as an x-risk but not a GCR.

Interesting (/worrying!) how we're understanding widely-used terms so differently.

Comment author: Carl_Shulman 31 March 2017 07:23:54PM 14 points

"if you are only considering the impact on beings alive today...factory farming"

The interventions you are discussing don't help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular, cage-free campaigns, and campaigns for slower-growth genetics and lower crowding among chickens raised for meat, are all about changing the conditions into which future chickens will be born; they don't involve moving any particular chickens from the old to the new systems.

I.e. the case for those interventions already involves rejecting a strong presentist view.

"That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:"

Suppose there's an intelligence explosion in 30 years (not wildly unlikely in expert surveys), and an expansion of population by 3-12 orders of magnitude over the following 10 years (with AI life of various kinds outnumbering both the humans and the non-human animals alive today, and with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.

Also in that scenario existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie there.

Comment author: William_MacAskill 04 April 2017 06:55:55PM 5 points

Mea culpa that I switched from "impact on beings alive today" to "benefits over the next 50 years" without noticing.

Comment author: Robert_Wiblin 31 March 2017 12:04:07AM 14 points

Someone taking a hard 'inside view' about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:

1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.

$1 billion / (0.1 risk × 1% risk reduction × 8 billion lives) = $125 per life saved. Compares with $3,000-$7,000+ for AMF.
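Spelled out in code (a rough sketch using only the illustrative figures above):

```python
# Back-of-the-envelope cost per life saved from extra AI safety spending.
spend = 1_000_000_000            # extra spending on safety research, $
p_risk = 0.10                    # assumed risk of AI killing everyone within 50 years
relative_reduction = 0.01        # assumed reduction in that risk from the extra spending
lives_at_stake = 8_000_000_000

expected_lives_saved = p_risk * relative_reduction * lives_at_stake  # 8 million
print(spend / expected_lives_saved)  # 125.0 -> $125 per life saved, vs ~$3,000-$7,000+ for AMF
```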

This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.

I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime).

Comment author: William_MacAskill 31 March 2017 05:13:07PM 1 point

That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:

"We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent." "One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved]." http://www.openphilanthropy.org/blog/worldview-diversification

And to clarify my first comment, "unlikely to be optimal" = I think it's a contender, but the base rate for "X is an optimal intervention" is really low.

Comment author: RobBensinger 30 March 2017 09:47:22PM 13 points

To some, it’s just obvious that future lives have value and the highest priority is fighting existential threats to humanity (‘X-risks’).

I realize this is just an example, but I want to mention as a side-note that I find it weird what a common framing this is. AFAIK almost everyone working on existential risk thinks it's a serious concern in our lifetimes, not specifically a "far future" issue or one that turns on whether it's good to create new people.

As an example of what I have in mind, I don't understand why the GCR-focused EA Fund is framed as a "long-term future" fund (unless I'm misunderstanding the kinds of GCR interventions it's planning to focus on), or why philosophical stances like the person-affecting view and presentism are foregrounded. The natural things I'd expect to be foregrounded are factual questions about the probability and magnitude over the coming decades of the specific GCRs EAs are most worried about.

Comment author: William_MacAskill 30 March 2017 11:34:06PM 9 points

Agree that GCRs are a within-our-lifetime problem. But in my view mitigating GCRs is unlikely to be the optimal donation target if you are only considering the impact on beings alive today. Do you know of any sources that make the opposite case?

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

Comment author: William_MacAskill 08 March 2017 01:59:04AM 10 points

In my previous post I wrote: “The existence of this would bring us into alignment with other societies, which usually have some document that describes the principles that the society stands for, and has some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.” I now think that’s an incorrect statement. EA, currently, is all of the following: an idea/movement, a community, and a small group of organisations. On the ‘movement’ understanding of EA, analogies of EA don’t have a community panel similar to what I suggested, and only some have ‘guiding principles’. (Though communities and organisations, or groups of organisations, often do.)

Julia created a list of potential analogies here:

https://docs.google.com/document/d/1aXQp_9pGauMK9rKES9W3Uk3soW6c1oSx68bhDmY73p4/edit?usp=sharing

The closest analogy to what we want to do is given by the open source community: many but not all of the organisations within the open source community created their own codes of conduct, many of them very similar to each other.
