Comment author: Elizabeth 24 June 2017 05:23:01PM 0 points [-]

I think costly signaling is the wrong phrase here. Costly signaling is about gain for the signaler. This seems better modeled as people trying to indirectly purchase the good "rich people donate lots to charity". Similar to people who are unwilling to donate to the government (so they don't think the government is better at spending money than they are) but do advocate for higher taxes (meaning they think the government is better at spending money than other people are). They're trying to purchase the good "higher taxes for everyone".

Comment author: Owen_Cotton-Barratt 24 June 2017 07:53:31PM 1 point [-]

Seems like it's suggesting costly signalling at the level of the movement rather than of the individuals. It's a stretch from normal use, but that's kind of the strength of analogies?

Comment author: jpaddison 18 June 2017 03:50:07PM 2 points [-]

Does anyone have recommendations for activities that are valuable for people considering their values? Or for people considering or taking action?

Comment author: Owen_Cotton-Barratt 24 June 2017 02:29:39PM 0 points [-]

This is a great question and I think deserves further thought.

Helping people consider their values was one of the major goals Daniel Kokotajlo and I had in designing this flowchart. One possible activity would be to read through and/or discuss parts of that.


What is valuable about effective altruism? Implications for community building

If we’re interested in building the best version of effective altruism, it’s natural to spend some time thinking about why people should join. Presumably people should join because it’s valuable, but how is it valuable? In fact there are a couple of different versions of the question, according to whose...
Comment author: Owen_Cotton-Barratt 13 June 2017 11:18:26AM 9 points [-]

I was a bit confused by some of these. Posting questions/comments here in case others have the same thoughts:

Earning-to-give buy-out

You're currently earning to give, because you think that your donations are doing more good than your direct work would. It might be that we think that it would be more valuable if you did direct work. If so we could donate a proportion of the amount that you were donating to wherever you were donating it, and you would move into direct work.

This made more sense to me after I realised that we should probably assume the person doesn't think CEA is a top donation target. Otherwise they would have an empirical disagreement about whether they should be doing direct work, and it's not clear how the offer helps resolve that (though it's obviously worth discussing).

Anti-Debates / Shark Tank-style career choice discussions / Research working groups

These are all things that might be good, but it's not obvious how funding would be a bottleneck. Might be worth saying something about that?

For those with a quantitative PhD, it could involve applying for the Google Brain Residency program or AI safety fellowship at ASI.

Similarly I'm confused what the funding is meant to do in these cases.

I'd be keen to see more people take ideas that we think we already know, but haven't ever been put down in writing, and write them up in a thorough and even-handed way; for example, why existential risk from anthropogenic causes is greater than the existential risk from natural causes

I think you were using this as an example of the type of work, rather than a specific request, but some readers might not know that there's a paper forthcoming on precisely this topic (if you mean something different from that paper, I'm interested to know what!).

Comment author: DavidNash 01 June 2017 08:55:49AM 9 points [-]

If there was any community that it might apply to, it's probably effective altruists.

Comment author: Owen_Cotton-Barratt 01 June 2017 10:21:00PM 7 points [-]

Not as pithy, but just a flag that I think the question implicitly raised by Tom's comment and the answer in David's are pretty important. This is a community which is willing to update actions based on theoretical arguments about what's important. Of course I don't expect an article to totally change people's beliefs -- let alone behaviours -- but if it has a fraction of that effect I'd count it as cheap.

Comment author: MichaelPlant 01 June 2017 12:01:06AM 9 points [-]

Thanks for this. I think I strongly agree with what you've said. I've often noticed, or got the impression, that lots of EAs seem to be quite interested in pursuing their own projects and don't help each other very much. I worry this results in an altruistic tragedy of the commons: it would be better if people helped each other, but instead we choose to do our own good in our own way, resulting in less good done overall. Now I think of it, I've probably done this myself.

The real challenge, as you noted, is the following:

Being considerate often makes others happier to interact with you. That is normally good, but in some circumstances may not be desirable. If people find you extremely helpful when they ask you about frivolous matters, they will be incentivized to keep asking you about such matters. If you would prefer them not to, you should not be quite so helpful.

This seems to be quite a common problem, at least in academia. VIPs (very important people) will often deliberately make themselves unavailable so they have time for their own projects. Presumably, this has some reciprocal costs to the VIP too: if they had helped you, you would be more inclined to help them in future.

Relatedly, suppose people accept more considerate norms and so are reluctant to bother some VIP in case it's annoying to the VIP. We can imagine this backfiring. Take an extreme scenario where considerate people don't ask VIPs (or indeed anyone else) for help. This means people don't get help from the VIPs, and VIPs only get requests from inconsiderate people. Presuming these VIPs do grant some requests for help, and the requests from considerate people would have done more good, this is now a worse situation overall. Extreme considerateness, call it 'meekness', seems bad.

It strikes me that it would be important to develop some community norms for navigating this difficulty. Perhaps people asking for help should be encouraged to do so, but to ask only once or twice and to leave the other person plenty of room to turn the request down. Perhaps recipients of requests should make a habit of replying to them, while being polite and honest about their current capacity to help.

Comment author: Owen_Cotton-Barratt 01 June 2017 09:18:22AM 4 points [-]

I think you're right that there's a failure mode of not asking people for things. I don't think that not-asking is in general the more considerate action, though -- often people would prefer to be given the opportunity to help (particularly if it feels like an opportunity rather than a demand).

I suppose the general point is: avoid the trap of overly-narrow interpretations of considerateness (just like it was good to avoid the trap of overly-narrow interpretations of consequences of actions).

Comment author: adom_hartell 25 May 2017 02:10:14AM 2 points [-]

Hey Max, thanks for linking these.

I have a question about an argument for the benefit of reserves made in the second link:

Assuming that core programmes are roughly as effective next year, additional funding mostly reduces the funding needs of the organisation next year, thereby freeing up money for those donors who would have given next year. Assuming those donors still donate that money somewhere else, then their alternate donations are likely to produce at least almost as great value as this organisation's core programmes.

I read this as saying that the benefit of donating to Organization A this year is that it will free up money for Organization B next year. But if Organization B is almost as good (as assumed in the quoted text), then why not donate to them directly this year?

On this reading, it seems like the impact of reserves for Organization A is whatever benefit Org A draws from the other arguments you offer (potential for capacity-building, freeing up staff-time from fundraising efforts next year) minus something like a discount rate / the cost of Organization B getting resources one year later. It's not obvious to me that this will always, or usually, be positive.

Am I missing something here?

Comment author: Owen_Cotton-Barratt 25 May 2017 10:24:17AM 1 point [-]

Fair question. This argument is all conditioned on A not actually having good ways to expand capacity -- the case is that even then the funds are comparably good given to A as elsewhere. The possibility of A in fact having useful expansion might make it noticeably better than the alternative, which is what (to my mind) drives the asymmetry.

Comment author: Benito 16 May 2017 09:57:21AM 0 points [-]

Well I don't understand that at all, and it seems to contradict my guess.

I thought DALYs had a more rigorous conversion than "we took our median estimate" and I thought a life was a full life, not just preventing death one time. Strike me wrong on this count.

Comment author: Owen_Cotton-Barratt 16 May 2017 10:37:59AM 2 points [-]

DALYs do use a more defensible analysis; GiveWell aren't using DALYs. This has some good and some bad aspects (related to the discussion in this post, although in this case the downside of defensibility is more that it doesn't let you incorporate considerations that aren't fully grounded).

The problem with just using DALYs is that on many views they overweight infant mortality (here's my view on some of the issues, but the position that they overweight infant mortality is far from original). With an internal agreement that they significantly overweight infant mortality, it becomes untenable to just continue using DALYs, even absent a fully rigorous alternative. Hence falling back on more ad hoc but somewhat robust methods like asking people to consider it and using a median.

[I'm just interpreting GW decision-making from publicly available information; this might easily turn out to be a misrepresentation.]

Comment author: RyanCarey 05 May 2017 05:40:10AM *  3 points [-]

A clear problem with this model is that AFAICT, it assumes that (i) the size of the research community working on safety when AI is developed is independent of (ii) the degree to which adding a researcher now will change the total number of researchers.

Both (i) and (ii) can vary by orders of magnitude, at least on my model, but are very correlated, because they depend on timelines. This means I get an oddly high chance of averting existential risk. If the questions were combined into a single one, "by what fraction will the AI safety community be enlarged by adding an extra person", then I think my chance of averting existential risk would come out much lower.

Comment author: Owen_Cotton-Barratt 08 May 2017 01:42:36PM 2 points [-]

Yes, I think this is a significant concern with this version of the model (somewhat less so with the original cruder version using something like medians, but that version also fails to pick up on legitimate effects of "what if these variables are all in the tails"). Combining the variables as you suggest is the easiest way to patch it. More complex would be to add in explicit time-dependency.
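To make the concern concrete, here's a toy simulation (hypothetical growth rates and timelines, nothing from the actual model): both the eventual community size and the compounding effect of adding a researcher now are driven by the same timeline, so sampling them independently lets "small community" draws pair with "large compounding" draws and inflates the expected fraction contributed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical AI timelines in years (illustrative numbers only).
t = rng.uniform(5, 50, n)

growth = 1.1 ** t            # (ii) researchers eventually traceable to one added now
community = 100 * 1.2 ** t   # (i) community size when AI arrives

# Correct: both quantities driven by the same sampled timeline.
frac_correlated = growth / community

# Flawed: sample the two quantities from independent timelines.
t2 = rng.permutation(t)
frac_independent = (1.1 ** t) / (100 * 1.2 ** t2)

print(frac_correlated.mean(), frac_independent.mean())
```

With these made-up numbers the independent-sampling version comes out several times larger, which is the direction of the bias Ryan describes.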

Comment author: RyanCarey 25 April 2017 06:19:18PM 3 points [-]

I expect that if anything it is broader than lognormally distributed.

It might depend what we're using the model for.

In general, it does seem reasonable that direct (expected) net impact of interventions should be broader than lognormal, as Carl argued in 2011. On the other hand, it seems like the expected net impact all things considered shouldn't be broader than lognormal. For one argument, most charities probably funge against each other by at least 1/10^6. For another, you can imagine that funding global health improves the quality of research a bit, which does a bit of the work that you'd have wanted done by funding a research charity. These kinds of indirect effects are hard to map. Maybe people should think more about them.

AFAICT, the basic thing for a post like this one to get right is to compare apples with apples. Tom is trying to evaluate various charities, of which some are evaluators. If he's evaluating the other charities on direct estimates, and is not smoothing the results over by assuming indirect effects, then he should use a broader than lognormal assumption for the evaluators too (and they will be competitive). If he's taking into account that each of the other charities will indirectly support the cause of one another (or at least the best ones will), then he should assume the same for the charity evaluators.

I could be wrong about some of this. A couple of final remarks: it gets more confusing if you think lots of charities have negative value e.g. because of the value of technological progress. Also, all of this makes me think that if you're so convinced that flow-through effects cause many charities to have astronomical benefits, perhaps you ought to be studying these effects intensely and directly, although that admittedly does seem counterintuitive to me, compared with working on problems of known astronomical importance directly.

Comment author: Owen_Cotton-Barratt 26 April 2017 11:45:34AM 1 point [-]

I largely agree with these considerations about the distribution of net impact of interventions (although with some possible disagreements, e.g. I think negative funging is also possible).

However, I actually wasn't trying to comment on this at all! I was talking about the distribution of people's estimates of impact around the true impact for a given intervention. Sorry for not being clearer :/
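One reason the spread of estimates around the true impact matters in its own right (a toy sketch with invented numbers, not anything from the post): when you fund whichever option looks best, wider estimation noise means the winner's estimate overshoots its true impact by more, the familiar optimiser's curse.

```python
import numpy as np

rng = np.random.default_rng(1)
n_charities, n_trials = 20, 10_000

overshoot = {}
for sigma in (0.5, 2.0):  # spread of estimation error around the true log-impact
    # All charities have the same true log-impact (0); estimates scatter
    # normally around it on the log scale, i.e. lognormally in impact.
    est = rng.normal(0.0, sigma, (n_trials, n_charities))
    # Fund whichever charity looks best, and record how far its estimate
    # exceeds its true log-impact on average.
    overshoot[sigma] = est.max(axis=1).mean()

print(overshoot)  # wider estimate noise -> larger average overshoot
```

So two estimation procedures with the same central estimate can still warrant different amounts of trust when choosing the apparent best option.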
