Comment author: Robert_Wiblin 17 February 2017 12:12:10AM *  4 points

I broadly agree with this and am often pleased to see people go into party politics, government bureaucracies or advocacy on particular policy areas. The skills and connections they gain will hopefully be useful in the long term.

The interesting questions remaining for me here are: i) how much leverage do you get through political engagement vs direct work, taking care to include in your sample people who try and fail; ii) how worrying it is to find yourself working on a controversial issue, both because you'll have to fight against opponents and because you might be on the wrong side. Tough questions to answer!

Comment author: Robert_Wiblin 15 February 2017 08:57:05PM 7 points

Always pleased to see people collating information like this!

Comment author: Robert_Wiblin 08 February 2017 05:47:15AM 3 points

Looks spot on to me, nice work folks! :)

Comment author: Ben_Todd 06 February 2017 03:27:59PM 3 points

We've considered wrapping it into the problem framework in the past, but it can easily get confusing. Informativeness is also more a feature of how you go about working on a cause than of which cause you're focused on.

The current way we show that we think value of information (VOI) is important is by listing Global Priorities Research as a top area (though I agree that doesn't quite capture it). I also talk about it often when discussing how to coordinate with the EA community (VOI is a bigger factor from the community perspective than from the individual perspective).

Comment author: Robert_Wiblin 06 February 2017 06:49:25PM *  3 points

The 'Neglectedness' criterion already gets you a pretty big tilt in favour of working on underexplored problems. But value of information is an important factor in choosing which project to work on within a problem area.

Comment author: Gregory_Lewis 29 January 2017 08:12:34PM *  6 points

I don't see the merit of upbraiding 80k for aggregating various sources of 'EA philanthropic advice' because one element of it relies on political views one may disagree with. Not including Cockburn's recommendations whilst including those of all the other OpenPhil staffers would also imply political views others would find disagreeable. It's also fairly clear from the introduction that the post (at least for non-animal charities) was canvassing all relevant recommendations rather than editorializing.

That said, it is perhaps unwise to translate 'advice from OpenPhil staffers' into 'EA recommendations'. OpenPhil is clear about how it licenses itself to try to 'pick hits', which may involve presuming or taking a bet on a particular hot-button political topic (e.g. immigration, criminal justice, abortion), being willing to take a particular empirical bet in the face of divided expertise, and so forth. For these reasons OpenPhil is not a 'GiveWell for everything else', and its staffers' recommendations, although valuable for them to share and for 80k to publicise, should carry the health warning that they are often conditional on quite large and non-resilient conjunctions of complicated convictions - which may not represent 'expert consensus' on these issues.

Comment author: Robert_Wiblin 29 January 2017 10:22:36PM 2 points

Note that we say when describing this source at the beginning of the post that:

"[We refer to] Open Philanthropy Project’s suggestions for individual donors. ... Though note that “These are reasonably strong options in causes of interest, and shouldn’t be taken as outright recommendations.”"

We then consistently refer to these as 'suggestions' throughout the post, rather than as 'recommendations', the term we use for the other sources.

Comment author: Larks 29 January 2017 04:13:11PM 7 points

"If you want to get these charities taken off of our article during next year's giving season, then you'd need to speak with Chloe."

In general the EA movement has an admirable history of public cost-benefit analysis of different groups, which 80k has supported and should continue to support. But in this instance 80k is instead deferring to the opinion of a single expert who has provided only the most cursory justification. It's true that 80k isn't responsible for what Chloe says, but 80k is responsible for the choice to defer to her on the subject. And the responsibility is even greater if you present her work as representing the views of the effective altruism movement.

Comment author: Robert_Wiblin 29 January 2017 05:02:55PM *  5 points

Our process here involves deferring to the program officers of the Open Philanthropy Project in their areas of expertise (unless we can find an equivalent authority in the area who disagrees). OpenPhil seems to have a good record of making grants in line with EA values, and we trust the people involved in that institution, so this seems like a good process.

It's true, we could carve out an exception in this one case based on our own opinions. But I'd rather stick with a sound survey process that i) is generally reliable (and avoids errors arising from our own ignorance), and ii) scales well as the number of authorities and problem areas under review increases.

The superior solution here is just for those who disagree with one of OpenPhil's ideas to speak with the relevant staff and convince them to change their minds. OpenPhil directs far more in grants than that blog post will move in donations, so making sure they get it right is much more valuable. If the arguments are convincing to Chloe or another relevant staff member, then I'll edit the blog post to reflect their latest thinking. I don't really have a dog in this fight.

Comment author: the_jaded_one 29 January 2017 10:37:59AM 2 points

"based on a misconception about how we produced the list and our motivations."

I would disagree; to me it seems irrelevant whether 80,000 Hours is "just syndicating content" or whether your organisation has a "direct view or goal".

It's on your website, as a recommendation. If it's a bad recommendation, it's your problem.

Comment author: Robert_Wiblin 29 January 2017 03:48:31PM 4 points

Perhaps, but the article is peculiar because it's directed at 80,000 Hours rather than at the ultimate source of the advice - when you could just as easily have addressed it to OpenPhil. It would be as though you had a problem with AMF and criticised 80,000 Hours over it (wondering what specifics could have caused us to recommend it), when you could just as easily direct it at GiveWell.

This leads you to speculation like "maybe [80,000 Hours] likes left-wing social justice causes". Had you reached out, you wouldn't have had to speculate, and I could have told you right away that the list was designed to follow a process that minimised the influence of my personal opinions. Had it been based on my personal views rather than a survey of experts and institutions, it probably wouldn't have included the Criminal Justice Reform category.

Anyway, I do think that if you're writing a lengthy piece about a person or a group, speaking with them to ask clarificatory questions is wise - it can save you from wasting time going down rabbit holes.

Comment author: jsteinhardt 29 January 2017 03:35:26AM *  10 points

Instead of writing this like some kind of exposé, it seems you could get the same results by emailing the 80K team, noting the political sensitivity of the topic, and suggesting that they provide some additional disclaimers about the nature of the recommendation.

I don't agree with the_jaded_one's conclusions or think his post is particularly well-thought-out, but I don't think raising the bar on criticism like this is very productive if you care about getting good criticism. (If you think the_jaded_one's criticism is bad criticism, then I think it makes sense to just argue for that rather than saying that they should have made it privately.)

My reasons are very similar to Benjamin Hoffman's reasons here.

Comment author: Robert_Wiblin 29 January 2017 06:30:30AM 1 point

The original post is partly based on a misconception about how we produced the list and our motivations. That's the kind of thing that could have been clarified if the author contacted us before publishing (or indeed, after publishing).

Comment author: Robert_Wiblin 28 January 2017 11:59:16PM *  19 points

Thanks for your interest in our work.

As we say in the post, on this and most problem areas 80,000 Hours defers charity recommendations to experts on the particular cause (see: What resources did we draw on?). In this case our suggestion is based entirely on that of Chloe Cockburn, the Program Officer for Criminal Justice Reform at the Open Philanthropy Project, who works full time making grants in this particular problem area and knows much more than any of us about what is likely to work.

To questions like "does 80,000 Hours have view X that would make sense of this" or "is 80,000 Hours intending to do X" - the answer is that we don't really have independent views or goals on any of these things. We're just syndicating content from someone we perceive to be an authority (just as we do when we include GiveWell's recommended charities without having independently investigated them). I thought the article was very clear about this, but perhaps we needed to make it even more so in case people skipped down to a particular section without going through the preamble.

If you want to get these charities taken off of our article during next year's giving season, then you'd need to speak with Chloe. If she changes her suggestions - or another similar authority on this topic arises and offers a contrary view - then that would change what we include.

Regarding why we didn't recommend the Center for Criminal Justice Reform: again, that is entirely because it wasn't on the Open Philanthropy Project's list of suggestions for individual donors. Presumably that is because they felt their own grant - which you approve of - had filled their current funding needs.

All the best,

Rob

Comment author: TruePath 02 December 2016 04:19:21PM 2 points

I think this post is confused on a number of levels.

First, as far as ideal behavior is concerned, integrity isn't a relevant concept. The ideal utilitarian agent will simply always behave in the manner that optimizes expected future utility, factoring in the effect that breaking one's word (or taking other such actions) will have on the perceptions, and thus the future actions, of other people.
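
In decision-theoretic terms this is just expected-utility maximization with the reputational consequences folded into the outcome distribution; as a rough sketch (the notation here is purely illustrative):

    % Illustrative notation only: A is the agent's set of available actions and
    % U the utility of outcomes. The expectation ranges over the consequences
    % of action a, including how observers update their trust in the agent and
    % therefore how they act in future.
    a^* = \arg\max_{a \in A} \mathbb{E}[\, U \mid a \,]

An agent maximizing this quantity keeps its word exactly when the expected reputational cost of breaking it exceeds the expected gain; there is no separate "integrity" term.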

Now the post rightly notes that, as limited human agents, we aren't truly able to engage in this kind of analysis. Both because of our computational limitations and because of our inability to deceive perfectly, it is beneficial to adopt heuristics about not lying, not stabbing people in the back, etc. (which we may judge to be worth abandoning in exceptional situations).

However, the post gives us no reason to believe its particular interpretation of integrity, "being straightforward", is the best such heuristic. It merely asserts the author's belief that this somehow works out to be the best.

This brings us to the second major point. Even though the post acknowledges that the very reason for considering integrity is that "I find the ideal of integrity very viscerally compelling, significantly moreso than other abstract beliefs or principles that I often act on", the post proceeds to act as if it were considering what kind of integrity-like notion would be appropriate to design into (or socially construct within) some alternative society of purely rational agents.

Obviously, the way we should act depends hugely on the way in which others will interpret our actions and respond to them. In the actual world WE WILL BE TRUSTED TO THE EXTENT WE RESPECT THE STANDARD SOCIETAL NOTIONS OF INTEGRITY AND TRUST. It doesn't matter if some alternative notion of integrity might have been better to have: if we don't show integrity in the traditional manner, we will be punished.

In particular, "being straightforward" will often needlessly imperil people's estimation of our integrity. For example, consider the usual kinds of assurances we give to friends and family that we "will be there for them no matter what" and that "we wouldn't ever abandon them." In truth pretty much everyone, if presented with sufficient data showing their friend or family member to be a horrific serial killer with every intention of continuing to torture and kill people, would turn them in even in the face of protestations of innocence. Does that mean that instead of saying "I'll be there for you whatever happens" we should say "I'll be there for you as long as the balance of probability doesn't suggest that supporting you will cost more than 5 QALYs" (quality adjusted life years)?

No, because being straightforward in that sense causes most people to judge us as weird and abnormal, and thereby trust us less. Even though everyone understands at some level that these kinds of assurances are only true ceteris paribus, actually being straightforward about that fact is unusual enough that it causes other people to suspect that they don't understand our emotions and motivations, and thus to give us less trust.


In short: yes, the obvious point that we should adopt some kind of heuristic of keeping our word and otherwise modeling integrity is true. However, the suggestion that this nice simple heuristic is somehow the best one is completely unjustified.

Comment author: Robert_Wiblin 20 January 2017 08:54:42PM 0 points

"WE WILL BE TRUSTED TO THE EXTENT WE RESPECT THE STANDARD SOCIETAL NOTIONS OF INTEGRITY AND TRUST"

I think there is a lot to this, but I feel it can be subsumed into Paul's rule of thumb:

  • You should follow a standard societal notion of what is decent behaviour (unless you say ahead of time that you won't in this case) if you want people to have always thought that you are the kind of person who does that.

Because following standard social rules that everyone assumes to exist is an important part of being able to coordinate with others without very high communication and agreement overheads, you want to at least meet that standard (including following some norms you might have reservations about). Of course, this doesn't preclude you from meeting a higher standard if having a reputation for going above and beyond would be useful to you (as Paul argues it often is for most of us).
