Comment author: JoshuaFox 27 February 2017 11:06:56AM * 2 points

Outreach can be valuable, although high-value opportunities are rare. If you can publish, lecture, or talk one-on-one with highly relevant audiences, then you may sway the zeitgeist a little and so contribute towards getting donors or researchers on board.

Relevant audiences include:

  • tech moguls and other potential big donors; people who may have the potential to become or influence those moguls.

  • researchers in relevant areas such as game theory; smart people in elite educational tracks who may have the potential to become or influence such researchers.

Comment author: jsteinhardt 28 February 2017 06:48:15AM 10 points

I already mentioned this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already an area with a lot of noise and a reputation for being dominated by outsiders who don't understand much about AI. I think outreach by non-experts could end up being net-negative.

Comment author: kbog (EA Profile) 27 February 2017 06:52:45AM * 2 points

What about online activism? There are debates over AI in many corners of the Internet, often involving people from academia and tech. It seems like it could be valuable and feasible for people who are sufficiently educated on the basic issues of AI alignment to correct misconceptions and spread good ideas.

As another idea, there are certain kinds of information which would be worth collecting: surveys of relevant experts, taxonomies of research ideas and developments in the field, and information about the political and economic sides of AI research. I suppose this could fall under gruntwork for safety orgs, but they don't comprehensively ask for every piece of information and work that could be useful.

Also - this might sound strange, but if someone wants to contribute then it's their choice: students and professionals might be more productive if they had remote personal assistants to handle tasks peripheral to their primary responsibilities. If the assistant is known to be an EA, value-aligned on cause priorities, and moderately familiar with the technical work, then having someone do this seems very feasible.

Comment author: jsteinhardt 28 February 2017 06:46:01AM 11 points

In general I think this sort of activism has a high potential for being net negative: AI safety already has a reputation as something mainly pushed by outsiders who don't understand much about AI. Since I assume this advice is targeted at the "average EA" (who presumably doesn't know much about AI), this would only exacerbate the issue.

Comment author: Kerry_Vaughan 29 January 2017 08:12:35PM 7 points

I don't agree with the_jaded_one's conclusions or think his post is particularly well-thought-out, but I don't think raising the bar on criticism like this is very productive if you care about getting good criticism. (If you think the_jaded_one's criticism is bad criticism, then I think it makes sense to just argue for that rather than saying that they should have made it privately.)

I agree with this and wasn't trying to say something to the contrary. What I was trying to do is note that the post makes a relatively minor issue into an exposé on EA and on 80K, which I think is unnecessary and unwarranted by the issue. I was also trying to note one way of handling the issue if your goal is merely to gain more information or see that a problem gets fixed.

I think public criticism is fine. I think a good, but not required, practice is to show the criticism to the organization ahead of publishing it so that they can correct factual inaccuracies. I think that would have improved the criticism substantially in this case.

Comment author: jsteinhardt 30 January 2017 06:08:29AM 1 point

Thanks for clarifying; your position seems reasonable to me.

Comment author: the_jaded_one 29 January 2017 01:36:47PM * 3 points

One way to resolve our initial skepticism would be to have a trusted expert in the field

And in what field is Chloe Cockburn a "trusted expert"?

If we go by her Twitter, we might say something like "she is an expert left-wing, highly political, anti-Trump, pro-immigration activist"

Does that seem like a reasonable characterization of Chloe Cockburn's expertise to you?

Characterizing her as "trusted" seems pretty dishonest in this context. Imagine someone who has problems with EA and Cosecha, for example because they were worried about political bias in EA. Now imagine that they got to know her opinions and leanings, e.g. on her Twitter. They wouldn't "trust" her to make accurate calls about the effectiveness of donations to a left-wing, anti-Trump activist cause, because she is a left-wing anti-Trump activist herself. She is almost as biased as it is possible to be on this issue - the exact opposite of the kind of person whose opinion you should trust. Of course, she may have good arguments, since those can come from biased people, but she is being touted as a "trusted expert", not a biased expert with strong arguments, so that is what I am responding to.

Comment author: jsteinhardt 29 January 2017 07:18:44PM 6 points

OpenPhil posted an extensive write-up on their decision to hire Chloe here: http://blog.givewell.org/2015/09/03/the-process-of-hiring-our-first-cause-specific-program-officer/. Presumably after reading that, you have enough information to decide whether to trust her recommendations (taking into account also whatever degree of trust you have in OpenPhil). If you decide you don't trust them, that's fine, but I don't think that can function as an argument that the recommendation shouldn't have been made in the first place (many people, myself included, do trust them and got substantial value out of the recommendation and from reading what Chloe has to say in general).

I feel your overall engagement here hasn't been very productive. You're mostly repeating the same point, and to the extent you make other points, it feels like you're reaching for whatever counterarguments you can think of without considering whether someone who disagreed with you would have an immediate response. The fact that you and Larks are responsible for 20 of the 32 comments on this thread is a further negative sign to me (you could probably convey the same or more information in fewer, better-thought-out comments than you are currently making).

Comment author: Kerry_Vaughan 29 January 2017 12:50:09AM * 10 points

It seems to me that your basic argument is something like this:

1) Working on highly political topics isn't very effective.
2) The charities recommended by 80K are highly political.
3) Therefore, the charities recommended by 80K aren't very effective.

Maybe I've missed it, but the only support I see for 1) is this allusion to Hanson:

There are standard arguments, for example this by Robin Hanson from 10 years ago about why it is not smart or "effective" to get into these political tugs-of-war if one wants to make a genuine difference in the world.

But the evidence from that quote, and from Hanson's post, doesn't provide enough support for the premise. I don't think Hanson is saying that working on highly political topics is always ineffective, and even if he were, it seems clear that the general heuristic is nowhere near strong enough to support such a conclusion.

Instead, he's saying that we ought to be skeptical of highly political causes. Fair enough. One way to resolve our initial skepticism would be to have a trusted expert in the field recommend a highly political intervention. This is exactly what we have here.

It could be that 80K shouldn't recommend highly political charities even if they're effective. If so, that seems like a PR/communication problem that could be fixed by distancing themselves from the recommendations. They seem to have already done this, but they could go further by making it as clear as possible that they've outsourced this recommendation to OpenPhil.

The fact that they let Cosecha (and to a lesser extent The Alliance for Safety and Justice) through reduces my confidence in 80,000 hours and the EA movement as a whole. Who thought it would be a good idea to get EA into the culture war with these causes, and also thought that they were plausibly among the most effective things you can do with money? Are they taking effectiveness seriously? What does the political diversity of meetings at 80,000 hours look like? Were there no conservative altruists present in discussions surrounding The Alliance for Safety and Justice and Cosecha, and the promotion of them as "beneficial for everyone" and "effective"?

This is needlessly hyperbolic. Criminal Justice Reform is one among many causes 80K mentions as options in their research. They outsourced this recommendation to an expert in the field. Maybe they did a poor job of outsourcing it, but inferring widespread problems from that seems entirely unjustified.

Instead of writing this like some kind of exposé, it seems you could get the same results by emailing the 80K team, noting the political sensitivity of the topic, and suggesting that they provide some additional disclaimers about the nature of the recommendation.

So basically, Chloe likes organizing, and she likes Carlos.

I would expect significantly more detailed analysis. Why does Chloe like organizing? What type of organizing does she like? What evidence is there that it works? What has Cosecha done in the past? How much money did they spend? How strong is the evidence of policy impact? How strong is the evidence for the desirability of the policies? What are the negative effects? What are the relevant counterfactuals?

@Larks: The recommendation is not intended to be a full-fledged write-up of the organization's effectiveness. It's a quick note of support from an expert in the field. We could debate whether 80K should trust this kind of quick recommendation, but asking that Chloe explore the issue in significantly more detail seems unfair given the context.

So basically, Chloe likes organizing, and she likes Carlos.

This is pretty unfair. She provides quite a few lines of evidence in favor of this particular organizer. Liking him is not the root cause of recommending him here.

Disclosure: I work for CEA which houses 80K. I also know nearly everyone on the 80K team.

Comment author: jsteinhardt 29 January 2017 03:35:26AM * 10 points

Instead of writing this like some kind of exposé, it seems you could get the same results by emailing the 80K team, noting the political sensitivity of the topic, and suggesting that they provide some additional disclaimers about the nature of the recommendation.

I don't agree with the_jaded_one's conclusions or think his post is particularly well-thought-out, but I don't think raising the bar on criticism like this is very productive if you care about getting good criticism. (If you think the_jaded_one's criticism is bad criticism, then I think it makes sense to just argue for that rather than saying that they should have made it privately.)

My reasons are very similar to Benjamin Hoffman's reasons here.

Comment author: Brian_Tomasik 14 January 2017 05:45:08PM 5 points

I'm surprised to hear that people see criticizing EA as incurring social costs. My impression was that many past criticisms of EA have been met with significant praise (e.g., Ben Kuhn's). One approach for dealing with this could be to provide a forum for anonymous posts + comments.

Comment author: jsteinhardt 14 January 2017 07:24:21PM 4 points

In my post, I said

anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

I would expect that conditioned on spending a large amount of time to write the criticism carefully, it would be met with significant praise. (This is backed up at least in upvotes by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentist Statistics.)

Comment author: Ben_Todd 12 January 2017 09:17:08PM 8 points

though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better

Interesting. Which groups could we learn the most from?

Comment author: jsteinhardt 13 January 2017 08:30:50AM 4 points

I think parts of academia do this well (although other parts do it poorly, and I think it's been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it's still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.

My impression is that criticism has traditionally been a strong part of Jewish culture, but I'm not culturally Jewish, so I can't speak to it directly.

I heard that Bridgewater did a bunch of stuff related to feedback/criticism, but again I don't know a ton about it.

Of course, none of these examples address the fact that much of the criticism of EA happens over the internet, but I do feel that some of the barriers to criticism online also carry over in person (though others don't).

Comment author: jsteinhardt 12 January 2017 07:19:44PM 19 points

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high bar for high-quality criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

Post: Individual Project Fund: Further Details (9 points)

[Cross-posted from my blog.] In my post on where I plan to donate in 2016, I said that I would set aside $2000 for funding promising projects that I come across in the next year: The idea behind the project fund is … [to] give in a low-friction...
Comment author: Peter_Hurford (EA Profile) 29 December 2016 07:14:04PM 2 points

Curious to hear more about why you're using the donor lottery - that seems to be the only part you did not explain.

Also, while I did not expect it to be the case going in, I found your explanation for splitting your donation to be compelling.

Comment author: jsteinhardt 30 December 2016 08:46:07PM 2 points

Thanks. I think my reasons are basically the same as those in this post: http://effective-altruism.com/ea/14d/donor_lotteries_demonstration_and_faq/.
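[Editor's note: for readers unfamiliar with the mechanism described in the linked FAQ, a donor lottery pools donations from several donors and selects one of them, with probability proportional to their contribution, to allocate the entire pot. Each donor directs the same amount of money in expectation, but the winner can justify much deeper research into where to give. Below is a minimal sketch of the standard proportional-odds selection rule; the donor names and amounts are purely illustrative.]

    import random

    def run_donor_lottery(contributions):
        """Pick the donor who allocates the pooled pot; each donor's win
        probability is proportional to their contribution, so everyone's
        expected allocation equals what they put in."""
        pool = sum(contributions.values())
        winner = random.choices(
            list(contributions.keys()),
            weights=list(contributions.values()),
            k=1,
        )[0]
        return winner, pool

    # Illustrative example: three donors pool $10,000 in total.
    donors = {"A": 2000, "B": 500, "C": 7500}
    winner, pool = run_donor_lottery(donors)
    print(f"{winner} decides where the full ${pool:,} pot goes")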
