Double Crux prompts for Effective Altruists

At the most recent Effective Altruism Global in San Francisco, I presented CFAR's Double Crux technique for resolving disagreements. For the "practice" part of the talk, I handed out a series of prompts on EA topics, to generate disagreements to explore. Several people liked the prompts a lot and asked...
Comment author: beah 04 October 2018 08:59:08PM 1 point

Might there be sweet spots to be found somewhere along the continuum of "quality" of giving (the effectiveness of each dollar given), stopping short of, say, GiveWell-recommended charities or even EA-approved causes?

Most ordinary people don't give because they generally feel charitable and want to do something, anything, so long as it's charitable. They are compelled to give by an event or a narrative that tugs on them. Most EA instruments don't really do this, of course, at least not in the initial consumer interaction. Say someone was compelled to give by the recent family separation crisis in the US. The cause wouldn't ever land on an EA list, but I imagine that within it there are some charities 10x or maybe 100x more effective than others. It would be valuable to help people choose those charities, given that there's essentially a 0% chance that the money in question will be funneled to bed nets or the long-term future.

In short, I'm interested in whether there are ways to bring a watered-down version of EA to a mass audience, with a net positive effect on effective dollars given.

Comment author: Elityre 08 October 2018 04:36:07PM 4 points

I'm not sure how much having a "watered-down" version of EA ideas in the zeitgeist helps, because I don't have a clear sense of how effective most charities are.

If the difference between the median charity and the most impactful charity is 4 orders of magnitude ($1 to the most impactful charity does as much good as $10,000 to the median charity), then even a 100x improvement over the median charity is not very impactful: it's still only 1% as good as donating to the best charity. If that were the case, it would probably be more efficient to just aim to get more people to adopt the whole EA mindset.

On the other hand, if the variation is much smaller, it might be the case that a 100x improvement gets you to about half of the impact per dollar of the best charities.

Which world we're living in matters a lot for whether we should invest in this strategy.
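
A quick back-of-the-envelope sketch of this (with purely hypothetical spread ratios; `fraction_of_best` is just an illustrative helper, not anything from an EA source):

```python
# How much of the best charity's impact-per-dollar does a 100x improvement
# over the median charity capture, under different assumptions about the
# total best/median spread? (Hypothetical numbers.)

def fraction_of_best(spread, improvement=100):
    """Impact of a (median * improvement) charity as a fraction of the best,
    where `spread` is the best/median impact-per-dollar ratio."""
    return min(improvement / spread, 1.0)

for spread in (10_000, 1_000, 200):
    pct = fraction_of_best(spread) * 100
    print(f"spread {spread:>6}x: a 100x improvement is {pct:.0f}% as good as the best")

# spread  10000x: a 100x improvement is 1% as good as the best
# spread   1000x: a 100x improvement is 10% as good as the best
# spread    200x: a 100x improvement is 50% as good as the best
```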

That said, promoting EA principles, like cost-effectiveness and EV estimates, separately from the EA brand, seems almost universally good, and extends far beyond people's choice of charities.
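
For instance, a minimal sketch of the kind of EV comparison this mindset encourages (illustrative numbers only, not real cost-effectiveness data):

```python
# Comparing interventions by expected cost per successful outcome rather than
# by intuition. All figures are made up for illustration.
interventions = {
    "intervention A": {"cost_per_attempt": 50.0, "success_prob": 0.02},
    "intervention B": {"cost_per_attempt": 500.0, "success_prob": 0.50},
}

for name, d in interventions.items():
    cost_per_success = d["cost_per_attempt"] / d["success_prob"]
    print(f"{name}: expected cost per success = ${cost_per_success:,.0f}")

# intervention A: expected cost per success = $2,500
# intervention B: expected cost per success = $1,000
```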

Comment author: Elityre 08 October 2018 02:57:26PM 6 points

In the short term, senior hires are most likely to come from finding and onboarding people who already have the required skills, experience, credentials and intrinsic motivation to reduce x-risks.

Can you be more specific about what the required skills and experience are?

Skimming the report, you say "All senior hires require exceptionally good judgement and decision-making." Can you be more specific about what that means and how it can be assessed?

Comment author: Elityre 08 October 2018 03:09:44PM 8 points

It seems to me that in many cases the specific skills that are needed are both extremely rare and not well captured by the standard categories.

For instance, Paul Christiano seems to me to be an enormous asset to solving the core problems of AI safety. If "we didn't have a Paul" I would be willing to trade huge amounts of EA resources to have him working on AI safety, and I would similarly trade huge resources to get another Paul-equivalent working on the problem.

But it doesn't seem like Paul's skillset is one that I can easily select for. He's knowledgeable about ML, but there are many people with ML knowledge (about 100 new ML PhDs each year). That isn't the thing that distinguishes him.

Nevertheless, Paul has some qualities, above and beyond his technical familiarity, that allow him to do original and insightful thinking about AI safety. I don't understand what those qualities are, or know how to assess them, but they seem to me to be much more critical than having object-level knowledge.

I have close to no idea how to recruit more people who can do the sort of work that Paul can do. (I wish I did. As I said, I would give up way more than my left arm to get more Pauls.)

But I'm afraid there's a tendency here to goodhart on the easily measurable virtues, like technical skill or credentials.

Comment author: Elityre 08 October 2018 02:18:33PM 3 points

Intellectual contributions to the rationality community: including CFAR’s class on goal factoring

Just a note: I think this might be a bit misleading. Geoff and other members of Leverage Research taught a version of goal factoring at some early CFAR workshops, and Leverage did develop a version of goal factoring inspired by CT. But my understanding is that CFAR staff independently developed goal factoring (starting from an attempt to teach applied consequentialism), and that this is an instance of parallel development.

[I work for CFAR, though I had not yet joined the EA or rationality community in those early days. I am reporting what other long-standing CFAR staff told me.]