Comment author: Evan_Gaensbauer 17 October 2018 11:48:56PM *  1 point [-]

Upvoted. Questions:

  1. What's the definition of expertise in x-risk? Unless someone has an academic background in a field where expertise is well-defined by credentials, there doesn't appear to be any established definition of expertise in x-risk reduction.

  2. What are considered the signs of a value-misaligned actor?

  3. What are the qualities indicating "exceptionally good judgement and decision-making skills" in terms of x-risk reduction orgs?

  4. Where can we find these numerous public lists of project ideas produced by x-risk experts?

Comments:

  1. While 'x-risk' is apparently unprecedented in large parts of academia, and may have always been obscure, I don't believe it's unprecedented in academia or in intellectual circles as a whole. Prevention of nuclear war and once-looming environmental catastrophes like the ozone hole arguably posed existential risks that were academically studied. The development of game theory was largely motivated by a need for better analysis of war scenarios between the U.S. and the Soviet Union during the Cold War.

  2. An example of a major funder for small projects in x-risk reduction would be the Long-Term Future EA Fund. For a year, its management was characterized by Nick Beckstead, a central node in the trust network of funding for x-risk reduction, providing little justification for grants made mostly to x-risk projects the average x-risk donor could easily have identified themselves. The way the issue of the 'funding gap' is framed seems to imply patches to the existing trust network may be sufficient to solve the problem, when it appears the existing trust network may be fundamentally inadequate.

Comment author: Jon_Behar 16 October 2018 06:58:19PM 1 point [-]

You’re in charge of outreach for EA. You have to choose one demographic to focus on for introducing EA concepts to, and bringing into the movement. What single demographic do you prioritize?

What sort of discussions does this question generate? Do people mostly discuss demographics that are currently overrepresented or underrepresented in EA? If there’s a significant amount of discussion around how and why EA needs more of groups that are already overrepresented, it probably wouldn’t feel very welcoming to someone from an underrepresented demographic. You may want to consider tweaking it to something like “What underrepresented demographic do you think EA most needs more of on the margins?”

FWIW, I have similar concerns that people might interpret the question about lying/misleading as suggesting EA doesn’t have a strong norm against lying.

Comment author: Evan_Gaensbauer 17 October 2018 10:49:39PM 1 point [-]

I made different points in this comment, but I'm generally concerned that doing something like this at big EA events could publicly misrepresent and oversimplify a lot of the issues EA deals with.

Comment author: Evan_Gaensbauer 17 October 2018 10:47:08PM 4 points [-]

I think the double crux game can be good for dispute resolution. But I think generating disagreement even in a sandbox environment can be counterproductive. It's similar to how holding a public debate seems, on its face, like it can better resolve a dispute, but if one side isn't willing to debate entirely in good faith, they can ruin the debate to the point it shouldn't have happened in the first place. Even if a disagreement isn't socially bad, in the sense that it won't persist as a conflict after a failed double crux game, it could still limit effective altruists to black-and-white thinking after the fact. This lends itself to an absence of the creative problem-solving EA needs.

Perhaps even more than collaborative truth-seeking, the EA community needs individual EAs to learn to think for themselves more, to generate possible solutions to problems the community's core can't solve by itself. There are a lot of EAs with spare time on their hands but nothing to put it towards, and that time could be better used. I think starting independent projects can be a valuable use of that time. Here are some of these questions reframed to prompt effective altruists to generate creative solutions.

Imagine you've been given discretion over 10% of the Open Philanthropy Project's annual grantmaking budget. How would you distribute it?

How would you solve what you see as the biggest cultural problem in EA?

Under what conditions do you think the EA movement would be justified in deliberately deceiving or misleading the public?

How should EA address our outreach blindspots?

At what rate should EA be growing? How should that be managed?

These questions are reframed to be more challenging, but that's my goal. I think many individual EAs should be challenged to generate less confused models on these topics, and it's from there, between models, that deliberation like double crux should start. Especially if they start from a place of ignorance about current thinking on these issues in EA[1], I don't think either side of a double crux game will, in the span of only a couple of minutes, generate an excellent but controversial hypothesis worth challenging.

The examples in the questions provided are open questions in EA that EA organizations don't themselves have good answers to, and I'm sure they'd appreciate additional thinking and support building off their ideas. These aren't binary questions with just one of two possible solutions. I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should. There is no problem with the double crux game itself, but maybe EAs should learn it without using EA examples.

[1] This sounds callous, but I think it's a common coordination problem we need to fix. It isn't hard to fall into that ignorance, as it's actually quite easy to miss important theoretical developments that make the rounds among EA orgs but aren't broadcast to the broader movement.

Comment author: 80000_Hours 12 October 2018 07:47:04PM 3 points [-]

Hi Evan,

Responses to the survey do help to inform our advice, but they're only considered as one piece of data alongside all the other research we've done over the years. Our writeup of the survey results definitely shouldn't be read as our all-things-considered view on any issue in particular.

Perhaps we could have made that clearer in the blog post but we hope that our frank discussion of the survey’s weaknesses and our doubts about many of the individual responses gives some sense of the overall weight we put on this particular source.

Comment author: Evan_Gaensbauer 13 October 2018 12:08:54AM 0 points [-]

Oh, no, that all makes sense. I was just raising questions I had about the post as I came across them. But I guess I should've read the whole post first. I haven't finished it yet. Thanks.

Comment author: Peter_Hurford  (EA Profile) 11 October 2018 09:34:47PM 3 points [-]

there is just a smaller talent pool of both extremely skilled and dedicated potential employees to draw from

We have been screening fairly selectively on having an EA mindset, though, so I'm not sure how much larger our pool is compared to other EA orgs. In fact, you could maybe argue the opposite -- given the prevalence of long-termism among the most involved EAs, it may be harder to convince them to work for us.

So the data seems to imply leaders at EA orgs which already have a dozen staff would pay 20%+ of their budget for the next single marginal hire.

From my vantage point, though, their actions don't seem consistent with this view.

Comment author: Evan_Gaensbauer 11 October 2018 10:01:02PM 0 points [-]

Yeah, I'm still left with more questions than answers.

Comment author: Evan_Gaensbauer 11 October 2018 09:39:09PM 3 points [-]

I've volunteered to submit a comment to the EA Forum from a couple of anonymous observers, which I believe deserves to be engaged with.

The model this survey is based on implicitly creates something of an 'ideal EA,' which is somebody young, quantitative, elite, who has the means and opportunities to go to an elite university, and has the personality to hack very high-pressure jobs. In other words, it paints a picture of EA that is quite exclusive.

Comment author: Evan_Gaensbauer 11 October 2018 09:28:02PM 1 point [-]

We surveyed managers at organisations in the community to find out their views. These results help to inform our recommendations about the highest impact career paths available.

How much weight does 80,000 Hours give to these survey results relative to the other factors which together form 80k's career recommendations?

I ask because I'm not sure managers at EA organizations know what their focus area as a whole will need in the near future, and I think 80k might be able to exercise better independent judgement than the aggregate opinion of EA organization leaders. For example, there was an ops bottleneck in EA that is a lot better now. It seemed like orgs like 80k and CEA spotted this problem and drove operations talent to a variety of EA orgs. But I don't recall the other EA orgs which benefited from this push helping, independently of one another, to solve this coordination problem in the first place.

In general, I'm impressed with 80k's more formal research. I imagine there might be pressure for 80k to give more weight to softer impressions like what different EA org managers think the EA movement needs. But I think 80k's career recommendations will remain better if they're built off a harder research methodology.

Comment author: Peter_Hurford  (EA Profile) 10 October 2018 11:47:59PM *  14 points [-]

I’d really like to hear more about other EA orgs experience with hiring staff. I’ve certainly had no problem finding junior staff for Rethink Priorities, Rethink Charity, or Charity Science (Note: Rethink Priorities is part of Rethink Charity but both are entirely separate from Charity Science)… and so far we’ve been lucky enough to have enough strong senior staff applications that we’re still finding ourselves turning down really strong applicants we would otherwise really love to hire.

I personally feel much more funding constrained / management capacity constrained / team culture “don’t grow too quickly” constrained than I feel “I need more talented applicants” constrained. I definitely don’t feel a need to trade away hundreds of thousands or millions of dollars in donations to get a good hire and I’m surprised that 80K/CEA has been flagging this issue for years now. …And experiences like this one suggest to me that I might not be alone in this regard.

So…

1.) Am I just less picky? (possible)

2.) Am I better at attracting the stronger applicants? (doubtful)

3.) Am I mistaken about the quality of our applicants such that they’re actually lower than they appear? (possible but doubtful)

Maybe my differences in cause prioritization (not overwhelmingly prioritizing the long-term future but still giving it a lot of credence) contributes toward getting a different and stronger applicant pool? …But how precise of a cause alignment do you need from hires, especially in ops, as long as people are broadly onboard?

I’m confused.

Comment author: Evan_Gaensbauer 11 October 2018 08:55:31PM 0 points [-]

One possibility is that, because the EA organizations you hire for are focused on causes which also have a lot of representation in the non-profit sector outside of the EA movement, like global health and animal welfare, it's easier to attract talent which is both very skilled and very dedicated. Since a focus on the far future is more limited to EA and adjacent communities, there is just a smaller talent pool of both extremely skilled and dedicated potential employees to draw from.

Far-future-focused EA orgs could be constantly suffering from this problem of a limited talent pool, to the point they'd be willing to pay hundreds of thousands of dollars to find an extremely talented hire. In AI safety/alignment, this wouldn't be weird, as AI researchers can easily command salaries of hundreds of thousands of dollars at companies like OpenAI or Google. But this should only apply to orgs like MIRI or maybe FHI, which are far from the only orgs 80k surveyed.

So the data seems to imply leaders at EA orgs which already have a dozen staff would pay 20%+ of their budget for the next single marginal hire. It still doesn't make sense that year after year a lot of EA orgs apparently need talent so badly they'll spend money they don't have to get it.
