Comment author: Evan_Gaensbauer 17 October 2018 11:48:56PM *  1 point [-]

Upvoted.

Questions:

  1. What's the definition of expertise in x-risk? Unless someone has an academic background in a field where expertise is well-defined by credentials, there doesn't appear to be any clear definition of expertise in x-risk reduction.

  2. What are considered the signs of a value-misaligned actor?

  3. What are the qualities indicating "exceptionally good judgement and decision-making skills" in terms of x-risk reduction orgs?

  4. Where can we find these numerous public lists of project ideas produced by x-risk experts?

Comments:

  1. While 'x-risk' is apparently unprecedented in large parts of academia, and may have always been obscure, I don't believe it's unprecedented in academia or in intellectual circles as a whole. Prevention of nuclear war and once-looming environmental catastrophes like the ozone hole posed arguably existential risks that were academically studied. The development of game theory was largely motivated by a need for better analysis of war scenarios between the U.S. and Soviet Union during the Cold War.

  2. An example of a major funder for small projects in x-risk reduction would be the Long-Term Future EA Fund. For a year its management was characterized by Nick Beckstead, a central node in the trust network of funding for x-risk reduction, providing little justification for grants made mostly to x-risk projects the average x-risk donor could have very easily identified themselves. The way the issue of the 'funding gap' is framed seems to imply that patches to the existing trust network may be sufficient to solve the problem, when the existing trust network may in fact be fundamentally inadequate.

Comment author: Jon_Behar 16 October 2018 06:58:19PM 1 point [-]

You’re in charge of outreach for EA. You have to choose one demographic to focus on for introducing EA concepts to, and bringing into the movement. What single demographic do you prioritize?

What sort of discussions does this question generate? Do people mostly discuss demographics that are currently overrepresented or underrepresented in EA? If there’s a significant amount of discussion around how and why EA needs more of groups that are already overrepresented, it probably wouldn’t feel very welcoming to someone from an underrepresented demographic. You may want to consider tweaking it to something like “What underrepresented demographic do you think EA most needs more of on the margins?”

FWIW, I have similar concerns that people might interpret the question about lying/misleading as suggesting EA doesn’t have a strong norm against lying.

Comment author: Evan_Gaensbauer 17 October 2018 10:49:39PM 1 point [-]

I made different points, but in this comment I'm generally concerned that doing something like this at big EA events could publicly misrepresent and oversimplify a lot of the issues EA deals with.

Comment author: Evan_Gaensbauer 17 October 2018 10:47:08PM 5 points [-]

I think the double crux game can be good for dispute resolution. But I think generating disagreement even in a sandbox environment can be counterproductive. It's similar to how a public debate seems, on its face, like it can better resolve a dispute, but if one side isn't willing to debate entirely in good faith, they can ruin the debate to the point it shouldn't have happened in the first place. Even if a disagreement isn't socially bad, in the sense that it will persist as a conflict after a failed double crux game, it could limit effective altruists to black-and-white thinking after the fact. This lends itself to an absence of the creative problem-solving EA needs.

Perhaps even more than collaborative truth-seeking, the EA community needs individual EAs to learn to think for themselves more, to generate possible solutions to problems the community's core can't solve by itself. There are a lot of EAs with spare time on their hands but nothing to put it towards, and I think starting independent projects can be a valuable use of that time. Here are some of these questions reframed to prompt effective altruists to generate creative solutions.

Imagine you've been given discretion of 10% of the Open Philanthropy Project's annual grantmaking budget. How would you distribute it?

How would you solve what you see as the biggest cultural problem in EA?

Under what conditions do you think the EA movement would be justified in deliberately deceiving or misleading the public?

How should EA address our outreach blindspots?

At what rate should EA be growing? How should that be managed?

These questions are reframed to be more challenging, but that's my goal. I think many individual EAs should be challenged to generate less confused models on these topics, and deliberation like double crux should start from the differences between those models. Especially if participants start from a place of ignorance of current thinking on these issues in EA[1], I don't think either side of a double crux game will generate an excellent but controversial hypothesis worth challenging in the span of only a couple of minutes.

The examples in the questions provided are open questions in EA that EA organizations themselves don't have good answers to, and I'm sure they'd appreciate additional thinking and support building off their ideas. These aren't binary questions with just one of two possible solutions. I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should. There is no problem with the double crux game itself, but maybe EAs should learn it without using EA examples.

[1] This sounds callous, but I think it's a common coordination problem we need to fix. It isn't hard to end up in that position, as it's actually quite easy to miss important theoretical developments that make the rounds among EA orgs but aren't broadcast to the broader movement.

Comment author: 80000_Hours 12 October 2018 07:47:04PM 3 points [-]

Hi Evan,

Responses to the survey do help to inform our advice but it’s only considered as one piece of data alongside all the other research we’ve done over the years. Our writeup of the survey results definitely shouldn’t be read as our all-things-considered view on any issue in particular.

Perhaps we could have made that clearer in the blog post but we hope that our frank discussion of the survey’s weaknesses and our doubts about many of the individual responses gives some sense of the overall weight we put on this particular source.

Comment author: Evan_Gaensbauer 13 October 2018 12:08:54AM 0 points [-]

Oh, no, that all makes sense. I was just raising questions I had about the post as I came across them. But I guess I should've read the whole post first. I haven't finished it yet. Thanks.

Comment author: Peter_Hurford  (EA Profile) 11 October 2018 09:34:47PM 3 points [-]

there is just a smaller talent pool of both extremely skilled and dedicated potential employees to draw from

We have been screening fairly selectively on having an EA mindset, though, so I'm not sure how much larger our pool is compared to other EA orgs. In fact, you could maybe argue the opposite -- given the prevalence of long-termism among the most involved EAs, it may be harder to convince them to work for us.

So the data seems to imply leaders at EA orgs which already have a dozen staff would pay 20%+ of their budget for the next single marginal hire.

From my vantage point, though, their actions don't seem consistent with this view.

Comment author: Evan_Gaensbauer 11 October 2018 10:01:02PM 0 points [-]

Yeah, I'm still left with more questions than answers.

Comment author: Evan_Gaensbauer 11 October 2018 09:39:09PM 3 points [-]

I've volunteered to submit a comment to the EA Forum from a couple of anonymous observers, which I believe deserves to be engaged with.

The model this survey is based on implicitly creates something of an 'ideal EA,' which is somebody young, quantitative, elite, who has the means and opportunities to go to an elite university, and has the personality to hack very high-pressure jobs. In other words, it paints a picture of EA that is quite exclusive.

Comment author: Evan_Gaensbauer 11 October 2018 09:28:02PM 1 point [-]

We surveyed managers at organisations in the community to find out their views. These results help to inform our recommendations about the highest impact career paths available.

How much weight does 80,000 Hours give to these survey results relative to the other factors which together form 80k's career recommendations?

I ask because I'm not sure managers at EA organizations know what their focus area as a whole will need in the near future, and I think 80k might be able to exercise better independent judgement than the aggregate opinion of EA organization leaders. For example, there was an ops bottleneck in EA that is a lot better now. It seemed like orgs like 80k and CEA spotted this problem and drove operations talent to a variety of EA orgs, but I don't recall the other EA orgs which benefited from this push independently helping to solve this coordination problem in the first place.

In general, I'm impressed with 80k's more formal research. I imagine there might be pressure for 80k to give more weight to softer impressions like what different EA org managers think the EA movement needs. But I think 80k's career recommendations will remain better if they're built off a harder research methodology.

Comment author: Peter_Hurford  (EA Profile) 10 October 2018 11:47:59PM *  14 points [-]

I’d really like to hear more about other EA orgs experience with hiring staff. I’ve certainly had no problem finding junior staff for Rethink Priorities, Rethink Charity, or Charity Science (Note: Rethink Priorities is part of Rethink Charity but both are entirely separate from Charity Science)… and so far we’ve been lucky enough to have enough strong senior staff applications that we’re still finding ourselves turning down really strong applicants we would otherwise really love to hire.

I personally feel much more funding constrained / management capacity constrained / team culture “don’t grow too quickly” constrained than I feel “I need more talented applicants” constrained. I definitely don’t feel a need to trade away hundreds of thousands or millions of dollars in donations to get a good hire and I’m surprised that 80K/CEA has been flagging this issue for years now. …And experiences like this one suggest to me that I might not be alone in this regard.

So…

1.) Am I just less picky? (possible)

2.) Am I better at attracting the stronger applicants? (doubtful)

3.) Am I mistaken about the quality of our applicants such that they’re actually lower than they appear? (possible but doubtful)

Maybe my differences in cause prioritization (not overwhelmingly prioritizing the long-term future but still giving it a lot of credence) contributes toward getting a different and stronger applicant pool? …But how precise of a cause alignment do you need from hires, especially in ops, as long as people are broadly onboard?

I’m confused.

Comment author: Evan_Gaensbauer 11 October 2018 08:55:31PM 0 points [-]

One possibility is that, because the EA organizations you hire for are focused on causes which also have a lot of representation in the non-profit sector outside of the EA movement, like global health and animal welfare, it's easier to attract talent which is both very skilled and very dedicated. Since a focus on the far-future is more limited to EA and adjacent communities, there is just a smaller talent pool of both extremely skilled and dedicated potential employees to draw from.

Far-future-focused EA orgs could be constantly suffering from this problem of a limited talent pool, to the point they'd be willing to pay hundreds of thousands of dollars to find an extremely talented hire. In AI safety/alignment, this wouldn't be weird as AI researchers can easily take a salary of hundreds of thousands at companies like OpenAI or Google. But this should only apply to orgs like MIRI or maybe FHI, which are far from the only orgs 80k surveyed.

So the data seems to imply leaders at EA orgs which already have a dozen staff would pay 20%+ of their budget for the next single marginal hire. So it still doesn't make sense that year after year a lot of EA orgs apparently need talent so badly they'll spend money they don't have to get it.

Comment author: Dunja 02 August 2018 10:17:10AM *  0 points [-]

Hi Evan, here's my response to your comments (including another post of yours from above). By the way, that's a nice example of industry-compatible research; I agree that such and similar cases can indeed fall within what EAs wish to fund, as long as they are assessed as effective and efficient. I think this is an important debate, so let me challenge some of your points.

Your arguments seem to be based on the assumption that EAs can do EA-related topics more effectively and efficiently than non-explicitly-EA-affiliated academics (but please correct me if I've misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks & safety). While I agree that being an EA can contribute to one's motivation for the given research topic, I don't see any rationale for the claim that EAs are more qualified to do scientific research relevant for EA than non-explicit-EAs. That would mean that, say, Christians are a priori more qualified to do research that goes towards some Christian values. I think this is a non sequitur.

Whether a certain group of people can conduct a given project in an effective and efficient way shouldn't primarily depend on their ethical and political mindset (though this may play a motivating role, as I've mentioned above), but on the methodological prospects of the given project, on its programmatic character, and on the capacity of the given scientific group to make an impact. I don't see why EAs --as such-- would qualify for such values any more than an expert in the given domain can, when placed within the framework of the given project. It is important to keep in mind that we are not talking here about the political activity of spreading EA ideas, but about scientific research, which has to be conducted with the necessary rigor in order to make an impact in the scientific community and wider (otherwise nobody will care about the output of the given researchers). These are the kinds of criteria I wish were present in the assessment of the given grants, rather than who is an EA and who isn't.

Second, by prioritizing a certain type of group in the given domain of research, the danger of confirmation bias gets increased. This is why feminist epistemologists have been arguing for diversity across the scientific community (rather than for the claim that only feminists should do feminist-compatible scientific research).

Finally, if there is a worry that academic projects focus too much on other issues, the call for funding can always be formulated in such a way that it specifies the desired topics. In this way, academic project proposals can be formulated having EA goals in mind.

Comment author: Evan_Gaensbauer 06 August 2018 06:24:11AM -1 points [-]

Your arguments seem to be based on the assumption that EAs can do EA-related topics more effectively and efficiently than non-explicitly-EA-affiliated academics (but please correct me if I've misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks & safety). While I agree that being an EA can contribute to one's motivation for the given research topic, I don't see any rationale for the claim that EAs are more qualified to do scientific research relevant for EA than non-explicit-EAs. That would mean that, say, Christians are a priori more qualified to do research that goes towards some Christian values. I think this is a non sequitur.

I think it's a common perception in EA that effective altruists can often do work as efficiently and effectively as academics not explicitly affiliated with EA. Often EAs also think academics can do some, if not most, EA work better than a random non-academic EA. AI safety is more populated by, and stems from, the rationality community, which on average is more ambivalent towards academia than EA is. It's my personal opinion that EA may often have a comparative advantage in doing the research in-house, for a number of reasons.

One is practical: academics would often have to divide their time between EA-relevant research and teaching duties. EA tends to focus on unsexy research topics, so academics may find it easier to get grants for research that isn't relevant to EA. Depending on the field, the politics of research can distort the epistemology of academia to the point it won't serve EA's purposes. These are constraints effective altruists working full-time at NPOs funded by other effective altruists don't face, allowing them to dedicate all their attention to their organization's mission.

Personally, my confidence in EA's ability to make progress on research and other projects across a wide variety of goals is bolstered by some original research in multiple causes being lauded by academics as some of the best on the subject they've seen. Of course, these are NPOs focused on addressing neglected problems in global poverty, animal advocacy campaigns, and other niche areas. Some of the biggest successes in EA have come from close collaborations with academia, and I think most EAs would encourage more cooperation between academia and EA. I've pushed in the past for EA to make more grants to academics doing sympathetic research. Attracting talent with an academic research background to EA can be difficult, and I agree with you that, overall, EA's current approach doesn't make sense.

I think you've got a lot of good points. I'd encourage you to make a post out of some of the comments I made here. I think one reason your posts might be poorly received is because some causes in EA, especially AI safety/alignment, have received a lot of poor criticism in the past merely for trying to do formal research outside of academia. I could review a post before you post it to the EA Forum to suggest edits so it would be better received. Either way, I think EA integrating more with academia is a great idea.

Comment author: ea247 03 August 2018 08:27:12PM 8 points [-]

I think EA Forum karma isn't the best measure because a lot of the people who are particularly engaged in EA do not spend much time on the forum and instead focus on more action-relevant things for their org. The EA Forum will be biased towards people more interested in research and community-related things as opposed to direct action. For example, New Incentives is a very EA-aligned org working in direct poverty, but they spend most of their time doing cash transfers in Nigeria instead of posting on the forum.

To build on your idea though, I think forming some sort of index of involvement would get away from any one particular thing biasing the results. I think including karma in the index makes sense, along with length of involvement, hours per week involved in EA, percent donated, etc.

Comment author: Evan_Gaensbauer 06 August 2018 04:10:52AM 0 points [-]

I'm working on a project to scale up volunteer work opportunities with all kinds of EA organizations. Part of what I want to do is develop a system for EA organizations to delegate tasks to volunteers, including writing blog posts. This could help EA orgs like New Incentives get more of their content onto the EA Forum, such as research summaries and progress updates. Do you think orgs would find this valuable?
