Comment author: SiebeRozendal 13 October 2018 10:03:41AM 11 points

The narrative that "EA is talent constrained" has had, as mentioned, some negative and unintended consequences. One more I'd like to add: under this narrative, the advised action "go work for a larger organization, like the government" may feel to many people like failure or "not living up to their goals." I'm afraid this leads to more value drift - people's values sliding away from EA-aligned values - because they feel they're not "living EA" anyway.

Comment author: Denise_Melchin 18 October 2018 07:46:49AM 10 points

The problem here is that people in the EA movement overtly associate being EA not with 'doing high-impact things' but with 'doing EA-approved work, ideally at an EA org'.

It is not obvious to me how this is fixable. It doesn't help that recommendations change frequently, so paths that were once 'EA-approved' aren't any longer. As Greg said, people won't want to risk that. It's unfortunate that we punish people for following previous recommendations. This also doesn't exactly incentivize people to follow current recommendations, and it leads to EAs being flaky, which is bad for long-term impact.

I think one thing that would be good for people is to have a better professional and do-gooding network outside of EA. If you are considering entering a profession, you can find dedicated people there and coordinate. You can also find other do-gooding communities. In both cases you can bring EA's moral motivation and empirical standards to other aligned people.

Comment author: SiebeRozendal 14 October 2018 06:14:53PM 4 points

I just want to note that not every rejected application has been burnt value for me; most have actually been positive, especially in terms of things learned. The ones where I got far resulted in more rather than less motivation. In the cases where I had to do work-related tasks (write a research proposal, or execute a sample of typical research) I learned a lot.

On the other hand, increasing the applicants-to-hires ratio would mostly increase the proportion of people who don't get far in the application process, which is where the positive factors are weakest and the negative ones strongest.

Comment author: Denise_Melchin 14 October 2018 08:47:29PM 2 points

Oh, I agree people will often learn useful things during application processes. I just think the opportunity cost can be very high, especially when processes take months and people have to wait to find out whether they got into their top options. I also think those costs are especially high for the top applicants - they have to invest the most and might learn the most useful things, but they also lose the most due to higher opportunity costs.

And as you said, people who get filtered out early lose less time and other resources on application processes. But they might still feel negatively about it, especially given the messaging. Maybe their equally rejected friends feel just as bad, which in the future could dissuade other friends - who might be potential top hires - from even trying.

Comment author: Ben_Todd 13 October 2018 06:23:11PM 1 point

No, I don't think it's that simple. I think it's sometimes true that the main issue is absorbing talent, but there are also situations where the top hire is much better than the second best, and so generates a large amount of excess value. This is because many of these roles require a very unusual skill-set.

I think the message to take away is more like "it's hard to infer what to do based on the survey figures".

Personally, I still think it would be very useful to find more talented people and for more people to consider applying to these roles; we just need to bear in mind that these roles require a very unusual skill-set, so people should always have a good back-up plan.

Comment author: Denise_Melchin 13 October 2018 10:37:03PM 13 points

Personally, I still think it would be very useful to find more talented people and for more people to consider applying to these roles; we just need to bear in mind that these roles require a very unusual skill-set, so people should always have a good back-up plan.

I'm curious what your model is of the % value increase in the top hire when you, say, double current hiring pools. It needs to be high enough to offset the value burnt by people's investments in those application processes. This is not only expensive for individual applicants in the moment, but also carries the long-term risk of demotivating people - and thereby leaving a counterfactually smaller hiring pool in future years.

EA already seems to be at the point where lots of applicants are frustrated and might value drift, thereby dropping out of the hiring pool. I am not keen on making this situation worse. It might cause permanent harm.

Do you agree there's a trade-off here? If so, I'm not sure whether our disagreement comes from different assessments of value increases in the top hire or burnt value in the hiring pool.

Comment author: Peter_Hurford 11 October 2018 06:58:41PM * 5 points

The median view was that the Long-Term Future fund was twice as effective as the EA Community fund

This strikes me as an odd statement to make, given that - so far - the two funds have essentially operated as the same fund and have given donations to the exact same organizations with the exact same stated purposes. That being said, I agree it’s reasonable to expect the grantmaking of the funds to diverge under the forthcoming new management and maybe this expectation is what is being priced in here.

Comment author: Denise_Melchin 11 October 2018 09:15:34PM 1 point

I had written the same comment, but then deleted it once I found out it wasn't quite as true as I had thought. In Nick's writeup, the grants come from different funds according to their purpose. (I had previously thought the most recent round of grants gave money to the exact same organisations.)

Comment author: Denise_Melchin 10 October 2018 07:08:37PM * 9 points

Echoing David, I'm somewhat sceptical of the responses to "what skills and experience they think the community as a whole will need in the future". Does the answer refer to high-impact opportunities in the world in general, or only to the ones mostly located at EA organisations?

I'm also not sure about the relevance to individual EAs' career decisions. I think implying it is relevant might be outright dangerous if this answer is built on the needs of jobs that are mostly located at EA organisations. From what I understand, EA organisations have recently seen a sharp increase not only in the number, but also in the quality of applications. That's great! But it's pretty unfortunate for people who took the arguments about 'talent constraints' seriously and focused their efforts on finding a job in the EA Community. They are now finding out that they may have poor prospects, even if they are very talented and competent.

There's no shortage of high-impact opportunities outside EA organisations. But the EA Community lacks the knowledge to identify them and the resources to direct its talent there.

There are only a few dozen roles at EA orgs each year, never mind roles that are a good fit for an individual EA's skillset. Even if we only look at the most talented people, there are more capable people than the EA Community is able to allocate among its own organizations. And this will only get worse - the EA Community is growing faster than the number of jobs at EA orgs.

If we don't have the knowledge and connections to allocate all our talent right now, that's unfortunate, but not necessarily a big problem - as long as this is communicated. What is a big problem is accidentally misleading people into thinking it's best to focus their career efforts mostly on EA orgs, instead of viewing those as a small sliver in a vast option space.

Comment author: Denise_Melchin 25 August 2018 03:52:56PM 2 points

Cool study! I wish more people went out and just tested assumptions like this. One high-level question:

People in the EA community are very concerned about existential risk, but what is the perception among the general public? Answering this question is highly important if you are trying to reduce existential risk.

Why is this question highly important for reducing extinction risks? This doesn't strike me as obvious. What practical implications does it have if the general public assigns existential risks either a very high or a very low probability?

You could make an argument that this could inform recruiting/funding efforts. Presumably you can do more recruiting and receive more funding for reducing existential risks if more people are concerned about them.

But I would assume the percentage of people who consider reducing existential risks to be very important to be much more relevant for recruiting and funding than the opinion of the 'general public'.

Though the opinions of those groups have a good chance of being positively correlated, this particular argument doesn't convince me that the opinion of the general public matters that much.

Comment author: Darius_Meissner 10 August 2018 01:03:12PM * 5 points

Given the incredibly positive influence I believe EA has had on my own life, this post is a fantastic opportunity for me to say ‘thank you’. Thanks to all of you for your contributions to building such an awesome community around (the) ‘one thing you’ll never regret’ – altruism (I got this quote from Ben Todd). I have never before met a group of people this smart, caring and dedicated to improving the world, and I am deeply, deeply grateful that I can be a part of this.


I remember that elementary school was the first time I was confronted with other students believing in what they referred to as ‘GOD’. Having grown up in a secular family myself, I was at first confused by their belief, and then started debating them. This went on to the point where, one day, I screamed insults at the sky to prove that there was no one up there listening and no lightning would strike to pulverize me. My identity started to grow, and after reading the Wikipedia article on atheism in early middle school, ‘agnostic-atheist’ was the first of a number of ‘-isms’ that I added to my identity over the years (though, as I will describe, some of these ‘-isms’ were only temporary). Unsurprisingly, when I encountered the writings and speeches of Richard Dawkins in my teens, I quickly became a staunch fan (let it be pointed out that I am more critical nowadays of his communication style and some of his content).

I can attribute my early political socialization to attending summer camps and weekend seminars of a socialist youth organisation in Germany in middle school. There, for the first time, I met people who really cared about improving the world, and I learned about social problems such as racism, sexism, homophobia, and – the mother of all problems, from the socialist perspective – capitalism. Furthering this process of ideological adaptation, I learned that the supposed solutions for these and other social problems were creating a socialist, communist or possibly anarchist world-order – if need be, by means of violent revolution. In hindsight, it’s interesting for me to look back and see that this belief in a violent revolution required an element of consequentialist thinking (along with very twisted empirical beliefs largely grounded in Marxism): to create a better society for the rest of all time, we might need to make sacrifices today and fight. I always had a great time with the other young socialists, made friends, had my first kiss, went to various left-wing protests and sat around campfires where we sang old socialist workers’ songs. (A note on the songs: I remember how powerful and determined they would make me feel in my identity as a social-ist, connected to a cause that was larger than myself and celebrating those ‘partisans’ who were killed fighting (violently) in socialist revolutions. Hopefully, this was a lasting lesson with regard to methods of ideological indoctrination.) The most long-lasting and positive effect this part of my life had on my personality was igniting a strong dedication to improving the world – I had found my ultimate and main goal in life (provided, and hoping, that it won’t change again).

During my last lesson in ethics class in middle school, we (around 30 omnivore students) debated the ethics of eating animals. The (to me at the time) surprising conclusion we reached was that, in the absence of an existential necessity for humans to eat meat to survive, it was ethically wrong to raise, harm and slaughter animals. On that day, I decided to try vegetarianism. I began to look into the issues of animal farming, animal ethics, vegetarianism and veganism, and I was shocked by the tremendous suffering endured by billions of non-human animals around the world – suffering I had contributed to my whole life. Greedy for knowledge, I read as much as I could about these topics. It still took me a year to decide to go vegan for good. I read Peter Singer’s ‘Animal Liberation’ only after I went vegan, but it certainly increased my motivational drive to dedicate my life to reducing the suffering of non-human animals – what I then perceived as the most pressing ethical problem in the world (the book was also my first real exposure to utilitarian thought). Throughout my high school years, I would write articles about veganism for our school’s student magazine, organise public screenings of the animal-rights movie ‘Earthlings’, distribute brochures of animal rights organisations, debate other students on the ethics of eating meat and supply our school’s cafeteria with plant-based milk alternatives. Later, as part of my high school graduation exams, I wrote a 40-page philosophical treatise on animal ethics.

In high school I also learned about environmental degradation – caused, of course, by evil multinationals and, ultimately, capitalism – and started caring about environmental preservation (considering myself an environmental-ist). Reasoning that changing only my own consumer behaviour would have limited effects, once again I started taking actions to affect the behaviour of others. For instance, I started a shop from my room in the boarding school, reselling environmentally-friendly products, such as recycled toilet paper, to other students (I would sell the goods at the market price, without making a profit). I also decided that after my graduation from school, I would take a gap year and go to India to volunteer for a small environmental non-profit organisation. (Perhaps unsurprisingly, in hindsight I don't think that my work as a volunteer had a big impact).

And then I attended the single most transformational event of my life: an introductory talk on effective altruism, brilliantly presented by the EA Max Kocher, who at the time interned with the predecessor organisation of what would later become the Effective Altruism Foundation. I was immediately attracted by the EA perspective on reducing animal suffering (though I remember finding the ‘risks to the far-future from emerging technologies’ part of the presentation weird). Previously, I had read a lot online written by vegans and animal rights activists, but somehow I had never come across a group of people who thought as rationally and strategically about achieving their ethical goals as EAs. Once again, I became greedy for knowledge and – reading many EA articles and books, listening to podcasts and watching talks – felt like a whole new world was opening up to me, a world I couldn’t get enough of. And in the process of engaging with EA, I encountered a great many arguments that challenged some of my dearly held beliefs – many of which I subsequently abandoned.

Some of the major ways I changed my mind through EA include:

  • I got convinced that what ultimately counts morally are the conscious experiences of sentient beings, and thus stopped caring about ‘the environment’ for its own sake. Learning about the prevalence and magnitude of the suffering of animals living in the wild, I left behind my beliefs in environmental preservation, the protection of species over individuals, and the intrinsic importance of biodiversity.

  • The most important normative change I underwent was growing closer to hedonistic utilitarianism, and to totalism in population ethics. In parallel to this process, I engaged more with arguments like Bostrom’s astronomical waste argument, and ultimately accepted the long-term value hypothesis. That said, keeping in mind epistemic modesty and the wide divergence in favoured moral theories among moral philosophers, I do attempt to take moral uncertainty seriously.

  • The most important change in my empirical worldview came with learning more about the benefits and achievements of market economies and the tremendous historical failures of its so-called socialist and communist alternatives. I stopped attributing everything that was going wrong in the world to ‘capitalism’ and adopted (what I now think of as) a much more nuanced view on the costs and benefits of adopting particular economic policies.

  • Relatedly, I became much more uncertain with regard to many political questions, having given up many of my former tribe-determined answers to policy questions. In particular, I have reduced my certainty about policies on which there is strong factual disagreement among relevant experts.

After having engaged with EA intensely, though passively, for more than a year in India, upon my return to Germany I was aching to get active and finally meet other EAs in person. Subsequently, I completed two internships with EAF in Berlin and started and led an EA university chapter at the University of Bayreuth, before ultimately transitioning to the University of Oxford, where I am now one of the co-presidents of EA Oxford.

The philosophy and community behind effective altruism have transformed my life in a myriad of beneficial ways. I am excited about all the achievements of EA since its inception and look forward to contributing to its future success!

Comment author: Denise_Melchin 10 August 2018 01:44:00PM 1 point

Some parts of this sound very similar to me, down to 'left-wing youth political organisation that likes to sing socialist songs' (want to PM me which one it was?).

I have noticed before how much more common activist backgrounds are among German EAs vs. Anglo-Saxon EAs. When I talked about it with other people, the main explanation we could come up with was different base rates of sociopolitical activism in the two countries, but I've never checked the numbers on that.

Comment author: MvdSteeg 08 August 2018 04:49:30PM 0 points

I guess when I say "more impactful" I mean "higher output elasticity".

We can go with the example of x-risk vs poverty reduction (as mentioned by Carl as well). If we were to think that allocating resources to reduce x-risk has an output elasticity 100,000 times higher than poverty reduction, but reducing poverty improves the future, and reducing x-risk makes reducing poverty more valuable, then you ought to handle them multiplicatively instead of additively, like you said.

If you had 100,001 resources to spend, that would mean 100,000 units against x-risk and 1 unit for poverty reduction, as opposed to 100,001 for x-risk and 0 for poverty reduction when looking at them independently (/additively). Sam implies the additive reasoning in such situations is erroneous, after mentioning an example with such a massive discrepancy in elasticity. I'm pointing out that this does not seem to make a real difference in such cases, because even with proportional allocation it is effectively the same as going all in on (in this example) x-risk.
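
To make the split concrete: if the multiplicative value function is Cobb-Douglas, say V = x^a * y^b (my assumption; the thread doesn't pin down a functional form), a fixed budget is optimally divided in proportion to the output elasticities. A minimal sketch under that assumption (the function name and parameters are mine, for illustration):

```python
# Minimal sketch, assuming a Cobb-Douglas value function V = x^a * y^b:
# maximising a*ln(x) + b*ln(y) subject to x + y = budget gives an
# allocation proportional to the output elasticities a and b.
def optimal_split(budget, elasticity_a, elasticity_b):
    total = elasticity_a + elasticity_b
    return budget * elasticity_a / total, budget * elasticity_b / total

# The 100,000:1 elasticity ratio from the example above:
x_risk, poverty = optimal_split(budget=100_001,
                                elasticity_a=100_000, elasticity_b=1)
print(x_risk, poverty)  # 100000.0 1.0 -- effectively all-in on x-risk
```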

Anyway, I'm not claiming that this makes the multiplicative approach incorrect (or rather, less correct than the additive one), just that in this case, which is mentioned as one of the motivations for the post, it really doesn't make much of a difference (though things like diminishing returns would). Maybe this would have been more fitting as a reply to Sam than to you, though!

Comment author: Denise_Melchin 10 August 2018 01:21:15PM * 0 points

What you're saying is correct if you're assuming that so far zero resources have been spent on x-risk reduction and on global poverty. (Though that isn't quite right either: you can't compute an output elasticity if you have to divide by 0.)

But you are supposed to compare the output elasticity ratio with how resources are currently being spent; at the local optimum, those ratios are equal. So using your example, if there were currently more than a million times as many resources spent on x-risk as on global poverty, global poverty should be prioritised.
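
A toy check of that marginal condition, under the same assumed Cobb-Douglas form as in the sketch above (V = x^a * y^b, my illustration): the marginal value per unit spent on a cause is proportional to its elasticity divided by its current spending, so the next unit should go to whichever ratio is higher.

```python
# Toy marginal check, assuming V = x^a * y^b: marginal value per unit
# on x is proportional to a/x, so compare elasticity-to-spending ratios.
def next_unit(spend_x, spend_y, elasticity_a, elasticity_b):
    return "x" if elasticity_a / spend_x > elasticity_b / spend_y else "y"

# Elasticities 100,000:1, but x-risk already receives a million times
# the resources of poverty reduction -> poverty wins at the margin.
print(next_unit(spend_x=1_000_000, spend_y=1,
                elasticity_a=100_000, elasticity_b=1))  # -> "y"
```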

When I was running the numbers, my impression was that global wellbeing increases had a much bigger output elasticity than x-risk reduction. I found it a bit tricky to find numbers for global (not just EA) x-risk reduction efforts, so I'm not confident in that, and I'm also not sure how large the gap in resource spending is. 80k quotes $500 billion per year for resources spent on global wellbeing increases.

In response to When causes multiply
Comment author: MvdSteeg 07 August 2018 04:03:47PM * 2 points

While it's hard to disagree with the math, would it not be fairly unlikely for the current allocation of resources to be close enough to the ideal allocation that this would realistically lead to an agent allocating their resources to more than one cause area? Like you mention, the allocation within the community-building cause area itself is one of the more likely candidates, as we have a large piece of the pie in our hands (if not all of it). However, the community is not one agent, so we would need to funnel the money through e.g. EA Funds, correct?

Alternatively, there could be a top-level analysis of what the distribution ought to be and what it currently is, with a suggestion that people donate to close that gap. But is this really different from arguments in terms of marginal impact and neglectedness? I agree your line of thinking ought to be followed in such an analysis, but I am not convinced that it isn't incorporated already.

It also doesn't solve issues like the one Sam Bankman-Fried mentioned, where according to some argument one cause area is 44 orders of magnitude more impactful: even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the larger cause area. I think that even in less extreme cases than this, we should actually be far more "egalitarian" in our distribution of resources than multiplicative causes (and especially additive causes) suggest, since, statistically speaking, the higher the expected value of a cause area, the more likely it is to be overestimated.

I do think this is a useful framework on a smaller scale, e.g. your example of focusing on new talent vs improving existing talent within the EA community. For local communities, where a small group of agents plays a determining role in where the focus lies, this can be applied much more easily than in global cause area resource allocations.

Comment author: Denise_Melchin 08 August 2018 03:41:59PM * 0 points

I address the points you mention in my response to Carl.

It also doesn't solve issues like Sam Bankman-Fried mentioned where according to some argument one cause area is 44 orders of magnitude more impactful, because even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the large cause area.

I don't think this is understanding the issue correctly, but it's hard to say, since I am a bit confused about what you mean by 'more impactful' in the context of multiplying variables. Could you give an example?

In response to When causes multiply
Comment author: Carl_Shulman 07 August 2018 09:31:59PM 7 points

"Note also that while we’re looking at such large pools of funding, the EA community will hardly be able to affect the funding ratio substantially. Therefore, this type of exercise will often just show us which single cause should be prioritised by the EA community and thereby act additive after all. This is different if we look at questions with multiplicative factors in which the decisions by the EA community can affect the input ratios like whether we should add more talent to the EA community or focus on improving existing talent."

I agree that multiplicative factors are a big deal for areas where we collectively have strong control over key variables, rather than trying to move big global aggregates. But I think it's the latter that we have in mind when talking about 'causes' rather than interventions or inputs working in particular causes (e.g. investment in hiring vs activities of current employees). For example:

"Should the EA community focus to add its resources on the efforts to reduce GCRs or to add them to efforts to help humanity flourish?"

If you're looking at global variables like world poverty rates or total risk of extinction, it requires quite a lot of absolute impact to make much of a proportional change.

E.g. if you reduce the prospective risk of existential catastrophe from 10% to 9%, you might increase the benefits of saving lives through AMF by a fraction of a percent, as it would be more likely that civilization survives to see the benefits of the AMF donations. But a change that small would be unlikely to drastically alter allocations between catastrophic risks and AMF. And a 1% change in existential risk is an enormous impact: even in terms of current humans (relevant for the comparison to AMF) it could represent tens of millions of expected current lives (depending on the timeline of catastrophe), and it is immense considering other kinds of beings and generations. If one were having such an amazing impact in a scalable fashion, it would seem worth going further at that point.
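
A toy version of this arithmetic (my own crude simplification, not Carl's exact model), assuming all of AMF's benefits are scaled by the probability that civilization survives; if only part of the benefit stream depends on long-run survival, the boost shrinks further toward a fraction of a percent:

```python
# Crude sketch: AMF's benefits assumed realised only if civilization
# survives, so they scale with the survival probability.
p_catastrophe_before = 0.10
p_catastrophe_after = 0.09

survival_before = 1 - p_catastrophe_before  # 0.90
survival_after = 1 - p_catastrophe_after    # 0.91

relative_gain = survival_after / survival_before - 1
print(f"AMF value rises by {relative_gain:.2%}")  # about 1.11%
# Tiny compared to the direct value of the risk reduction itself.
```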

Diminishing returns of our interventions on each of these variables seem a much more important consideration than multiplicative effects between these variables: the cost per percentage point of existential risk reduced is likely to grow many times over as one moves along the diminishing returns curve.

"We could also think of the technical ideas to improve institutional decision making like improving forecasting abilities as multiplying with those institution’s willingness to implement those ideas."

If we're thinking about institutions like national governments, changing their willingness to implement the ideas seems much less elastic than improving the methods. If we look at a much narrower space, e.g. the EA community or a few actors in some core areas, the multiplicative factors between key fields and questions matter more.

If I were going to look for cross-cause multiplicative effects, it would likely be via their effects on the EA community (e.g. people working on cause A generate some knowledge or reputation that helps improve the efficiency of work on cause B, which has more impact if cause B efforts are larger).

Comment author: Denise_Melchin 08 August 2018 03:20:15PM 3 points

Great comment, thank you. I actually agree with you. Perhaps I should have focussed less on discussing the cause level and more on the intervention level, but I think it is still good to encourage more careful thinking at the cause-wide level, even if it won't affect the actual outcome of the decision-making. I think people rarely think about e.g. reducing extinction risks benefiting AMF donations in the way you describe.

Let's hope people will be careful to consider multiplicative effects where we can affect the distribution between key variables.
