
RobertHarling


EA Oxford and Cambridge are looking for new full-time organisers!

We’re looking for motivated, self-driven individuals with excellent communication and interpersonal skills, the ability to manage multiple projects, and the capacity to think deeply about community strategy. 

  • You’d lead a variety of projects, such as community retreats, large intro fellowships, and career support and mentorship for promising new group members. 
  • These roles are a great way to grow your leadership skills, build a portfolio of well-executed projects, and develop your own understanding of EA cause areas. 
  • By building large, thriving communities at some of the world’s top universities, you’re able to support many talented people to go on to do highly impactful work.

New organisers would start by September 2024. Find out more here. Deadline: 28th April 2024.
 

Apply now

ERA is hiring for an Ops Manager and multiple AI Technical and Governance Research Managers - Remote or in Cambridge, Part- and Full-time, ideally starting in March; apply by Feb 21.

The Existential Risk Alliance (ERA) is hiring for various roles for our flagship Summer Research Programme. This year, we will have a special focus on AI Safety and AI Governance. With the support of our networks, we will host ~30 ERA fellows, and you could be a part of the team making this happen!

Over the past 3 years, we have supported over 60 early-career researchers from 10+ countries through our summer programme. You can find out more about ERA at www.erafellowship.org. In 2023, we ran 35+ events over 8 weeks to facilitate the fellows' research goals. Our alumni have published their work in peer-reviewed journals, launched their own projects based on their research, or started jobs at impactful organisations after their time at ERA.

The specific roles we are currently hiring for are an Ops Manager and AI Technical and Governance Research Managers.

We are looking for people who can ideally start in March 2024. In-person participation in some or all of the 8-week summer fellowship programme in Cambridge is highly encouraged, and all travel, visa, accommodation, and meal costs will be covered for in-person participation.

Applications will be reviewed on a rolling basis, and we encourage early applications. Unless suitable candidates are found earlier and specific roles are taken down, we will accept applications until February 21, 2024, at the end of the day in your local time zone. 

TL;DR: A 'risky' career “failing” to have an impact doesn’t mean your career has “failed” in the conventional sense, and probably isn’t as bad as it intuitively feels.

 

  • You can fail to have an impact with your career in many ways. One way to break it down might be:
    • The problem you were trying to address turns out to not be that important
    • Your method for addressing the problem turns out to not work
    • You don’t succeed in executing your plan
  • E.g. you could be aiming to have an impact by reducing the risk of future pandemics, and you do this by aiming to become a leading academic who brings lots of resources and attention to improving vaccine development pipelines. There are several ways you could end up not having much of an impact: pandemic risk could turn out not to be that high; advances in testing and PPE mean we can identify and contain pandemics very quickly, so vaccines aren’t as important; industry labs advance vaccine development very quickly and your lab doesn’t end up affecting things; or you don’t succeed at becoming a leading academic, and become a mid-tier researcher instead.
  • People often feel risk-averse with their careers: we’re worried about taking “riskier” options that might not work out, even if they have higher expected impact. However, there are some reasons to think most of the expected impact could come from the tail scenarios where you're really successful.
  • I think something we neglect is that there are different ways your career plan can not work out. In particular, in many of the scenarios where you don’t succeed in having a large positive impact, you still succeed by the other values you have for your career: e.g. you’re still a conventionally successful researcher, you just didn’t happen to save the world. 
  • And even if your plan “fails” because you don’t reach the level in the field you were aiming for, you likely still end up in a good position: not a senior academic, but a mid-tier academic or a researcher in industry; not a senior civil servant, but a mid-tier civil servant. This isn’t true in every area: in some massively oversubscribed fields like professional sports, failing can mean not having any job at all, and the same goes for start-ups. But I’d guess these aren't the majority of impactful careers that people consider.
  • I can also imagine myself finding the situation of having tried and failed somewhat comforting, in that I can think to myself, “I did my bit, I tried, it didn’t work out, but it was a shot worth taking, and now I just have this normally good life to live”. Of course I ‘should’ keep striving for impact, but if allowing myself to relax after failing makes me more likely to take the risk initially, maybe it’s worth it.
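The claim that tail scenarios can dominate expected impact can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions (not estimates of any real career), chosen only to show how a small chance of a large outcome can outweigh a guaranteed modest one:

```python
# Toy comparison of a "safe" vs a "risky" career path.
# All probabilities and impact values are illustrative assumptions.

def expected_impact(outcomes):
    """Sum of probability * impact over mutually exclusive outcomes."""
    return sum(p * impact for p, impact in outcomes)

# Safe path: reliably modest impact.
safe = [(1.0, 10)]

# Risky path: usually little direct impact, but a 5% tail where it's huge.
# Note the "failure" outcome still leaves a conventionally good career.
risky = [(0.95, 1), (0.05, 1000)]

print(expected_impact(safe))   # 10.0
print(expected_impact(risky))  # 50.95
```

On these (made-up) numbers the risky path has roughly five times the expected impact, even though it fails to deliver large impact 95% of the time.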

Thanks for this post! I think I have a different intuition that there are important practical ways where longtermism and x-risk views can come apart.  I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be).
 

  • Extinction versus Global Catastrophic Risks (GCRs)
    • It seems likely that a short-termist with the high estimates of risks that Scott describes would focus on GCRs not extinction risks, and these might come apart.
    • To the extent that a short-termist framing views going from 80% to 81% population loss as equally as bad as 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might look less effective on the short-termist framing also. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
  • Sensitivity to views of risk
    • Some people may be more sceptical of x-risk estimates this century, but might still reach the same prioritisation under the long-termist framing as the cost is so much higher. 
    • This maybe depends on how hard you think the “x-risk is really high” pill is to swallow compared to the “future lives matter equally” pill.
  • Suspicious Convergence
    • Going from not valuing future generations to valuing future generations seems initially like a huge change in values where you’re adding this enormous group into your moral circle. It seems suspicious that this shouldn’t change our priorities.
    • It’s maybe not quite as bad as it sounds, since it seems reasonable to expect some convergence between what makes lives today good and what makes future lives good. However, especially if you’re optimising for maximum impact, you would expect these to come apart.
  • The world could be plausibly net negative
    • To the extent you think farmed animals suffer, and that wild animals live net-negative lives, a large-scale extinction event might not reduce welfare that much in the short term. This seems less true for a pandemic that would kill all humans (although that would presumably substantially reduce the number of animals in factory farms). But, for example, a failed alignment scenario where everything becomes paperclips doesn’t seem as bad if all the animals were suffering anyway.
  • The future might be net negative
    • If you think that, given no deadly pandemic, the future might be net negative (e.g. because of s-risks, or potentially "meh" futures, or because you’re very sceptical about AI alignment going well), then preventing pandemics doesn’t actually look that good under a longtermist view.
  • General improvements for future risks/Patient Philanthropy
    • As Scott mentions, other possible long-termist approaches such as value spreading, improving institutions, or patient philanthropic investment don’t come up under the x-risk view. I think you should be more inclined towards these approaches if you expect new risks to appear in the future, provided we make it past current risks.

It seems that a possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (as it’s more than just an extinction risk like bio).


 

Thanks for this interesting analysis! Do you have a link to Foster's analysis of MindEase's impact?

How do you think the research on MindEase's impact compares to that of GiveWell's top charities? Based on your description of Hildebrandt's analysis, for example, it seems less strong than e.g. the several randomized controlled trials supporting distributing bed nets. Do you think discounting based on this could substantially affect the cost-effectiveness? (Given how much lower Foster's estimate of impact is, though, and that it is more heavily used in the overall cost-effectiveness, I would be interested to see whether it has a stronger evidence base.)

Thanks for this post Jack, I found it really useful as I haven't yet got round to reading the updated paper. The breakdown in the cluelessness section was new to me. Does anyone know if this breakdown has been used elsewhere? If not, this seems like useful progress in better defining the cluelessness objections to longtermism. 

Thanks very much for your post! I think this is a really interesting idea and it's really useful to learn from your experience in this area. 

What would you think of the concern that these types of ads would be a "low fidelity" way of spreading EA that could risk misinforming people about EA? From my experience of community building, I think it's really useful to be able to describe and discuss EA ideas in detail, and that there are risks to giving someone an incorrect view of EA. These risks include someone being critical of what they believe EA is, and spreading this critique, as well as them being discouraged from getting involved when they might otherwise have done so at a later time. The risk is probably lower if someone clicks on a short ad that takes them to, say, effectivealtruism.com, where the various ideas are carefully explained and introduced. But someone who only saw the ads and didn't click could end up with an incorrect view of EA.

I would be interested to see discussion about what would and wouldn't make a good online ad for EA e.g. how to intrigue people without being inaccurate or over-sensationalizing parts of EA. 

There might also be an interesting balance in how much interest we want someone to have shown in EA-related topics before advertising to them. E.g. every university student in the US is probably too wide a net, but everyone who's searching "effective altruism" or "existential risk" is probably already on their way to EA resources without the need for an advert.

I know lots of university EA groups make use of Facebook advertising and some have found this useful to promote events. I don't know whether Google/Youtube ads allow targeting at the level of students of a specific university?

I think I would have some worry that if external evaluations of individual grant recipients became common, this could discourage people from applying for grants in the future, for fear of being negatively judged should the project not work out. 

Potential grant recipients might worry that external evaluators may not have all the information about their project or the grant maker's reasoning for awarding the grant. This lack of information could then lead to unfair or incorrect evaluations. This would be more of a risk if it became common for people to write low-quality evaluations that are weakly reasoned, uncharitable, or don't respect privacy. I'm unsure whether it would be easy to encourage high-quality evaluations (such as your own) without also increasing the risk of low-quality evaluations. 

The risk of discouraging grant applications would probably be greater for more speculative funds such as the Long Term Future Fund (LTFF), as it's easier for projects to not work out and look like wasted funds to uninformed outsiders.

There could also be an opposite risk that by seeking to discourage low-quality evaluations, we discourage people too much from evaluating and criticizing work. It might be useful to establish key principles for writing evaluations that enable people to write respectful and useful evaluations, even with limited knowledge or time.

I'm unsure where the right trade-off between usefully evaluating projects, and not discouraging grant applications would be. Thank you for your review of the LTFF recipients and for posting this question, I found both really interesting. 

Thanks for sharing this paper, I had not heard of it before and it sounds really interesting.

Thanks for your comment Jack, that's a really great point. I suppose that we would seek to influence AI slightly differently for each reason:

  1. Reduce chance of unaligned/uncontrolled AI
  2. Increase chance of useful AI
  3. Increase chance of exactly aligned AI

e.g. you could reduce the chance of AI risk by stopping all AI development, but then lose the other two benefits; or you could create a practically useful AI, but not one that would guide humanity towards an optimal future. That being said, I reckon in practice a lot of work to improve the development of AI would hit all 3. Though maybe if you view one reason as much more important than the others, you would focus on a specific type of AI work.
